In JavaScript, confusing slice with splice (or vice versa) is a common mistake among rookies and even experts. Although these two functions have similar names, they do two completely different things. In practice, such confusion can be avoided by choosing an API that telegraphs the const-correctness of the function.

Array’s slice (ECMAScript 5.1 Specification Section 15.4.4.10) is quite similar to String’s slice. According to the specification, slice accepts two arguments, start and end. It returns a new array containing the elements from the given start index up to, but not including, the specified end index. It’s not very difficult to understand what slice does:

'abc'.slice(1,2)           // "b"
[14, 3, 77].slice(1, 2)    //  [3]

An important aspect of slice is that it does not change the array which invokes it. The following code fragment illustrates the behavior. As you can see, x keeps its elements and y gets the sliced version thereof.

var x = [14, 3, 77];
var y = x.slice(1, 2);
console.log(x);          // [14, 3, 77]
console.log(y);          // [3]

Although splice (Section 15.4.4.12) also takes two arguments (at minimum), their meaning is very different:

[14, 3, 77].slice(1, 2)     //  [3]
[14, 3, 77].splice(1, 2)    //  [3, 77]

On top of that, splice also mutates the array that calls it. This should not come as a surprise; after all, the name splice implies it.

var x = [14, 3, 77];
var y = x.splice(1, 2);
console.log(x);          // [14]
console.log(y);          // [3, 77]
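When the original array needs to stay intact, the removal effect of splice can be emulated with slice and concat. A minimal sketch (the helper name removed is hypothetical; note that it returns the remaining elements, not the removed ones):

```javascript
// Non-mutating removal: return a copy of the array with `count` elements
// removed starting at index `start`; the original array is untouched.
function removed(array, start, count) {
  return array.slice(0, start).concat(array.slice(start + count));
}

var x = [14, 3, 77];
var y = removed(x, 1, 2);
console.log(x);   // [14, 3, 77]
console.log(y);   // [14]
```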

When you build your own module, it is important to choose an API that minimizes this slice vs splice confusion. Ideally, the users of your module should not have to consult the documentation to figure out which function they really want. What kind of naming convention should we follow?

A convention I’m familiar with (from my past involvement with Qt) is choosing the right form of the verb: the present tense indicates a possibly mutating action, while the past participle indicates an action that returns a new version without modifying the object. If possible, provide such methods in pairs. The following example illustrates the concept.

var p = new Point(100, 75);
p.translate(25, 25);
console.log(p);       // { x: 125, y: 100 }
var q = new Point(200, 100);
var s = q.translated(10, 50);
console.log(q);       // { x: 200, y: 100 }
console.log(s);       // { x: 210, y: 150 }

Note the difference between translate(), which moves the point (in the 2-D Cartesian coordinate system), and translated(), which only creates a translated version. The point object p changes because it calls translate(). Meanwhile, the object q stays the same since translated() does not modify it; it only returns a fresh copy as the new object s.
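A minimal Point implementation following this convention might look like the following sketch (the constructor and field names are inferred from the logged output above):

```javascript
function Point(x, y) {
  this.x = x;
  this.y = y;
}

// Present tense: translate mutates the point in place.
Point.prototype.translate = function (dx, dy) {
  this.x += dx;
  this.y += dy;
};

// Past participle: translated leaves the point alone
// and returns a freshly moved copy.
Point.prototype.translated = function (dx, dy) {
  return new Point(this.x + dx, this.y + dy);
};

var pt = new Point(100, 75);
pt.translate(25, 25);            // pt is now { x: 125, y: 100 }
var moved = pt.translated(10, 50); // pt unchanged, moved is { x: 135, y: 150 }
```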

If this convention is used consistently throughout your application, that kind of confusion will be massively reduced. And one day, your users can happily sing I Can See Clearly Now!

At the recent San Francisco JavaScript meetup, I gave a talk on the subject of Searching and Sorting without Loops as part of its Functional Monthly series. In that talk, I explained various higher-order functions of JavaScript Array and their usage to replace explicit loops (for or while) in several different code patterns.


I have published the slide deck for the talk (as PDF, 1.4 MB). The talk started with a quick refresher on five of Array’s functions: map, filter, reduce, some, and every. Those functions are used in the subsequent three parts of the talk: sequences, searching, and finally sorting. The first one, building sequences without any explicit loop, sets the stage for the rest of the discussion.
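As a taste of that first part, the sequence of the first n non-negative integers can be built without a for or while loop. The trick is that Array.apply turns the sparse Array(n) into a dense array of n undefined values, so map visits every index:

```javascript
// Build [0, 1, ..., n - 1] without any explicit loop.
function sequence(n) {
  return Array.apply(0, Array(n)).map(function (value, index) {
    return index;
  });
}

console.log(sequence(5));   // [0, 1, 2, 3, 4]
```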

If you follow this blog, you may notice that this is not a new topic. The talk is basically a detailed walkthrough of some posts I have written in the past:

Admittedly, it takes some effort to understand all this complicated stuff. Nevertheless, think about the reward: being able to understand this code fragment (it fits in a single tweet), which constructs the first n numbers in the Fibonacci series, is quite satisfying:

function fibo(n) {
  return Array.apply(0, Array(n)).
    reduce(function(x, y, z){ return x.concat((z < 2) ? z : x[z-1] + x[z-2]); }, []);
}
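For comparison, the same idea spelled out with more descriptive names (this sketch is equivalent, just more verbose): seed the reduction with an empty array and, for each index, append either the index itself (for 0 and 1) or the sum of the two previous entries.

```javascript
// Equivalent, more readable formulation of the one-tweet Fibonacci builder.
function fibonacci(n) {
  return Array.apply(0, Array(n)).reduce(function (acc, value, index) {
    return acc.concat(index < 2 ? index : acc[index - 1] + acc[index - 2]);
  }, []);
}

console.log(fibonacci(8));   // [0, 1, 1, 2, 3, 5, 8, 13]
```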

The age of exploration is just beginning.


Many free/open-source projects suffer from a very specific kind of feedback: the assumption that a certain feature will not be implemented because of a philosophical reason. It is what I call the "flying car" problem.


As an illustration, with a lot of users and very few contributors, PhantomJS was bound to have this problem. In fact, it already does and it will continue to have it. I don’t have a clear idea why it happens, but I suspect that it is caused by the practical impedance mismatch between the fundamental core implementation and its users. As you can imagine, most PhantomJS users are web developers who are not always exposed to the intricacies and challenges of what happens behind the scenes. This is pretty much like an automatic gearbox: my car mechanic and I might have completely different ideas of how such a gearbox should operate.

In a mailing-list thread over a year ago, I summarized the situation as:

Any "why PhantomJS can’t do this" situation should be (at first) treated the same way as
"why my car can’t fly" question. A car designer loves to have it, but the technology might not be there yet.

Another example is this remark on Hacker News (I don’t follow HN regularly, but from time to time certain comments are brought to my attention):

I believe phantom made a fundamental mistake of not being nodejs based in the first place.

This is despite the fact that the subject is mentioned in the FAQ and has been discussed in the mailing-list several times. I won’t go into the technical details (that is not the point here), but surely you can’t help but notice a similar pattern here.

An engineering project is always handicapped by certain engineering constraints. More often than not, the developers simply want to be pragmatic; there is nothing opinionated or philosophical about it. The recent Cambrian explosion of social media forums amplifies the loudest noises and provokes heated flame wars to a different level. It is easy to fall into the trap of assuming (by means of extrapolation) that every project owner is hotheaded and opinionated. The calm and reasonable ones fade into the background.

Every sports team welcomes useful and constructive feedback. An emotional knee-jerk reaction from a trigger-happy armchair quarterback, however, hardly makes it onto the prioritized items. As I have often expressed, we are not in kindergarten anymore; screaming does not make the solution appear faster. While it is an opportunity to polish the art of self-restraint, any kind of flying car problem unnecessarily drains the energy of every project maintainer. The optimal win-win compromise is for both sides to always practice the principle of audi alteram partem. At the very least, give it five minutes.

Enough rambling already, I need to go back to my garage to fix that damn hoverboard…


Three years ago, the first version of PhantomJS was announced to the public. It is still a toddler, but hey, it is growing up and getting some traction at an unprecedented rate.

Looking at the number of downloads over the last few years, the trend is obviously "up and to the right": a total of over 3 million downloads. This can be explained easily. Many web applications and projects use a variety of test frameworks, most of which rely on PhantomJS to run the tests headlessly. Thus, those crazy download numbers are mostly automatic, as PhantomJS is pulled in as a dependency, typically in a CI system.


The community also keeps growing. Our mailing-list has grown from just over a thousand members to 1,600 members. The code repository github.com/ariya/phantomjs has doubled its stargazers to more than 9,100 to date. There are countless new projects using PhantomJS directly or indirectly; it is harder and harder to keep track of them all.

Just like any other toddler of its age, PhantomJS is not perfect. It screams and makes a lot of noise. It does things you don’t expect it to do. And yet it keeps walking, running around, playing with friends, and bringing a lot of happiness to those around it.

It gives us an ideal to strive towards. In time, it will help us accomplish wonders.

Here is to another amazing year!



How does your engineering organization build and deliver products to its customers? Similar to the well-known capability maturity model, the maturity level of a build automation system falls into one of the following: chaotic, repeatable, defined, managed, or optimized.

Let’s take a look at the differences in these levels for a popular project, PhantomJS. At the start of the project, it was tricky to build PhantomJS unless you were a seasoned C++ developer. But over time, more things were automated, and eventually engineers without C++ backgrounds could run the build as well. At some point, a Vagrant-based setup was introduced and building deployable binaries became trivial. The virtualized build workflow is both predictable and consistent.

The first level, chaotic, is familiar to every new hire in a growing organization. You arrive at the new office and, on that first day, an IT guy hands you a laptop. Now it is up to you to figure out all the necessary bits and pieces to start becoming productive. Commonly it takes several days to set up your environment – that’s several days before you can get any work done. Even if the build process itself can be initiated in one step, the setup is still a disaster.

This process is painful, and eventually someone steps up and writes documentation on how to do it. Sometimes it is a grassroots, organic activity in everyone’s best interest. Effectively, this makes the process much more repeatable; the chance of going down the wrong path is reduced.

Just like any other type of document, build setup documentation can fall out of sync without anyone realizing it. A new module added last week may suddenly imply a new dependency. An important configuration file may have changed, and simply following the outdated wiki leads to a mysterious failure.

To overcome this problem, consistency must be mandated by a defined process. In many cases, this is as simple as a standardized bootstrap script which resolves and pulls the dependencies automatically (make deps, bundle install, npm install, etc.). Any differences in the initial environment are normalized by that script. You do not need to remember the yum-based setup when running CentOS and recall the apt-get counterparts when handling an Ubuntu box. At this level, product delivery via continuous integration and deployment becomes possible; no human interaction is necessary to prepare the build machine to start producing the artifacts.
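The normalization idea can be sketched in a few lines; the distribution names and package list below are purely illustrative, not taken from any real bootstrap script:

```javascript
// Sketch of a bootstrap helper: map the detected distribution to the right
// package-manager invocation, so one script serves both CentOS and Ubuntu.
function installCommand(distro, packages) {
  var managers = {
    centos: 'yum install -y',
    rhel: 'yum install -y',
    ubuntu: 'apt-get install -y',
    debian: 'apt-get install -y'
  };
  var manager = managers[distro];
  if (!manager) {
    throw new Error('unsupported distribution: ' + distro);
  }
  return manager + ' ' + packages.join(' ');
}

console.log(installCommand('ubuntu', ['git', 'make']));
// apt-get install -y git make
```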

In this wonderful yet heterogeneous environment, it is unfortunately challenging to ensure delivery consistency. Upgrading the OS can trigger a completely different build. A test which fails on a RHEL-based box is not reproducible on the engineer’s MacBook. Fortunately, virtualization (VirtualBox) or containerization (Docker) can be leveraged to ensure a managed build environment. There is no need to manually install, provision, and babysit a virtualized build machine (even on Windows, thanks to PowerShell and especially Chocolatey). Anyone in the world can get a brand-new computer running a fresh operating system, grab the bootstrap script, and kick off the build right away.

There are two more benefits of this managed automation level. Firstly, a multi-platform application is easier to build since the process of creating and provisioning the virtual machine happens automatically. Secondly, it enables every engineer to check the testing/staging environment in isolation, i.e. without changing their own development setup. In fact, tools like Vagrant are quickly becoming popular because they give engineers and devops such power.

The last level is the continuous optimizing state. As the name implies, this step refers to ongoing workflow refinements. For example, this could involve speeding up the overall build process, which is pretty important in a large software project. Other types of polish concern the environment itself, whether creating the virtual machine from an ISO image (Packer) or distributing the build jobs to a cloud-based array of EC2/GCE boxes.

My experience with automated build refinement may be described like this:

  • Chaotic: hunt the build dependencies by hand
  • Repeatable: follow the step-by-step instructions from a wiki page
  • Defined: have the environment differences normalized by a bootstrapping script
  • Managed: use Vagrant to create and provision a consistent virtualized environment
  • Optimizing: let Packer prepare the VM from scratch

How is your personal experience through these different maturity levels?

Note: Special thanks to Ann Robson for reviewing and proofreading the draft of this blog post.


In the fifth part of this JavaScript kinetic scrolling tutorial, I show a demo of the attractive Cover Flow effect (commonly found in some Apple products) using the deadly combination of kinetic scrolling and CSS 3-D. The entire logic is implemented in ~200 lines of JavaScript code.

The cool factor of Cover Flow is its fluid animation of the covers in 3-D space. If you haven’t seen it before, you are missing out! To get a feeling for the effect, use your browser (preferably on a smartphone or a tablet) to visit ariya.github.io/kinetic/5. Swipe left and right to scroll through the cover images of some audiobooks.


My first encounter with this Cover Flow effect was 6 years ago, when I implemented it purely in software in C/C++. It ran quite well even on lowly hardware (such as a 200 MHz HTC Touch). Of course, with GPU-accelerated CSS animation, there is no need to write an optimized special renderer for this. For this example, we manipulate the transformation matrix directly using the CSS Transform feature. To follow along, check the main code in coverflow.js.

There are several key elements in this demo. First, we need smooth scrolling with the right acceleration and deceleration, using the exponential decay technique covered in Part 2. Also, stopping the scrolling at the right position is necessary (Part 3, see the snap-to-grid code), as we need all the cover images in a perfect arrangement, not halfway and not stuck somewhere.
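As a reminder, these two ingredients can be sketched as follows. The 325 ms time constant is a typical value, and the helper names are mine; the exact numbers in the demo may differ.

```javascript
// During deceleration, the scroll position approaches the target
// exponentially: the remaining amplitude shrinks by a factor of e
// every TIME_CONSTANT milliseconds.
var TIME_CONSTANT = 325;

function decayedPosition(target, amplitude, elapsed) {
  return target - amplitude * Math.exp(-elapsed / TIME_CONSTANT);
}

// Snap-to-grid: pick a target that is an exact multiple of the cover
// size, so the deceleration always ends with a cover perfectly centered.
function snappedTarget(position, amplitude, size) {
  return Math.round((position + amplitude) / size) * size;
}
```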

When you look at the code, scrolling via mouse or touch events affects the value of the variable offset. In fact, the code for doing that is line-by-line equivalent to what has been covered in the previous parts. Under normal circumstances, this offset is always an integer multiple of 200 (the cover image size, in pixels). When the user makes a touch gesture to drag a cover, or when the view is decelerating by itself, the value could be anything. This is where we apply tweening to give the right transformation matrix for each cover image.

Such decoupling results in a modular implementation. All the events (touch, mouse, keyboard, timer) only deal with the change in offset; they have no knowledge of how the value will be used. The main renderer, the scroll() function, knows nothing about how that offset value is computed. Its responsibility is very limited: given the offset, compute the right matrix for every single cover. This is carried out by looping through the stack of covers, from the front-most image to the ones in the far background, as illustrated in this diagram. Note how the loop also serves to set the proper z-index so that the center image becomes the front cover, and so on.

Every image is initially centered on the screen. After that, the right translation in the X axis is computed based on the relative position of the image. For depth perception, there is an additional translation in the Z axis, pushing the cover farther from the screen. Finally, the image is rotated (about the Y axis) with a positive angle for the left side and a negative angle for the right side. The 3-D look comes from the perspective of the main div element, which is the parent of every single image element.
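The per-cover computation just described can be sketched as follows. The constants (cover size, depth, angle) and the function name are illustrative only; the actual values in coverflow.js may well differ.

```javascript
// Sketch of the per-cover transform: given the cover index and the current
// scrolling offset (in pixels), produce a CSS transform string.
var SIZE = 200;    // cover image size, in pixels
var DEPTH = 150;   // Z translation pushing side covers away from the viewer
var ANGLE = 45;    // Y-axis rotation for the side covers, in degrees

function coverTransform(index, offset) {
  // How far this cover sits from the center position, in cover units.
  var delta = index - offset / SIZE;
  if (delta === 0) {
    // The center cover: flat, facing the viewer.
    return 'translateZ(0px)';
  }
  var x = delta * SIZE / 2;
  // Left covers get a positive angle, right covers a negative one.
  var angle = (delta < 0) ? ANGLE : -ANGLE;
  return 'translateX(' + x + 'px) translateZ(-' + DEPTH +
    'px) rotateY(' + angle + 'deg)';
}
```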


The entire scroll() is about 40 lines. If the explanation above is still confusing, it is always fun to step through the code and watch the variables as it goes from one cover to another. The code is not crazily optimized; it is deliberately written to be as clear as possible without giving too much away. Having said that, on the most recent iOS or Android phones the effect should be pretty smooth, with no problem achieving over 30 fps most of the time. You will gain a few extra frames (particularly useful for older devices) if the reflection effect is removed.

Since this demo is intended to be educational, I have left a few exercises for the brave reader. For example, the tweening causes only one cover to move at a time. If you are up for a challenge, see if you can move two covers: one going to the front and one going to the back. In addition, the perspective here is rather a cheat since it applies to the parent DOM element (as you can witness, the transformed images share the same vanishing point). Applying an individual perspective and Y-rotation requires wrapping every img element in its own container. Of course, the 3-D effect does not have to be like Cover Flow; you could always tweak the matrix logic so that it resembles, e.g., MontageJS Popcorn instead.

Once you leave the flatland, there is no turning back!