The fifth part of this JavaScript kinetic scrolling tutorial demonstrates the attractive Cover Flow effect (commonly found in some Apple products) using the deadly combination of kinetic scrolling and CSS 3-D. The entire logic is implemented in ~200 lines of JavaScript code.

The cool factor of Cover Flow is in its fluid animation of the covers in the 3-D space. If you haven’t seen it before, you are missing out! To get a feeling for the effect, use your browser (preferably on a smartphone or a tablet) to visit ariya.github.io/kinetic/5. Swipe left and right to scroll through the cover images of some audiobooks.


My first encounter with this Cover Flow effect was 6 years ago, when I implemented it in pure software in C/C++. It ran quite well even on lowly hardware (such as a 200 MHz HTC Touch). Of course, with GPU-accelerated CSS animation, there is no need to write a specially optimized renderer for this. For this example, we manipulate the transformation matrix directly using the CSS Transform feature. To follow along, check the main code in coverflow.js.

There are several key elements in this demo. First, we need smooth scrolling with the right acceleration and deceleration, using the exponential decay technique covered in Part 2. Stopping the scrolling at the right position is also necessary (see the snap-to-grid code in Part 3), as we need all the cover images in a perfect arrangement, not halfway and not stuck somewhere.

When you look at the code, scrolling via mouse or touch events affects the value of the variable offset. In fact, the code for doing that is line-by-line equivalent to what has been covered in the previous parts. Under normal circumstances, this offset is always an integer multiple of 200 (the cover image size, in pixels). When the user makes a touch gesture to drag a cover, or while the scrolling is decelerating by itself, the value could be anything. This is where we need to apply tweening to give the right transformation matrix to each cover image.

Such a decoupling results in a modular implementation. All the events (touch, mouse, keyboard, timer) only deal with the change in offset; they have no knowledge of how the value will be used. The main renderer, the scroll() function, knows nothing about how that offset value is computed. Its responsibility is very limited: given the offset, compute the right matrix for every single cover. This is carried out by looping through the stack of covers, from the front-most image to the ones in the far background, as illustrated in this diagram. Note how the loop also serves to set the proper z-index so that the center image becomes the front cover, and so on.

Every image is initially centered on the screen. After that, the right translation in the X axis is computed based on the relative position of the image. For depth perception, there is an additional translation in the Z axis, moving the cover farther from the screen. Finally, the image is rotated around the Y axis, with a positive angle for the left side and a negative angle for the right side. The 3-D look comes from the perspective set on the main div element, which is the parent of every single image element.
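The per-cover computation just described can be sketched as follows; note that the spread distance, depth, and angle constants here are assumptions for illustration, not the demo's actual values:

```javascript
// A sketch of the per-cover transform (constants are assumptions, not the
// demo's actual values). index is the cover position in the list, offset is
// the scrolling offset, size is the cover size in pixels (200 in the demo).
function transformFor(index, offset, size) {
    var delta = index - offset / size;         // distance from the center, in covers
    var side = Math.min(Math.abs(delta), 1);   // 0 at the center, 1 for side covers
    var x = delta * size * 0.6;                // horizontal spread
    var z = -side * 150;                       // push side covers back for depth
    var angle = (delta < 0 ? 60 : -60) * side; // positive angle left, negative right
    return 'translateX(' + x + 'px) translateZ(' + z + 'px) rotateY(' + angle + 'deg)';
}
```

The scroll() function in coverflow.js would apply such a transform string to each image's style while also assigning the z-index from front to back.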


The entire scroll() is about 40 lines. If the explanation above is still confusing, it is always fun to step through the code and watch the variables as the loop goes from one cover to another. The code is not aggressively optimized; it is deliberately written to be as clear as possible. Having said that, on the most recent iOS or Android phones the effect should be pretty smooth, with no problem achieving over 30 fps most of the time. You can gain a few extra frames (particularly useful for older devices) by removing the reflection effect.

Since this demo is intended to be educational, I have left a few exercises for brave readers. For example, the tweening moves only one cover at a time. If you are up for a challenge, see if you can move two covers: one going to the front and one going to the back. In addition, the perspective here is rather a cheat, since it is applied to the parent DOM element (as you can witness, all the transformed images share the same vanishing point). Applying an individual perspective and Y-rotation requires wrapping every img element in its own container. Of course, the 3-D effect does not have to be like Cover Flow; you could always tweak the matrix logic so that it resembles, e.g., MontageJS Popcorn instead.

Once you leave the flatland, there is no turning back!


In the first part, I showed you an easy way to test PageSpeed Module by running it as a proxy, thanks to nginx and ngx_pagespeed. Instead of using a prebuilt binary of PSOL (PageSpeed Optimization Library), this second part demonstrates how to build it from source.

We will continue using the same Vagrant-based setup from the same Git repository, bitbucket.org/ariya/pagespeed-proxy. In fact, the modifications I have applied to the provisioning script setup-proxy.sh serve as self-documented, step-by-step notes of what needs to be done.

Just like many other Google projects, mod_pagespeed uses Gyp for its build process. Thus, there is a need to grab the latest depot_tools (pretty straightforward). Also, care must be taken to ensure the availability of Git version 1.8, as earlier versions do not work with Gyp on mod_pagespeed. This is also the reason the script setup-proxy.sh builds Git 1.8 from source if the system-wide installed Git is not up-to-date.
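The kind of version check involved can be sketched like this (the actual setup-proxy.sh logic may differ; this is only an illustration of version-aware comparison):

```shell
# A sketch of the version check setup-proxy.sh needs (the exact script
# logic is an assumption): decide whether an installed Git is at least 1.8.
version_at_least() {
    # true when $1 >= $2, using version-aware sort
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

if version_at_least "1.7.10" "1.8"; then
    echo "Git is recent enough"
else
    echo "Git is too old, building 1.8 from source..."
fi
```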

The build process itself does not have any gotchas, as long as you stick to the instructions given in the Build PSOL from Source wiki page. Just make sure you explicitly choose the same versions for mod_pagespeed and ngx_pagespeed ( in this example), otherwise any API differences would cause the compilation to fail. Also, since mod_pagespeed relies on a lot of third-party code (check its DEPS file), everything from JsonCpp, Apache Serf, and libjpeg-turbo to some bits of Chromium, the process is going to take some time, both for fetching those libraries and for building them.

Once everything is completed, the end result is still the same as in the previous part. You will have an HTTP forward proxy running on port 8000 that uses the PageSpeed Optimization Library to compress the web pages passing through it. Verification is simple; run these two commands

curl http://ariya.github.io/js/random/index.html
curl -x localhost:8000 http://ariya.github.io/js/random/index.html

and you should see the difference in the output, as depicted in the screenshot. Notice the removal of the single HTML comment and the minification of the JavaScript. In this simple example, the compressed HTML is reduced to only 87% of its original size.


See you in Part 3!


The fourth part of this JavaScript kinetic scrolling series shows an implementation of horizontal swipe to browse a gallery of photos. In addition, a simple parallax effect is demonstrated as well.

Scrolling sideways is a common pattern in many applications. The ubiquitous example is the home screen of many mobile tablets and phones, where the user swipes left and right to scroll through pages of application icons. A typical photo application, ever since the very first iPhone, allows the user to view the next and previous picture with a quick swipe. Tabbed views in many Android applications also offer flicking to switch tabs (instead of tapping on the tab directly).

A clever tweak to a horizontal swipe interface is an extra parallax effect. It is easier to explain this if you look at the Android home screen. As you swipe left and right, watch how the icons move a different distance than the wallpaper does. This gives the impression that the icons are floating in the air, as opposed to sticking right on top of the wallpaper.

Another popular application leveraging the parallax effect is the native weather application (iOS and Android) from Yahoo!, as illustrated in the following diagram. In the right screenshot, I am dragging the screen halfway to the right. However, you can see that the black-and-white building does not disappear from the screen. If you look carefully at the left screenshot, that building is in the right half of the screen. Without any parallax effect, if I push the background halfway, the right half (and thus also the building) should not be visible anymore.


What does it take to implement this horizontal scrolling using vanilla JavaScript? Apparently, we already have the foundation in place. Dragging to the left and right is not foreign; this basic handling is in Part 1 of this series. The flick gesture is covered in Part 2. The fact that the screen needs to snap to the left or the right is simply the snap-to-grid feature, already explained in detail in Part 3. What we need to change is the direction of the gesture detection, as now we have to move horizontally.
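Reading the horizontal coordinate from a pointer event can be sketched like this (the helper name is hypothetical; it mirrors the vertical-coordinate handling from the earlier parts):

```javascript
// A sketch of horizontal gesture tracking (helper name is hypothetical):
// instead of the Y coordinate used for vertical scrolling, read the X
// coordinate from either a touch or a mouse event.
function xpos(e) {
    // touch event: take the first touch point
    if (e.targetTouches && e.targetTouches.length >= 1) {
        return e.targetTouches[0].clientX;
    }
    // mouse event
    return e.clientX;
}
```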

To get the feeling of the implementation, open your browser and navigate to ariya.github.io/kinetic/4. Swipe left and right to scroll through the (wonderful) photos.

The core of the implementation is a three-card system representing the left, center, and right photos. Under normal circumstances, the center card occupies the entire view and both the left and right cards are out of view. If the user swipes to the right, we pull the left card into the view (the center card is behind the left one). In addition, we move the center card at half the distance for the parallax effect.


The same scenario happens (in the reverse direction) if the user swipes to the left instead. Once the flick gesture completes, we adjust all three cards so that they represent the final arrangement. All this logic takes only about 180 lines of JavaScript code (see scroll.js for details).
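The card positioning described above can be sketched as follows (the function and variable names are hypothetical, not the actual scroll.js code):

```javascript
// A sketch of the three-card parallax positioning (names are hypothetical).
// delta is how far the user has dragged, in pixels (positive when swiping
// to the right); width is the card width in pixels.
function positionCards(delta, width, left, center, right) {
    // the incoming cards move at full speed with the drag...
    left.style.transform = 'translateX(' + (-width + delta) + 'px)';
    right.style.transform = 'translateX(' + (width + delta) + 'px)';
    // ...while the center card moves at half the distance, creating
    // the parallax impression described above
    center.style.transform = 'translateX(' + (delta / 2) + 'px)';
}
```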

As an exercise for the reader, try to adjust the parallax implementation so that it faithfully resembles what the Yahoo! Weather application does. With that application, you will notice that the new image (from the left or right) that gets pulled in as you swipe the center image has a peculiar parallax-ness as well. To implement something like this, you can’t use the img element directly; you need to wrap the image in another container, e.g. a div element, and adjust the offset of the image relative to its container. That seems challenging, but it is not too complicated to implement.

I hope this parallax effect is fascinating. See you in Part 5!


Many JavaScript projects use Mocha to run their unit tests. Combining Mocha with Istanbul and Karma, it is easy to track the code coverage of the application code when running the tests.

While Mocha has built-in support for running the tests from the command line via Node.js, in some cases you still want to verify your code with real web browsers. The easiest way to do that is by using Karma to automatically launch the browsers, control them, and execute the tests. The repository github.com/ariya/coverage-mocha-istanbul-karma shows a simple example of how to do this. To experiment with it, simply clone the repository and then run

npm install
npm test

You should get something like:

Running "karma:test" (karma) task
INFO [karma]: Karma v0.10.8 server started at http://localhost:9876/
INFO [launcher]: Starting browser PhantomJS
INFO [launcher]: Starting browser Firefox
INFO [PhantomJS 1.9.2 (Linux)]: Connected on socket ASrZI0wwAgPFSaxNmDHI
INFO [Firefox 25.0.0 (Ubuntu)]: Connected on socket sBSK-fR9V5s-8pWWmDHJ
PhantomJS 1.9.2 (Linux): Executed 2 of 2 SUCCESS (0.327 secs / 0.023 secs)
Firefox 26.0.0 (Ubuntu): Executed 2 of 2 SUCCESS (0.446 secs / 0.002 secs)
Done, without errors.

The configuration file for Karma, shown below, specifies that the tests should run on PhantomJS and Firefox. Of course, it is also easy to add other browsers.

module.exports = function(config) {
    config.set({
        basePath: '',
        autoWatch: true,
        frameworks: ['mocha'],
        files: [
            // list the files to load: the code under test and the test files
        ],
        browsers: ['PhantomJS', 'Firefox'],
        reporters: ['progress', 'coverage'],
        preprocessors: { '*.js': ['coverage'] },
        singleRun: true
    });
};

npm test runs the tests by executing the right Karma task, as specified inside Gruntfile.js. Note that Grunt is not mandatory here; running Karma directly is as easy as:

node_modules/.bin/karma start karma.conf.js

Just like in the previous coverage examples (using QUnit and Jasmine), the code we want to test is the following simple implementation of the square root function:

var My = {
    sqrt: function(x) {
        if (x < 0) throw new Error("sqrt can't work on negative number");
        return Math.exp(Math.log(x) / 2);
    }
};
The tests for that function are in test/test.sqrt.js. In fact, it is also possible to run the tests manually in a web browser: simply open the test/index.html file and you will see the report:


As an exercise, try to remove one test and see what happens when you run the tests again. Then open the coverage report (look under the coverage directory) and you will find a glaring warning from Istanbul, indicating the worrying drop in coverage confidence.


Just like a good code coverage tool, Istanbul reports missing branch coverage. This is very important, as looking only at statement coverage may hide some latent bugs.

Now, if you make it this far, you are very close to implementing full enforcement of a coverage threshold.


PageSpeed Module from Google, available for Apache and nginx, is a quick solution to improve web application network performance. As a bonus, if we configure nginx as a proxy server, then it can also serve as a web acceleration solution.

There is a growing number of PageSpeed optimization filters, anything from straightforward CSS minification to sophisticated image recompression. If you are already using Apache or nginx, the PageSpeed module is relatively easy to use.

A different use case for PageSpeed is as a proxy server for your daily browsing, e.g. what @jedisct1 has implemented via the Apache proxy module. In this blog post, I show an alternative setup using nginx instead, mainly by following the steps described in the ngx_pagespeed project.

Rather than doing things manually, you can use an automated setup based on Vagrant with VirtualBox. Simply clone the pagespeed-proxy Git repository and then run:

vagrant up

For this experiment, the virtual machine is based on Ubuntu 12.04 LTS (Precise Pangolin) 64-bit. If you tweak the configuration inside the Vagrantfile to use a CentOS-based Vagrant box, e.g. from the Opscode Bento collection, it should also work just fine. Should you rather trust your own virtual machine, you can create a Vagrant box from scratch automatically; check out my previous blog post on Using Packer to Create Vagrant Boxes.

The provisioning script of the box takes care of downloading the latest nginx and compiling it with Google’s PageSpeed module. When it is ready, the script also launches nginx as a forward proxy on port 8000 (this port is forwarded, so the proxy is available to the host machine as well). The proxy runs two optimization filters (as a starting point), HTML comment removal and JavaScript minification, as can be seen in the provided configuration file nginx.conf:

http {
    server {
        listen 8000;
        server_name localhost;
        location / {
            proxy_pass http://$http_host$uri$is_args$args;
        }
        pagespeed on;
        pagespeed RewriteLevel PassThrough;
        pagespeed EnableFilters remove_comments,rewrite_javascript;
        pagespeed FileCachePath /home/vagrant/cache;
    }
}

To verify the operation of the optimizing proxy, run these two commands (on the host machine) and observe the different outcomes:

curl http://ariya.github.io/js/random/index.html
curl -x localhost:8000 http://ariya.github.io/js/random/index.html

The second command pipes the original HTML content through the enabled optimization filters, comment removal and script rewrite, and thereby results in a smaller page. This is definitely just an illustration; feel free to play with a number of other PageSpeed filters and observe their impact.

Obviously, it is also possible to set up your web browser to use the proxy at localhost:8000. Looking at the same URL given above will result in something like the following screenshot. Again, compare it with the case where you view the page without the proxy.


For a more practical usage of the proxy, stay tuned!


In the third part of this JavaScript kinetic scrolling series, the so-called snap-to-grid feature is demonstrated. The implementation is rather straightforward: it is a matter of figuring out where the scrolling should stop and then adjusting that target position to the intended location.

The second part of the series discussed the scrolling deceleration technique based on exponential decay. With a flick list, this means that the list slows down and stops at some point, depending on the launch velocity at the moment the user releases the grip. It turns out that this also becomes the key to implementing snap to grid.

Note that the exponential decay curve lets us know two things: (1) when the scrolling will stop and (2) where it will stop. As soon as the launch velocity is known, these two values can be computed analytically. This is very powerful, in particular because the final stop position can be adjusted right away. For a quick live demo, go to ariya.github.io/kinetic/3 (preferably using your smartphone browser), flick the list, and see how it always stops with a color entry aligned to the overlaid slot.

The following diagram illustrates the concept. Without any snap to grid, the curve of the scrolling position as a function of time is depicted by the gray line. Let us assume that the final position will be at 110 px. If we know that the snapping grid is every 25 px, then that final position needs to be adjusted either to 100 px or 125 px. Since the former is the closest, that will be the ultimate final position, and the deceleration will follow the curve colored in blue.


The relevant code for this feature can be seen in the repository github.com/ariya/kinetic. Really, it is not much different from the code already shown in the second part. The new addition is the following fragment:

target = Math.round(target / snap) * snap;

The variable snap here stores the row height of each item (it is computed once during initialization). If you have already familiarized yourself with the exponential decay implementation, you know that target contains the final scrolling position when the deceleration finishes. Without the snapping feature, we don’t tweak this value. When we want the final scrolling position to be magnetically anchored to the nearest snapping location, we simply change this value.
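Putting the decay prediction and the snap adjustment together might look like the sketch below; the 0.8 amplitude factor is an assumption for illustration (the actual code is in github.com/ariya/kinetic):

```javascript
// A sketch of combining exponential decay with snap to grid. The 0.8
// amplitude factor is an assumption; position is the current scrolling
// position, velocity the launch velocity, snap the grid size in pixels.
function computeTarget(position, velocity, snap) {
    var amplitude = 0.8 * velocity;          // launch velocity sets the travel distance
    var target = position + amplitude;       // where the decay would naturally stop
    return Math.round(target / snap) * snap; // anchor to the nearest grid line
}
```

With the example numbers from the diagram, a natural stop at 110 px and a 25 px grid yield an adjusted target of 100 px.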

Also, observe that there is no requirement for snap to be a constant. A list with non-uniform row heights can still enjoy the snap-to-grid feature. All we need is an array of possible values for the final position; every deceleration target must then be adjusted to the closest value. If you have studied physics, this is quite similar to the idea of discrete packets of electromagnetic energy in quantum mechanics.
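For such a non-uniform list, the adjustment becomes a search for the nearest allowed stop, as in this sketch (the function name is hypothetical):

```javascript
// A sketch of snapping to non-uniform positions (function name is
// hypothetical): pick the entry of stops closest to the predicted target.
function closestStop(target, stops) {
    var best = stops[0];
    for (var i = 1; i < stops.length; ++i) {
        if (Math.abs(stops[i] - target) < Math.abs(best - target)) {
            best = stops[i];
        }
    }
    return best;
}
```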

In some libraries which implement kinetic scrolling, snap to grid is implemented by continuously tracking the scrolling position. When it is about to stop, the position is adjusted to move to the closest possible position based on the grid distance. Not only does this result in more complicated code, such an approach also has a major drawback: in some cases, the list can overshoot before it bounces back, which does not look like a smooth scrolling effect at all. With the analytical technique described here, this awkward effect never happens. As always, a proper solution often comes with the cleanest implementation!

What will be shown in the fourth part? Let’s keep it as a surprise!