How does your engineering organization build and deliver products to its customers? Similar to the well-known capability maturity model, the maturity level of a build automation system falls into one of the following: chaotic, repeatable, defined, managed, or optimized.

Let’s take a look at the differences in these levels for a popular project, PhantomJS. At the start of the project, it was tricky to build PhantomJS unless you were a seasoned C++ developer. But over time, more things were automated, and eventually engineers without C++ backgrounds could run the build as well. At some point, a Vagrant-based setup was introduced and building deployable binaries became trivial. The virtualized build workflow is both predictable and consistent.

The first level, chaotic, is familiar to all new hires in a growing organization. You arrive in the new office and on that first day, an IT guy hands you a laptop. Now it is up to you to figure out all the necessary bits and pieces to start becoming productive. Commonly it takes several days to set up your environment – that’s several days before you can get any work done. Of course, it is still a disaster even if the build process itself can be initiated in one step.

This process is painful, and eventually someone will step up and write documentation on how to do it. Sometimes it is a grassroots, organic activity done in everyone’s best interest. Effectively, this makes the process much more repeatable; the chance of going down the wrong path is reduced.

Just like any other type of document, build setup documentation can fall out of sync without people realizing it. A new module may have been added last week, suddenly implying a new dependency. An important configuration file may have changed, and therefore simply following the outdated wiki leads to a mysterious failure.

To overcome this problem, consistency must be mandated by a defined process. In many cases, this is as simple as a standardized bootstrap script which resolves and pulls the dependencies automatically (make deps, bundle install, npm install, etc). Any differences in the initial environment are normalized by that script. You do not need to remember all the yum-based setup when running CentOS and recall the apt-get counterparts when handling an Ubuntu box. At this level, product delivery via continuous integration and deployment becomes possible. No human interaction is necessary to prepare the build machine to start producing the artifacts.
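
As an illustration only (not taken from any particular project), such a bootstrap script might normalize the platform differences roughly like this; the package names and managers below are placeholders:

// bootstrap.js: hypothetical sketch of a platform-normalizing bootstrap step
var fs = require('fs');
var execSync = require('child_process').execSync;

// pick the system package manager that matches the current distribution
var nativeDeps;
if (fs.existsSync('/usr/bin/apt-get')) {
    nativeDeps = 'sudo apt-get install -y build-essential libpng-dev';   // Debian/Ubuntu
} else if (fs.existsSync('/usr/bin/yum')) {
    nativeDeps = 'sudo yum install -y gcc-c++ libpng-devel';             // CentOS/RHEL
}
if (nativeDeps) {
    execSync(nativeDeps, { stdio: 'inherit' });
}

// then resolve the project-level dependencies the same way on every machine
execSync('npm install', { stdio: 'inherit' });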

In this wonderful and heterogeneous environment, it is unfortunately challenging to track delivery consistency. Upgrading the OS can trigger a completely different build. A test which fails on a RHEL-based box is not reproducible on the engineer’s MacBook. We are lucky that virtualization (VirtualBox) or containerization (Docker) can be leveraged to ensure a managed build environment. There is no need to manually install, provision, and babysit a virtualized build machine (even on Windows, thanks to PowerShell and especially Chocolatey). Anyone in the world can get a brand-new computer running a fresh operating system, grab the bootstrap script, and kick off the build right away.

There are two more benefits of this managed automation level. Firstly, a multi-platform application is easier to build since the process of creating and provisioning the virtual machine happens automatically. Secondly, it enables every engineer to check the testing/staging environment in isolation, i.e. without changing their own development setup. In fact, tools like Vagrant are quickly becoming popular because they give engineers and devops such power.

The last level is the continuous optimizing state. As the name implies, this step refers to ongoing workflow refinements. For example, this could involve speeding up the overall build process, which is pretty important in a large software project. Other types of polish concern the environment itself, whether creating the virtual machine from an ISO image (Packer) or distributing the build jobs to a cloud-based array of EC2/GCE boxes.

My experience with automated build refinement may be described like this:

  • Chaotic: hunt the build dependencies by hand
  • Repeatable: follow the step-by-step instructions from a wiki page
  • Defined: have the environment differences normalized by a bootstrapping script
  • Managed: use Vagrant to create and provision a consistent virtualized environment
  • Optimizing: let Packer prepare the VM from scratch

What is your personal experience with these different maturity levels?

Note: Special thanks to Ann Robson for reviewing and proofreading the draft of this blog post.


In the fifth part of this JavaScript kinetic scrolling tutorial, I show a demo of the attractive Cover Flow effect (commonly found in some Apple products), built with the deadly combination of kinetic scrolling and CSS 3-D. The entire logic is implemented in ~200 lines of JavaScript code.

The cool factor of Cover Flow is in its fluid animation of the covers in 3-D space. If you haven’t seen it before, you are missing a lot! To get a feeling for the effect, use your browser (preferably on a smartphone or a tablet) to visit ariya.github.io/kinetic/5. Swipe left and right to scroll through the cover images of some audiobooks.

[Screenshot: the Cover Flow demo]

My first encounter with this Cover Flow effect was 6 years ago, when I implemented it in pure software in C/C++. It ran quite well even on lowly hardware (such as a 200 MHz HTC Touch). Of course, with GPU-accelerated CSS animation, there is no need to write a specially optimized renderer for this. For this example, we manipulate the transformation matrix directly using the CSS Transform feature. To follow along, check the main code in coverflow.js.

There are several key elements in this demo. First, we need smooth scrolling with the right acceleration and deceleration, using the exponential decay technique covered in Part 2. Also, stopping the scrolling at the right position is necessary (Part 3, see the snap-to-grid code), as we need all the cover images in the perfect arrangement, not halfway and not stuck somewhere in between.
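
As a refresher, a minimal sketch of that combination could look like the following. It is illustrative only: offset, amplitude, target, timestamp, and timeConstant are assumed module-level variables, not necessarily the names used in coverflow.js, and the snap size of 200 is the cover image size discussed in the next paragraph.

function autoScroll() {
    // exponential decay: the remaining scrolling distance shrinks over time (Part 2)
    var elapsed = Date.now() - timestamp;
    var delta = -amplitude * Math.exp(-elapsed / timeConstant);
    if (delta > 0.5 || delta < -0.5) {
        scroll(target + delta);
        requestAnimationFrame(autoScroll);
    } else {
        // close enough: snap exactly to the grid position (Part 3)
        scroll(target);
    }
}

function release() {
    // amplitude is assumed to be derived from the tracked velocity (Part 2);
    // the target is then rounded to the nearest multiple of the cover size
    target = Math.round((offset + amplitude) / 200) * 200;
    amplitude = target - offset;
    timestamp = Date.now();
    requestAnimationFrame(autoScroll);
}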

When you look at the code, scrolling via mouse or touch events impacts the value of the variable offset. In fact, the code for doing that is line-by-line equivalent to what has been covered in the previous parts. Under normal circumstances, this offset is always an integer multiple of 200 (the cover image size, in pixels). When the user makes a touch gesture to drag the covers, or when the view is decelerating by itself, the value could be anything. This is where we need to apply tweening to give the right transformation matrix for each cover image.

Such a decoupling results in a modular implementation. All the events (touch, mouse, keyboard, timer) only deal with the change in offset; they have no knowledge of how the value will be used. The main renderer, the scroll() function, knows nothing about how that offset value is computed. Its responsibility is very limited: given the offset, compute the right matrix for every single cover. This is carried out by looping through the stack of covers, from the front-most image to the ones in the far background, as illustrated in the diagram below. Note how the loop also serves to set the proper z-index so that the center image becomes the front cover, and so on.

Every image is initially centered on the screen. After that, the right translation in the X axis is computed based on the relative position of the image. For depth perception, there is an additional translation in the Z axis, pushing the cover farther from the screen. Finally, the image is rotated around the Y axis, with a positive angle for the left side and a negative angle for the right side. The 3-D look comes from the perspective set on the main div element, which is the parent of every single image element.
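
Putting it together, the per-cover transformation can be sketched roughly as follows. The spacing, depth, and angle values are illustrative, and the covers array is an assumed name; the actual numbers and structure in coverflow.js may differ. It also assumes each cover image is absolutely positioned relative to the center of the container.

function scroll(x) {
    var i, delta, angle, style;
    offset = x;
    for (i = 0; i < covers.length; i++) {
        // how far this cover is from the center position, in cover units
        delta = i - offset / 200;
        // up to +45 degrees on the left side, -45 degrees on the right side,
        // tweened smoothly for the covers near the center
        angle = Math.max(-45, Math.min(45, -delta * 45));
        style = covers[i].style;
        // center the image, push it sideways (X) and away from the viewer (Z),
        // then rotate it around the Y axis
        style.transform = 'translate(-50%, -50%)' +
            ' translateX(' + (delta * 120) + 'px)' +
            ' translateZ(' + (-Math.abs(delta) * 150) + 'px)' +
            ' rotateY(' + angle + 'deg)';
        // the cover closest to the center gets the highest stacking order
        style.zIndex = covers.length - Math.round(Math.abs(delta));
    }
}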

[Diagram: the stacking order of the covers]

The entire scroll() is about 40 lines. If the explanation above is still confusing, it is always fun to step through the code and watch the variables as it moves from one cover to another. The code is not crazily optimized; it is deliberately written to be as clear as possible without revealing too much. Having said that, with the most recent iOS or Android phones, the effect should be pretty smooth, and there is no problem achieving over 30 fps most of the time. You will gain a few extra frames (particularly useful for older devices) if the reflection effect is removed.

Since this demo is intended to be educational, I have left a few exercises for brave readers. For example, the tweening causes only one cover to move at a time. If you are up for a challenge, see if you can move two covers, one going to the front and one going to the back. In addition, the perspective here is rather a cheat since it applies to the parent DOM element (as you can see, the transformed images share the same vanishing point). Applying an individual perspective and Y-rotation requires wrapping every img element in its own container. Of course, the 3-D effect does not have to be like Cover Flow; you could always tweak the matrix logic so that it resembles, e.g., MontageJS Popcorn instead.

Once you leave the flatland, there is no turning back!


In the first part, I showed you an easy way to test the PageSpeed Module by running it as a proxy, thanks to nginx and ngx_pagespeed. Instead of using a prebuilt binary of PSOL (PageSpeed Optimization Library), this second part demonstrates how to build it from source.

We will continue using the same Vagrant-based setup from the same Git repository, bitbucket.org/ariya/pagespeed-proxy. In fact, the modifications I have applied to the provisioning script setup-proxy.sh serve as self-documenting, step-by-step notes on what needs to be done.

Just like many other Google projects, mod_pagespeed uses Gyp for its build process. Thus, you need to grab the latest depot_tools (pretty straightforward). Also, care must be taken to ensure the availability of Git version 1.8, as older versions do not work with Gyp on mod_pagespeed. This is also the reason why the script setup-proxy.sh builds Git 1.8 from source if the system-wide installed Git is not up to date.

The build process itself does not have any gotchas, as long as you stick to the instructions given in the Build PSOL from Source wiki page. Just make sure you explicitly choose the same versions for mod_pagespeed and ngx_pagespeed (1.7.30.1 in this example); otherwise, any API differences would cause the compilation to fail. Also, since mod_pagespeed relies on a lot of third-party code (check its DEPS file), everything from JsonCpp and Apache Serf to libjpeg-turbo and many others, including some bits from Chromium, the process is going to take some time, both for fetching those libraries and for building them.

Once everything is completed, the end result is still the same as in the previous part. You will have an HTTP forward proxy running on port 8000 which uses the PageSpeed optimization library to compress the web pages passing through it. Verification is simple: run these two commands

curl http://ariya.github.io/js/random/index.html
curl -x localhost:8000 http://ariya.github.io/js/random/index.html

and you should see the difference in the output, as depicted in the screenshot. Notice the removal of the single HTML comment and the minification of the JavaScript. In this simple example, the compressed HTML is reduced to only 87% of its original size.

[Screenshot: diff between the original and the optimized HTML output]

See you in Part 3!

The fourth part of this JavaScript kinetic scrolling series shows an implementation of horizontal swipe to browse a gallery of photos. In addition, a simple parallax effect is demonstrated as well.

Scrolling sideways is a common pattern in many applications. The ubiquitous example is the home screen of many mobile tablets and phones, where the user swipes left and right to scroll through the choices of application icons. A typical photo application, ever since the very first iPhone, allows the user to view the next and previous picture with a quick swipe. Tabbed views in many Android applications also offer the possibility of flicking to switch tabs (instead of tapping on the tab directly).

A clever tweak to a horizontal swipe interface is the extra parallax effect. It is easier to explain this if you look at the Android home screen. As you swipe left or right, you can watch how the icons move a different distance compared to the wallpaper. This gives the impression that the icons are floating in the air, as opposed to sticking right on top of the wallpaper.

Another popular application leveraging the parallax effect is the native weather application (iOS and Android) from Yahoo!, as illustrated in the following screenshots. In the right screenshot, I am dragging the screen halfway to the right. However, you can see that the black-and-white building does not disappear from the screen. If you look carefully at the left screenshot, that building is in the right half of the screen. Without any parallax effect, if I push the background halfway, the right half (and thus also the building) should not be visible anymore.

[Screenshot: the parallax effect in the Yahoo! Weather application]

What does it take to implement this horizontal scrolling using vanilla JavaScript? As it turns out, we already have the foundation in place. Dragging to the left or right is not foreign; this basic handling was covered in Part 1 of this series. The flick gesture was already covered in Part 2. The fact that the screen needs to snap to the left or the right is simply the snap-to-grid feature, already explained in detail in Part 3. What we need to change is the direction of the gesture detection, since now we have to move horizontally.
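
The change itself is small. A rough sketch of the direction switch follows; the names (tap, drag, xpos, reference, offset) are illustrative and not necessarily the ones used in scroll.js:

// helper: the horizontal coordinate for both touch and mouse events
function xpos(e) {
    return (e.targetTouches && e.targetTouches.length >= 1) ?
        e.targetTouches[0].clientX : e.clientX;
}

function tap(e) {
    pressed = true;
    reference = xpos(e);   // track the horizontal position instead of the vertical one
    e.preventDefault();
}

function drag(e) {
    var x, delta;
    if (pressed) {
        x = xpos(e);
        delta = reference - x;
        if (delta > 2 || delta < -2) {
            reference = x;
            scroll(offset + delta);
        }
    }
    e.preventDefault();
}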

To get the feeling of the implementation, open your browser and navigate to ariya.github.io/kinetic/4. Swipe left and right to scroll through the (wonderful) photos.

The core of the implementation is the three-card system, with one card each for the left, center, and right photo. Under normal circumstances, the center card occupies the entire view and both the left and right cards are out of view. If the user swipes to the right, we pull the left card into the view (the center card is behind the left one). In addition, we also move the center card by half the distance, which produces the parallax effect.

[Diagram: the three-card arrangement for the parallax view]

The same scenario happens (in the reverse direction) if the user swipes to the left instead. Once the flick gesture is fully completed, we adjust all three cards so that they represent the final arrangement. All this logic takes only about 180 lines of JavaScript code (see scroll.js for details).
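
A minimal sketch of the parallax positioning could look like this; cardWidth, leftCard, centerCard, and rightCard are illustrative names, and the real scroll.js may organize things differently:

function scroll(x) {
    // x is 0 at rest; it goes negative while swiping right (revealing the left photo)
    // and positive while swiping left (revealing the right photo)
    offset = Math.max(-cardWidth, Math.min(cardWidth, x));

    // the incoming cards slide in at full speed (they sit above the center card)...
    leftCard.style.transform  = 'translateX(' + (-cardWidth - offset) + 'px)';
    rightCard.style.transform = 'translateX(' + (cardWidth - offset) + 'px)';

    // ...while the center card only moves half the distance, creating the parallax
    centerCard.style.transform = 'translateX(' + (-offset / 2) + 'px)';
}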

As an exercise for the reader, try to adjust the parallax implementation so that it faithfully resembles what the Yahoo! Weather application does. With that application, you will notice that the new image (from the left or right) which gets pulled in as you swipe the center image has a peculiar parallax movement as well. To implement something like this, you can’t use the img element directly; you need to wrap the image in another container, e.g. a div element, and adjust the offset of the image relative to its container. That seems challenging, but it is not too complicated to implement.

I hope this parallax effect is fascinating. See you in Part 5!


Many JavaScript projects use Mocha to run their unit tests. By combining Mocha with Istanbul and Karma, it is easy to track the code coverage of the application code when running the tests.

While Mocha has built-in support for running the tests from the command line via Node.js, in some cases you still want to verify your code with real web browsers. The easiest way to do that is by using Karma to automatically launch the browsers, control them, and execute the tests. The repository github.com/ariya/coverage-mocha-istanbul-karma shows a simple example of how to do this. To experiment with it, simply clone the repository and then run

npm install
npm test

You should get something like:

Running "karma:test" (karma) task
INFO [karma]: Karma v0.10.8 server started at http://localhost:9876/
INFO [launcher]: Starting browser PhantomJS
INFO [launcher]: Starting browser Firefox
INFO [PhantomJS 1.9.2 (Linux)]: Connected on socket ASrZI0wwAgPFSaxNmDHI
INFO [Firefox 25.0.0 (Ubuntu)]: Connected on socket sBSK-fR9V5s-8pWWmDHJ
PhantomJS 1.9.2 (Linux): Executed 2 of 2 SUCCESS (0.327 secs / 0.023 secs)
Firefox 26.0.0 (Ubuntu): Executed 2 of 2 SUCCESS (0.446 secs / 0.002 secs)
TOTAL: 4 SUCCESS
Done, without errors.

The configuration file for Karma, shown below, specifies that the tests should run on PhantomJS and Firefox. Of course, it is also easy to add other browsers.

module.exports = function(config) {
  config.set({
    basePath: '',
    autoWatch: true,
    frameworks: ['mocha'],
    files: [
      'sqrt.js',
      'test/*.js'
    ],
    browsers: ['PhantomJS', 'Firefox'],

    reporters: ['progress', 'coverage'],
    preprocessors: { '*.js': ['coverage'] },

    singleRun: true
  });
};

npm test will run the tests by executing the right Karma task as defined inside Gruntfile.js. Note that Grunt is not mandatory here; running Karma directly is as easy as:

node_modules/.bin/karma start karma.conf.js
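
For reference, the Grunt side of the setup amounts to little more than pointing the grunt-karma plugin at that same configuration file. A minimal sketch is shown below; the actual Gruntfile.js in the repository may differ:

module.exports = function(grunt) {
    grunt.loadNpmTasks('grunt-karma');

    grunt.initConfig({
        karma: {
            test: {
                configFile: 'karma.conf.js'
            }
        }
    });

    // let "grunt test" (and thus "npm test") run the karma:test target
    grunt.registerTask('test', ['karma:test']);
};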

Just like in the previous coverage examples (using QUnit and Jasmine), the code we want to test is the following simple implementation of the square root function:

var My = {
    sqrt: function(x) {
        if (x < 0) throw new Error("sqrt can't work on negative number");
        return Math.exp(Math.log(x)/2);
    }
};

The tests for that function are in test/test.sqrt.js. In fact, it is also possible to run the tests manually in a web browser; simply open the test/index.html file and you will see the report:

[Screenshot: the Mocha test report in the browser]
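
For a rough idea of what those tests look like, here is a minimal sketch of a Mocha spec for the square root function, assuming an expect-style assertion library such as Chai is loaded; the actual test/test.sqrt.js may differ:

// hypothetical spec, for illustration only
describe('sqrt', function() {
    it('computes the square root of a positive number', function() {
        expect(My.sqrt(4)).to.be.closeTo(2, 1e-12);
    });

    it('throws an error for a negative number', function() {
        expect(function() { My.sqrt(-1); }).to.throw();
    });
});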

As an exercise, try to remove one test and see what happens when you run the tests again. In fact, open the coverage report (look under the coverage directory) and you will find the glaring warning from Istanbul, indicating the worrying level of coverage confidence.

[Screenshot: the Istanbul coverage report]

Like any good code coverage tool, Istanbul reports missing branch coverage. This is very important, as looking only at statement coverage may hide some latent bugs.

Now, if you make it this far, you are very close to implementing full enforcement of a coverage threshold.


PageSpeed Module from Google, available for Apache and nginx, is a quick solution to improve web application network performance. As a bonus, if we configure nginx as a proxy server, then it can also serve as a web acceleration solution.

There is a growing number of PageSpeed optimization filters, anything from straightforward CSS minification to sophisticated image recompression. If you are already using Apache or nginx, the PageSpeed module is relatively easy to use.

A different use case for PageSpeed is as a proxy server which you use for your daily browsing, e.g. what @jedisct1 has implemented via the Apache proxy module. In this blog post, I show an alternative setup using nginx instead, mainly by following the steps described in the ngx_pagespeed project.

Rather than doing things manually, you can just use the automated setup based on Vagrant with VirtualBox. Simply clone the pagespeed-proxy Git repository; then you only need to run:

vagrant up

For this experiment, the virtual machine is based on Ubuntu 12.04 LTS (Precise Pangolin) 64-bit. If you tweak the configuration inside the Vagrantfile to use a CentOS-based Vagrant box, e.g. from the Opscode Bento collection, it should also work just fine. Should you rather trust your own virtual machine, you can create a Vagrant box from scratch automatically; check out my previous blog post on Using Packer to Create Vagrant Boxes.

The provisioning script of the box takes care of downloading the latest nginx and compiling it with Google’s PageSpeed module. When it is ready, the script will also launch nginx as a forward proxy on port 8000 (this port is also forwarded, and hence the proxy is available to the host machine as well). As a starting point, the proxy runs two optimization filters: HTML comment removal and JavaScript minification. This can be verified by looking at the provided configuration file nginx.conf:

http {
    server {
        listen 8000;
        server_name localhost;
        location / {
            resolver 8.8.8.8;
            proxy_pass http://$http_host$uri$is_args$args;
        }
        pagespeed on;
        pagespeed RewriteLevel PassThrough;
        pagespeed EnableFilters remove_comments,rewrite_javascript;
        pagespeed FileCachePath /home/vagrant/cache;
    }
}

To verify the operation of the optimizing proxy, run these two commands (on the host machine) and observe the different outcomes:

curl http://ariya.github.io/js/random/index.html
curl -x localhost:8000 http://ariya.github.io/js/random/index.html

The second one pipes the original HTML content through the enabled optimization filters, comment removal and script rewriting, and thereby results in a smaller page. This is definitely just an illustration; feel free to play with a number of other PageSpeed filters and observe the impact.

Obviously, it is also possible to set up your web browser to use the proxy at localhost:8000. Viewing the same URL given above will then result in something like the following screenshot. Again, compare it with the case where you view the page without using the proxy.

[Screenshot: the page as served through the optimizing proxy]

For a more practical usage of the proxy, stay tuned!