At the recent San Francisco JavaScript meetup, I gave a talk on Searching and Sorting without Loops as part of its Functional Monthly series. In that talk, I explained several higher-order functions of JavaScript's Array object and how they can replace explicit loops (for or while) in a number of common code patterns.


I have published the slide deck for the talk (as a PDF, 1.4 MB). The talk started with a quick refresher on five Array functions: map, filter, reduce, some, and every. These functions are then used in the subsequent three parts of the talk: sequences, searching, and finally sorting. The first part, building sequences without any explicit loop, sets the stage for the rest of the discussion.
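As a taste of that first part, here is a minimal sketch (not taken from the slide deck; the function name sequence is made up for illustration) of producing the numbers 0 to n−1 without any for loop. Array.apply expands the sparse Array(n) into n undefined entries, and map's index argument does the rest:

// Build the sequence 0, 1, ..., n-1 without an explicit loop.
function sequence(n) {
  return Array.apply(0, Array(n)).map(function (value, index) {
    return index;
  });
}

sequence(5); // [0, 1, 2, 3, 4]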

If you are following this blog corner, you may notice that this is not a new topic. This talk is basically a detailed walkthrough of several posts I have written in the past.

It may seem that you will need a lot of effort to understand all this complicated stuff. Nevertheless, think about the reward. Being able to understand this code fragment (it fits in a single tweet), which constructs the first n numbers in the Fibonacci series, is quite satisfying:

// Array.apply(0, Array(n)) expands the sparse Array(n) into n (dense) undefined
// entries, so reduce visits every index. In the callback, x is the accumulated
// series, y is the (unused) current value, and z is the index.
function fibo(n) {
  return Array.apply(0, Array(n)).
    reduce(function(x, y, z){ return x.concat((z < 2) ? z : x[z-1] + x[z-2]); }, []);
}
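Running fibo(8), for example, yields [0, 1, 1, 2, 3, 5, 8, 13].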

The age of exploration is just beginning.


Many free/open-source projects suffer from a very specific kind of feedback, one which assumes that a certain feature will not be implemented because of a philosophical reason. It is what I call the "flying car" problem.


As an illustration, with a lot of users and very few contributors, PhantomJS was bound to have that problem. In fact, it already does and it will continue to have it. I don’t have a clear idea why it happens, but I suspect that it is caused by the practical impedance mismatch between the fundamental core implementation and its users. As you can imagine, most PhantomJS users are web developers who are not always exposed to the intricacies and the challenges of what happens behind the scenes. This is pretty much like an automatic gearbox: my car mechanic and I might have completely different ideas of how such a gearbox should operate.

In a mailing-list thread over a year ago, I summarized the situation as follows:

Any "why PhantomJS can’t do this" situation should be (at first) treated the same way as
"why my car can’t fly" question. A car designer loves to have it, but the technology might not be there yet.

Another example is this remark on Hacker News (I don’t regularly follow HN, but from time to time certain comments are brought to my attention):

I believe phantom made a fundamental mistake of not being nodejs based in the first place.

This is despite the fact that the subject itself is mentioned in the FAQ and has been discussed on the mailing list (several times). I won’t go into the technical details (that is not the point here), but surely you can’t help but notice a similar pattern here.

An engineering project is always handicapped by certain engineering constraints. Many times the developers simply want to be pragmatic; there is nothing opinionated or philosophical about it. The recent Cambrian explosion of social media forums amplifies the loudest noises and takes heated flame wars to a whole different level. It is easy to fall into the trap of assuming (by means of extrapolation) that every project owner is hotheaded and opinionated. The calm and reasonable ones fade into the background.

Every sports team welcomes useful and constructive feedback. An emotional knee-jerk reaction from a trigger-happy armchair quarterback, however, hardly makes it onto the list of prioritized items. As I have often expressed, we are not in kindergarten anymore; screaming does not make the solution appear faster. While it is an opportunity to polish the art of self-restraint, any kind of flying-car problem unnecessarily drains the energy of every project maintainer. The optimal win-win compromise is for both sides to always practice the principle of Audi alteram partem. At the very least, give it five minutes.

Enough rambling already, I need to go back to my garage to fix that damn hoverboard…


Three years ago, the first version of PhantomJS was announced to the public. It is still a toddler, but hey, it is growing up and gaining traction at an unprecedented rate.

Looking at the number of downloads over the last few years, the trend is obviously "up and to the right", with a total of over 3 million downloads. This is easy to explain. Many web applications and projects use a variety of test frameworks, and most of those rely on PhantomJS to run the tests headlessly. Thus, those crazy download numbers are mostly automated, since PhantomJS is pulled in as a dependency, typically in a CI system.


The community also keeps growing. Our mailing list has grown from just over a thousand members to 1,600. The code repository, github.com/ariya/phantomjs, has doubled its stargazers to more than 9,100 to date. There are countless new projects using PhantomJS, directly or indirectly; it is getting harder and harder to keep track of them all.

Just like any other toddler of its age, PhantomJS is not perfect. It screams and makes a lot of noise. It does things you don’t expect it to do. And yet it keeps walking, running around, playing with friends, and bringing a lot of happiness to those around it.

It gives us an ideal to strive towards. In time, it will help us accomplish wonders.

Here is to another amazing year!



How does your engineering organization build and deliver products to its customers? Similar to the well-known capability maturity model, the maturity level of a build automation system falls into one of the following: chaotic, repeatable, defined, managed, or optimizing.

Let’s take a look at the differences in these levels for a popular project, PhantomJS. At the start of the project, it was tricky to build PhantomJS unless you were a seasoned C++ developer. But over time, more things were automated, and eventually engineers without C++ backgrounds could run the build as well. At some point, a Vagrant-based setup was introduced and building deployable binaries became trivial. The virtualized build workflow is both predictable and consistent.

The first level, chaotic, is familiar to all new hires in a growing organization. You arrive in the new office and on that first day, an IT guy hands you a laptop. Now it is up to you to figure out all the necessary bits and pieces to start becoming productive. Commonly it takes several days to set up your environment – that’s several days before you can get any work done. Of course, it is still a disaster even if the build process itself can be initiated in one step.

This process is painful and eventually someone will step up and write documentation on how to do it. Sometimes it is a grassroots, organic activity in the best interest of all. Effectively, this makes the process much more repeatable; the chance of going down the wrong path is reduced.

Just like any other type of documentation, build setup instructions can get out of sync without people realizing it. A new module may have been added last week, which suddenly implies a new dependency. An important configuration file may have changed, and simply following the outdated wiki then leads to a mysterious failure.

To overcome this problem, consistency must be mandated by a defined process. In many cases, this is as simple as a standardized bootstrap script which resolves and pulls the dependencies automatically (make deps, bundle install, npm install, etc.). Any differences in the initial environment are normalized by that script. You do not need to remember all the yum-based setup when running CentOS and recall the apt-get counterparts when handling an Ubuntu box. At this level, product delivery via continuous integration and deployment becomes possible. No human interaction is necessary to prepare the build machine to start producing the artifacts.
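As a rough illustration of that normalization (a hypothetical Node-based sketch, not an excerpt from any particular project; the dependency list is made up), a bootstrap script might simply detect which package manager is available and install the same dependencies either way:

// Hypothetical bootstrap sketch: hide the yum vs. apt-get difference
// so the same script works on a CentOS box and an Ubuntu box.
var execSync = require('child_process').execSync;

function installDeps(packages) {
  var hasYum = true;
  try {
    execSync('which yum');          // throws if yum is not on the PATH
  } catch (e) {
    hasYum = false;
  }
  var install = hasYum ? 'sudo yum install -y ' : 'sudo apt-get install -y ';
  execSync(install + packages.join(' '), { stdio: 'inherit' });
}

installDeps(['gcc', 'g++', 'make']); // assumed dependency list for illustration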

In this wonderful yet heterogeneous environment, it is unfortunately challenging to keep delivery consistent. Upgrading the OS can trigger a completely different build. A test which fails on a RHEL-based system is not reproducible on the engineer’s MacBook. Fortunately, virtualization (VirtualBox) or containerization (Docker) can be leveraged to ensure a managed build environment. There is no need to manually install, provision, and babysit a virtualized build machine (even on Windows, thanks to PowerShell and especially Chocolatey). Anyone in the world can get a brand-new computer running a fresh operating system, grab the bootstrap script, and kick off the build right away.

There are two more benefits of this managed automation level. First, a multi-platform application is easier to build since the process of creating and provisioning the virtual machine happens automatically. Second, it enables every engineer to check the testing/staging environment in isolation, i.e. without changing their own development setup. In fact, tools like Vagrant are quickly becoming popular precisely because they give engineers and devops teams this kind of power.

The last level is the continuous optimizing state. As the name implies, this step refers to ongoing workflow refinements. For example, this could involve speeding up the overall build process, which is quite important in a large software project. Other kinds of polish concern the environment itself, whether it is creating the virtual machine from an ISO image (Packer) or distributing the build jobs to a cloud-based array of EC2/GCE instances.

My experience with automated build refinement may be described like this:

  • Chaotic: hunt the build dependencies by hand
  • Repeatable: follow the step-by-step instructions from a wiki page
  • Defined: have the environment differences normalized by a bootstrapping script
  • Managed: use Vagrant to create and provision a consistent virtualized environment
  • Optimizing: let Packer prepare the VM from scratch

How is your personal experience through these different maturity levels?

Note: Special thanks to Ann Robson for reviewing and proofreading the draft of this blog post.


In the fifth part of this JavaScript kinetic scrolling tutorial, I will show a demo of the attractive Cover Flow effect (commonly found in some Apple products), built with the deadly combination of kinetic scrolling and CSS 3-D. The entire logic is implemented in ~200 lines of JavaScript code.

The cool factor of Cover Flow is in its fluid animation of the covers in 3-D space. If you haven’t seen it before, you are missing a lot! To get a feel for the effect, use your browser (preferably on a smartphone or a tablet) to visit ariya.github.io/kinetic/5. Swipe left and right to scroll through the cover images of some audiobooks.


My first encounter with this Cover Flow effect was six years ago, when I implemented it in pure software in C/C++. It ran quite well even on lowly hardware (such as the 200 MHz HTC Touch). Of course, with GPU-accelerated CSS animation, there is no need to write a specially optimized renderer for this. In this example, we manipulate the transformation matrix directly using the CSS Transform feature. To follow along, check the main code in coverflow.js.

There are several key elements in this demo. First, we need smooth scrolling with the right acceleration and deceleration, using the exponential decay technique covered in Part 2. Also, stopping the scroll at the right position is necessary (Part 3, see the snap-to-grid code), since we need all the cover images in a perfect arrangement, not halfway between positions or stuck somewhere.

When you look at the code, scrolling via mouse or touch events affects the value of the variable offset. In fact, the code doing that is line-by-line equivalent to what was covered in the previous parts. Under normal circumstances, this offset is always an integer multiple of 200 (the cover image size, in pixels). When the user makes a touch gesture to drag a cover, or when the scrolling is decelerating by itself, the value could be anything. This is where we need to apply tweening to give the right transformation matrix for each cover image.
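To make this concrete, here is a small sketch (not the demo’s exact code; the names tweenState, index, and fraction are made up) of how such an offset can be decomposed into the index of the front-most cover and a tweening fraction, assuming the 200-pixel cover size mentioned above:

var COVER_SIZE = 200; // cover image size in pixels, as in the demo

function tweenState(offset) {
  var index = Math.floor(offset / COVER_SIZE);                // front-most cover
  var fraction = (offset - index * COVER_SIZE) / COVER_SIZE;  // 0..1 progress to the next one
  return { index: index, fraction: fraction };
}

tweenState(430); // { index: 2, fraction: 0.15 }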

Such a decoupling results in a modular implementation. All the events (touch, mouse, keyboard, timer) only deal with the change in offset; they have no knowledge of how the value will be used. The main renderer, the scroll() function, knows nothing about how that offset value is computed. Its responsibility is very limited: given the offset, compute the right matrix for every single cover. This is carried out by looping through the stack of covers, from the front-most image to the ones in the far background, as illustrated in the diagram below. Note how the loop also serves to set the proper z-index so that the center image becomes the front cover, and so on.
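A rough sketch of that stacking order, assuming the covers are stored in an array of DOM elements (restack and the variable names are made up, not taken from coverflow.js):

function restack(covers, centerIndex) {
  covers.forEach(function (cover, i) {
    var distance = Math.abs(i - centerIndex);      // 0 for the front cover
    cover.style.zIndex = covers.length - distance; // closer covers stack on top
  });
}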

Every image is initially centered on the screen. After that, the right translation in the X axis is computed based on the relative position of the image. For depth perception, there is an additional translation in the Z axis, moving the cover farther from the screen. Finally, the image is rotated around the Y axis, with a positive angle for the left side and a negative angle for the right side. The 3-D look comes from the perspective set on the main div element, which is the parent of every single image element.
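The composition of those transforms could be sketched like this. The demo manipulates the matrix directly; this equivalent version uses individual transform functions for readability, and coverTransform, the spread, depth, and angle values are all placeholders rather than the demo’s actual numbers:

function coverTransform(distance) {
  // distance is negative for covers on the left, positive on the right,
  // and zero for the front cover.
  var translateX = distance * 120;                                 // horizontal spread (assumed)
  var translateZ = (distance === 0) ? 0 : -150;                    // push side covers back (assumed)
  var rotateY = (distance === 0) ? 0 : (distance < 0 ? 45 : -45);  // tilt left/right sides
  return 'translateX(' + translateX + 'px) ' +
         'translateZ(' + translateZ + 'px) ' +
         'rotateY(' + rotateY + 'deg)';
}

// e.g. cover.style.webkitTransform = coverTransform(-2);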

(Diagram: stacking)

The entire scroll() function is about 40 lines. If the explanation above is still confusing, it is always fun to step through the code and watch the variables as it goes from one cover to another. The code is not heavily optimized; it is deliberately written to be as clear as possible without revealing too much. Having said that, on the most recent iOS or Android phones the effect should be pretty smooth, with no problem achieving over 30 fps most of the time. You can gain a few extra frames (particularly useful for older devices) by removing the reflection effect.

Since this demo is intended to be educational, I left a few exercises for the brave reader. For example, the tweening moves only one cover at a time. If you are up for a challenge, see if you can move two covers: one going to the front and one going to the back. In addition, the perspective here is rather a cheat since it is applied to the parent DOM element (as you can witness, the transformed images share the same vanishing point). Applying an individual perspective and Y-rotation requires wrapping every img element in its own container. Of course, the 3-D effect does not have to be like Cover Flow; you could always tweak the matrix logic so that it resembles, e.g., MontageJS Popcorn instead.

Once you leave the flatland, there is no turning back!


In the first part, I showed an easy way to test the PageSpeed Module by running it as a proxy, thanks to nginx and ngx_pagespeed. Instead of using a prebuilt binary of PSOL (PageSpeed Optimization Library), this second part demonstrates how to build it from source.

We will continue using the same Vagrant-based setup from the same Git repository, bitbucket.org/ariya/pagespeed-proxy. In fact, the modifications I have applied to the provisioning script setup-proxy.sh serve as a self-documenting, step-by-step account of what needs to be done.

Just like many other Google projects, mod_pagespeed uses Gyp for its build process. Thus, you need to grab the latest depot_tools (pretty straightforward). Also, care must be taken to ensure that Git version 1.8 is available, as older versions do not work with Gyp on mod_pagespeed. This is also the reason why the setup-proxy.sh script builds Git 1.8 from source if the system-wide installed Git is not up to date.

The build process itself does not have any gotchas, as long as you stick to the instructions in the Build PSOL from Source wiki page. Just make sure you explicitly choose the same version for mod_pagespeed and ngx_pagespeed (1.7.30.1 in this example); otherwise, API differences would cause the compilation to fail. Also, since mod_pagespeed relies on a lot of third-party code (check its DEPS file) – everything from JsonCpp, Apache Serf, and libjpeg-turbo to some bits of Chromium – the process is going to take some time, both for fetching those libraries and for building them.

Once everything is completed, the end result is still the same as in the previous part. You will have an HTTP forward proxy running on port 8000 which uses the PageSpeed Optimization Library to compress the web pages passing through it. Verification is simple: run these two commands

curl http://ariya.github.io/js/random/index.html
curl -x localhost:8000 http://ariya.github.io/js/random/index.html

and you should see the difference in the output, as depicted in the screenshot below. Notice the removal of the single HTML comment and the minification of the JavaScript. In this simple example, the compressed HTML is reduced to only 87% of its original size.

(Screenshot: pagespeed_diff)

See you in Part 3!