The process of managing a virtual machine is heavily simplified by Vagrant. However, there is still a manual or semi-automatic process involved in creating the base box itself. Many tools are designed to solve this problem; the most recent one is Packer from @mitchellh, the very same man behind Vagrant.

Packer allows you to create a personal Vagrant base box easily. This means that you no longer need to rely on some random ready-made box from the Internet. With Packer, you know what is being installed into your base box, and hence the box can be more trustworthy. While Packer supports Vagrant, it can also be used to prepare a system for Amazon EC2, VMware, and many others.

Using Packer to create CentOS and Ubuntu boxes is not difficult. If you want to follow along, I have prepared a Git repository ariya/packer-vagrant-linux which contains all the necessary bits to create CentOS 6.4 and/or Ubuntu 12.04 LTS 64-bit boxes. Make sure you have the latest versions of VirtualBox, Vagrant, and Packer installed properly on your machine before you follow these step-by-step instructions.

Packer works with a template file. In the repository mentioned above, there are two templates, one each for CentOS and Ubuntu. As an example, if you want to build the base CentOS box, you need to invoke the command:

packer build centos-6.4-x86_64.json

This triggers the download of the VirtualBox Guest Additions image and the actual CentOS 6.4 installation image. These two images will be cached (see the packer_cache subdirectory) so that any subsequent build does not trigger a full download again. Obviously, if you would rather create an Ubuntu box, just replace the specified file with the one for Ubuntu.

Using the installation image, Packer will prepare a blank temporary virtual machine (clearly visible if you have the VirtualBox Manager running, as the machine is called packer-virtualbox) and install CentOS into that machine. Unless you are running it in headless mode, a window will show the actual installation process:


For many sysadmins, unattended Linux installation may sound familiar: CentOS uses kickstart while Ubuntu uses preseeding. The configuration files for this automated installation are in the http subdirectory (served via HTTP to the installer). You can open the template file, centos-6.4-x86_64.json in the above example, to get an understanding of this unattended installation configuration.

Once the intended Linux distribution is installed, the template file tells Packer to do some basic provisioning by running several shell scripts (check the scripts subdirectory). After this provisioning step is completed, Packer will export the temporary virtual machine and create a Vagrant base box out of it. In this example, it will be stored in the build subdirectory. At this point you are ready to use your base box; it is a matter of running vagrant init with the path to the box in that build directory.

Now, who said packing can’t be fun?



Last week I started a new chapter in my professional career. I am now working for Shape Security, a cool startup in Mountain View focusing on next-generation web content security.

I am pretty excited about the product, the team, and the technology. With serious funding, along with an impressive array of investors, the journey has just started. Personally, I can’t wait to share with you some of the cutting-edge security systems we are working on!

Stay hungry, stay secure.


My new partner-in-crime Ann Robson and I gave a presentation, JavaScript Insights, at the most recent HTML5 Developer Conference in San Francisco. In this talk, we discussed several important JavaScript code quality tools.

You might have seen my previous renditions of this theme (Fluent, Velocity, and a few others), yet those variants were quite jam-packed and too condensed. As Ann has written in her blog post, one primary objective of this new attempt is to make it "more palatable and practical".

The biggest challenge we experienced during the brewing process of the presentation was figuring out the right composition. We had enough material to talk all day long about JavaScript language tooling, but we needed to pack it in such a way that it was thought-provoking and yet not too clichéd. We covered topics such as multi-layer defense, pre-commit hooks, code coverage, and cyclomatic complexity. There was also further discussion on tools to catch things like stray logging, Boolean traps, strict mode violations, polluting and unused variables, and nested conditionals.

If you missed this talk, enjoy the slide deck (download as PDF, 1.5 MB).
Update: Check also the 40-minute video.

Of course, feel free to send us your feedback, just hit @arobson and/or @ariyahidayat on Twitter!


An implementation of a sorting algorithm usually uses one or more iterating loops in the form of for/while statements. With JavaScript and its powerful Array object, these loops can be avoided since Array’s higher-order functions are enough for iterating over the array. One candidate for this technique is the implementation of sorting networks.

Before we start, let us do a quick refresher on some higher-order functions. In my previous blog post Searching using Array.prototype.reduce, there is an implementation of insertion sort without any loop statements at all. If we change the problem to sorting an array of numbers, the complete code will be:

function sort(entries) {
  return Array.apply(0, Array(entries.length)).map(function () {
    return entries.splice(entries.reduce(function (max, e, i) {
      return e > max.value ? { index: i, value: e } : max;
    }, { value: null }).index, 1).pop();
  });
}

console.log(sort([14, 3, 77])); // [ 77, 14, 3 ]

Like a typical insertion sort, the outer loop picks the largest value one at a time, while the inner loop searches for the largest number in the working array. For the outer loop, Array.apply(0, Array(N)) is the trick to generate a usable empty array (see my other blog post on Sequence using JavaScript Array). In the inner loop, reduce is used to locate the largest number as well as its index. The index is needed to yank that number out of the array; at the same time, the number is stashed into the sorting result.

If you are still confused, try to deconstruct and debug the above code. When necessary, write the imperative version, possibly using a classical for loop, and compare both versions. Understanding this properly makes it easier to follow the next part.
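Such an imperative version might look like the following sketch (sortImperative is a name made up here; like the original, it sorts in descending order and consumes the input array):

```javascript
function sortImperative(entries) {
  var result = [];
  while (entries.length > 0) {
    // inner loop: find the index of the largest remaining number
    var maxIndex = 0;
    for (var i = 1; i < entries.length; ++i) {
      if (entries[i] > entries[maxIndex]) maxIndex = i;
    }
    // outer loop body: yank that number out and stash it into the result
    result.push(entries.splice(maxIndex, 1).pop());
  }
  return result;
}

console.log(sortImperative([14, 3, 77])); // [ 77, 14, 3 ]
```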

For the sorting network, the process involves two steps: the first is building the comparator network, the second is the actual sorting via comparison and swap according to the constructed network. For the second step, the core operation is the following function (which acts like a comparator unit):

function compareswap(array, p, q) {
  if (array[p] < array[q]) {
    var temp = array[q];
    array[q] = array[p];
    array[p] = temp;
  }
}
As an illustration, if the array to be sorted has only 3 numbers, the sorting will practically be a series of the following steps:

compareswap(entries, 0, 1); 
compareswap(entries, 1, 2); 
compareswap(entries, 0, 1);

For a 4-number array, it will be:

compareswap(entries, 0, 1); 
compareswap(entries, 1, 2); 
compareswap(entries, 2, 3); 
compareswap(entries, 0, 1); 
compareswap(entries, 1, 2); 
compareswap(entries, 0, 1);
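These six steps can be traced directly on a sample array (compareswap is repeated here so the snippet is self-contained):

```javascript
function compareswap(array, p, q) {
  if (array[p] < array[q]) {
    var temp = array[q];
    array[q] = array[p];
    array[p] = temp;
  }
}

var entries = [3, 77, 14, 42];
compareswap(entries, 0, 1); // [ 77, 3, 14, 42 ]
compareswap(entries, 1, 2); // [ 77, 14, 3, 42 ]
compareswap(entries, 2, 3); // [ 77, 14, 42, 3 ]
compareswap(entries, 0, 1); // unchanged
compareswap(entries, 1, 2); // [ 77, 42, 14, 3 ]
compareswap(entries, 0, 1); // unchanged
console.log(entries); // [ 77, 42, 14, 3 ]
```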

If we draw the sequence, the sorting network notation looks like the following diagram. You can probably already see the pattern here, in particular if you relate it to the previous implementation of insertion sort. There are a few alternatives to this configuration of sorting networks, such as odd-even mergesort, bitonic sort, and many others.


The comparator network simply formalizes this so that we can put every compare-and-swap action in a single loop. As long as we have the right network for the given array size, sorting is a matter of running:

function sort(network, entries) {
  for (var i = 0; i < network.length; ++i)
    compareswap(entries, network[i], network[i] + 1);
}

Quiz: what kind of sorting algorithm is that?

How do we create the network? A quick way is shown below. Note that the network will always be the same for a given array size (N), so it may make sense to memoize it in some scenarios.

function createNetwork(N) {
  var network = [];
  for (var i = N - 1; i >= 0; --i)
    for (var j = 0; j < i; ++j)
      network.push(j);
  return network;
}
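Since the network depends only on N, memoizing it is straightforward. A minimal sketch, with hypothetical names (networkCache, cachedNetwork) not from the repository:

```javascript
// Cache the constructed network, keyed by array size N
var networkCache = {};

function cachedNetwork(N) {
  if (!networkCache[N]) {
    var network = [];
    for (var i = N - 1; i >= 0; --i)
      for (var j = 0; j < i; ++j)
        network.push(j);
    networkCache[N] = network;
  }
  return networkCache[N];
}

console.log(cachedNetwork(3)); // [ 0, 1, 0 ]
console.log(cachedNetwork(4)); // [ 0, 1, 2, 0, 1, 0 ]
```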

Obviously, why use a for loop if we can leverage the Array object? Here is a version, out of a gazillion other possibilities, which uses only Array’s higher-order functions. As discussed in the Fibonacci series construction, reduce can be (ab)used to accumulate elements into an array, and this serves as the outer loop. The inner loop is way simpler: it only needs to create a sequence of numbers from 0 to the current limit.

function createNetwork(N) {
  return Array.apply(0, Array(N)).reduce(function (network, _, y) {
    return network.concat(Array.apply(0, Array(N - y - 1)).map(function (_, x) {
      return x;
    }));
  }, []);
}
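As a sanity check, the loop-based and reduce-based constructions should produce identical networks for any N. A small sketch with both versions inlined (renamed here to avoid a clash):

```javascript
// Loop-based construction
function createNetworkLoop(N) {
  var network = [];
  for (var i = N - 1; i >= 0; --i)
    for (var j = 0; j < i; ++j)
      network.push(j);
  return network;
}

// reduce-based construction
function createNetworkReduce(N) {
  return Array.apply(0, Array(N)).reduce(function (network, _, y) {
    return network.concat(Array.apply(0, Array(N - y - 1)).map(function (_, x) {
      return x;
    }));
  }, []);
}

console.log(createNetworkLoop(4));   // [ 0, 1, 2, 0, 1, 0 ]
console.log(createNetworkReduce(4)); // [ 0, 1, 2, 0, 1, 0 ]
```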

Combining these two steps gives us the final code as follows (notice the same code pattern for reduce). See if you recognize the construct for each step and whether you can analyze what it is doing there.

function sort(input) {
  var array = input.slice(0);
  Array.apply(0, Array(array.length)).reduce(function (network, _, y) {
    return network.concat(Array.apply(0, Array(array.length - y - 1))
      .map(function (_, x) { return x; }));
  }, []).reduce(function (result, p) {
    if (array[p] < array[p + 1]) {
      var temp = array[p + 1];
      array[p + 1] = array[p];
      array[p] = temp;
    }
    return array;
  }, array);
  return array;
}
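To verify the combined version, here is a quick check with the function inlined so the snippet runs standalone; note the descending order, a consequence of the comparison direction in compareswap:

```javascript
function sort(input) {
  var array = input.slice(0);
  // first reduce builds the network, second reduce runs the compare-and-swaps
  Array.apply(0, Array(array.length)).reduce(function (network, _, y) {
    return network.concat(Array.apply(0, Array(array.length - y - 1))
      .map(function (_, x) { return x; }));
  }, []).reduce(function (result, p) {
    if (array[p] < array[p + 1]) {
      var temp = array[p + 1];
      array[p + 1] = array[p];
      array[p] = temp;
    }
    return array;
  }, array);
  return array;
}

console.log(sort([5, 1, 4, 2, 8])); // [ 8, 5, 4, 2, 1 ]
```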

While sorting networks are supposed to be well suited for parallelized comparisons, that does not give us much benefit in the context above. However, I hope these two different ways to implement sorting in JavaScript will inspire you to further explore the wonderful world of sorting networks.

Note: Special thanks to Bei Zhang for his initial implementation of sorting network and for reviewing this blog post.


QUnit, used by projects like jQuery and jQuery Mobile, is a rather popular JavaScript testing framework. For tests written using QUnit, how do we measure code coverage? A possible solution which is quite easy to set up is to leverage the deadly combination of Karma and Istanbul.

Just like our previous adventure with Jasmine code coverage, let us take a look at some simple code we need to test. This function My.sqrt is a reimplementation of Math.sqrt which throws an exception if the input is invalid.

var My = {
  sqrt: function (x) {
    if (x < 0) throw new Error("sqrt can't work on negative number");
    return Math.exp(Math.log(x) / 2);
  }
};
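Before wiring up any test framework, a quick sanity check of My.sqrt in plain JavaScript (the object is repeated so the snippet runs standalone; the exp/log round trip may differ from the exact root only by floating-point rounding):

```javascript
var My = {
  sqrt: function (x) {
    if (x < 0) throw new Error("sqrt can't work on negative number");
    return Math.exp(Math.log(x) / 2);
  }
};

console.log(My.sqrt(4)); // ≈ 2

try {
  My.sqrt(-1);
} catch (e) {
  console.log(e.message); // sqrt can't work on negative number
}
```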

A very simple QUnit-based test for the above code is as follows.

test("sqrt", function () {
  deepEqual(My.sqrt(4), 2, "square root of 4 is 2");
});

Manually running the test is as easy as opening the test runner in a web browser:


For a smoother development workflow, an automated way to run the tests is much preferable. This is where Karma becomes very useful. Karma also has the ability to launch a predetermined collection of browsers, or even to use PhantomJS for pure headless execution (suitable for smoke testing and/or continuous delivery).

Before we can use Karma, installation is necessary:

npm install karma karma-qunit karma-coverage

Karma requires a configuration file. For this purpose, the config file is very simple. As an illustration, the execution is done by PhantomJS, but it is easy to include other browsers as well.

module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['qunit'],
    files: [
      // list the source and test files here
    ],
    browsers: ['PhantomJS'],
    singleRun: true,
    reporters: ['progress', 'coverage'],
    preprocessors: { '*.js': ['coverage'] }
  });
};

Now you can start Karma with the above configuration, and it should report that the test passes just fine. Should you encounter some problems, take a look at an example repository I have set up, github.com/ariya/coverage-qunit-istanbul-karma; it may be useful as a starting point or a reference for your own project. As a convenience, the test in that repository can be executed via npm test.

What is more interesting here is that Karma also runs its coverage preprocessor, as indicated by preprocessors in the above configuration. Karma will run Istanbul, a full-featured instrumenter and coverage tracker. Essentially, Istanbul grabs the original JavaScript source and injects extra instrumentation code so that it can gather the execution metrics once the process finishes (read also my previous blog post on JavaScript Code Coverage with Istanbul). In this Karma and Istanbul combo, the generated coverage report is available under the coverage subdirectory.


The above report indicates that the single test for My.sqrt still misses the case of invalid input, thanks to the branch coverage feature of Istanbul. The I indicator next to the conditional statement tells us that the if branch was never taken. Of course, once the issue is known, adding another test to cover that branch is easy (left as an exercise for the reader).

Now that code coverage is tracked, perhaps you are ready for the next level? It is about setting a hard threshold so that future coverage regression can never happen. Protect yourself and your team from carelessness, overconfidence, or honest mistakes!


Searching for a particular element in a JavaScript array is often carried out using a typical iteration. In some cases, forEach and some can be used as well. What is often overlooked is the potential use of Array.prototype.reduce to perform such an operation.

The ECMAScript 5.1 specification describes the callback function to reduce as:

callbackfn is called with four arguments: the previousValue (or value from the previous call to callbackfn), the currentValue (value of the current element), the currentIndex, and the object being traversed.

An illustration of reduce can be seen in the following snippet. Thanks to the addition x + y, the code computes the sum of all numbers in the array, with an optional offset (the initial value) in the second example.

[1, 2, 3, 4, 5].reduce(function (x, y) { return x + y });       //  15
[1, 2, 3, 4, 5].reduce(function (x, y) { return x + y }, 100);  // 115
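To see all four callback arguments from the specification in action, a quick trace:

```javascript
// Log previousValue, currentValue, currentIndex, and the traversed object
// at each step of a reduce run
[1, 2, 3].reduce(function (prev, cur, index, arr) {
  console.log(prev, cur, index, arr);
  return prev + cur;
}, 0);
// 0 1 0 [ 1, 2, 3 ]
// 1 2 1 [ 1, 2, 3 ]
// 3 3 2 [ 1, 2, 3 ]
```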

As a matter of fact, I have already covered a rather complicated construct using reduce to produce the Fibonacci series. Fortunately, performing a search using reduce does not need to be complicated.

Let’s take a look at the following problem (part of JavaScript Under Pressure): find the longest string in an array of strings. An imperative solution looks something like the following code (using forEach may simplify the loop but the idea remains the same):

function findLongest(entries) {
  for (var i = 0, longest = ''; i < entries.length; ++i)
    if (entries[i].length > longest.length) longest = entries[i];
  return longest;
}

A version which relies on reduce is a single statement:

function findLongest(entries) {
  return entries.reduce(function (longest, entry) {
    return entry.length > longest.length ? entry : longest;
  }, '');
}

We set the initial value for longest to an empty string. The callback function for reduce ensures that longest is kept updated because we always choose, via the convenient ternary operator, the longer string as its return value.

Now imagine that the problem is expanded: not only do we need to obtain the longest string, but we also need to get the index of that longest string in the array. While it sounds more complex, the solution is still as compact as before:

function findLongest(entries) {
  return entries.reduce(function (longest, entry, index) {
    return entry.length > longest.value.length ?
      { index: index, value: entry } : longest;
  }, { index: -1, value: '' });
}

The callback function takes advantage of the third parameter, i.e. the index of the current element. The rest is pretty much the same, except now we need to store a richer object containing both the index and the string, as opposed to just a simple string.
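To see the richer return value in action, a quick check (the function is repeated here so the snippet runs standalone):

```javascript
function findLongest(entries) {
  return entries.reduce(function (longest, entry, index) {
    return entry.length > longest.value.length ?
      { index: index, value: entry } : longest;
  }, { index: -1, value: '' });
}

console.log(findLongest(['ab', 'abcd', 'c'])); // { index: 1, value: 'abcd' }
console.log(findLongest([]));                  // { index: -1, value: '' }
```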

At this point, the problem is made even more challenging: sort the array based on the length of the strings. Fortunately, this is again not as crazy as you might think. In fact, we are halfway there since we already have the ability to find the longest string in an array. This is a perfect chance to implement something like insertion sort: for every run, find the longest string, pluck it from our array, and then push it to the result.

We quickly realize that the loop needs to run as many times as there are array elements. If you read my previous blog post on Sequence with JavaScript Array, it is obvious that we can simply use Array.apply and map for the iteration. The code will look like the following fragment. See if you can figure out the reason behind the use of splice and pop there.

entries = Array.apply(0, Array(entries.length)).map(function () {
  return entries.splice(findLongest(entries).index, 1).pop();
});

Pushing a bit to the extreme, what if the solution can only use reduce? In this case, we need to revive the trick already employed in that Fibonacci series adventure. The use of reduce is reduced (pun intended) to an accumulating iteration: we simply start with an empty array as the initial value and fill this array as we go. Inlining the longest-string search and shortening a few variable names for an additional air of mystery, the complete incantation is as fascinating as the code below:

entries = Array.apply(0, Array(entries.length)).reduce(function (r) {
  return r.concat(entries.splice(
    entries.reduce(function (longest, e, i) {
      return e.length >= longest.e.length ? { i: i, e: e } : longest;
    }, { e: '' }).i, 1
  ));
}, []);
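To double-check the incantation, it can be wrapped in a function (sortByLength is a made-up name; the wrapper also keeps entries local and leaves the caller's array intact):

```javascript
function sortByLength(input) {
  var entries = input.slice(0);
  // accumulate the longest remaining string into r, one per iteration
  return Array.apply(0, Array(entries.length)).reduce(function (r) {
    return r.concat(entries.splice(
      entries.reduce(function (longest, e, i) {
        return e.length >= longest.e.length ? { i: i, e: e } : longest;
      }, { e: '' }).i, 1
    ));
  }, []);
}

console.log(sortByLength(['ab', 'abcde', 'c', 'abc']));
// [ 'abcde', 'abc', 'ab', 'c' ]
```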

Insertion sort is rather impractical in real-world scenarios, and the above cascaded construct is not always readable. However, hopefully this still shows that Array.prototype.reduce can be quite charming at times!