In the world of virtualization, Docker is the new kid on the block. It is almost trivial to set up and play with when you are running Linux. But what if, like many geeks out there, you use OS X as your primary development system? Two possible solutions are discussed here: using boot2docker, or running Docker inside a Linux virtual machine.

Let’s take a simple Go-based HTTP server and run it in a container. I have prepared a demo at bitbucket.org/ariya/docker-hellogo that you can follow along with. To get started:

git clone https://bitbucket.org/ariya/docker-hellogo.git
cd docker-hellogo

The content of the Dockerfile in that repo is as follows (simplified):

FROM centos:centos6
ADD . /src
RUN yum -y install golang
EXPOSE 8200
CMD ["go", "run", "/src/serve.go"]

It sets CentOS 6 as the base image, installs Go, and finally exposes port 8200 (where the HTTP server will run). The final CMD line specifies what to do when the container is executed, which is to run the said HTTP server.

Assuming that Docker is available (e.g. properly installed on Ubuntu), we can build the container:

sudo docker build -t hellogo .

The dot . refers to the current directory (i.e. the Git checkout) and the built image will be called hellogo. Note that this will pull the base image for CentOS 6, if it is not yet available locally.

Once the build process is completed, running the image is as easy as:

sudo docker run -p 8200:8200 -t hellogo

The argument -p 8200:8200 specifies the port forwarding. Open your browser and go to http://localhost:8200 and you should see the famous Hello world! message.

For those who are using OS X, fortunately there are at least two possible ways to realize the above steps without creating a Linux VM manually and running it there.

The first choice is to use boot2docker, a super-lightweight Linux distribution whose sole purpose is to run Docker. Once boot2docker is installed, the setup goes like this (note that we need the second line to ensure the correct port forwarding):

boot2docker init
vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,8200,,8200"
boot2docker up
export DOCKER_HOST=tcp://localhost:4243

And that’s it! Now you can run docker build and docker run as described earlier (skip the sudo part). Rather straightforward, isn’t it?

The second choice is to have a virtual machine running Linux and use Docker from there. It is indeed an additional layer and some extra overhead, but in many cases it still works quite well. Obviously, creating a virtual machine manually is not something you normally do these days. We can leverage Vagrant and VirtualBox for that.

To illustrate this, there is a Vagrantfile in the Git repo:

VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "forwarded_port", guest: 8200, host: 8200
  config.vm.provision "shell",
    inline: "apt-get -y update && apt-get -y install docker.io"
end

It is based on the recent Ubuntu 14.04 (Trusty). The provisioning script is very simple: its job is to install Docker. Note also the forwarding of port 8200. Initialize this virtual machine by running:

vagrant up

Give it a minute or two and the virtual machine should be ready. You can verify this by opening the VirtualBox Manager. If there is no problem whatsoever, we can connect to the virtual machine:

vagrant ssh

In this ssh session, you can run docker build and docker run as previously described. Since port 8200 is correctly forwarded, you could also visit http://localhost:8200 using e.g. Safari running on OS X (the host system).

With this setup, you can witness the power of virtualization. Your OS X machine is running an Ubuntu 14.04 system in a VirtualBox-based virtual machine. Within that Ubuntu system, there is another CentOS 6.5 system running in a container. The simple Go-based HTTP server is being executed in that container. Fun, isn’t it?

Last but not least, the fresh Vagrant 1.6 release has official support for Docker as a new provider. I haven’t tried this, but if you find that this official Docker provider streamlines the workflow even further, please do share it with us.

Contain all the things!

NaN, Not a Number, is a special value used to denote an unrepresentable value. In JavaScript, NaN can cause some confusion, starting from its typeof all the way to how comparisons are handled.

Several operations can lead to NaN as the result. Here are some examples (follow along on JSBin: jsbin.com/yulef):

Math.sqrt(-2)
Math.log(-1)
0/0
parseFloat('foo')

The first trap for many JavaScript beginners is usually the unexpected result of calling typeof:

console.log(typeof NaN);   // 'number'

In a way, while NaN isn’t supposed to be a number, its type is number. Got it?

Stay calm, as this will continue to lead us down confusing paths. Let’s compare two NaNs:

var x = Math.sqrt(-2);
var y = Math.log(-1);
console.log(x == y);      // false

Maybe that’s because we’re supposed to use the strict equality operator (===) instead? Apparently not.

var x = Math.sqrt(-2);
var y = Math.log(-1);
console.log(x === y);      // false

Arrgh! Could it be because they are NaNs from two different operations? What about…

var x = Math.sqrt(-2);
var y = Math.sqrt(-2);
console.log(x == y);      // false

Even crazier:

var x = Math.sqrt(-2);
console.log(x == x);      // false

What about comparing two real NaNs?

console.log(NaN === NaN); // false

Because there are many ways to represent a NaN, it makes sense that one NaN will not be equal to another NaN. Still, this behavior trips up JavaScript developers all the time.
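There really are many NaNs at the bit level: IEEE 754 treats any double whose exponent bits are all ones and whose mantissa is non-zero as NaN. A small sketch (assuming a little-endian machine, as on typical x86 hardware) makes this visible:

```javascript
// IEEE 754 double: exponent bits all ones + non-zero mantissa = NaN.
// Many distinct bit patterns qualify, so "a NaN" is not one single value.
var buffer = new ArrayBuffer(8);
var bytes = new Uint8Array(buffer);
var doubles = new Float64Array(buffer);

// Craft one particular NaN bit pattern by hand (little-endian layout):
bytes[7] = 0x7f;  // sign 0, upper exponent bits all ones
bytes[6] = 0xf8;  // remaining exponent bits ones, quiet bit set
bytes[5] = 0x01;  // arbitrary payload bits

console.log(doubles[0]);                 // NaN
console.log(doubles[0] === doubles[0]);  // false
```

Any other choice of payload bits yields a different bit pattern that still reads back as NaN.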

To solve this, I originally intended to submit a proposal for ECMAScript 7.

But of course, solutions (and workarounds) already exist today.

Let’s get to know the global function isNaN:

console.log(isNaN(NaN));      // true

Alas, isNaN() has its own well-known flaws:

console.log(isNaN('hello'));  // true
console.log(isNaN(['x']));    // true
console.log(isNaN({}));       // true

This often leads to a number of different workarounds. One example is to exploit the non-reflective nature of NaN (see e.g. Kit Cambridge’s note):

var My = {
  isNaN: function (x) { return x !== x; }
};

Another example is to check for the value’s type first (to prevent coercion):

My.isNaN = function(x) { return typeof x === 'number' && isNaN(x); };

Note: The coercion that is being blocked here is related to isNaN. As an exercise, compare the result of isNaN(2), isNaN('2') and isNaN('two').
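For the record, here is how that exercise resolves; the key is that the global isNaN coerces its argument with Number() before checking:

```javascript
// isNaN coerces its argument via Number() first, then checks for NaN.
console.log(isNaN(2));      // false: already a number
console.log(isNaN('2'));    // false: Number('2') is 2
console.log(isNaN('two'));  // true:  Number('two') is NaN
```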

Fortunately, for the upcoming ECMAScript 6, there is Number.isNaN(), which provides true NaN detection (by the way, you can already use this function in the latest versions of Chrome and Firefox). In the latest draft from April 2014 (Rev 24), it is specified in Section 20.1.2.4:

When the Number.isNaN is called with one argument number, the following steps are taken:
1. If Type(number) is not Number, return false.
2. If number is NaN, return true.
3. Otherwise, return false.

In other words, it returns true only if the argument is really NaN:

console.log(Number.isNaN(NaN));            // true
console.log(Number.isNaN(Math.sqrt(-2)));  // true
 
console.log(Number.isNaN('hello'));        // false
console.log(Number.isNaN(['x']));          // false
console.log(Number.isNaN({}));             // false
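For engines that predate ES6, Number.isNaN is easy to emulate by combining the two workarounds shown earlier. A minimal polyfill sketch following the spec steps:

```javascript
// Minimal polyfill sketch: return true only when the argument's type is
// Number and the value is NaN (exploiting NaN !== NaN).
if (typeof Number.isNaN !== 'function') {
  Number.isNaN = function (value) {
    return typeof value === 'number' && value !== value;
  };
}

console.log(Number.isNaN(NaN));      // true
console.log(Number.isNaN('hello'));  // false
```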

Next time you need to deal with NaN, be extremely careful!

There is a definite convergence in services like Dropbox, Box, Apple iCloud, and Google Drive. The first two started as content synchronization/storage solutions and then moved towards identity; the latter two lured their existing users into the cloud storage story. Should they continue to march forward, could this become the strategic delivery platform for HTML5 applications?

Imagine you own a small business and it uses Box for content sharing and collaboration. At one point, every employee has an account there (made even easier by the variety of single sign-on services out there). Before you know it, Box will be your company-wide document management hub, particularly since it integrates with other business applications (CRM, marketing and sales tools, etc.). The explosion of smartphones and tablets, mobile native and web apps, along with BYOD, propels this even further. Being productive no longer means sitting in an office cubicle.

Your data is stored in that secure, remote storage. Your identity is an integral part of the service. Your business applications already access the data and use that identity information. At this rate, the next logical step is to build your own applications on top of the application platform. Many organizations need a set of customized applications, from a straightforward book-a-meeting-room mini application to something as complicated as HR-finance-engineering integration for new hires. Only the code to build the application remains to be written; the back-end data storage and user authentication problems are already solved for you.

Speaking of code, this is where an application written using web technologies (HTML, CSS, JavaScript) can have a slight advantage. Granted, we can already enjoy native mobile apps from the vendors, such as Dropbox Carousel and the Box OneCloud Text Editor. Still, for applications specific to your business needs, there is seldom a reason to go that far. Functional is the keyword here. Obviously, you want the fastest deployment time possible, with the opportunity for further incremental improvements as the business needs change over time.

The ultimate key here is when Box, Dropbox, and other similar solutions start to provide a hybrid application container. You can write your app in JavaScript and then get it smoothly deployed and managed via that container. Data access is easy with the existing back-end SDK, security and access control is there with the existing user management SDK.

It is hard to expect that Google and Apple will provide such an integrated web application and delivery platform, considering each of them has a significant interest in pushing Android and iOS (to be fair, we may see another attempt from Google via the Chrome Apps approach). Box and Dropbox, however, are in the perfect position to offer a platform which can easily attract a lot of experienced web developers out there.

Could it give birth to a true integration of HTML5 and cloud technologies?

A unique feature of Venus.js, a JavaScript test runner from LinkedIn, is that the test configuration can be in the form of source annotations. This is useful, e.g., to choose which test library (Mocha, Jasmine, QUnit) should be used to execute the tests. Now, wouldn’t it be fantastic if the test runner could deduce the said library automatically?

During a chat over coffee, I proposed that kind of best-effort detection idea to my fellow Shape Security engineer, Seth (who is involved with Venus.js). Obviously, this type of detection might not be 100% accurate. However, I postulate that it should be good enough in most cases, particularly in the case where the annotation is not present.

As a proof of concept, I have implemented detect-testlib.js (see the Git repository at bitbucket.org/ariya/detect-testlib). It uses Esprima to parse the test code, collect the important verbs (more about this later), and then use the gathered information to decide whether the test is written in Jasmine or QUnit, or whether it is completely unknown. To follow along, clone the repo, run npm install first, and then try the following:

node detect-testlib.js test/jquery-attributes.js

That jquery-attributes.js is taken from the actual jQuery unit tests. As expected, detect-testlib will confidently say that those tests are using QUnit. For another attempt, check test/yeoman-env.js (again, taken from the Jasmine-based Yeoman unit tests).

How does the detection work? There are many different ways to implement it. For this proof of concept, I opted for something simple. The tool scans the names of the functions (aka the verbs) used at the top level and the secondary level. In other words, given the following code:

describe("sqrt", function() {
    it('computes the square root of 9 as 3', function() {
        expect(My.sqrt(9)).toBeCloseTo(3, 10);
    });
});

then it will collect describe in the top-level group and it in the test-level group. After a while, we will have these two arrays populated (without any duplicates). For example, running it on test/jquery-attributes.js will give the arrays as:

{ topLevel:
   ['module', 'test' ],
  testLevel: 
   [ 'expect',
     'deepEqual',
     'equal',
     'ok',
     'strictEqual',
     'testVal',
     'testAddClass',
     'testRemoveClass',
     'testToggleClass' ] }
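The actual tool walks the Esprima AST; purely as an illustration of the idea (a hypothetical helper, not the real implementation), the two buckets can be approximated by tracking brace depth while scanning for call-like patterns:

```javascript
// Hypothetical sketch (the real detect-testlib.js walks an Esprima AST):
// collect the names of functions called at the top level and one level
// deeper by tracking brace depth. Deliberately naive; braces inside
// strings or comments would confuse it.
function collectVerbs(code) {
  var topLevel = [], testLevel = [], depth = 0;
  var re = /([A-Za-z_$][\w$]*)\s*\(|[{}]/g;
  var m, bucket;
  while ((m = re.exec(code)) !== null) {
    if (m[0] === '{') { depth++; continue; }
    if (m[0] === '}') { depth--; continue; }
    if (m[1] === 'function') continue;  // a function expression, not a verb
    bucket = (depth === 0) ? topLevel : (depth === 1) ? testLevel : null;
    if (bucket && bucket.indexOf(m[1]) < 0) bucket.push(m[1]);
  }
  return { topLevel: topLevel, testLevel: testLevel };
}

var verbs = collectVerbs(
  'describe("sqrt", function() {\n' +
  '  it("works", function() {\n' +
  '    expect(My.sqrt(9)).toBeCloseTo(3, 10);\n' +
  '  });\n' +
  '});\n');
console.log(verbs.topLevel);   // [ 'describe' ]
console.log(verbs.testLevel);  // [ 'it' ]
```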

Once the array is obtained, the special decide() function will use a simple heuristic to figure out the library being used in the test code. As an illustration, for the above example, it will conclude that the test is using Jasmine (based on the existence of describe and it). For a different test code like the following:

test("sqrt", function() {
  equal(My.sqrt(4), "2", "Check for square root of 4" );
});

then decide() will go for QUnit as its solution.
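The heuristic itself can be as simple as a few membership checks. A sketch of how such a decide() might look (not the exact code from the repository):

```javascript
// Sketch of the decision heuristic, given the collected verb arrays.
// Jasmine nests it() inside describe(); QUnit uses flat module()/test()
// calls with assertions such as ok() and equal().
function decide(verbs) {
  if (verbs.topLevel.indexOf('describe') >= 0 &&
      verbs.testLevel.indexOf('it') >= 0) {
    return 'jasmine';
  }
  if (verbs.topLevel.indexOf('test') >= 0 ||
      verbs.topLevel.indexOf('module') >= 0) {
    return 'qunit';
  }
  return 'unknown';
}

console.log(decide({ topLevel: ['describe'], testLevel: ['it'] }));        // 'jasmine'
console.log(decide({ topLevel: ['module', 'test'], testLevel: ['ok'] }));  // 'qunit'
```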

Obviously, this simple decision rule will fail for some corner cases. However, that is expected, since it is rather minimalistic. You are encouraged to develop a more sophisticated classification in case a more faithful decision is needed for your application.

How about Mocha? Since Mocha supports both TDD and BDD styles, the decision factor is more complicated. A possible solution is to detect the assertion mechanism and cross-correlate it with the typical patterns of assertions (expect.js, Chai, etc.) used with Mocha.

Have fun with autodetection!

These days, having enough unit tests for a JavaScript-based web application/library is the bare minimum. Ideally, the code coverage of those tests is also monitored in day-to-day development. Fortunately, this is easy to do with a modern test runner such as Venus.js.

Named after the famous Venus flytrap, Venus.js originated in the LinkedIn Engineering group to facilitate its JavaScript testing activities. Venus.js is pretty comprehensive: it supports unit tests written for a number of test libraries, namely Mocha (default), Jasmine, and QUnit. Venus.js is also easy to install and use; I recommend reading its excellent Getting Started tutorial.

For the demonstration of code coverage, I prepare a Git repository bitbucket.org/ariya/coverage-istanbul-venus. If you want to follow along, just clone it and check its contents.

First, let’s take a look at the code we want to test. This is just a DIY implementation of the square root function; it can’t get much simpler:

var My = {
    sqrt: function(x) {
        if (x < 0) throw new Error("sqrt can't work on negative number");
        return Math.exp(Math.log(x)/2);
    }
};
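Before involving any test runner, the function can be exercised directly. A quick sanity check of both the normal path and the exception path (the same implementation as above):

```javascript
var My = {
    sqrt: function(x) {
        if (x < 0) throw new Error("sqrt can't work on negative number");
        return Math.exp(Math.log(x)/2);
    }
};

// Normal path: the result is within floating-point tolerance of 3.
console.log(Math.abs(My.sqrt(9) - 3) < 1e-9);  // true

// Exception path: negative input must throw.
var thrown = false;
try { My.sqrt(-1); } catch (e) { thrown = true; }
console.log(thrown);  // true
```

These are exactly the two behaviors the unit tests below need to cover to maximize coverage.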

In order to maximize the test coverage, the unit tests for the above My.sqrt() function need to check for the normal square root operation and also for the case where it is supposed to throw an exception. This is available in the test/test.sqrt.js file (based on Mocha), which looks like the following:

/**
 * @venus-library mocha
 * @venus-code ../sqrt.js
 */
describe("sqrt", function() {
  it("should compute the square root of 4 as 2", function() {
    expect(My.sqrt(4)).to.equal(2);
  });
});

One notable feature of Venus.js is its zero-configuration design. In the above example, we don’t need to write any HTML to serve the code and the test. Venus.js uses an annotation approach. You can see the use of @venus-code to specify the file containing the code we want to test (i.e. the implementation of My.sqrt) and @venus-library to choose the testing library (i.e. Mocha). Everything else will be taken care of automatically.

If Venus.js is properly installed, executing the test is a matter of running:

venus test/test.sqrt.js -e ghost

which runs the test and reports the results.

In the test invocation, the option -e ghost indicates that the tests are to be executed headlessly using PhantomJS (again, another nice built-in feature of Venus.js). Of course, Venus.js supports other testing environments and it can run the tests on real web browsers or even via Selenium Grid or Sauce Labs.

How do we show the code coverage of the tests? It is just a matter of adding another option:

venus test/test.sqrt.js -e ghost --coverage

Behind the scenes, Venus.js uses Istanbul, an excellent JavaScript instrumentation and code coverage tool. Running the test with coverage tracking will add a few more lines to the report. Thanks to Istanbul, all three types of code coverage (statement, function, branch) will be tracked accordingly.

Another very useful feature of Venus.js is the ability to mix-and-match tests written using different libraries. This is illustrated in the example repo. Besides test/test.sqrt.js, there are two additional files with their own sets of unit tests: test/test.extensive.js and test/test.error.js. The former adds more checks on the square root functionality (probably excessive, but you get the point) while the latter covers some more corner cases. What is interesting here is that test.extensive.js relies on Jasmine while test.error.js is written using QUnit.

If you check the package manifest, what npm test actually runs is:

venus test --coverage -e ghost

In other words, Venus.js will locate all the test files in the test/ directory and execute them. In this case, we have three (3) test files using different test libraries and Venus.js will handle them just fine. Isn’t it nice?

In the past, I have explained the use of Karma and Istanbul to track code coverage of JavaScript unit tests written using Jasmine, QUnit, and Mocha. However, if Karma is not your cup of tea or if your different subteams would like to use different test libraries, then perhaps Venus.js can be the solution for you.

Have fun trapping those bugs!

A few days ago, I gave a talk at the most recent Web Tech Talk meetup hosted by Samsung. The title is Supersonic JavaScript (forgive my little marketing stunt there) and the topic is changing the way we think about optimizing JavaScript code.

None of the tricks presented there will make your code break the sound barrier. Nevertheless, some of them can serve as food for thought, provoking our brains to look at a problem in a few different ways. If you want to follow along, check or download the slide deck (before you ask: it was not video recorded).

I discussed four different ideas during the talk.

Short Function. Back in the old days, function calls were expensive. These days, modern JavaScript engines are smart enough to self-optimize. For some details on this optimization, read my previous blog posts Automatic Inlining in JavaScript Engines and Lazy Parsing in JavaScript Engines. There is no need to outsmart the engine; stick with concise and readable code.

Fixed Object Shape. This swings in the other direction. How can we help the engine so that it can take the fast path most of the time? For more information, refer to my blog post JavaScript object structure: speed matters.
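As a rough illustration of the idea (not taken from the slides), constructing objects with a consistent set of properties, always initialized in the same order, keeps their hidden class stable and property access on the fast path:

```javascript
// Consistent shape: every point gets x and y, in the same order, so the
// engine can reuse one hidden class for all of them.
function makePoint(x, y) {
  return { x: x, y: y };
}

// Anti-pattern: conditionally adding properties yields differently-shaped
// objects from the same call site, defeating the engine's inline caches.
function makeSparsePoint(x, y) {
  var p = { x: x };
  if (y !== undefined) p.y = y;  // shape now depends on the arguments
  return p;
}

var points = [makePoint(1, 2), makePoint(3, 4)];
var sum = 0;
for (var i = 0; i < points.length; i++) {
  sum += points[i].x + points[i].y;  // monomorphic property access
}
console.log(sum);  // 10
```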

Profile Guided. Related to the previous point: can we structure our own code so that it takes the fast path whenever possible, but still falls back to the slow path every now and then? What we need is a set of representative data for the benchmark; the resulting profile can then be used to tweak the implementation. More details are available in my two other blog posts, Profile Guided JavaScript Optimization and Determining Objects in a Set.

Garbage Handling. Producing a lot of objects often places a burden on the garbage collector. As an illustration, check out a short video from Jake Archibald describing the situation with +new Date.
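A tiny illustration of the +new Date situation: the coercion allocates a throwaway Date object on every call, whereas Date.now() returns a primitive number directly:

```javascript
function timestampWithGarbage() {
  return +new Date;   // allocates a Date object, coerces it, discards it
}

function timestampWithoutGarbage() {
  return Date.now();  // primitive number, nothing for the GC to reclaim
}

var a = timestampWithGarbage();
var b = timestampWithoutGarbage();
console.log(typeof a);  // 'number'
console.log(typeof b);  // 'number'
```

In a hot loop that fires many times per second (e.g. an animation frame callback), the allocation-free version gives the garbage collector less to do.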

There is no silver bullet for any performance problem. However, as I already mentioned in my Performance Calendar article JavaScript Performance Analysis: Keeping the Big Picture, it is important to keep asking: are we seeing the big picture, or are we trapped into optimizing toward a local extreme?

Now, where’s that TOPGUN application form again…