
These days, having enough unit tests for a JavaScript-based web application or library is the bare minimum. Ideally, the code coverage of those tests is also monitored as part of day-to-day development. Fortunately, this is easy to do with a modern test runner such as Venus.js.

Named after the famous Venus flytrap, Venus.js originated in the LinkedIn engineering group to facilitate its JavaScript testing activities. Venus.js is pretty comprehensive: it supports unit tests written for a number of test libraries, namely Mocha (the default), Jasmine, and QUnit. Venus.js is also easy to install and use; I recommend reading its excellent Getting Started tutorial.

For this demonstration of code coverage, I prepared a Git repository, bitbucket.org/ariya/coverage-istanbul-venus. If you want to follow along, just clone it and check its contents.

First, let’s take a look at the code we want to test. It is just a DIY implementation of the square root function and can’t get much simpler:

var My = {
    sqrt: function(x) {
        if (x < 0) throw new Error("sqrt can't work on negative number");
        // Uses the identity sqrt(x) = e^(ln(x)/2)
        return Math.exp(Math.log(x)/2);
    }
};

In order to maximize the test coverage, the unit tests for the above My.sqrt() function need to check the normal square root operation and also the case where it is supposed to throw an exception. This is available in the test/test.sqrt.js file (based on Mocha), which looks like the following:

/**
 * @venus-library mocha
 * @venus-code ../sqrt.js
 */
describe("sqrt", function() {
  it("should compute the square root of 4 as 2", function() {
    expect(My.sqrt(4)).to.equal(2);
  });
});
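The snippet above shows only the normal path; a minimal sketch of the accompanying exception check with Mocha and Chai (the exact assertion in the repository may differ) would be:

// Illustrative sketch; the assertion in the repository's test may differ.
it("should refuse to compute the square root of a negative number", function() {
  expect(function() { My.sqrt(-1); }).to.throw(Error);
});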

One notable feature of Venus.js is its zero-configuration design. In the above example, we don’t need to write any HTML to serve the code and the test. Venus.js uses an annotation-based approach: @venus-code specifies the file containing the code we want to test (i.e. the implementation of My.sqrt) and @venus-library chooses the testing library (i.e. Mocha). Everything else is taken care of automatically.

If Venus.js is properly installed, executing the test is a matter of running:

venus test/test.sqrt.js -e ghost

which gives the following result:

(Screenshot: output of the Venus.js test run)

In the test invocation, the option -e ghost indicates that the tests are to be executed headlessly using PhantomJS (again, another nice built-in feature of Venus.js). Of course, Venus.js supports other testing environments and it can run the tests on real web browsers or even via Selenium Grid or Sauce Labs.

How do we show the code coverage of the tests? It is just a matter of adding another option:

venus test/test.sqrt.js -e ghost --coverage

Behind the scenes, Venus.js uses Istanbul, an excellent JavaScript instrumentation and code coverage tool. Running the test with coverage tracking adds a few more lines to the report. Thanks to Istanbul, all three types of code coverage (statements, functions, branches) are tracked accordingly.

(Screenshot: Venus.js test run with the coverage report)

Another very useful feature of Venus.js is the ability to mix and match tests written using different libraries. This is illustrated in the example repo. Besides test/test.sqrt.js, you will spot two additional files with their own sets of unit tests: test/test.extensive.js and test/test.error.js. The former adds more checks on the square root functionality (probably excessive, but you get the point) while the latter covers a few more corner cases. What is interesting here is that test.extensive.js relies on Jasmine while test.error.js is written using QUnit.
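For illustration, a QUnit-based test using the same Venus.js annotations might look like the following sketch (the actual contents of test/test.error.js may differ):

// Illustrative sketch; not the repository's actual test file.
/**
 * @venus-library qunit
 * @venus-code ../sqrt.js
 */
test("square root of a negative number", function() {
  throws(function() { My.sqrt(-1); }, Error, "negative input is rejected");
});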

If you check the package manifest, what npm test actually runs is:

venus test --coverage -e ghost
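That command comes from the scripts section of package.json; roughly like this (a sketch, the actual manifest in the repo contains more fields):

{
  "scripts": {
    "test": "venus test --coverage -e ghost"
  }
}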

In other words, Venus.js will locate all the test files in the test/ directory and execute them. In this case, we have three test files using different test libraries and Venus.js handles them just fine. Isn’t it nice?

In the past, I have explained the use of Karma and Istanbul to track code coverage of JavaScript unit tests written using Jasmine, QUnit, and Mocha. However, if Karma is not your cup of tea or if your different subteams would like to use different test libraries, then perhaps Venus.js can be the solution for you.

Have fun trapping those bugs!

A few days ago, I gave a talk at the most recent Web Tech Talk meetup hosted by Samsung. The title was Supersonic JavaScript (forgive my little marketing stunt there) and the topic was changing the way we think about optimizing JavaScript code.

None of the tricks presented there will make your code break the sound barrier. Nevertheless, some of them can serve as food for thought to provoke our brains to look at the problem in a few different ways. If you want to follow along, check or download the slide deck (before you ask: it was not video recorded).

I discussed four different ideas during the talk.

Short Function. Back in the old days, function calls were expensive. These days, modern JavaScript engines are smart enough to perform this optimization themselves. For some details, read my previous blog posts Automatic Inlining in JavaScript Engines and Lazy Parsing in JavaScript Engines. There is no need to outsmart the engine; stick with concise and readable code.

Fixed Object Shape. This swings in the other direction. How can we help the engine so that it can take the fast path most of the time? For more information, refer to my blog post JavaScript object structure: speed matters.
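As a quick illustration (not from the talk itself), property order determines an object’s internal shape, so constructing objects consistently lets the engine reuse the same hidden class:

// Illustration only. Same shape: properties are always added in the same order.
function Point(x, y) {
    this.x = x;
    this.y = y;
}
var a = new Point(1, 2);
var b = new Point(3, 4);   // shares the hidden class of a

// Different shapes: the property order differs.
var p = { x: 1, y: 2 };
var q = { y: 3, x: 4 };    // not the same internal shape as p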

Profile Guided. Related to the previous point: can we arrange our own code so that it takes the fast path whenever possible but still falls back to the slow path every now and then? What we need is a set of representative data for the benchmark, and the resulting profile can be used to tweak the implementation. More details are available in my two other blog posts Profile Guided JavaScript Optimization and Determining Objects in a Set.

Garbage Handling. Producing a lot of objects often places a burden on the garbage collector. As an illustration, check out a short video from Jake Archibald describing the situation with +new Date.
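As a tiny sketch of that particular case, grabbing a timestamp can avoid the throwaway object entirely:

// +new Date allocates a Date object only to discard it immediately.
var t1 = +new Date();

// Date.now() returns the same timestamp without the intermediate object.
var t2 = Date.now();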

There is no silver bullet for any performance problem. However, as I already mentioned in my Performance Calendar article JavaScript Performance Analysis: Keeping the Big Picture, it is important to keep asking: are we seeing the big picture or are we trapped into optimizing for a local extreme only?

Now, where’s that TOPGUN application form again…


One of the interesting features of Esprima is its ability to retrieve every comment inside a JavaScript source. Even better, each comment can be linked to the related syntax node. This is very helpful (as in the case of JSDoc) since additional information regarding the program can be provided via comments serving as a form of annotation.

Let’s take a look at the simple code below:

// Give the ultimate answer
function answer() { return 42 }

If we let Esprima consume the above code (as a string) like this (mind the attachComment option):

var tree = esprima.parse(code, { attachComment: true });

then the object tree will contain an array called leadingComments as part of the function declaration. Portions of that tree are visualized in the following diagram. Note that the leadingComments array has only one element, because there is only one single-line comment before the function.

(Diagram: syntax tree with the leadingComments array attached to the function declaration)
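A minimal sketch of inspecting that attachment from Node.js (property names as produced by Esprima’s attachComment output) looks like this:

// Sketch of reading the attached comment; values shown are for the example above.
var esprima = require('esprima');

var code = '// Give the ultimate answer\nfunction answer() { return 42 }';
var tree = esprima.parse(code, { attachComment: true });

var fn = tree.body[0];                     // the FunctionDeclaration node
console.log(fn.leadingComments[0].value);  // " Give the ultimate answer"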

Many documentation tools rely on a specially-formatted comment to derive additional information. For example, since JavaScript does not specify the type of every function parameter, that piece of information can be encoded in the annotation. This is very familiar for those who use JSDoc, Closure Compiler, or other similar tools. The following fragment demonstrates the usage:

/**
 * Compute the square root.
 *
 * @param {number} x number.
 * @return {number} The square root of x.
 */
 
function SquareRoot(x) {
    return Math.exp(Math.log(x)/2);
}

Here is another example:

/**
 * Adds three numbers.
 *
 * @param {number} x First number.
 * @param {number} y Second number.
 */
 
function Add(x, y, z) {
    return x + y + z;
}

Unfortunately, the annotation in the above example is missing the tag for the third parameter. This happens quite often, in some cases due to refactoring that leaves the comment out of sync. Since such a mistake is not caught during unit testing, it may go undetected for a while.

It is however possible to use Esprima to extract the comment and then process it with a JSDoc annotation parser such as Doctrine. This way, if a function parameter is not documented, we can warn the developer as early as possible. A proof of concept of such a tool exists in the repository bitbucket.org/ariya/missing-doc/. The gist is in its analyze function (missing-doc.js):

    // Parse the JSDoc-style comment attached to the function declaration
    data = doctrine.parse(comment.value, {unwrap: true});

    // Collect the parameter names documented via @param tags
    params = [];
    data.tags.forEach(function (tag) {
        if (tag.title === 'param') {
            params.push(tag.name);
        }
    });

Once we get the comment associated with a syntax node (one that represents a function declaration), it is parsed by Doctrine and we simply iterate through all the tags found. The next step is quite logical:

    // Flag every function parameter that never appears in the annotation
    missing = [];
    node.params.forEach(function (param) {
        if (params.indexOf(param.name) < 0) {
            missing.push(param.name);
        }
    });

Here we compare what is documented in the annotation tags (x and y) with the actual function parameters (x, y, and z). Anything not listed in the annotation goes into our missing array for subsequent reporting.
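To see how the two fragments fit together, here is a simplified, self-contained sketch of that analysis step (the function name is illustrative; it is not the exact code in the repository):

// Simplified sketch; not the exact code from missing-doc.js.
var doctrine = require('doctrine');

// Given a FunctionDeclaration node and its attached leading comment,
// return the names of parameters that the @param tags do not mention.
function findUndocumentedParams(node, comment) {
    var data = doctrine.parse(comment.value, { unwrap: true });

    var documented = data.tags.filter(function (tag) {
        return tag.title === 'param';
    }).map(function (tag) {
        return tag.name;
    });

    return node.params.filter(function (param) {
        return documented.indexOf(param.name) < 0;
    }).map(function (param) {
        return param.name;
    });
}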

Running the tool with Node.js like this:

npm install
node missing-doc.js test/sample2.js

gives the following result:

In function Add (Line 8):
 Parameter z is not documented.

Easy enough! All in all, this microtool weighs less than 70 lines of JavaScript.

How do you plan to use Esprima’s comment attachment feature?

The most recent Java 8 release comes with lots of new features; one of them is the brand-new JavaScript engine that replaces the aging Rhino. This new engine, called Nashorn (German for rhinoceros), is high-performance and specification compliant. It is definitely useful whenever you want to mix and match your Java and JavaScript code.

To check the performance of Nashorn, I use it to run Esprima and let it parse a large code base (unminified jQuery, 268 KB). This quick, rather non-scientific test exercises two aspects of a JavaScript run-time environment: continuous memory allocation (for the syntax nodes) and non-linear code execution (recursive-descent parsing).

If you want to follow along, check the repository bitbucket.org/ariya/nashorn-speedtest. Assuming you have JDK 8 installed properly, run the following:

javac -cp rhino.jar speedtest.java
java -cp .:rhino.jar speedtest

This test app executes the Esprima parser and tokenizer on the content of the test file. Rhino gets the first chance and Nashorn follows right after (each engine gets 30 runs). Rhino’s first run takes 2607 ms; it slowly speeds up and finally completes the parsing in just 824 ms. Nashorn’s timings have a different characteristic. When it is cold, Nashorn initially takes 5328 ms to carry out the operation, but it quickly picks up the pace and, before you know it, it is moving full steam ahead, reaching 208 ms per run.
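The repository contains the actual harness; as a rough sketch, the JavaScript side of such a timing loop (runnable in Nashorn’s jjs or the V8 shell, assuming esprima and the jQuery source string are already loaded) boils down to:

// Rough sketch only; the real harness in the repository differs.
// Assumes `esprima` is loaded and `source` holds the unminified jQuery code.
for (var run = 1; run <= 30; ++run) {
    var start = Date.now();
    esprima.parse(source);
    esprima.tokenize(source);
    print('Run ' + run + ': ' + (Date.now() - start) + ' ms');
}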

Behind the scenes, Nashorn compiles your JavaScript code into Java bytecode and runs it on the JVM itself. On top of that, Nashorn takes advantage of the invokedynamic instruction (from the Da Vinci Machine Project, part of Java 7) to permit "efficient and flexible execution" in a dynamic environment such as JavaScript. Other JVM languages, notably JRuby, also benefit from this invokedynamic feature.

(Chart: Rhino vs Nashorn parsing times over 30 runs)

What about Nashorn vs V8? It is not terribly fair to compare the two: V8 is designed specifically for JavaScript, while Nashorn leverages the battle-hardened, multi-language JVM. But just for fun, the Git repo also includes a JavaScript implementation of the test app which can be executed with the V8 shell. On the same machine, V8 completes the task in about 110 ms. While Nashorn is not as mature as V8 yet, it is quite an achievement that it is only about twice as slow as V8 for this specific test.

As a high-performance JavaScript engine on the JVM, Nashorn has many possible applications. It serves as a nice platform to play and experiment with JavaFX. Yet it is also powerful enough to be part of a web service stack (Vert.x, Project Avatar). On a less ambitious level, having another fast implementation of JavaScript is always good, providing an alternative for running various JavaScript tools in environments where Node.js is not available or not supported. As an illustration, check my previous blog post on a simple Ant task to validate JavaScript code.

Next time you have the urge to herd some rhinos, consider Nashorn!


Eclipse Orion released its latest version, 5, right before the most recent EclipseCon. This new version packs several exciting features, everything from stylistic changes in the appearance to streamlined cloud deployment. My favorite is the easy-to-use Node.js bundle.

With Orion 5, it is supertrivial to try out Orion (assuming you have Node.js and npm):

npm install orion

Then you can launch it by running:

node node_modules/orion/server.js /path/to/your/project

and then open your favorite web browser and point it to localhost:8081. Now you will be able to edit existing files and create new files and folders. This works even if you don’t have an Internet connection.

Alternatively, open the configuration file node_modules/orion/orion.conf and change the workspace variable to point to the JavaScript project you want to edit (if you are crazy, just set it to your home directory). Then start the Orion server by running npm start orion.
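Assuming a simple key=value format for that configuration file (the exact syntax may vary between Orion releases), the relevant line might look like this, with an illustrative path:

# Illustrative path; point this to your own project directory.
workspace=/home/alice/projects/my-app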

Let’s take a look at a quick Express example.
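A minimal version of such an app, saved for instance as hello.js (the exact code in the screenshot may differ), could be:

// Minimal illustrative Express app; the code in the screenshot may differ.
var express = require('express');
var app = express();

app.get('/', function (req, res) {
    res.send('Hello from Orion!');
});

app.listen(3000);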

(Screenshot: Orion content assist suggesting Express APIs)

The above screenshot also demonstrates Orion’s new ability to provide autocomplete (or, in the Eclipse world, Content Assist) for Express-based JavaScript code. It is not limited to Express; there is also support for other libraries such as Postgres, MySQL, MongoDB, and a few more.

Once this simple application is written, we can launch it without leaving Orion, thanks to its shell feature. Switch to the Shell tab, run npm install followed by node start hello.js, and our simple Express app is up and running.

(Screenshot: launching the Express app from the Orion shell)

Orion now supports ESLint to validate your JavaScript code. Various rules for ESLint can be set visually.

(Screenshot: ESLint validation settings in Orion)

Speaking of customization, you can obviously choose from a number of different themes or even create your own:

(Screenshot: Orion theme settings)

It is also possible to try Orion via its online demos. If you would like to check the capabilities of the Orion editing component only, there is the pure editor example. For testing its complete features, it is recommended to go to OrionHub, create an account, and enjoy the test drive.

Whether you are online or offline, web-based tools are just fun!


This week it’s all about the most recent Fluent Conference 2014 in San Francisco. It’s the third Fluent and boy, it’s getting more phenomenal than ever.

For my part, I presented a talk on the topic of Design Strategies for JavaScript API (slide deck, 2.2 MB PDF download). If you are a regular reader of this blog, you may be familiar with the topic, since a few past blog posts discuss the subject in more detail.

Obviously there were tons of very interesting presentations. To get a taste, you can watch the keynote videos (check this YouTube playlist). From Brendan’s session, bleeding-edge JavaScript features such as SIMD support and the asm.js-based Unreal Engine 4 demo will make you very excited.

The full video compilation will be sold once it is out in a few weeks. You may also wait for those speakers who will upload their own videos once they are available.

Kudos to the organizers and everyone involved for such a memorable event. See you next year!

Update: You can watch my presentation on YouTube (28 minutes).