A few days ago, I gave a talk at the most recent Web Tech Talk meetup hosted by Samsung. The title was Supersonic JavaScript (forgive my little marketing stunt there) and the topic was changing the way we think about optimizing JavaScript code.

None of the tricks presented there will make your code break the sound barrier. Nevertheless, some of them can serve as food for thought, provoking our brains to look at the problem in a few different ways. If you want to follow along, view or download the slide deck (before you ask: the talk was not video recorded).

I discussed four different ideas during the talk.

Short Function. Back in the old days, function calls were expensive. These days, modern JavaScript engines are smart enough to optimize them automatically. For details on this optimization, read my previous blog posts Automatic Inlining in JavaScript Engines and Lazy Parsing in JavaScript Engines. There is no need to outsmart the engine; stick with concise and readable code.

Fixed Object Shape. This swings in the other direction: how can we help the engine so that it takes the fast path most of the time? For more information, refer to my blog post JavaScript object structure: speed matters.
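As a minimal sketch of the idea (hypothetical code, not from the post referenced above): engines such as V8 associate a hidden class with every object shape, so initializing all properties up front, in the same order, keeps objects uniform and property access fast.

```javascript
// The constructor initializes every property up front, in a fixed
// order, so all Point instances share one object shape.
function Point(x, y) {
    this.x = x;
    this.y = y;
}

// Anti-pattern: adding a property conditionally creates objects with
// diverging shapes and can push the engine off the fast path.
function makePoint(x, y, label) {
    var p = { x: x, y: y };
    if (label) p.label = label;  // shape now differs between calls
    return p;
}

var a = new Point(1, 2);
var b = new Point(3, 4);
console.log(a.x + b.y);  // 5
```

The second function is only there to show the contrast; in real code, assigning `label` unconditionally (even to `undefined`) would keep the shape fixed.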

Profile Guided. Related to the previous point: can we structure our own code so that it takes the fast path whenever possible but still falls back to the slow path every now and then? What we need is a set of representative data for the benchmark; the resulting profile can then be used to tweak the implementation. More details are available in my two other blog posts, Profile Guided JavaScript Optimization and Determining Objects in a Set.

Garbage Handling. Producing a lot of objects often places a burden on the garbage collector. As an illustration, check out a short video from Jake Archibald describing the cost of using +new Date.
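The +new Date case is easy to sketch (my own illustration, not taken from the video): the unary plus coerces a freshly allocated Date object into a number and then throws the object away, whereas Date.now() yields the same timestamp without allocating anything.

```javascript
// Allocates a Date object per call only to discard it immediately --
// needless work for the garbage collector in a hot loop.
function timestampWithGarbage() {
    return +new Date;
}

// Returns the same millisecond timestamp with no allocation at all.
function timestampNoGarbage() {
    return Date.now();
}

console.log(typeof timestampNoGarbage());  // number
```

Both functions return the milliseconds since the epoch; only the second one is friendly to the garbage collector.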

There is no silver bullet for any performance problem. However, as I mentioned in my Performance Calendar article JavaScript Performance Analysis: Keeping the Big Picture, it is important to keep asking: are we always seeing the big picture, or are we trapped into optimizing toward a local extreme?

Now, where’s that TOPGUN application form again…



One of the interesting features of Esprima is its ability to retrieve every comment inside a JavaScript source. Even better, each comment can be linked to the related syntax node. This is very helpful (as in the case of JSDoc) since additional information about the program can be provided via comments serving as a form of annotation.

Let’s take a look at the simple code below:

// Give the ultimate answer
function answer() { return 42 }

If we let Esprima consume the above code (as a string) like this (mind the attachComment option):

var tree = esprima.parse(code, { attachComment: true });

then the object tree will contain an array called leadingComments as part of the function declaration. A portion of that tree is visualized in the following diagram. Note that the leadingComments array has only one element, because there is only one single-line comment before the function.
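The relevant portion of the tree can also be written out as a plain object. The one below is hand-constructed to mirror the shape Esprima produces for the snippet above (trimmed for illustration; the exact set of properties varies with the Esprima version):

```javascript
// A hand-written object mirroring the interesting part of the tree:
// the function declaration carries its preceding comment in the
// leadingComments array. Note that a 'Line' comment's value excludes
// the leading // but keeps the space after it.
var functionNode = {
    type: 'FunctionDeclaration',
    id: { type: 'Identifier', name: 'answer' },
    leadingComments: [
        { type: 'Line', value: ' Give the ultimate answer' }
    ]
};

console.log(functionNode.leadingComments.length);  // 1
```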


Many documentation tools rely on specially-formatted comments to derive additional information. For example, since JavaScript does not specify the type of every function parameter, that piece of information can be encoded in the annotation. This is very familiar to those who use JSDoc, Closure Compiler, or other similar tools. The following fragment demonstrates the usage:

/**
 * Compute the square root.
 * @param {number} x number.
 * @return {number} The square root of x.
 */
function SquareRoot(x) {
    return Math.exp(Math.log(x)/2);
}

Here is another example:

/**
 * Adds three numbers.
 * @param {number} x First number.
 * @param {number} y Second number.
 */
function Add(x, y, z) {
    return x + y + z;
}

Unfortunately, the annotation in the above example is missing the tag for the third parameter. This happens often, in some cases due to refactoring that leaves the comment out of sync with the code. Since such a mistake is not caught during unit testing, it may go undetected for a while.

It is however possible to use Esprima to extract the comment and then process it with a JSDoc annotation parser such as Doctrine. This way, if a function parameter is not documented, we can warn the developer as early as possible. A proof of concept of such a tool lives in the repository bitbucket.org/ariya/missing-doc/. The gist is in its analyze function (missing-doc.js):

    data = doctrine.parse(comment.value, {unwrap: true});
    params = [];
    data.tags.forEach(function (tag) {
        if (tag.title === 'param') {
            params.push(tag.name);
        }
    });

Once we have the comment associated with a syntax node (one that represents a function declaration), it is parsed by Doctrine and we iterate through all the discovered tags. The next step is quite logical:

    missing = [];
    node.params.forEach(function (param) {
        if (params.indexOf(param.name) < 0) {
            missing.push(param.name);
        }
    });

Here we compare what is documented in the annotation tags (x and y) with the actual function parameters (x, y, and z). Anything not listed in the annotation goes into our missing array for subsequent reporting.
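The two steps can be combined into a self-contained sketch that needs neither Esprima nor Doctrine: a regular expression stands in for the annotation parser, and a plain array of names stands in for the parameter list that a real tool would read from the syntax node. The function names here are hypothetical, not taken from the missing-doc tool.

```javascript
// Extract documented parameter names from a JSDoc-style comment body.
// The regex is a stand-in for a real annotation parser like Doctrine.
function documentedParams(comment) {
    var params = [], re = /@param\s+\{[^}]*\}\s+(\w+)/g, m;
    while ((m = re.exec(comment)) !== null) {
        params.push(m[1]);
    }
    return params;
}

// Report every actual parameter that the annotation fails to mention.
// A real tool would take the names from the Esprima node's params.
function missingParams(comment, actualParams) {
    var documented = documentedParams(comment);
    return actualParams.filter(function (name) {
        return documented.indexOf(name) < 0;
    });
}

var comment = [
    ' * Adds three numbers.',
    ' * @param {number} x First number.',
    ' * @param {number} y Second number.'
].join('\n');

console.log(missingParams(comment, ['x', 'y', 'z']));  // [ 'z' ]
```

A regex is of course far more fragile than Doctrine (it would trip over optional-parameter syntax like [x], for instance), which is exactly why the real tool delegates the parsing.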

Running the tool with Node.js like this:

npm install
node missing-doc.js test/sample2.js

gives the following result:

In function Add (Line 8):
 Parameter z is not documented.

Easy enough! All in all, this microtool weighs less than 70 lines of JavaScript.

How do you plan to use Esprima’s comment attachment feature?


The most recent Java 8 release comes with lots of new features; one of them is a brand-new JavaScript engine to replace the aging Rhino. This new engine, called Nashorn (German for rhinoceros), is high-performance and specification compliant. It is definitely useful whenever you want to mix and match your Java and JavaScript code.

To check Nashorn's performance, I use it to run Esprima and let it parse a large code base (unminified jQuery, 268 KB). This quick, rather unscientific test exercises two aspects of a JavaScript run-time environment: continuous memory allocation (for the syntax nodes) and non-linear code execution (recursive-descent parsing).

If you want to follow along, check the repository bitbucket.org/ariya/nashorn-speedtest. Assuming you have JDK 8 installed properly, run the following:

javac -cp rhino.jar speedtest.java
java -cp .:rhino.jar speedtest

This test app will execute the Esprima parser and tokenizer on the content of the test file. Rhino gets the first chance, and Nashorn follows right after (each engine gets 30 runs). In the beginning, Rhino's first run takes 2607 ms; it slowly speeds up until the parsing finally completes in just 824 ms. Nashorn's timings have a different characteristic. When it is cold, Nashorn initially takes 5328 ms to carry out the operation, but it quickly picks up the pace and, before you know it, moves full steam ahead, reaching 208 ms per run.
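That warm-up curve can be observed with a very simple harness. Here is a sketch (my own, not the code from the speedtest repository; the workload below is a cheap placeholder, whereas the real test parses jQuery with Esprima on each iteration):

```javascript
// Time repeated runs of a workload to observe JIT warm-up:
// the first iterations run interpreted or unoptimized, later ones
// benefit from the engine's optimizing compiler.
function measure(workload, runs) {
    var timings = [];
    for (var i = 0; i < runs; ++i) {
        var start = Date.now();
        workload();
        timings.push(Date.now() - start);
    }
    return timings;
}

var timings = measure(function () {
    var s = 0;
    for (var j = 0; j < 1e6; ++j) s += j;
    return s;
}, 30);

// Typically the first entry is the largest and the tail settles down.
console.log(timings[0], timings[timings.length - 1]);
```

Since it sticks to Date.now() and plain loops, the same script runs unchanged under Rhino via jrunscript, under Nashorn via jjs, or in a V8 shell, which is what makes the cross-engine comparison easy.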

Behind the scenes, Nashorn compiles your JavaScript code into Java bytecode and runs it on the JVM itself. On top of that, Nashorn takes advantage of the invokedynamic instruction (from the Da Vinci Machine Project, part of Java 7) to permit "efficient and flexible execution" in a dynamic environment such as JavaScript. Other JVM languages, notably JRuby, also benefit from this invokedynamic feature.


What about Nashorn vs V8? It is not terribly fair to compare the two: V8 is designed specifically for JavaScript, while Nashorn leverages the battle-hardened, multi-language JVM. But for the fun of it, the Git repo also includes a JavaScript implementation of the test app which can be executed with the V8 shell. On the same machine, V8 completes the task in about 110 ms. While Nashorn is not as mature as V8 yet, it is quite an achievement that it is only twice as slow as V8 for this specific test.

As a high-performance JavaScript engine on the JVM, Nashorn has many possible applications. It serves as a nice platform to play and experiment with JavaFX. Yet it is powerful enough to be part of a web service stack (Vert.x, Project Avatar). On a less ambitious level, having another fast implementation of JavaScript is always good, so that there is an alternative for running various JavaScript tools in environments where Node.js is not available or not supported. As an illustration, check my previous blog post on a simple Ant task to validate JavaScript code.

Next time you have the urge to herd some rhinos, consider Nashorn!


Eclipse Orion released its latest version, 5, right before the most recent EclipseCon. This new version packs several exciting features, everything from stylistic changes in the appearance to a streamlined cloud deployment. My favorite is the easy-to-use Node.js bundle.

With version 5, it is supertrivial to try out Orion (assuming you have Node.js and npm):

npm install orion

Then you can launch it by running:

node node_modules/orion/server.js /path/to/your/project

and then open your favorite web browser and point it to localhost:8081. Now you will be able to edit existing files and create new files and folders. This works even if you don’t have any Internet connection.

Alternatively, open the configuration file node_modules/orion/orion.conf and change the workspace variable to the location of the JavaScript project you want to edit (if you are crazy, just set it to your home directory). Then start the Orion server by running npm start orion.

Let’s take a look at a quick Express example.


The above screenshot also demonstrates Orion’s new ability to provide autocompletion (or, in the Eclipse world, Content Assist) for Express-based JavaScript code. It’s not limited to Express; there is also support for other frameworks such as Postgres, MySQL, MongoDB, and a few more.

Once this simple application is written, we can launch it without leaving Orion, thanks to its shell feature. Switch to the Shell tab, run npm install followed by node hello.js, and our simple Express app is up and running.


Orion now supports ESLint to validate your JavaScript code. Various rules for ESLint can be set visually.


Speaking of customization, you can obviously choose from a number of different themes or even create your own:


It is also possible to try Orion via its online demos. If you would like to check the capabilities of Orion’s editing component only, there is the pure editor example. For testing its complete feature set, it is recommended to go to OrionHub, create an account, and enjoy the test drive.

Whether you are online or offline, web-based tools are just fun!



This week it’s all about the most recent Fluent Conference 2014 in San Francisco. It’s the third Fluent and boy, it’s getting more phenomenal than ever.

For my part, I presented a talk on the topic of Design Strategies for JavaScript API (slide deck, 2.2 MB PDF download). If you are a regular reader of this blog, you may be familiar with the topic. A few past blog posts which discuss the subject in more detail are:

Obviously there were tons of very interesting presentations. To get a taste, you can watch the keynote videos (check this YouTube playlist). From Brendan’s session, bleeding-edge JavaScript features such as SIMD support and an asm.js-powered Unreal Engine 4 demo will make you very excited.

The full video compilation will be sold once it is out in a few weeks. You may also wait for those speakers who will upload their own videos as they become available.

Kudos to the organizers and everyone involved for such a memorable event. See you next year!

Update: You can watch my presentation on YouTube (28 minutes).


While static polymorphism is often discussed in the context of C++ (in particular, related to CRTP and method overloading), we can generalize the concept to help us choose the most suitable function and property names for a public interface. This also applies to JavaScript APIs, for which some examples and illustrations are given in this blog post.


In his article Designing Qt-Style C++ API from 2005, Matthias Ettrich argued that the major benefit of static polymorphism is making it easier to memorize APIs and programming patterns. This cannot be emphasized enough. Our memory has a limited capacity; related functionalities are understood better when they demonstrate enough similarity. In other words, static polymorphism is the answer to the (ultimate) search for consistency.

Take a look at the screenshot. It shows a user interface created by a hypothetical framework: some radio buttons, a check box, and a push button. Now imagine if setting the text which represents the label for each individual component involves a code fragment that accesses the framework like this:

X1.value = 'Rare';
X2.value = 'Medium';
X3.value = 'Well done';
Y.option = 'Fries';
Z.caption = 'Order';

Because the code is aligned that way, it is easy to see why this is confusing. A radio button relies on the value property, a check box needs a property called option, and the text for the push button comes from caption. This demonstrates inconsistency. Once the problem is spotted, the fix is easy:

X1.value = 'Rare';
X2.value = 'Medium';
X3.value = 'Well done';
Y.value = 'Fries';
Z.value = 'Order';

This of course does not apply only to these UI elements. For example, a slider and a progress bar can have similar names for some of their properties, since each needs a set of values to define the range (maximum and minimum) and the current value. An example of inconsistency is when one calls it maximum and the other prefers maxValue. Check your favorite UI framework’s API documentation for a progress bar and a slider and see if those properties demonstrate the principle of static polymorphism.
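A quick sketch of this point (hypothetical framework code, not from any real UI library): when the property names agree, the two widgets can literally share the range-handling logic, and anything you learn about one transfers to the other.

```javascript
// Both widgets expose the same trio of properties -- minimum,
// maximum, value -- so the naming is statically polymorphic.
function BoundedValue(minimum, maximum) {
    this.minimum = minimum;
    this.maximum = maximum;
    this.value = minimum;
}

function Slider() { BoundedValue.call(this, 0, 100); }
function ProgressBar() { BoundedValue.call(this, 0, 100); }

// Because the names are consistent, one helper works for both widgets.
function setFraction(widget, fraction) {
    widget.value = widget.minimum +
        fraction * (widget.maximum - widget.minimum);
}

var s = new Slider(), p = new ProgressBar();
setFraction(s, 0.5);
setFraction(p, 0.25);
console.log(s.value, p.value);  // 50 25
```

Had one widget used maxValue instead of maximum, setFraction would silently compute NaN for it; consistent names are what make such generic helpers possible at all.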

Of course, this also applies to function names. Imagine if moving a point involves calling translate whereas moving a rectangle means calling translateBy. Such a case could simply be an honest mistake, yet it indicates a lapse that managed to escape any possible code review.

Static polymorphism does not stop at the practice of choosing function names. Imagine that we have a way to define a rectangular shape by its corner (top left position) and its dimension (width and height).

corner = new Point(10, 10);
dim = new Size(70, 50);
R = new Rect(corner, dim);

Since it is tedious to always create two objects for the constructor, we can offer a shortcut constructor that takes four values. In this variant, the parameters of the constructor represent the x1, y1, x2, y2 coordinates of the rectangle.

Q = new Rect(10, 10, 80, 60);

It is likely that the second constructor was designed in isolation. The set of numbers in the above line of code differs substantially from that of the previous fragment. In fact, anyone converting the Point and Size version to the shortcut version needs to use different values. The spirit of the first constructor is a pair of (x, y) and (width, height); the second constructor, however, expects pairs of (x1, y1) and (x2, y2). Worse, if you are familiar with the first constructor and suddenly encounter code that uses the second form, you might not be alarmed that the meaning of the last two numbers is not what you have in mind.
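A consistent design keeps the same semantics in both forms. Here is a sketch (the Point, Size, and Rect classes are hypothetical, written only to illustrate the point): the four-number shortcut means (x, y, width, height), matching the spirit of the (Point, Size) form, so the same numbers describe the same rectangle either way.

```javascript
// Hypothetical geometry classes with a statically polymorphic
// shortcut constructor: four numbers mean (x, y, width, height),
// never (x1, y1, x2, y2).
function Point(x, y) { this.x = x; this.y = y; }
function Size(width, height) { this.width = width; this.height = height; }

function Rect(a, b, c, d) {
    if (a instanceof Point && b instanceof Size) {
        this.x = a.x; this.y = a.y;
        this.width = b.width; this.height = b.height;
    } else {
        this.x = a; this.y = b;
        this.width = c; this.height = d;
    }
}

// Both forms now describe the same rectangle with the same numbers.
var R = new Rect(new Point(10, 10), new Size(70, 50));
var Q = new Rect(10, 10, 70, 50);
console.log(R.width === Q.width && R.height === Q.height);  // true
```

With this design, converting between the long and the shortcut form is a purely mechanical edit, and a reader familiar with one form cannot misread the other.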

It does not matter if you are a library author or an application developer. Next time you want to introduce a new property/function/object, scour your existing code and look for patterns that have been used again and again. Those are good data points and should not be ignored.

Now, aren’t you hungry after looking at the first example? BRB.