
For modern web application development, having dozens of unit tests is not enough anymore. The actual code coverage of those tests reveals whether the application is thoroughly stressed or not. For tests written with the popular Jasmine test library, an easy way to produce a coverage report is via Istanbul and Karma.

For this example, let’s assume that we have a simple library sqrt.js which contains an alternative implementation of Math.sqrt. Note also how it will throw an exception instead of returning NaN for an invalid input.

var My = {
  sqrt: function (x) {
    if (x < 0) throw new Error("sqrt can't work on negative number");
    // exp(log(x) / 2) is mathematically equivalent to the square root of x
    return Math.exp(Math.log(x) / 2);
  }
};

Using Jasmine placed under test/lib/jasmine-1.3.1, we can craft a test runner that includes the following spec:

describe("sqrt", function() {
  it("should compute the square root of 4 as 2", function() {
    expect(My.sqrt(4)).toEqual(2);
  });
});

Opening the spec runner in a web browser will give the expected outcome:

[Screenshot: Jasmine spec runner showing the passing spec]

So far so good. Now let's see how the code coverage of our test setup can be measured.

The first order of business is to install Karma. If you are not familiar with Karma, it is basically a test runner which can launch and connect to a specific set of web browsers, run your tests, and then gather the report. Using Node.js, all we need to do is:

npm install karma karma-coverage

Before launching Karma, we need to specify its configuration. It could be as simple as the following my.conf.js (most entries are self-explanatory). Note that the tests are executed using PhantomJS for simplicity; it is, however, quite trivial to add other web browsers such as Chrome and Firefox (see the snippet right after the configuration).

module.exports = function(config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine'],
    // the source files (project root) and the test specs
    files: [
      '*.js',
      'test/spec/*.js'
    ],
    browsers: ['PhantomJS'],
    singleRun: true,
    reporters: ['progress', 'coverage'],
    // instrument only the source files, not the specs
    preprocessors: { '*.js': ['coverage'] }
  });
};
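
As an illustration, running the same tests on real browsers is just a matter of extending the browsers entry. This is a sketch assuming the corresponding launcher plugins (such as karma-chrome-launcher and karma-firefox-launcher) are available:

browsers: ['PhantomJS', 'Chrome', 'Firefox'],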

Running the tests, as well as performing code coverage at the same time, can be triggered via:

node_modules/.bin/karma start my.conf.js

which will produce output like:

INFO [karma]: Karma v0.10.2 server started at http://localhost:9876/
INFO [launcher]: Starting browser PhantomJS
INFO [PhantomJS 1.9.2 (Linux)]: Connected on socket N9nDnhJ0Np92NTSPGx-X
PhantomJS 1.9.2 (Linux): Executed 1 of 1 SUCCESS (0.029 secs / 0.003 secs)

As expected (from the previous manual invocation of the spec runner), the test passes just fine. However, the most interesting piece here is the code coverage report; it is stored (in the default location) under the subdirectory coverage. Open the report in your favorite browser and there you'll find the coverage analysis.

[Screenshot: Istanbul coverage report showing an uncovered branch in sqrt.js]

Behind the scenes, Karma is using Istanbul, a comprehensive JavaScript code coverage tool (read also my previous blog post on JavaScript Code Coverage with Istanbul). Istanbul parses the source file, in this example sqrt.js, using Esprima and then adds some extra instrumentation which will be used to gather the execution statistics. The above report is one of the possible outputs; Istanbul can also generate an LCOV report which is suitable for many continuous integration systems (Jenkins, TeamCity, etc). An extensive analysis of the coverage data can also prevent any future coverage regression; check out my other post Hard Thresholds on JavaScript Code Coverage.
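
As an aside, switching to an LCOV report is a matter of configuring the coverage reporter in my.conf.js. This is a sketch using the type and dir options of karma-coverage (the lcov type writes an lcov.info file in addition to the HTML report):

coverageReporter: {
  type: 'lcov',
  dir: 'coverage/'
}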

One important aspect of code coverage is branch coverage. If you pay careful attention, our test above still does not exercise the situation where the input to My.sqrt is negative. There is a big "I" marker on the third line of the code; this is Istanbul telling us that the if branch is not taken at all (for an untaken else branch, it would be an "E" marker). Once this missing branch is noticed, improving the situation is as easy as adding one more test to the spec:

it("should throw an exception if given a negative number", function() {
  expect(function(){ My.sqrt(-1); }).
    toThrow(new Error("sqrt can't work on negative number"));
});

Once the tests are executed again, the code coverage report looks way better and everyone is happy.

[Screenshot: Istanbul coverage report showing full coverage of sqrt.js]

If you have some difficulties following the above step-by-step instructions, take a look at a Git repository I have prepared: github.com/ariya/coverage-jasmine-istanbul-karma. Feel free to play with it and customize it to suit your workflow!


As part of my little tour this season, last Monday I was in Manhattan to participate in Edge Conference 2013, New York City edition. Organized by FT Labs and Google, Edge (as always) featured a fascinating set of panels and high-quality discussions.

The videos of every panel (total duration: 7 hours) are already available.

On a side note, this was also my second visit to New York City; the first was for EmpireJS last year. Manhattan was still as charming as at my first encounter. I really need to be careful; at this rate I could seriously start to fall in love with NYC.

If you missed this conference, watch the videos and keep an eye on its 2014 edition!


Merging a branch is a pretty common operation when using Git. In some circumstances, Git by default will try to merge a branch in fast-forward mode. How is this different from a merge without fast-forwarding?

Let us assume that I created a topic branch named speedup from the current master. After working on this branch for a while (three commits, those white circles), I finally decided that I was done and pushed it to my own remote. Meanwhile, nothing else happened in the master branch; it remained in the same state as right before I branched off. The situation is depicted in the following diagram.

[Diagram: topic branch speedup with three commits on top of master]

Once the project maintainer gets notified that my branch is ready to be integrated, she might use the usual steps of git fetch followed by git merge, and thus my work lands in the source tree. Because master has not changed since the commit (gray circle) which serves as the base for the said topic branch, Git will perform the merge using fast-forward, and the whole series of commits will be linear.
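
As an illustration, her steps might look like the following sketch (the remote name ariya here is an assumption; the branch name speedup comes from the scenario above):

git fetch ariya
git merge ariya/speedup

The history will then look like the diagram below (left side).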

[Diagram: fast-forward merge (left) vs non-fast-forward merge with an extra merge commit (right)]

Another variant of the merge is to use the --no-ff option (it stands for no fast-forward). In this case, the history looks slightly different (right side); there is an additional commit (dotted circle) emphasizing the merge. This commit even has the right message informing us about the merged branch.
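
Continuing the same sketch, this variant of the merge would be carried out as:

git merge --no-ff ariya/speedup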

The default behavior of Git is to use fast-forwarding whenever possible. This can be changed; the no-fast-forward mode can easily be set as the default merge behavior using the proper configuration.
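
For example, here is one way to do that, a sketch using the merge.ff configuration variable (available in recent versions of Git):

git config --global merge.ff false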

Perhaps the most typical encounter with a non-fast-forward merge is via the green Merge button on GitHub, as part of its pull request workflow. When someone creates a pull request, there is a choice to merge the change (whenever GitHub thinks it is possible to do so) by just pressing this button on the project page.

[Screenshot: GitHub's pull request page with the green Merge button]

Unfortunately, at least as of now, GitHub's web interface will perform the merge as if you had specified --no-ff. In other words, even if there is a possibility of fast-forwarding, GitHub will not do it. One possible explanation is that this makes the pull request identifiable. For example, the few recent commits of a project (I picked ESLint as an example, nothing particular about the project) can look like this:

[Screenshot: ESLint commit history graph showing non-linear merge commits]

Looking at the graph, it is clear that those few patches could have been merged using fast-forward mode. Alas, GitHub's style of merge turns the commit history from a linear progression into something resembling a railroad diagram.

In short, a non-fast-forward merge keeps the notion of explicit branches. It may complicate the commit history with its non-linear outcome, but in return it preserves the source of the branches (pull requests, when using GitHub). On the other hand, a fast-forward merge keeps the changesets in a linear history, making it easier to use other tools (log, blame, bisect). The source of each branch will not be obvious, although this is not a big deal if the project mandates a strict cross-reference between the commit message and its issue tracker.

Which one is your merging preference, with or without fast-forwarding?


Walking the syntax tree of JavaScript code is often the first step towards building a specialized static analyzer. In some cases, however, when the analysis involves the variables and functions within the code, an additional scope analysis is necessary. This permits a more thorough examination of those variables and functions, including checking whether some identifiers accidentally leak into the global scope.

Of course, such a simple leak detector is not new. In my previous blog post Polluting and Unused JavaScript Variables, I covered two simple JavaScript utilities for catching this sloppy practice. In addition, I also reviewed the concept of identifier highlighting and rename refactoring in an editor. As a bonus of this highlighting feature, it is easy to spot the missing declaration which leads to the global leak (unless we're in strict mode), as shown in the following screenshot of the online highlighting demo.

[Screenshot: identifier highlighting demo marking the misspelled variable widht]

In the above code, widht is where the cursor is (hence the yellow highlight). Due to the typo, it is not a match for the local variable declared as width. The problem is caught at run time if the code is running in strict mode. However, it is obviously fantastic to be notified of the mistake ahead of time. This is where a static analysis of the scope of every variable and function is tremendously useful.

Fortunately, these days you can use a microlibrary called Escope (GitHub: Constellation/escope) which can analyze the scope of the entire code. This adds another useful library to the existing family of Esprima (for parsing), Estraverse (for syntax traversal), and Escodegen (for code regeneration). This arsenal of tools can be quite deadly.

The detailed operation and usage of Escope is beyond the scope (pun intended) of this blog post. Instead, let me just show you one built-in feature of the library: implicit declarations at the global scope. In other words, this is a collection of all variables which leak unintentionally, as in the previous highlighting example. It is as easy as this function:

function find_leak(code) {
  var leaks, syntax, globalScope;

  leaks = [];
  syntax = esprima.parse(code, { loc: true });
  globalScope = escope.analyze(syntax).scopes[0];
  globalScope.implicit.variables.forEach(function (v) {
    var id = v.identifiers[0];
    leaks.push({
      name: id.name,
      line: id.loc.start.line
    });
  });

  return leaks;
}

First we need to parse the code and store its abstract syntax tree in syntax. Note that location tracking is enabled because we want to locate the line number of every leaking variable. After that, the scope analysis is invoked and we grab the first scope, the global one. Now it is a matter of iterating over the variables in its implicit declarations and collecting the necessary information, i.e. the name and the location. This is the return value of the function and you can easily process it.
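
As a quick illustration, here is a sketch of how the function can be exercised, assuming esprima and escope are already loaded (e.g. via require) and reusing the misspelled widht from the earlier example:

// the assignment to widht leaks to the global scope
var code = 'function compute(width) { widht = width * 2; }';
console.log(find_leak(code)); // [ { name: 'widht', line: 1 } ]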

A real-world analysis will involve more processing than just a simple collection of global leaks (you can even visualize the scopes). Hopefully, this simple example will spark your interest in leveraging the scope information of any piece of JavaScript code.



Summer is almost over, fall is around the corner. I’ll hit the road again, this time I plan to be in New York, Los Altos, and San Jose.

In just a few days (Sep 23), there is Edge NYC 2013, where I join the Rendering Performance panel. Edge is a quite popular, high-quality web conference (in fact, this one is sold out already), and the videos from the previous installments have always been my favorites. Thus, I am quite honored to be able to participate as a panelist. If you have already registered, you can start lodging your questions for the panels.

A big fan of community events? Silicon Valley Code Camp is a perfect fit for you. Since this code camp is held over a weekend (Oct 5-6), it is a great opportunity to learn something new even if you are still busy with work. Plus, the venue itself, Foothill College, is quite a nice place. This time, my session will be about The Future of JavaScript Language Tooling.

Update: I will also be at the HTML5 Developer Conference (Oct 22-23) in San Francisco. This time, the topic will be JavaScript Insights and I'll be joined by a new partner-in-crime, Ann Robson.

Last but not least, another favorite event of mine: YUIConf 2013 (Nov 6-7). This event is still being prepared and it is likely too early to finalize the materials. However, I will most likely speak on the topics of Next-Generation JavaScript Language Tooling and JavaScript API Design Principles.

This summer's conferences were simply marvelous and I plan to keep the experience going!


The second-generation Nexus 7, revealed a few weeks ago, is a good refresh of this popular Android tablet. Besides the much improved display density (going to 323 ppi from 216 ppi), this All-New Nexus 7 also has a different SoC. If this tablet is used mainly for browsing the web, how does it perform compared to its older sibling?

Let us take a look at the hardware differences which may contribute to the performance. The memory has been bumped from 1 GB to 2 GB; this gives the applications (notably the web browser) much more room to breathe. The SoC is still a quad-core system: the Nexus 7 2012 uses a 1.2 GHz Nvidia Tegra 3 while the 2013 edition is based on a 1.51 GHz Qualcomm Snapdragon S4 Pro (APQ8064). The latter is a little confusing (probably just a branding issue) since it is more like an underclocked Snapdragon 600, with Krait 300 CPU cores.

Comparing these two SoCs, the battle can probably be viewed as a match between an implementation of the ARM Cortex-A9 MPCore (Tegra 3) and Qualcomm's own Krait, a core in the same class as the ARM Cortex-A15 (APQ8064). It would also be interesting to see how the APQ8064 competes with some new Tegra 4-based tablets.

Now it is time to see some colorful bar charts. Note that every test is carried out on the respective device running Jelly Bean (Android 4.3) with Chrome 28.

The first test is DOM performance. A fast implementation of DOM modification and access significantly impacts many web pages which sprinkle in some interactivity. Using the collection of DOM core tests from Dromaeo, here is the result (longer is better). The new Nexus 7 shows approximately a 20% improvement compared to the older generation.

[Chart: Dromaeo DOM core scores, Nexus 7 2013 vs Nexus 7 2012]

There is a similar consistency if we check pure JavaScript performance via the Octane benchmark (longer is better). The margin is not as big, most likely because the tests do not involve as much memory access as the previous DOM analysis.

[Chart: Octane scores, Nexus 7 2013 vs Nexus 7 2012]

With another benchmark, Mozilla's Kraken, the outcome looks pretty similar (shorter is better). Kraken itself aims to resemble future generations of web apps. In this category, the Snapdragon S4 Pro demonstrates a major win (more than 50%) over the poor Tegra 3 system.

[Chart: Kraken benchmark times, Nexus 7 2013 vs Nexus 7 2012]

While it is not covered here, there is also a GPU difference which can make an impact. The new Nexus 7 is equipped with an Adreno 320. According to the various graphics benchmarks done by AnandTech, this easily kicks Tegra 3's ULP GeForce to the curb. A faster, better GPU is always a good thing for web browsers, particularly for rendering-heavy web applications which can benefit a lot from GPU compositing.

From our previous quick check of the 2012 edition of the Nexus 7, its web performance is more or less comparable to that of the iPad 3. It is good to know that Google raises the bar again and pushes for more performant, affordable Android tablets for all of us!

Verdict: Using Nexus 7 mainly for web browsing and the cost is not a problem? Upgrade.

Note: Special thanks to Donald Carr for a short loan of his Nexus 7.

Relevant Reviews: