
With the popularity of test-driven development (TDD), running a project which does not include an automated test workflow is often frowned upon. The recent trend pushes this even further: if code coverage is not measured and monitored during testing, the confidence level will not be very high. For JavaScript projects, how do we keep track of the coverage and prevent any regressions?

If the project is using Istanbul, the comprehensive code coverage tool for JavaScript, tracking is almost trivial. This holds even for a simple project. As an illustration, let’s have a quick look at esrefactor, a microlibrary which I created for assisting semi-automatic JavaScript refactoring, and its test suite. Running the unit tests (via Node.js) is as simple as node test/run.js (the tests are so simple that no external test framework is used). Should we want to see the code coverage (and its report), istanbul is our best friend:

istanbul cover test/run.js

Once the coverage metrics are obtained, we can compare them against predefined thresholds that serve as baseline coverage limits. For example, the following line means that the statement coverage should be 90% or more. If it is less than the specified limit, Istanbul will complain aloud.

istanbul check-coverage --statement 90

While this is already very nice, a percentage threshold is often not what you want. Consider another scenario: we know the unit tests cover 45 statements and miss just 5. How do we ensure that no future modification to this project degrades that coverage to 6 or more uncovered statements? This is where Istanbul’s negative threshold feature becomes really handy. All we need to do is tweak the coverage check as follows:

istanbul check-coverage --statement -5
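To make the two threshold flavors concrete, here is a tiny illustration. This is a hypothetical helper, not Istanbul’s actual implementation: a positive value is a minimum percentage, while a negative value caps the absolute number of uncovered units.

```javascript
// Hypothetical illustration of threshold semantics: a positive threshold
// is a minimum coverage percentage, a negative threshold is a maximum
// count of uncovered units (statements, branches, functions, ...).
function passesThreshold(covered, total, threshold) {
  if (threshold >= 0) {
    return (covered / total) * 100 >= threshold;
  }
  return (total - covered) <= -threshold;
}

// 45 of 50 statements covered; at most 5 may be uncovered: passes.
console.log(passesThreshold(45, 50, -5)); // true
// One more uncovered statement breaks the limit.
console.log(passesThreshold(44, 50, -5)); // false
// The same coverage as a percentage check: 90% required, exactly 90% achieved.
console.log(passesThreshold(45, 50, 90)); // true
```

The negative form is what makes the check a true regression guard: it tracks the absolute number of coverage holes rather than a ratio that can drift as the codebase grows.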

Next time someone checks in a new piece of code which doesn’t come with a set of suitable unit tests, the above Istanbul invocation will warn that poor fellow that a coverage regression has occurred. Even better, enlarge the safety net by also including thresholds for function coverage and branch coverage (read also my past blog post on the significance of branch coverage):

istanbul check-coverage --statement -5 --branch -3 --function 100

Obligatory screenshot showing the situation where the threshold wasn’t met:

[Screenshot: istanbul_limits — Istanbul reporting that the coverage thresholds are not satisfied]

As described in my multiple-layer defense approach, running the coverage check manually is not the most robust protection. We can bump up the coolness factor by using Git’s powerful pre-commit hook feature. Now, don’t even try to sneak in untested code!
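As a sketch, such a hook could simply rerun the coverage analysis and the threshold check; the thresholds below are assumptions borrowed from the earlier examples, and the file must be saved as .git/hooks/pre-commit and made executable.

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-commit sketch: rerun the tests under
# Istanbul and enforce the coverage thresholds. A non-zero exit status
# from this script aborts the commit. Assumes istanbul is installed
# locally via npm.
node node_modules/istanbul/lib/cli.js cover test/run.js || exit 1
node node_modules/istanbul/lib/cli.js check-coverage --statement -5 --branch -3 ||
  { echo 'Coverage regression detected, commit aborted.' >&2; exit 1; }
```

Keep in mind that local hooks can be bypassed with git commit --no-verify, so a server-side or CI check remains the ultimate safety net.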

For many projects which rely on Node.js and npm, the coverage check can be injected into the test ritual as well. First, add the istanbul package to the devDependencies section of package.json. To complete the typical npm install && npm test workflow, we need to involve the coverage threshold check in the last step. The simplest way is to take advantage of the scripts section, illustrated here:

"scripts": {
"test": "node test/run.js && npm run-script coverage",
"coverage": "npm run-script analyze-coverage && npm run-script check-coverage",
"analyze-coverage": "node node_modules/istanbul/lib/cli.js cover test/run.js",
"check-coverage": "node node_modules/istanbul/lib/cli.js check-coverage --branch -2"
}

For a more serious project, using additional tools such as Grunt.js is highly recommended, particularly if a testing framework and/or test runner is also involved. There are already packages out there which integrate Istanbul with e.g. Mocha and QUnit. Thanks to this composability, you can also find helper packages for test runners like Karma (née Testacular) and BusterJS, or even for CoffeeScript (via ibrik). There is no excuse to skip coverage analysis anymore!
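For example, one common pattern for combining Istanbul with Mocha (assuming both are installed in the project’s node_modules) is to cover the Mocha runner directly. Note the underscore-prefixed _mocha: the plain mocha wrapper spawns a child process, which would escape Istanbul’s instrumentation.

```shell
# Cover the real Mocha runner (_mocha), then enforce a threshold.
# Assumes istanbul and mocha are installed as local npm dependencies.
node node_modules/istanbul/lib/cli.js cover node_modules/mocha/bin/_mocha -- --reporter spec
node node_modules/istanbul/lib/cli.js check-coverage --statement 90
```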

I’d like to close this blog post with a brilliant statement from @davglass:

“If you think JSLint hurts your feelings, wait until you use Istanbul”

Protect yourself against code coverage regressions, and you’ll sleep better every night.

  • http://www.facebook.com/profile.php?id=556370857 Bryan Donovan

    I set up a couple of boilerplates for Node.js development with TDD, using Mocha and Istanbul. I found (at least at the time I wrote the boilerplates) that the mocha-istanbul package didn’t really work: it generated different results depending on the order in which the test files were run. Anyway, it’s easy to get around that by using Mocha programmatically, such as in a test/run.js file. Here are the boilerplates on GitHub:

    https://github.com/BryanDonovan/nodejs-tdd-boilerplate-minimal

    https://github.com/BryanDonovan/nodejs-tdd-boilerplate

    • http://ariya.ofilabs.com/ Ariya Hidayat

      Looks good! Thanks for sharing.