
It is a truth universally acknowledged, that a single function critical to the success of the application, must be in want of a unit test. A practical way to prevent untested code from slipping in is to ensure that the overall code coverage never regresses. Fortunately, for applications written in JavaScript, there are a few code coverage services which can help with the task.


Thanks to the variety of language tooling available these days, it is not hard to measure and track the code coverage of a JavaScript application. My go-to solution involves Istanbul as the coverage tool, combined with either Karma or Venus.js as the test runner. This setup works with various popular unit test libraries out there. If you are new to this, I recommend checking out my past blog posts on the subject.

And yet, the work does not stop there. Wouldn't it be fantastic if the code coverage report became another piece of feedback for a contributor? Is it possible to track every single pull request and check whether the changes associated with that pull request would regress the coverage? The answer is yes. The key is to utilize a hosted code coverage service. There are many out there, and in this post I will cover (pun intended) my current favorite, Codecov.io.

Thanks to its rich set of features, integrating Codecov.io into your open-source project is very easy. For a start, you do not need to create a dedicated account since you can just authenticate using GitHub. Furthermore, because Codecov.io has built-in support for GitHub (as well as other Git hosting services such as Bitbucket), choosing a project to be added to your dashboard is trivial.

Keep in mind that Codecov.io only displays the coverage information of your project. Your build process still needs to produce that coverage information. Also, it is assumed that you have a continuous integration system that runs the build process every time there is a new check-in or when someone opens a pull request from a feature branch. For many FOSS projects, Travis CI is the most common solution, although there are a few other hosted CI services out there.

To follow along, check out this simple repository that I have created: github.com/ariya/coverage-mocha-istanbul-karma. This repo contains a simple JavaScript project along with its equally simple test suite designed for Mocha. The tests will be executed by Karma.

To start using Codecov.io, first we need to produce the coverage information in Cobertura format. I have played with different coverage formats and discovered that Cobertura is the most suitable (your mileage may vary and things can change from time to time). If you use Istanbul directly, you can use its report command to generate the coverage information in the right format (refer to the documentation for more details). With our setup, I modified a section in the Karma configuration file, karma.conf.js, from:

coverageReporter: {
    dir : 'coverage/',
    reporters: [
        { type: 'html', subdir: 'html' },
        { type: 'lcov', subdir: 'lcov' },
    ]
}

to:

coverageReporter: {
    dir : 'coverage/',
    reporters: [
        { type: 'html', subdir: 'html' },
        { type: 'lcovonly', subdir: 'lcov' },
        { type: 'cobertura', subdir: 'cobertura' }
    ]
}

This ensures that Karma tells Istanbul to produce an additional coverage report, next to the default lcov, in the format that we want, Cobertura. To test this, simply execute npm test and after a while you will spot the file coverage/cobertura/cobertura-coverage.xml that contains the coverage information. This is what we need to send to Codecov.io. There are multiple ways to do that; the easiest is to use the codecov.io package. You can install this package by running:

npm install --save-dev codecov.io

In this example, the scripts section of package.json is modified to look like this:

"scripts": {
    "test": "grunt karma:test",
    "ci": "npm test && codecov < coverage/cobertura/cobertura-coverage.xml"
}

Thus, every time you invoke npm run ci on your Travis CI job, the tests will be executed and the coverage information will be sent to Codecov.io.
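
For reference, a minimal .travis.yml for such a job might look like this (a sketch only; adjust the Node.js version and commands to match your project):

language: node_js
node_js:
  - "0.12"
script:
  - npm run ci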


To set up the dashboard, log in to Codecov.io and add the repository as a new project. Codecov.io maintains a nice mapping of project URLs. For example, the coverage dashboard for the example repo github.com/ariya/coverage-mocha-istanbul-karma is codecov.io/github/ariya/coverage-mocha-istanbul-karma. The next time you kick off a build on the project, the dashboard will display the coverage information sent from the build process.

Once that works flawlessly, you will want to enable its pull request integration. Go to the project page and choose Integration and Setup, then Pull Request Comment. There you can determine the various ways Codecov.io will comment on every pull request. For a start, you may want to enable Header and Compare Diff.

In the example repo, I have created a pull request, github.com/ariya/coverage-mocha-istanbul-karma/pull/3, that demonstrates a coverage regression. In that pull request, there is a commit that aims to optimize the code, but that optimization does not include an additional unit test. This triggers the following response from Codecov.io, a piece of feedback that is rather obvious:

(Screenshot: Codecov.io comments on the pull request, pointing out the coverage drop.)

With a build process that produces the coverage information, combined with a service such as Codecov.io, it is easy to keep untested code away from your project!


In many tech conferences and other events, we see a trend where the speaker rarely introduces themselves, or when they do, the introduction is rather short (and sweet). Why does this happen? Is it a good trend or a bad one?


The argument against doing a self-introduction is pretty simple. Today, we live in a different age: information is always available at our fingertips. Before going to a talk, we can do a lot of research on the speaker. Right before the talk, there is always an opportunity to check their Twitter, LinkedIn, and other social media. Even better, we can do so with context, whether it relates to the current topics of the day or to other speakers that we already know.

A minor variant of this approach is a very quick introduction, ideally in just a few seconds. It is thus important to come up with an introduction that is relevant to the audience. Something like “My name is Joe Sixpack, I work for Acme Corp” is less than optimal as it does not give the audience any information as to why you are the best person to deliver the talk. It makes sense to switch to the style of “I’m Joe and I created Project Atlantis” if your talk is all about Project Atlantis. In the same spirit, it adds nothing if you ramble on for minutes, enumerating your various achievements and other open-source projects, if those are only remotely relevant to the presentation.

Of course, what would help is to establish a good online presence. Some people in the audience will look you up on Twitter (and perhaps start following you). Others will Google/Bing/DuckDuckGo your name and take a quick look at your personal homepage. A few will probably want to know what you have posted on Instagram. In all cases, it is very helpful for your audience if those sites give a faithful representation of who you are, what you like, and other information related to the subject.

Obviously, this is all a moot point if there is a moderator who introduces you. In that case, many presenters skip their self-introduction, since the introduction from the moderator is usually already flattering and you do not want to spoil that.

What about telling the audience about your employer? I believe flashing the company logo or mentioning it quickly in passing is sufficient. If your talk is fantastic, there will be a lot of follow-up discussions and this is usually the best moment to tell more in-depth stories about your company or your start-up.

It is common nowadays to be at a conference where the talk is only 20 minutes, give or take. Therefore, every minute spent introducing yourself is a minute taken away from good material for your audience.


At the most recent jQuerySF conference, Mike Sherov and I gave a joint talk on the topic of JavaScript Syntax Tree: Demystified. The highlight of the talk was Mike’s demo showing how to fix coding style violations automatically.

The trick is to use JSCS and its latest features. If you want to follow along, here is a step-by-step recipe.

First, you need to have JSCS installed. This is as easy as:

npm install -g jscs

Let’s pick an example project; for this illustration, I use my kinetic scrolling demo:

git clone https://github.com/ariya/kinetic.git
cd kinetic

Now you want to let JSCS analyze all the JavaScript files in the project and deduce the most suitable code style:

jscs --auto-configure .

Give it a few seconds and JSCS will present a list of code style presets, each with the number of violations computed from your JavaScript code. If you already have a preset in mind, you can choose it. An alternative is to pick the one with the least violations, as that indicates your code already gravitates towards that preset.

Once you choose a preset, JSCS will ask you a couple of self-explanatory questions. At the end of this step, the configuration file .jscsrc will be created for you. With the configuration in place, the real magic happens. You just need to invoke JSCS this way:

jscs -x .

and it will automatically reformat your JavaScript. Double-check by reviewing the changes and you will see that your code style now follows the specified preset.
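
As an illustration, a generated .jscsrc might look something like this (the exact contents depend on the preset you pick and your answers to the questions; the values below are just an example):

{
    "preset": "jquery",
    "excludeFiles": ["node_modules/**"]
}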

With JSCS, you can comfortably ensure code style consistency throughout your project!


With a complex application, it is often convenient to have a function that returns more than one value. There are many different ways to achieve this in C++, from using a structure to taking advantage of the latest C++11 tuple class template.

The obvious choice, returning an object, seems like overkill in many cases. First, you need to declare the structure. Quite often the structure needs to be available to the consumer, hence you have to expose it to the outside world. Constructing the instance is also another ceremonial activity nobody likes to carry out unnecessarily.
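
For comparison, here is a rough sketch of that struct-based approach (the type and field names are made up for illustration):

#include <string>

// The dedicated structure must be exposed to every consumer of the function.
struct PersonResult {
    std::string name;
    int age;
};

PersonResult findPerson() {
    PersonResult result;
    result.name = "Joe Sixpack";
    result.age = 42;
    return result;
}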

Fortunately, if the function is supposed to return only two values, std::pair comes to the rescue. Most likely, make_pair will be used to construct the pair. The two elements of the pair can be accessed using first and second, respectively. This is illustrated in the following example:

#include <iostream>
#include <string>
#include <utility>

std::pair<std::string, int> findPerson() {
    return std::make_pair("Joe Sixpack", 42);
}

int main(int, char**) {
    std::pair<std::string, int> person = findPerson();
    std::cout << "Name: " << person.first << std::endl;
    std::cout << "Age: " << person.second << std::endl;
    return 0;
}

What if you need more than just two values? Obviously, std::pair is not fit for the job. In this case, we can leverage boost::tuple from the Boost Tuple library. If you are already using std::pair, it is very easy to get familiar with boost::tuple. A tuple can be created using make_tuple, and each element is accessed using get<n>(), where n denotes the element index.

#include <iostream>
#include <string>
#include <boost/tuple/tuple.hpp>

boost::tuple<std::string, std::string, int> findPerson() {
    return boost::make_tuple("Joe", "Sixpack", 42);
}

int main(int, char**) {
    boost::tuple<std::string, std::string, int> person = findPerson();
    std::cout << "Name: " << person.get<0>() << " "
        << person.get<1>() << std::endl;
    std::cout << "Age: " << person.get<2>() << std::endl;
    return 0;
}

With the latest C++11, there is no need to rely on a third-party library anymore since std::tuple is readily available. With minor tweaks, the previous Boost example looks like this in C++11. Note also the use of auto, which saves us from unnecessary verbosity: the compiler knows the return type of findPerson and there is no need for a lengthy type declaration anymore.

#include <iostream>
#include <string>
#include <tuple>

std::tuple<std::string, std::string, int> findPerson() {
    return std::make_tuple("Joe", "Sixpack", 42);
}

int main(int, char**) {
    auto person = findPerson();
    std::cout << "Name: " << std::get<0>(person) << " "
        << std::get<1>(person) << std::endl;
    std::cout << "Age: " << std::get<2>(person) << std::endl;
    return 0;
}

While we are at it, we might as well mention std::tie, which is useful for easily unpacking a tuple (similar to ES6 destructuring). It is a convenient alternative to element access via get. The code fragment below demonstrates its usage, reusing findPerson from the previous example.

int main(int, char**) {
    std::string first_name, last_name;
    int age;
    std::tie(first_name, last_name, age) = findPerson();
    std::cout << "Name: " << first_name << " " << last_name << std::endl;
    return 0;
}

From your own experience, which of these techniques do you like and why do you favor it?


There are various hosted continuous integration services out there that you can use for your Node.js projects, from Travis CI to drone.io and many others. If you feel adventurous, or you are simply fascinated by a DIY solution (for whatever reason), it is quite easy to set up your own CI system using Docker and TeamCity.

As an easy-to-use continuous integration system, TeamCity offers two free options: a Professional Server license for up to 20 build configurations, or an Open Source license for your open-source projects. This is usually sufficient to get you started. Also, per the usual server-agent architecture, we will run the TeamCity server and agent in two separate containers. This is very similar to my previous blog post on TeamCity installation using Docker, with a minor tweak.

First, you need a machine for the server. This could be a physical machine, a virtual machine, or even a VPS. For a hassle-free setup, sign up for either Vultr or Digital Ocean (note: my affiliate links). Make sure you evaluate the system requirements for running the server (e.g. 2 cores and 2 GB of RAM would be ideal).

On this machine, Docker must be installed properly. A useful quick test:

sudo docker run -it ariya/centos7-oracle-jre7 cat /etc/redhat-release

should show something like:

CentOS Linux release 7.0.1406 (Core)

Once Docker is there, starting TeamCity server is as easy as:

sudo docker run -dt --name teamcity_server -p 8111:8111 \
  ariya/centos7-teamcity-server

This uses a prepared container I have created called ariya/centos7-teamcity-server. Note that the container supports volume mapping of /data/teamcity. You definitely want to use this if you intend to persist your TeamCity projects and other settings. Here is a fancier way to invoke the server, where the data is stored on the host system under /var/data/teamcity and the server restarts automatically in case it dies:

sudo docker run -dt --name teamcity_server --restart=always -p 8111:8111 \
  -v /var/data/teamcity:/data/teamcity \
  ariya/centos7-teamcity-server

Also, if you are using a firewall, make sure to accept connections on port 8111. With iptables:

sudo iptables -A INPUT -p tcp --dport 8111 -j ACCEPT
sudo service iptables save

Once the server is running, visit the site (on port 8111) using your web browser. This allows you to initialize and configure TeamCity server. In a minute or two, it should be ready to use.


You can start creating your CI project; refer to the excellent TeamCity documentation for details. For the build process itself, it is quite common to invoke npm twice: first to install the dependencies, and then to run the tests. This is illustrated in the following screenshot.

(Screenshot: a TeamCity build configuration with two npm build steps.)
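
In plain commands, those two build steps amount to nothing more than:

npm install
npm test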

While it is sufficient to use the command-line runner to invoke e.g. npm test, if you want to be a bit more sophisticated, you can use a customized runner such as TeamCity.Node.

Of course, the project cannot be executed right now because the server does not have any connected build agents yet. Starting an agent is also extremely straightforward, as I have already prepared another container for that, ariya/centos7-teamcity-agent-nodejs. This container comes equipped with Node.js 0.10 and npm 1.3.

sudo docker run -e TEAMCITY_SERVER=http://$TEAMCITY_HOST:8111 -dt -p 9090:9090 \
  ariya/centos7-teamcity-agent-nodejs

In the above example, you need to supply the IP address of your server with the environment variable TEAMCITY_HOST. Again, the firewall needs to accept connections on port 9090.


It is of course possible to run this agent on the same host as the server, particularly if you have a beefy machine. In this case, you need to use the Docker IP address of the server:

export TEAMCITY_HOST=$(sudo docker inspect --format \
  '{{ .NetworkSettings.IPAddress }}' teamcity_server)

It takes a while for the agent to register itself with the server, and even then the agent is not immediately available. First, you need to authorize it so that the server trusts the agent and starts dispatching build tasks to it. After that, you can start running your project.


Thanks to Docker, everything could be done in 10 minutes or less. Have fun with all the tests!



Little did I know that the start of my adventure with Esprima three years ago would result in something beyond my expectations. While the syntax tree format used by Esprima is not original (see the SpiderMonkey Parser API), this de-facto format has gained a lot of traction since it provoked a Cambrian explosion of composable JavaScript language tooling: everything from a code coverage tool, a style checker, a delta debugger, a syntax autocompleter, a complexity visualizer, and many more. Mind you, this AST format is far from perfect, which is why some of us at Shape Security are on a journey to figure out a better format.
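
To get a feel for the format, here is a quick example (assuming Node.js with the esprima package installed):

var esprima = require('esprima');
var ast = esprima.parse('var answer = 42;');
console.log(JSON.stringify(ast, null, 2));
// Prints a Program node whose body holds a VariableDeclaration,
// the same tree shape consumed by the tools mentioned above.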

Throughout its development, Esprima has also been used as a playground for a rigorous workflow. For example, performance has always been important, which is why a benchmark system was implemented early on. There were numerous optimized JavaScript tricks (fixed object shape, profile-guided code shuffling, object-in-a-set) which I discovered via a few interesting investigations. Esprima also enforces hard thresholds on certain metrics, such as cyclomatic complexity and test coverage. Speaking of tests, I consider Esprima’s test suite (~800 unit tests) its crown jewel. It is not uncommon to hear that this collection of tests is being utilized to assist the development of other similar parsers, whether written in JavaScript or other languages.

After being in the wild for a while, Esprima started to attract more contributions, not only in terms of new features but also for troubleshooting defects, solving performance challenges, and other less glamorous tasks. The growth, 600 dependent packages and 3 million downloads per month on npmjs, needed to be anticipated as well. This is why, after talking to Dave Methvin some time ago, I felt confident that the jQuery Foundation would be a good new umbrella for the project. And that was how the adoption was initiated and finally completed a few weeks ago.

At the same time, JavaScript continues to evolve. The next edition, ECMAScript 6 (to be called ECMAScript 2015 officially), has its specification frozen, and some JavaScript engines (SpiderMonkey, V8, Chakra, JavaScriptCore) have already started to support a few selected features. This was anticipated by creating the special harmony branch in early 2012. In fact, it served as the basis of a (now defunct) transpiler called Harmonizr, back when writing a transpiler was not considered cool yet. Since then, more folks (particularly Facebook engineers, among others) have continued to enhance this branch. It is being used to drive Facebook’s JavaScript infrastructure (see JSTransform, Recast, Regenerator, JSX), among others for its ES6 adoption. Still, this harmony branch (despite some unofficial third-party releases) is considered experimental and should not be used in production.

This brings us to the most recent 2.0 release. Among other things, this release starts to include carefully selected ES6 features (e.g. arrow functions, default parameters, method definitions). This is to facilitate the migration of downstream language tools, per the original plan outlined several months ago on the mailing-list:

The new master, which bears the version 2.x, will start to introduce ECMAScript 6 features. We will do it piecemeal, taking features which are known to be more or less stabilized in the most recent draft spec. In a few cases, this is a matter of bringing in the existing implementation from the experimental harmony branch.
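
For instance, with the 2.0 release, parsing an arrow function just works (a quick check, using the same setup as the earlier example):

var esprima = require('esprima');
var ast = esprima.parse('var double = (x) => x * 2;');
console.log(ast.body[0].declarations[0].init.type);
// "ArrowFunctionExpression"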

Thanks to the wonderful community, these three years have been fantastic. Let’s continue to build amazing tools!