
If your library is used by many other applications, it is often a good idea to run their test suites. Since those downstream applications may break because of a change in your library, this helps catch such cases before you officially release a new public version of the library.

This is the extra cautious step that we have been practicing for the Esprima project. Since Esprima is a dependency of a few dozen projects out there and its npm package is downloaded at a rate of 10 million times per month, we want to know whether anything will fall apart before we update it. Thus, those projects become a sort of animal sentinel ("canary in a coal mine").

Every time there is a new pull request, a number of downstream projects (e.g. Istanbul, JSCS, jsfmt) are tested against the modified Esprima in that pull request. The logic is rather simple and the implementation itself weighs just around 100 lines of code.

[Diagram: downstream testing workflow]

First, a new temporary directory is created (since we don’t want to clobber the current working directory). In that directory, the downstream project, e.g. Istanbul, is checked out using git clone. For Esprima, we choose to track the master branch of every downstream project and live with the risk of instability. However, for your own project, you may consider tracking a stable branch instead.

After that, the project dependencies are installed via npm i. Obviously, among many different packages, this will install the stable, published version of Esprima. This is where the fun begins: that Esprima module is replaced by dropping in the current development version of Esprima (i.e. the one from the feature branch associated with the pull request). Following this step, the project tests are executed by running npm test. If any of the tests fail, there is a possibility that the failure is caused by the pull request. Whether it is a success or a failure, the information is posted to the pull request to help make an informed decision.
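As an illustration, here is a minimal sketch of that logic as a Node.js script. It is not Esprima’s actual downstream runner; the project list, the esprima.js file name, and the module path are assumptions made for the sake of the example:

// downstream.js: a simplified sketch of a downstream test runner.
// Assumptions: each project is an npm package with an `npm test` script,
// and the development build lives at ./esprima.js in this repository.
const { execSync } = require('child_process');
const fs = require('fs');
const os = require('os');
const path = require('path');

const projects = [
  { name: 'istanbul', repo: 'https://github.com/gotwarlost/istanbul.git' }
];

for (const project of projects) {
  // Work in a fresh temporary directory to avoid clobbering anything.
  const workDir = fs.mkdtempSync(path.join(os.tmpdir(), 'downstream-'));
  execSync(`git clone ${project.repo} ${project.name}`, { cwd: workDir });

  const projectDir = path.join(workDir, project.name);
  execSync('npm install', { cwd: projectDir });

  // Drop in the development version over the published Esprima.
  fs.copyFileSync(
    path.join(__dirname, 'esprima.js'),
    path.join(projectDir, 'node_modules', 'esprima', 'esprima.js')
  );

  // Run the downstream test suite; a non-zero exit code throws.
  try {
    execSync('npm test', { cwd: projectDir, stdio: 'inherit' });
    console.log(`${project.name}: PASS`);
  } catch (e) {
    console.log(`${project.name}: FAIL`);
  }
}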

Note that in some cases, the downstream project itself could fail its own test suite. The solution is to track its more stable branch rather than its development branch. Of course, if the project is too fragile and does not keep up good hygiene, then it is not terribly suitable as a canary and you have to pick another project.

As with many other FOSS projects, Esprima relies on Travis CI to run its tests. For this particular downstream test, however, we decided to run it on a different hosted CI system, Circle CI. The reason is simple: we want the downstream tests running in parallel with the usual battery of unit tests and regression tests. For Esprima, the entire downstream test takes some time to finish, about 14 minutes give or take. Running both in parallel means that every pull request can be reviewed as soon as the Travis CI job is completed, while waiting for the longer downstream tests.
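On Circle CI, this can be as simple as overriding the test step in circle.yml. The following is a hedged sketch rather than Esprima’s actual configuration, and the downstream.js script name is an assumption carried over from the sketch above:

test:
  override:
    # Run only the downstream suite here; Travis CI handles the unit tests.
    - node downstream.js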

Maintaining a library? Be responsible and test your downstream projects!


These days, we can reach anyone on this planet, wherever they are located, anytime we like. However, we seldom realize that this is only possible because there is a high-bandwidth Internet backbone running on modern and sophisticated fiber-optic transmission systems.

At the most recent O’Reilly Velocity conferences, one in Santa Clara and another in New York, I gave a keynote titled 20,000 Leagues Inside the Optical Fiber. In that talk, I summarized a few important technological milestones in our communication systems, up to the point where we enjoy the luxury of being interconnected through a pipeline of humongous capacity. I won’t spoil it for you, as you can watch the 9-minute talk on YouTube.

I believe that keeping this perspective is very important for us. When we engage in our various endeavors and think of ourselves as noble folks achieving earth-shattering objectives, let us pause for a second and recall the series of scientific journeys that transformed our civilization.

If you know someone who is a scientist, give them a hug. They are the reason we can be passionate about what we are doing today.


Building a web application without testing it on the major consumer browsers would be crazy. Fortunately, we have a few cross-browser testing services such as Sauce Labs, BrowserStack, and many more. Still, for a quick sanity check on the latest stable version of Google Chrome and Mozilla Firefox, nothing beats the fantastic service provided by AppVeyor.

As a hosted continuous integration service, AppVeyor runs your application (and its tests) on Windows, more precisely Microsoft Windows Server 2012 R2. This means we have access to the widely used web browsers: Internet Explorer, Firefox, and Chrome. Due to the platform integration, IE 11 is always available. Oftentimes, Firefox and Chrome are a few versions behind. To solve this issue, we can always install the latest stable version of these two browsers right before running the tests.

If you want to follow along, I have prepared a simple project: github.com/ariya/karma-appveyor. Clone the repository to get a feel for what it is doing. Since it is designed to be very simple, it consists of only one test, written using the Mocha unit test library and executed using the Karma test runner:

describe("sqrt", function() {
  it("should compute the square root of 4 as 2", function() {
    assert.equal(Math.sqrt(4), 2);
  });
});

The test itself can be executed by running npm test. It will launch Karma to run the test in whichever of the following browsers are available on your system: Chrome, Firefox, Safari, and IE. The available browsers are detected using a very nice Karma plugin called karma-detect-browsers. If you are on OS X, what you get is something like this:

[Screenshot: Karma running the test in the locally detected browsers]
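Under the hood, the Karma setup enables that plugin. A minimal fragment of karma.conf.js using karma-detect-browsers could look like the sketch below; it is based on the plugin’s documented options and is not necessarily the exact configuration of the example project:

// karma.conf.js (fragment): a sketch using karma-detect-browsers.
// The matching launcher plugins (karma-chrome-launcher, karma-firefox-launcher,
// karma-safari-launcher, karma-ie-launcher) must also be installed.
module.exports = function(config) {
  config.set({
    frameworks: ['detectBrowsers', 'mocha'],

    detectBrowsers: {
      // Run the tests in every browser found on the system.
      enabled: true,
      // Skip PhantomJS even if it happens to be installed.
      usePhantomJS: false
    }
  });
};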

To run it on AppVeyor, first we need to craft the configuration file, appveyor.yml, which looks like this:

version: "{build}"

environment:
  nodejs_version: "0.12"

install:
  - ps: Install-Product node $env:nodejs_version
  - node --version
  - npm --version
  - npm install

test_script:
  - npm test

Now go to appveyor.com, sign in using your GitHub account, create a new project, and choose your repository. Explicitly ask for a new build and, after a while, AppVeyor will be brewing the build as follows:

[Screenshot: AppVeyor build log showing outdated browser versions]

It is running the tests with IE 11, Firefox 30, and Chrome 41. The last two browsers are quite outdated. How do we force an upgrade?

Chocolatey to the rescue! Built on top of NuGet, Chocolatey facilitates a silent install of many Windows applications (hence it is known as the "apt-get of Windows"). We need to tweak our appveyor.yml so that Chocolatey installs the firefox and googlechrome packages. Of course, if you are living on the edge, feel free to include Firefox Beta and Chrome Beta in the mix.

install:
  - choco install firefox
  - choco install googlechrome
  - ps: Install-Product node $env:nodejs_version
  - node --version
  - npm --version
  - npm install

test_script:
  - npm test

Run the build on AppVeyor and this time, the build log will be different:

[Screenshot: AppVeyor build log with the updated browsers]

There you go: we have IE 11, Firefox 40, and Chrome 45 running our test!


It is a truth universally acknowledged, that a single function critical to the success of the application, must be in want of a unit test. A practical way to prevent the lack of a unit test is to ensure that the overall code coverage does not regress. Fortunately, for applications written in JavaScript, there are a few code coverage services which can help with the task.


Thanks to the variety of language tooling available these days, it is not hard to measure and track the code coverage of a JavaScript application. My go-to solution involves Istanbul as the coverage tool, combined with either Karma or Venus.js as the test runner. This setup works with various popular unit test libraries out there. If you are new to this, I recommend checking out my past blog posts on the subject.

And yet, the work does not stop there. Wouldn’t it be fantastic if the code coverage report became another piece of feedback for a contributor? Is it possible to track every single pull request and check whether the changes associated with that pull request would regress the coverage? The answer is yes. The key is to utilize a hosted code coverage service. There are many out there, and in this post I will cover (pun intended) my current favorite, Codecov.io.

Thanks to its set of rich features, integrating Codecov.io into your open-source project is very easy. For a start, you do not need to create a dedicated account since you can simply authenticate using GitHub. Furthermore, because Codecov.io has built-in support for GitHub (as well as other hosted Git services such as Bitbucket), choosing a project to be added to your dashboard is trivial.

Keep in mind that Codecov.io only displays the coverage information of your project; your build process still needs to produce that coverage information. Also, it is assumed that you have a continuous integration system that runs the build process every time there is a new check-in or when someone opens a pull request from a feature branch. For many FOSS projects, Travis CI is the most common solution, although there are a few other hosted CI services out there.

To follow along, check out this simple repository that I have created: github.com/ariya/coverage-mocha-istanbul-karma. This repo contains a simple JavaScript project along with its equally simple test suite designed for Mocha. The tests will be executed by Karma.

To start using Codecov.io, first we need to enable coverage output in the Cobertura format. I have played with different coverage formats and discovered that Cobertura is the most suitable (your mileage may vary and things can change from time to time). If you use Istanbul directly, you can use its report command to generate the coverage information in the right format (refer to the documentation for more details). With our setup, I modified a section in the Karma configuration file, karma.conf.js, from:

coverageReporter: {
    dir : 'coverage/',
    reporters: [
        { type: 'html', subdir: 'html' },
        { type: 'lcov', subdir: 'lcov' },
    ]
}

to:

coverageReporter: {
    dir : 'coverage/',
    reporters: [
        { type: 'html', subdir: 'html' },
        { type: 'lcovonly', subdir: 'lcov' },
        { type: 'cobertura', subdir: 'cobertura' }
    ]
}

This ensures that Karma tells Istanbul to produce additional coverage output, alongside the default lcov, in the format that we want, Cobertura. You can test this: simply execute npm test and, after a while, you will spot the file coverage/cobertura/cobertura-coverage.xml that contains the coverage information. This is what we need to send to Codecov.io. There are multiple ways to do that; the easiest is to use the codecov.io package. You can install this package by running:

npm install --save-dev codecov.io

In this example, package.json is modified to look like this:

"scripts": {
    "test": "grunt karma:test",
    "ci": "npm test && codecov < coverage/cobertura/cobertura-coverage.xml"
}

Thus, every time you invoke npm run ci on your Travis CI job, the tests will be executed and the coverage information will be sent to Codecov.io.
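For reference, a minimal .travis.yml wiring this up could look like the sketch below; it is an assumption for illustration, not necessarily the exact configuration of the example repository:

language: node_js
node_js:
  - "0.12"
script:
  # Run the tests and upload the Cobertura report to Codecov.io.
  - npm run ci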

[Screenshot: Codecov.io project dashboard]

To set up the dashboard, log in to Codecov.io and add the repository as a new project. Codecov.io maintains a nice mapping of project URLs. For example, the coverage dashboard for the example repo github.com/ariya/coverage-mocha-istanbul-karma is codecov.io/github/ariya/coverage-mocha-istanbul-karma. The next time you kick off a build on the project, the dashboard will display the coverage information sent from the build process.

If that works flawlessly, you may now want to enable its pull request integration. Go to the project page and choose Integration and Setup, then Pull Request Comment. There you can determine the various ways Codecov.io will comment on every pull request. For a start, you may want to enable Header and Compare Diff.

In the example repo, I have created a pull request, github.com/ariya/coverage-mocha-istanbul-karma/pull/3, that demonstrates a coverage regression. In that pull request, there is a commit that aims to optimize the code, but that optimization does not come with an additional unit test. This triggers the following response from Codecov.io, a piece of feedback that is rather obvious:

[Screenshot: Codecov.io pull request comment showing the coverage drop]

With a build process that produces the coverage information, combined with a service such as Codecov.io, it is easy to keep untested code away from your project!


In many tech conferences and other events, we see a trend where the speaker rarely introduces themselves, or even when they do, it is rather short (and sweet). Why does this happen? Is that a good trend or a bad one?


The argument against doing a self-introduction is pretty simple. Today, we live in a different age. Information is always available at our fingertips. Before going to a talk, we can do a lot of research on the speaker. Right before the talk, there is always an opportunity to check their Twitter, LinkedIn, and other social media. Even better, we can do so with some context, whether related to the current topics of the day or to other speakers we already know.

A minor variant of this approach is a very quick introduction, ideally just a few seconds. It is thus important to come up with an introduction that is relevant to the audience. Something like “My name is Joe Sixpack, I work for Acme Corp” is suboptimal, as it does not give the audience any information as to why you are the best person to deliver the talk. It makes sense to switch to the style of “I’m Joe and I created Project Atlantis” if your talk is all about Project Atlantis. In the same spirit, it adds nothing to ramble on for minutes and minutes, enumerating your various achievements and other open-source projects, if those are only remotely relevant to the presentation.

Of course, what helps is establishing a good online presence. Some people in the audience will look you up on Twitter (and perhaps start following you). Others will Google/Bing/DuckDuckGo your name and take a quick look at your personal homepage. A few will probably want to know what you have posted on Instagram. In all cases, it is very helpful for your audience if those sites give a faithful representation of who you are, what you like, and other information related to the subject.

Obviously, this is all a moot point if there is a moderator who introduces you. In that case, many presenters skip their self-introduction, since the introduction from the moderator is usually already flattering and you do not want to spoil that.

What about telling the audience about your employer? I believe flashing the company logo or mentioning it quickly in passing is sufficient. If your talk is fantastic, there will be a lot of follow-up discussions and this is usually the best moment to tell more in-depth stories about your company or your start-up.

It is common nowadays to be at a conference where each talk is only 20 minutes, give or take. Therefore, every minute spent introducing yourself is a minute taken away from good material for your audience.


At the most recent jQuerySF conference, Mike Sherov and I gave a joint talk on the topic of JavaScript Syntax Tree: Demystified. The highlight of the talk was Mike’s demo, in which he showed how to fix coding style violations automatically.

The trick is to use JSCS and its latest features. If you want to follow along, here is a step-by-step recipe.

First, you need to have JSCS installed. This is as easy as:

npm install -g jscs

Let’s pick an example project, for this illustration I use my kinetic scrolling demo:

git clone https://github.com/ariya/kinetic.git
cd kinetic

Now you want to let JSCS analyze all the JavaScript files in the project and deduce the most suitable code style:

jscs --auto-configure .

[Screenshot: JSCS auto-configure presenting the code style presets]

Give it a few seconds and JSCS will present the list of code style presets along with the number of violations found for each, computed from your JavaScript code. If you already have a preset in mind, you can choose it. An alternative is to pick the one with the fewest violations, as that indicates your code already gravitates towards that preset.

Once you choose a preset, JSCS will ask you a couple of self-explanatory questions. At the end of this step, the configuration file .jscsrc will be created for you.
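For illustration, the generated file can be as simple as the following (assuming you picked, say, the google preset; the exact contents depend on your answers):

{
    "preset": "google"
}

With that configuration in place, the real magic happens. You just need to invoke JSCS this way: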

jscs -x .

and it will automatically reformat your JavaScript. Double-check by looking at the changes and you will see that your code style now follows the specified preset.

With JSCS, you can comfortably ensure code style consistency throughout your project!