
ChakraCore, the JavaScript engine that powers the new Edge browser, has been open-sourced by Microsoft. While it currently runs only on Windows, support for other operating systems is on the roadmap. It is fun to follow the progress of the Linux port of ChakraCore.

Obviously, nothing works on Linux yet since the port is still in its early stages. For a start, ChakraCore on Windows is built using Visual Studio; for other platforms, the build process needs to use CMake.

On top of that, the Windows-specific APIs need to be abstracted away. The ChakraCore team decided to bring in the important bits and pieces from CoreCLR, Microsoft's open-source .NET Core runtime. CoreCLR already contains a PAL (Platform Adaptation Layer) that deals with file handling, memory management, threading, and so on. It makes sense to leverage the PAL since it already works quite well for multi-OS support in CoreCLR.

[Screenshot: ChakraCore on Linux]

How do you play with ChakraCore on Linux? The easiest way is to use a virtual machine, which also works if your host machine is OS X. In fact, even on a Linux host, it is quite beneficial to isolate the setup from your main working environment. I use Vagrant since it is the fastest to prepare. Since ChakraCore on Linux requires Clang version 3.5, the easiest route is to base the system on Ubuntu 15.10 (Wily Werewolf):

vagrant init ubuntu/wily64
vagrant up && vagrant ssh

You need a few packages:

sudo apt-get -y update
sudo apt-get install -y build-essential git cmake clang

Switch from GCC to Clang:

export CC="$(which clang)"
export CXX="$(which clang++)"
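
To double-check that the compiler being picked up satisfies the Clang 3.5 requirement:

clang --version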

Now the party begins. Clone the ChakraCore repository, switch to the linux branch, and have fun!

git clone https://github.com/Microsoft/ChakraCore.git
cd ChakraCore
git branch linux origin/linux
git checkout linux
cmake . && make


An important step in improving your public speaking skills is to learn from the masters. These days, there are plenty of fascinating talks that can serve as a source of inspiration. One of them is the recent TED talk titled "The four fish we’re overeating".

[Image: The four fish]

This talk from Paul Greenberg, in the usual TED tradition, is not very long. However, in just about 15 minutes, it beautifully demonstrates several attributes of a well-thought-out and carefully crafted presentation. If you haven’t watched it, you are welcome to watch the talk first and come back later. Go on, I can wait.

Every time you attend presentation training, the importance of the main story is emphasized again and again. The story needs to be narrated in such a way that it answers the most important thing the audience cares about: what’s in it for me? Of course, not everybody in the audience needs to draw the same conclusion once Paul finishes the talk. Some will adjust their seafood consumption, others will do follow-up research, and maybe a few won’t care much since they are not big fans of seafood anyway. Yet such a talk is a success if nobody forgets the story the minute they leave.

Related to the story, the opening and the closing of the talk need to be just as compelling. A very common way to do this is by telling a mesmerizing personal story, and this is what Paul did in the talk. The closing is usually more difficult to handle since it needs to address two things. First, it should tie back to the opening story to signal the wrap-up. Second, it must end the talk on a high note.

Last but not least, the slides Paul used served solely as supporting material. Hence, there is no need to fill the deck with bullet points and long-winded text. The slides can be purely visual, filled with maps, charts, and comparisons. A few analogies here and there help make the point, whether it is the four fish vs chickens-geese-ducks-turkeys or 80 metric tons vs human weight. Still, if you had listened to him tell the story while chatting over coffee, it would be no less profound even without a single slide.

What do you like about Paul’s talk? Which techniques do you plan to adopt for your own talk?


If your library is used by many other applications, it is often a good idea to run their test suites as well. Since changes in your library may break those downstream applications, this helps catch such breakage before you officially release a new public version of the library.

This is an extra-cautious step that we have been practicing for the Esprima project. Since Esprima is a dependency of a few dozen projects out there and its npm package is downloaded at a rate of 10 million per month, we want to know whether anything will fall apart before we publish an update. Thus, those projects become a sort of animal sentinel ("canary in a coal mine").

Every time there is a new pull request, a number of downstream projects (e.g. Istanbul, JSCS, jsfmt) are tested against the modified Esprima from that pull request. The logic is rather simple and the implementation itself weighs in at just around 100 lines of code.

[Diagram: downstream testing workflow]

First, a new temporary directory is created (since we don’t want to clobber the current working directory). In that directory, the downstream project, e.g. Istanbul, is checked out using git clone. For Esprima, we chose to track the master branch of every downstream project and live with the risk of instability. For your own project, however, you may consider tracking a stable branch instead.

After that, the project dependencies are installed via npm i. Among many other packages, this installs the stable, published version of Esprima. This is where the fun begins. That Esprima module is then replaced by dropping in the current development version of Esprima (i.e. the one from the feature branch associated with the pull request). Following this step, the project tests are executed by running npm test. If any test fails, there is a possibility that the failure was caused by the pull request. Whether it is a success or a failure, the result is posted to the pull request to help the reviewer make an informed decision.
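
The whole flow fits in a short script. The following Node.js fragment is only a rough sketch of the idea; the repository URL, directory layout, and use of cp are illustrative, and the actual script used by Esprima is organized differently.

// downstream.js: a rough illustration, not the actual Esprima implementation
var execSync = require('child_process').execSync;
var fs = require('fs');
var os = require('os');
var path = require('path');

// Hypothetical canary: one downstream project, tracked on its master branch.
var canaryRepo = 'https://github.com/gotwarlost/istanbul.git';

// Work in a fresh temporary directory so the current checkout is not clobbered.
var workDir = path.join(os.tmpdir(), 'downstream-' + Date.now());
fs.mkdirSync(workDir);
var esprimaDir = process.cwd(); // the development version of Esprima under test

// Check out the downstream project.
execSync('git clone ' + canaryRepo + ' canary', { cwd: workDir, stdio: 'inherit' });
var canaryDir = path.join(workDir, 'canary');

// Install its dependencies; this pulls in the stable, published Esprima.
execSync('npm i', { cwd: canaryDir, stdio: 'inherit' });

// Drop the development version of Esprima over the installed one.
execSync('cp -R ' + esprimaDir + '/. ' + path.join(canaryDir, 'node_modules/esprima'),
  { stdio: 'inherit' });

// Run the downstream project's own test suite against the modified Esprima.
execSync('npm test', { cwd: canaryDir, stdio: 'inherit' });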

Note that in some cases the downstream project may fail its own test suite. The solution is to track one of its more stable branches rather than its development branch. Of course, if the project is too fragile and does not keep good test hygiene, then it is not terribly suitable as a canary and you will have to pick another project.

As with many other FOSS projects, Esprima relies on Travis CI to run its tests. For this particular downstream test, however, we decided to run it on a different hosted CI system, Circle CI. The reason is simple: we want the downstream tests to run in parallel with the usual battery of unit tests and regression tests. For Esprima, the entire downstream test takes a while to finish, about 14 minutes give or take. Running both in parallel means that every pull request can be reviewed as soon as the Travis CI job is completed, while the longer downstream tests are still running.
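
As an illustration only, a circle.yml along these lines would kick off such a job (the Node.js version and the script path are hypothetical, not the actual layout of the Esprima repository):

machine:
  node:
    version: 0.12

test:
  override:
    - node test/downstream.js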

Maintaining a library? Be responsible and test your downstream projects!


These days, we can reach anyone on this planet, wherever they are located, anytime we like. However, we rarely realize that this is only possible because there is a high-bandwidth Internet backbone running on modern and sophisticated fiber-optic transmission systems.

At the most recent O’Reilly Velocity conferences, one in Santa Clara and another in New York, I gave a keynote titled 20,000 Leagues Inside the Optical Fiber. In that talk, I summarized a few important technological milestones in our communication systems, up to the point where we enjoy the luxury of being interconnected by a pipeline of humongous capacity. I won’t spoil it for you, as you can watch this 9-minute talk on YouTube:

I believe that keeping this perspective is very important. Whenever we engage in our various activities and start to think of ourselves as noble folks achieving earth-shattering objectives, let us pause for a second and recall the series of scientific journeys that transformed our civilization.

If you know someone who is a scientist, give them a hug. They are the reason we can be passionate about what we do today.


Building a web application without testing it on the major consumer browsers would be crazy. Fortunately, there are a few cross-browser testing services such as Sauce Labs, BrowserStack, and many others. Still, for a quick sanity check on the latest stable versions of Google Chrome and Mozilla Firefox, nothing beats the fantastic service provided by AppVeyor.

As a hosted continuous integration service, AppVeyor runs your application (and its tests) on Windows, more precisely Microsoft Windows Server 2012 R2. This means we have access to the widely used web browsers: Internet Explorer, Firefox, and Chrome. Due to the platform integration, IE 11 is always available. Oftentimes, Firefox and Chrome are a few versions behind. To solve this issue, we can always install the latest stable versions of these two browsers right before running the tests.

If you want to follow along, I have prepared a simple project, github.com/ariya/karma-appveyor. Clone the repository to get a feel for what it does. Since it is designed to be very simple, it consists of only one test, written using the Mocha unit test library and executed with the Karma test runner:

describe("sqrt", function() {
  it("should compute the square root of 4 as 2", function() {
    assert.equal(Math.sqrt(4), 2);
  });
});

The test itself can be executed by running npm test. It will launch Karma to run the test in whichever of the following browsers are available on your system: Chrome, Firefox, Safari, and IE. The available browsers are detected using a very nice Karma plugin called karma-detect-browsers. If you are on OS X, what you get is something like this:

[Screenshot: Karma running the test in the locally detected browsers]
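
For reference, the detection is wired up through the Karma configuration. A simplified karma.conf.js might look like the following; the file pattern here is illustrative and the actual configuration in the repository may differ.

// simplified sketch; the actual config in karma-appveyor may differ
module.exports = function(config) {
  config.set({
    // 'detectBrowsers' is the framework provided by karma-detect-browsers
    frameworks: ['mocha', 'detectBrowsers'],
    files: ['test/*.js'],
    detectBrowsers: {
      enabled: true,       // launch every browser found on the system
      usePhantomJS: false
    },
    singleRun: true
  });
};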

To run it on AppVeyor, we first need to craft the configuration file, appveyor.yml, which looks like this:

version: "{build}"

environment:
  nodejs_version: "0.12"

install:
  - ps: Install-Product node $env:nodejs_version
  - node --version
  - npm --version
  - npm install

test_script:
  - npm test

Now go to appveyor.com, sign in using your GitHub account, create a new project, and choose your repository. Explicitly ask for a new build and, after a while, AppVeyor will brew the build as follows:

[Screenshot: AppVeyor build log showing outdated Firefox and Chrome]

It is running the tests with IE 11, Firefox 30, and Chrome 41. The last two browsers are quite outdated. How do we force an upgrade?

Chocolatey to the rescue! Built on top of NuGet, Chocolatey facilitates silent installs of many Windows applications (hence it is known as the "apt-get of Windows"). We need to tweak our appveyor.yml so that Chocolatey installs the firefox and googlechrome packages. Of course, if you are living on the edge, feel free to add Firefox Beta and Chrome Beta to the mixture.

install:
  - choco install firefox
  - choco install googlechrome
  - ps: Install-Product node $env:nodejs_version
  - node --version
  - npm --version
  - npm install

test_script:
  - npm test

Run the build on AppVeyor and this time, the build log will be different:

[Screenshot: AppVeyor build log showing the updated browsers]

There you go: we have IE 11, Firefox 40, and Chrome 45 running our test!


It is a truth universally acknowledged, that a single function critical to the success of the application, must be in want of a unit test. A practical way to prevent the lack of a unit test is to ensure that the overall code coverage does not regress. Fortunately, for applications written in JavaScript, there are a few code coverage services which can help with the task.

[Image: Codecov.io sticker]

Thanks to the variety of language tooling available these days, it is not hard to measure and track the code coverage of a JavaScript application. My go-to solution involves Istanbul as the coverage tool, combined with either Karma or Venus.js as the test runner. This setup works with various popular unit test libraries out there. If you are new to this, I recommend checking out my past blog posts on this subject:

And yet, the work does not stop there. Wouldn’t it be fantastic if the code coverage report became another piece of feedback for a contributor? Is it possible to track every single pull request and check whether its changes would regress the coverage? The answer is yes, and the key is to utilize a hosted code coverage service. There are many out there, and in this post I will cover (pun intended) my current favorite, Codecov.io.

Thanks to its rich feature set, integrating Codecov.io with your open-source project is very easy. For a start, you do not need to create a dedicated account, as you can simply authenticate using GitHub. Furthermore, since Codecov.io has built-in support for GitHub (as well as other hosted Git services such as Bitbucket), choosing a project to add to your dashboard is trivial.

Keep in mind that Codecov.io only displays the coverage information for your project; your build process still needs to produce that coverage information. It is also assumed that you have a continuous integration system that runs the build every time there is a new check-in or when someone opens a pull request from a feature branch. For many FOSS projects, Travis CI is the most common solution, although there are a few other hosted CI services out there.

To follow along, check out this simple repository that I have created: github.com/ariya/coverage-mocha-istanbul-karma. This repo contains a simple JavaScript project along with its equally simple test suite designed for Mocha. The tests are executed by Karma.

To start using Codecov.io, we first need to produce the coverage information in the Cobertura format. I have played with different coverage formats and found that Cobertura is the most suitable (your mileage may vary, and things can change from time to time). If you use Istanbul directly, you can use its report command to generate the coverage information in the right format (refer to the documentation for more details). With our setup, I modified a section in the Karma configuration file, karma.conf.js, from:

coverageReporter: {
    dir : 'coverage/',
    reporters: [
        { type: 'html', subdir: 'html' },
        { type: 'lcov', subdir: 'lcov' },
    ]
}

to:

coverageReporter: {
    dir : 'coverage/',
    reporters: [
        { type: 'html', subdir: 'html' },
        { type: 'lcovonly', subdir: 'lcov' },
        { type: 'cobertura', subdir: 'cobertura' }
    ]
}

This ensures that Karma tells Istanbul to produce an additional coverage report, alongside the default lcov output, in the format that we want: Cobertura. You can test this by simply executing npm test; after a while, you will spot the file coverage/cobertura/cobertura-coverage.xml containing the coverage information. This is what we need to send to Codecov.io. There are multiple ways to do that; the easiest is to use the codecov.io package, which you can install by running:

npm install --save-dev codecov.io

In this example, package.json is modified to look like this:

"scripts": {
    "test": "grunt karma:test",
    "ci": "npm test && codecov < coverage/cobertura/cobertura-coverage.xml"
}

Thus, every time you invoke npm run ci on your Travis CI job, the tests will be executed and the coverage information will be sent to Codecov.io.
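
For completeness, a minimal .travis.yml wired up this way could look like the sketch below (the Node.js version shown is only an example):

language: node_js
node_js:
  - "0.12"
script:
  - npm run ci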

[Screenshot: Codecov.io project dashboard]

To set up the dashboard, log in to Codecov.io and add the repository as a new project. Codecov.io maintains a nice mapping of project URLs. For example, the coverage dashboard for this example repo, github.com/ariya/coverage-mocha-istanbul-karma, is codecov.io/github/ariya/coverage-mocha-istanbul-karma. The next time you kick off a build on the project, the dashboard will display the coverage information sent from the build process.

If that works flawlessly, the next step is to enable its pull request integration. Go to the project page and choose Integration and Setup, then Pull Request Comment. There you can determine the various ways Codecov.io will comment on every pull request. For a start, you may want to enable Header and Compare Diff.

In the example repo, I have created a pull request, github.com/ariya/coverage-mocha-istanbul-karma/pull/3, that demonstrates a coverage regression. In that pull request, there is a commit that aims to optimize the code, but the optimization does not come with an additional unit test. This triggers the following response from Codecov.io, feedback that is rather obvious:

[Screenshot: Codecov.io comment on the pull request]

With a build process that produces coverage information, combined with a service such as Codecov.io, it is easy to keep untested code away from your project!