
While a build system is always critical to the success of a software project, maintaining such a system is not always fun. Hence, we tend to investigate many different ways to reduce the maintenance effort. Thanks to Docker, the build agent itself can be kept very simple: it does nothing but spin up and run a Docker container.

Imagine that you are a Python shop and suddenly one of your engineers wants to experiment with Go for a new REST API server. It is certainly possible to retrofit your build infrastructure to include Go development tools and dependencies. But what if yet another environment and other frameworks are needed? It is not scalable (process-wise) to keep bugging your build/release engineers with these continuous requirements.

In a configuration that involves a server-agent setup (or in Jenkins lingo, master-slave), the agent is the one that does the backbreaking work. In the previous blog post, Build Agent: Template vs Provisioning, I already outlined the most common techniques to eliminate the need to babysit a build agent. I myself am a big fan of the automatic provisioning approach, much like what Martin Fowler wrote about the Phoenix Server:

A server should be like a phoenix, regularly rising from the ashes.

When a build agent misbehaves due to configuration drift, we should not bother to troubleshoot it. We simply terminate that troublesome phoenix and let it regenerate (thanks to the provisioning mechanism). For a more philosophical take on this approach, read also my other blog post on A Maturity Model for Build Automation.

The Container is the Phoenix

If many of your build agents share the same traits, e.g. they are mostly Linux systems (often in virtualized form, e.g. an EC2 instance) with an assortment of tools (compilers, libraries, frameworks, test systems), then the scenario can be further simplified. What if the build agent is not the actual phoenix? What if the build agent is only the realm where the phoenix lives (and dies)?

In this situation, a Docker container becomes the real phoenix. Every project needs to supply some additional information for the build agent (imperative: in the form of a script; declarative: a common configuration understood by the build tool): which container to use and how to initiate the in-container build.

Let’s take a simple project and set up a build using this Docker-and-phoenix approach. For this example, we will build a CPU feature detection tool (implemented in C++). If you want to follow along, simply clone its git repository bitbucket.org/ariya/cpu-detect and pay attention to the phoenix subdirectory.

There are two shell scripts inside the phoenix subdirectory: init.sh and build.sh.

The first one, init.sh, is the one executed by the build agent. It pulls the container used to execute the actual build step. Since this is a C++ project, we will leverage the gcc container. After that, it runs the container with a volume mapping so that /source inside the container is mapped to the git checkout directory. When the container is launched, it also executes the other script, build.sh (referred to as /source/phoenix/build.sh since we are now inside the container).

If we simplify it, the whole content of init.sh can be summarized as:

docker run -v $SOURCE_PATH:/source gcc:4.9 sh -c "/source/phoenix/build.sh"
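
Put together, a minimal init.sh might look like the sketch below. The way SOURCE_PATH is resolved here is an assumption; adjust it to however your build agent exposes the checkout directory.

#!/bin/sh
# Sketch of phoenix/init.sh, run by the build agent (paths are illustrative).

# Assume the git checkout is the parent directory of this phoenix subdirectory.
SOURCE_PATH=$(cd "$(dirname "$0")/.." && pwd)

# Warm up the container cache so the pull time does not count against the build itself.
docker pull gcc:4.9

# Map the checkout to /source and kick off the in-container build script.
docker run -v "$SOURCE_PATH":/source gcc:4.9 sh -c "/source/phoenix/build.sh"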

The second script, build.sh, is not executed by the build agent directly. It runs inside the specified container, as described above. The main part of build.sh is to run the actual build step. For this project, it only needs to invoke make (in a real-world project, a battery of tests would also be part of this). Before that, the script needs to prepare a build directory and copy the original source into it (remember, /source inside the container corresponds to the git checkout). Once the build is completed, the build artifact has to be transferred back; in this case, we just copy the generated cpu-detect executable.

If any step during this process fails, including make itself, then the whole process is marked as a failure. This automatic propagation of status eliminates the need for custom error handling.
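
A sketch of what build.sh might contain is shown below; the directory names are assumptions, and the set -e line is what gives us the automatic failure propagation mentioned above.

#!/bin/sh
# Sketch of phoenix/build.sh, executed inside the gcc container.
set -e                      # abort on the first failing step

# Build in a scratch directory so the mounted source stays pristine.
mkdir -p /tmp/build
cp -r /source/. /tmp/build/
cd /tmp/build

# The actual build step; a real project would also run its test suite here.
make

# Transfer the build artifact back to the mounted checkout.
cp cpu-detect /source/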

To test this setup, have a box with Docker ready and then launch phoenix/init.sh. If everything works correctly, you will see output like in the following screenshot.

[Screenshot: in-container build output]

If you experience an Inception moment trying to follow the steps, the following diagram should help. It is also a useful exercise to adapt those two phoenix scripts to your own project.

[Diagram: how the build agent, the container, and the two phoenix scripts fit together]

Agent of Democracy

In the above example, we pull and run a ready-to-use gcc container. In practice, you may want to come up with a set of customized containers to suit your needs. Hence, it is highly recommended that you set up your own Docker registry to be used internally. This becomes a private registry, and it should not be accessible by anyone outside your organization. Here is how your init.sh might look when incorporating this technique:

REGISTRY="docker.mycompany.com"
IMAGE="golang"
TAG="1.4"
CONTAINER="${REGISTRY}/${IMAGE}:${TAG}"
 
echo "Container to be used: $CONTAINER."
docker pull $CONTAINER
echo
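
The rest of init.sh would then run the freshly pulled container exactly as before, for example:

docker run -v $SOURCE_PATH:/source $CONTAINER sh -c "/source/phoenix/build.sh"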

Now that the build process happens only inside the container, you can trim down the build agent. For example, it does not need to have packages for every development environment, from Perl to Haskell. All it needs is Docker (and of course the client software to run as a build agent), thereby massively reducing the provisioning and maintenance effort.
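
As an illustration, provisioning such a trimmed-down agent might boil down to little more than the following sketch (the package name and the CI client step are assumptions that depend on your distribution and CI system):

#!/bin/sh
# Hypothetical provisioning for a Docker-only build agent (Debian/Ubuntu flavored).
apt-get update
apt-get install -y docker.io    # the only build dependency the agent really needs
# ...plus whatever client software your CI system requires to register this agent.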

Let’s go back to the illustrative use case mentioned earlier. If an engineer on your team is inspired to evaluate Go, you do not need to modify your build infrastructure. Just ask them to provide a suitable Go development container (or reuse an existing one such as google/golang) and prepare the phoenix-like bootstrapper scripts. The same goes for the new intern who prefers to tinker with Rust instead. No change to the build agent is necessary! Everyone, regardless of the project requirements, can utilize the same infrastructure.

In fact, if you think this through carefully, you will realize that all those Linux build agents are not unique at all. They all have the same installed packages, and no agent is better or worse than the others. There is no second-class citizen. This is democracy at its best.

Parametrization and Resilience

Knowing the build number and other related build information is often essential to the build process. Fortunately, many continuous integration systems (Bamboo, TeamCity, Jenkins, etc.) can pass that information via environment variables. This is quite powerful, since all we need to do is pass it along to Docker. For example, if you use Bamboo, then the docker invocation needs to be modified to look like the following (notice the use of the -e option to set an environment variable).

docker run -v $SOURCE_PATH:/source \
  -e bamboo_buildNumber=${bamboo_buildNumber} \
  $CONTAINER sh -c "/source/phoenix/build.sh"
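
Inside build.sh, that build number is then available as an ordinary environment variable. A hypothetical use, assuming the Makefile accepts a VERSION variable:

# Inside build.sh: the CI-provided build number is just a regular variable now.
echo "Building #${bamboo_buildNumber}..."
make VERSION="1.0.${bamboo_buildNumber}"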

Another side effect of this Docker-based build is built-in error recovery. In many cases, a build may fail or get stuck in some process. Ideally, you want to terminate the build in this situation, since it warrants a more thorough investigation. Armed with the useful Unix timeout command, we just need to modify our Docker invocation:

TIMEOUT=2m
echo "Triggering the build (with ${TIMEOUT} timeout)..."
timeout --signal=SIGKILL ${TIMEOUT} \
  docker run -v $SOURCE_PATH:/source \
  $CONTAINER sh -c "/source/phoenix/build.sh"

By the way, this is the reason there is an explicit docker pull in init.sh. Technically it is not needed, but we use it as a mechanism to warm up the container cache. This way, the time it takes to initially pull the container will not count against that 2-minute timeout.

With the use of timeout, if the Docker process does not complete in 2 minutes, it is terminated with SIGKILL, effectively aborting the whole step at once. Since the offending application is isolated inside a container, this kind of clean-up also results in a really clean termination. There is no more server hanging around doing nothing because something was not killed properly, and no stray zombie process eating resources in the background.

Summary: use Docker to turn the build agent into a realm where your phoenix lives and dies. After that, turn every build process into a short-lived phoenix.


The recent Shellshock, a vulnerability in the popular shell bash, got me to evaluate again the unique setup on Ubuntu/Debian. In this setup, script execution is not handled by bash; that job is carried out by dash, the Debian Almquist Shell. Meanwhile, bash is still used for the interactive shell, since dash does not have autocomplete and history support.

One advantage of this setup is that you start to write your scripts with the awareness that they will not be executed by bash alone. This makes sense: it requires just a little effort to avoid certain bashisms and stay compatible with the POSIX syntax. I myself was pretty ignorant of this, assuming that bash is ubiquitous. After using dash for a while, I learned a couple of new tricks and I am happier now that my scripts follow this standard best practice. While dash is more efficient in its execution, in most cases the difference is negligible, and that was not my primary concern.
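
As an illustration, here are a couple of common bashisms next to POSIX-friendly equivalents that dash happily accepts (helpers.sh is just a placeholder name):

# Bashism:           if [[ $answer == y* ]]; then echo "confirmed"; fi
# POSIX alternative:
case "$answer" in
  y*) echo "confirmed" ;;
esac

# Bashism:           source ./helpers.sh
# POSIX alternative:
. ./helpers.sh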

Linux users can get dash easily; it is one apt-get or yum install away. For OS X, it is easy enough to build from source, e.g.:

DASH_VERSION=0.5.7
DASH_FULLNAME=dash-${DASH_VERSION}
DASH_TARBALL=${DASH_FULLNAME}.tar.gz
DASH_DOWNLOAD=http://gondor.apana.org.au/~herbert/dash/files/${DASH_TARBALL}
rm -rf ${DASH_TARBALL} ${DASH_FULLNAME}
curl -L ${DASH_DOWNLOAD} -o ${DASH_TARBALL}
tar -xzf ${DASH_TARBALL}
cd ${DASH_FULLNAME}
./configure && make
sudo make install

As for the interactive shell, my favorite these days is fish. It does not support every single feature of bash, but it works very well. If you are desperate, you can still work around whatever you miss from bash. And make sure you check out Oh My Fish! as well.
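
Switching your login shell to fish is a one-liner; the path below is an assumption (it depends on how you installed fish, and on OS X you may need to add it to /etc/shells first):

chsh -s /usr/local/bin/fish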

Whether you prefer bash or dash or fish, Unix shells are always fun to explore!


Last week I was in Chicago for the most recent jQuery Conference, part of my autumn tour. It was a fantastic opportunity to have some face-to-face conversations as well as to get to know different folks in the jQuery community. Most importantly, I felt the urge to recall the revolution of the web, in which the state of Illinois played an extremely important role.

The talk I delivered at the jQuery Conference gives a 10,000-foot overview of JavaScript execution in a web browser (check the slide deck; expect the video in a few weeks. Update: watch the video). Since the browser is an essential topic in this case, I could not resist interjecting a story, a short detour into browser history. You might have read a different variant of this when I wrote about the back-story of the names of various popular browsers out there. In particular, pay attention to the exploratory theme (in bold) in the following diagram.

[Diagram: the web revolution and browser lineage]

The said revolution was started by Mosaic, well known as the first popular web browser, which dramatically boosted the popularity of the World Wide Web. Mosaic was developed at NCSA, part of the University of Illinois. Some Mosaic team members later went on to create Netscape and develop Netscape Navigator. One of them is Marc Andreessen (@pmarca), a graduate of the University of Illinois, who is known these days as a prominent investor at the Andreessen Horowitz (a16z) VC firm.

Mosaic was later born again in the form of Internet Explorer. Spyglass, an official licensee of the NCSA Mosaic project, licensed its browser technology (unrelated to the original Mosaic code) to Microsoft, and it became the genesis of a new browser contender to Netscape Navigator. The rise of Internet Explorer also led to the first Browser War.

During this time, Netscape was in trouble and the browser code was released as an open-source project, Mozilla. After a few years, Firefox emerged and in turn became the browser that challenged Internet Explorer. One of the co-creators of Firefox is Dave Hyatt; he was involved with Netscape, and it is hardly a surprise that he also finished his graduate studies at the University of Illinois. Later on, Dave Hyatt was one of the influential figures behind WebKit, the rendering engine that powers Safari.

Something not mentioned in the above browser diagram is, of course, JavaScript. Are you going to be surprised if I now mention that Brendan Eich (@BrendanEich), the father of JavaScript, finished his master's degree at the University of Illinois as well?

Besides being close to the seed of such a revolution, I really enjoyed my first trip to Chicago. Chicago easily makes it onto the list of places I want to travel to again. Meanwhile, my next stop will be San Francisco for the upcoming HTML5 Developer Conference.


With so many great cross-platform libraries out there, there is hardly any need to reinvent the wheel. In many cases, it is even possible to extract a portion of a sophisticated multi-platform application's code for reuse in a different application. In this example, a basic CPU detection class from the Chromium C++ code is built into a simple command-line tool.

The challenge in such an extraction process is figuring out the dependencies. The Chromium src/base directory contains a lot of useful basic classes for application development. They are, however, targeted primarily at building Chromium as a whole. Often, that means it is not easy to use a single class without pulling in a series of other dependent classes. In the worst case, you will go down the rabbit hole.

For this exercise, we will be using the base::CPU class found in base/cpu.h. As expected, this class depends on some other classes. Fortunately for us, it does not go that far. The dependency graph looks like the following:

[Diagram: dependency graph of base::CPU]

Thus, what we need to do is grab all those files from the Chromium source code. So that you can follow along, I have prepared a Git repository at bitbucket.org/ariya/cpu-detect. Once this base::CPU class is ready, using it is trivial:

base::CPU cpu;
std::cout << "Vendor: " << cpu.vendor_name() << std::endl;
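
Building the extracted files does not require anything fancy. A hypothetical one-liner (the exact file names and flags depend on the repository layout) could be:

g++ -I. main.cpp base/cpu.cc -o cpu-detect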

For the complete demo app, here is the full output when running on my Chromebook Pixel:

[Screenshot: cpu-detect output on a Chromebook Pixel]

Since this is just an example, we omitted a rather important detail. The class implementation actually needs to use StringPiece. In our flavor above, string_piece.h is an empty file (to satisfy the include). We do not need any code because StringPiece is only necessary for the ARM architecture. If we stick with Unix on x86, then we can live with that dummy header file. Obviously, in real life and with other possible classes, you may not be that lucky.

Isn’t it true that we don’t code today what we can’t reuse tomorrow?



After a short pause, I'll be giving tech talks again in a few weeks. The first one will be at the jQuery Conference in Chicago; the other is at the autumn edition of the HTML5 Developer Conference in San Francisco.

For the jQuery folks, I'd like to share my understanding of how web browsers execute JavaScript code. The official title of the talk is JavaScript and the Browser: Under the Hood, and the abstract looks like the following:

A browser’s JavaScript engine can seem like a magical black box. During this session, we’ll show you how it works from 10,000 feet and give you an understanding of the main building blocks of a JavaScript engine: the parser, the virtual machine, and the run-time libraries. In addition, the complicated relationship between the web browser and the JavaScript environment will be exposed. If you are curious as to what happens under the hood when the browser modifies the DOM, handles scripted user interaction, or executes a piece of jQuery code, don’t miss this session!

In San Francisco, I will use the opportunity to demystify the secrets behind a smooth animated user interface. The talk itself is basically a walkthrough of some bite-size examples, demonstrating, among others, Cover Flow in JavaScript and CSS 3-D.

With the support for buttery-smooth, GPU-accelerated 3-D effects in CSS, modern browsers allow the implementation of a stunningly fluid and dynamic user interface. To ensure that the performance stays at the 60 fps level, certain best practices need to be followed: fast JavaScript code, usage of requestAnimationFrame, and optimized GPU compositing. This talk aims to give a step-by-step guide to implementing the infamous Cover Flow effect with JavaScript and CSS 3-D. We will start with the basic principle of a custom touch-based scrolling technique, the math & physics behind the momentum effect, and the use of perspective transformation to build a slick interface. Don’t worry, the final code is barely 200 lines!

If you will be around in Chicago or San Francisco, don’t miss these talks!


As I mentioned in my earlier blog post, we are now working toward stabilizing the development version of PhantomJS. One thing I would like to elaborate on here, with respect to the features of the forthcoming PhantomJS 2, is its improved JavaScript support.

With the fresher WebKit (thanks to Qt 5.3’s QtWebKit module), PhantomJS 2 also benefits from a lot of JavaScript improvements in JavaScriptCore, the JavaScript engine of WebKit. This brings PhantomJS more or less in line with the set of features expected from a modern web browser.

[Screenshot: test262 report]

In fact, if we run the official test suite for ECMA-262, the ECMAScript Language Specification version 5.1, at test262.ecmascript.org, we will see that PhantomJS (just like Safari 7) fails only 2 tests. If you compare it with other browsers, Chrome 36 has 4 failures and Firefox 31 has 42 failing tests. In a way, we can say that (by leveraging JavaScriptCore) PhantomJS is pretty much as standards-compliant as it could get.

[Screenshot: test262 failure comparison across browsers]

Among others, many will rejoice at the availability of Function.prototype.bind (bug 10522), something that is missing from the old QtWebKit in Qt 4.8. There were various attempts to solve this in the past; see my previous post to the mailing-list. However, as I wrote there, sometimes our collective effort was not enough to move the mountain.

Another related improvement is the handling of the special arguments object. Just like in other modern browsers, you can now JSON.stringify this object (bug 11845), use it in a for-in loop (bug 11558, bug 10315), and call Object.keys on it (bug 11746).

PhantomJS 2 will be released whenever it is ready (monitor the mailing-list if you want to get notified). Meanwhile, if you want to compile it yourself and play with this bleeding-edge version, follow the step-by-step instructions. If you are on Linux or OS X (like typical geeks these days), building it yourself should take just 30 minutes on a fairly decent machine.

Good things come to those who wait!