
It has been a while since PhantomJS received a facelift. This is about to change: the current master branch is now running the unstable version of PhantomJS 2. Among other things, this brings the fresher Qt 5.3 and its updated QtWebKit module.

Thanks to the hard work of many contributors, in particular @Vitallium and KDAB, PhantomJS 2 is getting close to its final stage. There are still many things to do, from fixing the failing unit tests to running a thorough integration test, but at least the current master branch can be built on Linux, OS X, and Windows already.

A typical user will want to wait until the final release to get the official binaries. However, those who are brave enough to experiment with this bleeding-edge version are welcome to build it from source.
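
As a rough sketch, and assuming the top-level build script keeps the name build.sh it had in the 1.x series, building from source looks like this (be prepared: compiling the bundled Qt and QtWebKit takes a long time and a fair amount of RAM and disk space):

git clone https://github.com/ariya/phantomjs.git
cd phantomjs
./build.sh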

We still do not know when it is going to be ready for the final release; stay tuned and monitor the mailing list.

With this new major version, we also have an opportunity to review and improve the development workflow. Several topics are already being discussed (feel free to participate, your feedback will be appreciated): removing CoffeeScript support, revamping the test system, searching for a better issue tracker, building a continuous integration system, and last but not least, a modularization approach for future distribution. These tasks are far from trivial; any kind of help is always welcome.

A journey of a thousand miles begins with a single step. And expect more rolling updates in the next few weeks!


TeamCity from JetBrains is an easy-to-use and powerful continuous integration system. It is a commercial product, but there is a special zero-cost license for small projects and FOSS applications. While installing TeamCity is relatively easy, its setup is further simplified via the use of Docker.

Like many other state-of-the-art continuous integration systems, TeamCity adopts the concept of a build server and build agents. The server is responsible for the administration and build configurations. The actual build itself (compilation, packaging, deployment, etc.) is carried out by one or more build agents. With this approach, it is also easy to provision the agents automatically so that the entire setup requires very little manual tweaking.

TeamCity Server only requires Java. The installation is rather straightforward. With Docker, this is even easier. I have prepared a special container for this, ariya/centos6-teamcity-server. The base system for the container is ariya/centos6-oracle-jre7, a CentOS 6.5 system running the official Oracle Java 7 (to be more precise, JRE 1.7.0_65-b17 at the time of this writing).

Assuming you have a system (a VPS such as Linode or DigitalOcean, an Amazon EC2 instance, a virtual machine, a real box) that already has Docker installed, setting up a TeamCity Server is as easy as running the following command. Note that if you are on OS X and just want to experiment with this setup, use boot2docker (see my previous blog post Docker on OS X for more details).

docker run -dt --name teamcity_server -p 8111:8111 ariya/centos6-teamcity-server

Give it a few minutes or so, then open the box’s address at port 8111 to start the web configuration of TeamCity Server (read the official TeamCity documentation for more details), as shown in the following screenshot. If your host system is using iptables, make sure to accept connections on port 8111. Note that TeamCity data will be stored in the special location /data/teamcity. This is a standard Docker volume, which allows easy mounting, back-ups, and future upgrades.

(Screenshot: TeamCity Server web configuration)
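
As an illustration, here is how the data volume can be bound to a host directory for easy back-up, and how port 8111 can be opened on an iptables-based host (the host path /opt/teamcity-data is arbitrary):

docker run -dt --name teamcity_server -p 8111:8111 \
    -v /opt/teamcity-data:/data/teamcity ariya/centos6-teamcity-server

iptables -I INPUT -p tcp --dport 8111 -j ACCEPT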

Once the server is configured, it is time to assign a build agent to this server (otherwise, nothing can be built). Again, we will easily spawn a build agent using Docker by running the container named ariya/centos6-teamcity-agent. For the agent to work, we need to specify the server. Here is how you would run it:

docker run -e TEAMCITY_SERVER=http://buildserver:8111 \
    -dt -p 9090:9090 ariya/centos6-teamcity-agent

If you run this on the same host which is running the server container, you need to link them together:

docker run -e TEAMCITY_SERVER=http://teamcity_server:8111 \
    --link teamcity_server:teamcity_server -dt ariya/centos6-teamcity-agent

The environment variable TEAMCITY_SERVER is mandatory; it needs to point to the location of the TeamCity server instance you started in the previous step. Once you run the container, it will contact the specified server, download the agent ZIP file, and set it up. Wait a few minutes, since the build agent usually updates itself after its first contact with the server. If everything works correctly, you should see a new agent appearing on the Agents tab of your TeamCity server web interface. Authorize the agent and it should be ready to take any build job!

If there is a problem launching the agent (docker ps does not show the container running), try to run it again, but this time with the option -it (interactive terminal) instead of -dt. This will dump some additional debugging messages which can be helpful for troubleshooting.
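
For example, reusing the linked setup above:

docker run -e TEAMCITY_SERVER=http://teamcity_server:8111 \
    --link teamcity_server:teamcity_server -it ariya/centos6-teamcity-agent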

Note that this agent container is also based on CentOS 6 with Java 7. Usually this is not enough, as you may need other dependencies (different SDKs, compilers, libraries, etc.). Ideally those dependencies should be resolved automatically, either by basing the container on a different system or by setting up the right automatic provisioning. Refer to my previous blog post Build Agent: Template vs Provisioning for a more detailed overview.

Still have an excuse not to do continuous integration? I don’t think so!


For an automated build system, a typical configuration involves the separation between the build server and the build agents (some systems call it master-slave or coordinator-runner). Such a configuration allows adding or removing build agents, perhaps to improve build performance, without causing too much disruption. When it is time to spawn a new build agent, there are at least two possible techniques: recreate it from a template or provision it from scratch.

Except for various corner cases, build agents nowadays often run in a virtualized environment. This makes it easy to install, upgrade, and manage the agent. An important benefit of having it virtualized is the ability to take a snapshot of the state of the build agent. When there is a problem, it is possible to revert to the last known good snapshot. In addition, that snapshot can serve as the agent template. If there is a need for more build agents, maybe because the build jobs are getting larger and larger, then a new agent can be created by cloning the template.

With today’s technologies, template-based build agents are not difficult to handle. Vagrant permits a simplified workflow for managing virtual machines with VirtualBox, VMware, etc. Continuous integration systems like TeamCity and Bamboo have built-in support for Amazon EC2: a new instance from a specified AMI can be started and stopped automatically. And of course, running a new Linux system in a container is child’s play with Docker.
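
For instance, once an agent template has been captured as a Docker image (the image name below is hypothetical), cloning one more agent is a one-liner:

docker run -dt my-build-agent-template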

This template-based approach, while convenient, has a major drawback. If the software to be built gets an updated set of dependencies (patched libraries, a different compiler, a new OS), then all the build agents become outdated. The remedy is of course to create a fresh template from scratch with the new dependencies and spawn a bunch of new agents from it. Yet this process is often not automated and error-prone, an accident waiting to happen.

In the previous blog post A Maturity Model for Build Automation, I already outlined a loose mapping of the capability maturity model onto the state of common automated build systems. With this, it is easy to see how we can level up the above template-based approach. Instead of relying on a predefined configuration, a build agent should be able to create a working environment for the build from a provisioning script. The litmus test is rather simple: given a fresh virtual machine, the build agent must figure out all the dependencies, find out what’s missing and resolve it, and then be in a state where it is ready to take any build job.

Again, today’s technologies make such provisioning actions as easy as 1-2-3. We already have all kinds of powerful configuration management tools (CFEngine, Chef, Puppet, Ansible, etc.). In many cases, relying on the OS package managers (apt-get, rpm, yum, Chocolatey, etc.) or even framework-specific packaging solutions (pip, npm, gem, etc.) is more than enough. There is hardly any excuse not to adopt this provisioning solution.
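
As a minimal sketch (the package names are illustrative, pick whatever your build actually needs), a provisioning script for a fresh CentOS-based agent could be as simple as:

#!/bin/sh
# Resolve the build dependencies on a pristine machine.
yum install -y gcc gcc-c++ make git
# Framework-specific packages can be pulled the same way
# (assuming npm is already present on the box).
npm install -g grunt-cli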

Last but not least, it’s possible to combine these two to form a hybrid, optimized approach. Given a plain vanilla machine, the provisioning script can always upgrade it to the intended state. That should still hold even if the machine is already polluted with some old dependencies. This opens an opportunity for doing both. In the context of Docker, it means that the base image gets refreshed with all the dependencies, e.g. a different compiler and system libraries. At this point, the existing agents can still continue to function, perhaps installing missing pieces as necessary. However, once the base image is fully upgraded, the agent container can be rebuilt and it will bypass any redundant installation.
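
In Docker terms, a hybrid setup might look like the following sketch (the image and script names are hypothetical): the heavy dependencies are baked into the base image, while the provisioning script still runs to catch anything the image does not cover yet.

# Dockerfile sketch for a hybrid build agent
FROM ariya/centos6-oracle-jre7
# Bake the common toolchain into the image so that rebuilt
# agents skip this installation entirely.
RUN yum install -y gcc gcc-c++ make
# The provisioning script fills in whatever is still missing.
ADD provision.sh /provision.sh
RUN sh /provision.sh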

Care to share which approach you use, have experienced, or prefer?


At some point, a software project will grow beyond its original scope. In many cases, some portions of the project become their own mini world. For maintenance purposes, it is often beneficial to separate them into their own projects. Furthermore, the commit history of the extracted project should not be lost. With Git, this can be achieved using git-subtree.

While git-subtree is quite powerful, the feature that we need for this task is its splitting capability. The documentation says the following regarding the split feature:

Extract a new, synthetic project history from the history of the prefix subtree. The new history includes only the commits (including merges) that affected prefix, and each of those commits now has the contents of prefix at the root of the project instead of in a subdirectory. Thus, the newly created history is suitable for export as a separate git repository.

This turns out to be quite simple. In fact, there is already a Stack Overflow answer which describes the necessary step-by-step instructions. The illustration below, also dealing with a real-world repo, hopefully serves as an additional example of this use case.

First of all, make sure you have a fresh version of Git:

git --version

If it says 1.8.3, then get a newer version since there is a bug (fixed in 1.8.4) which will pollute your commit logs badly, i.e. by adding “-n” everywhere.

For this example, let’s say we want to extract the funny automatic name generator (for a container) from the Docker project into its own Git repository. We start by cloning the main Docker repository:

git clone https://github.com/dotcloud/docker.git
cd docker

We then split the name generator, which lives under pkg/namesgenerator, and place it into a separate branch. Here the branch is called namesgen, but feel free to name it anything you like.

git subtree split --prefix=pkg/namesgenerator/ --branch=namesgen

The above process is going to take a while, depending on the size of the repository. When it is completed, we can verify it by inspecting the commit history:

git log namesgen

The next step is to prepare a place for the new repository (choose any directory you prefer). From there, all we need to do is pull the namesgen branch which was split earlier:

cd ~
mkdir namesgen
cd namesgen
git init
git pull /path/to/docker/checkout namesgen

That’s it! Of course, normally you want to push this to some remote, e.g. a repository on GitHub or Bitbucket or your own Git endpoint:

git remote add origin git@github.com:joesixpack/namesgen.git
git push -u origin --all

The new repository will only contain the files from the pkg/namesgenerator/ directory of the Docker repository. And obviously, every commit that touches that directory still appears in the history.
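
As a quick sanity check, listing the tracked files in the fresh clone should show only the name generator sources at the root:

git ls-files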

Mission accomplished!


Last week, one of my favorite conferences, Velocity Conference, took place in Santa Clara. Besides the joy of meeting old friends and making new acquaintances, Velocity was exciting for me due to its crazy amount of excellent material to digest post-conference. In addition, this was the fourth time I gave a talk there; this time I presented on the topic of Smooth Animation on Mobile Web.

(Screenshot: Cover Flow demo)

The objective of the talk (see the slide deck) was to demonstrate examples which implement common best practices for client-side high-performance web apps: optimized JavaScript, requestAnimationFrame, event throttling, and GPU acceleration. Rather than simple examples, the demos represent real-world applications. This covers things like kinetic scrolling to implement a flick list, the parallax effect in a photo viewer, and last but not least, a clone of Cover Flow with a deadly combination of CSS 3-D and JavaScript.

Followers of this blog corner may realize that this is not a new topic, as I have covered it extensively in the past (on kinetic scrolling, the parallax effect, and Cover Flow). This is the first time, however, that I delivered the subject in a tech talk (and I won’t fool myself, there is certainly room for improvement). It is certainly a challenge: complicated written articles can be consumed and analyzed one step at a time, while a presentation forces everyone to be at the same pace. I still plan to experiment with different deliveries of the content.

On a related note, it is also fascinating to notice how sophisticated user interfaces take a lot of ideas from everyday physics, whether it’s about geometry or kinematics. On many modern mobile platforms, the hardware is more than capable of producing all the fancy effects that we want. The limit will be our own creativity: will we learn more about physics and leverage it to build amazing interfaces for the mobile web?


The Spurs just recently won the 2014 NBA Finals with a series of convincing games. Most importantly however, they demonstrated the amazing traits of unselfishness and teamwork. It is not about getting the individual fame and glory, it is all about working together towards the common goal.

Time and time again we witnessed the excellent execution of team effort, both on defense and offense (if you missed it, watch the following 1-minute clip). Every player, superstar or not, is a key element of the orchestrated attack. Some players will get the assists and field goal points in their stats, but nobody would care if the team loses. On the other hand, when the team wins, everyone deserves the credit.

If you are an engineer, the Spurs’ journey towards the championship provides an extremely valuable lesson. As I wrote in detail in the previous blog post Impact Factor, once you are past the initial stage of proving yourself, the real journey begins. Your career immediately reaches saturation if you cannot escape from the checklist mentality. It is not about being a lone wolf anymore; it is all about collaboration and playmaking.

The tagline of a magazine ad (it was for Visual Studio) that I spotted 6 years ago said it best:

Good coders solve problems. Great teams make history.

Be a source of inspiration for your teammates and you’ll ultimately shape the future!