Tags:

At some point, a software project will grow beyond its original scope. In many cases, some portions of the project become their own mini world. For maintenance purposes, it is often beneficial to separate them into their own projects. Furthermore, the commit history of the extracted project should not be lost. With Git, this can be achieved using git-subtree.

While git-subtree is quite powerful, the feature we need for this task is its splitting capability. The documentation says the following about the split feature:

Extract a new, synthetic project history from the history of the prefix subtree. The new history includes only the commits (including merges) that affected prefix, and each of those commits now has the contents of prefix at the root of the project instead of in a subdirectory. Thus, the newly created history is suitable for export as a separate git repository.

This turns out to be quite simple. In fact, there is already a Stack Overflow answer which describes the necessary step-by-step instructions. The illustration below, which deals with a real-world repo, hopefully serves as an additional example of this use case.

First of all, make sure you have a fresh version of Git:

git --version

If it reports 1.8.3, get a newer version: that release has a bug (fixed in 1.8.4) which pollutes your commit logs badly, e.g. by adding “-n” everywhere.
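If you script this kind of extraction, a tiny guard along these lines can automate the check (the helper name is made up for illustration):

```shell
#!/bin/sh
# Hypothetical helper: flag the buggy Git release before splitting.
check_git_version() {
  case "$1" in
    1.8.3) echo "upgrade" ;;  # subtree log-pollution bug, fixed in 1.8.4
    *)     echo "ok" ;;
  esac
}

# Feed it the version reported by `git --version` (its third field):
check_git_version "$(git --version | awk '{print $3}')"
```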

For this example, let’s say we want to extract the funny automatic name generator (for a container) from the Docker project into its own Git repository. We start by cloning the main Docker repository:

git clone https://github.com/dotcloud/docker.git
cd docker

We then split the name generator, which lives under pkg/namesgenerator, into a separate branch. Here the branch is called namesgen, but feel free to name it anything you like.

git subtree split --prefix=pkg/namesgenerator/ --branch=namesgen

The above process is going to take a while, depending on the size of the repository. When it is completed, we can verify it by inspecting the commit history:

git log namesgen
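If you want to see the effect on a small scale first, here is a self-contained rehearsal on a throwaway repository (the file names are made up for the demo; it assumes your Git ships with the subtree command):

```shell
#!/bin/sh
# Rehearse the split on a tiny throwaway repo to see what
# `git subtree split` produces before running it on a big checkout.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p pkg/namesgenerator
echo 'package namesgenerator' > pkg/namesgenerator/names.go
echo 'everything else' > main.go
git add .
git -c user.name=demo -c user.email=demo@example.com commit -qm 'initial import'
git subtree split --prefix=pkg/namesgenerator --branch=namesgen
# The split branch has names.go at its root, and nothing else:
git ls-tree --name-only namesgen
```

Note how main.go, which lives outside the prefix, does not appear on the namesgen branch at all.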

The next step is to prepare a place for the new repository (choose any directory you prefer). From there, all we need to do is pull the namesgen branch which was split earlier:

cd ~
mkdir namesgen
cd namesgen
git init
git pull /path/to/docker/checkout namesgen

That’s it! Of course, normally you would want to push this to some remote, e.g. a repository on GitHub or Bitbucket or your own Git endpoint:

git remote add origin git@github.com:joesixpack/namesgen.git
git push -u origin --all

The new repository will contain only the files from the pkg/namesgenerator/ directory of the Docker repository. And of course, every commit that touched that directory still appears in the history.

Mission accomplished!

Tags:

Last week, one of my favorite conferences, Velocity Conference, took place in Santa Clara. Besides the joy of meeting old friends and making new acquaintances, Velocity was exciting for me due to its crazy amount of excellent material to digest post-conference. In addition, this was the fourth time I have given a talk there, and this time I presented the topic of Smooth Animation on Mobile Web.


The objective of the talk (see the slide deck) is to demonstrate examples which implement some common best practices for high-performance client-side web apps: optimized JavaScript, requestAnimationFrame, event throttling, and GPU acceleration. Rather than simple examples, the demos represent real-world applications: kinetic scrolling for implementing a flick list, a parallax effect in a photo viewer, and last but not least, a clone of Cover Flow built with a deadly combination of CSS 3-D and JavaScript.

Followers of this blog corner may realize that this is not a new topic, as I have covered it extensively in the past (on kinetic scrolling, the parallax effect, and Cover Flow). This is the first time, however, that I delivered the subject in a tech talk (and I won’t fool myself: there is certainly room for improvement). It is certainly a challenge: complicated written articles can be consumed and analyzed one step at a time, while a presentation forces everyone to move at the same pace. I still plan to experiment with different deliveries of the content.

On a related note, it is also fascinating to notice how sophisticated user interfaces take a lot of ideas from everyday physics, whether geometry or kinematics. On many modern mobile platforms, the hardware is more than capable of producing all the fancy effects that we want. The limit will be our own creativity: will we learn more about physics and leverage it to build amazing interfaces for the mobile web?

Tags:

The Spurs just recently won the 2014 NBA Finals with a series of convincing games. Most importantly, however, they demonstrated the amazing traits of unselfishness and teamwork. It is not about individual fame and glory; it is all about working together towards a common goal.

Time and time again we witnessed the excellent execution of team effort, both on defense and offense (if you missed it, watch the following 1-minute clip). Every player, superstar or not, is a key element of the orchestrated attack. Some players will get the assists and field-goal points in their stats, but nobody would care if the team lost. On the other hand, when the team wins, everyone deserves the credit.

If you are an engineer, the Spurs’ journey to the championship provides an extremely valuable lesson. As I have written in detail in the previous blog post Impact Factor, once you are past the initial stage of proving yourself, the real journey begins. Your career quickly reaches saturation if you cannot escape the checklist mentality. It is not about being a lone wolf anymore; it is all about collaboration and playmaking.

The tagline of a magazine ad (it was for Visual Studio) that I spotted 6 years ago said it best:

Good coders solve problems. Great teams make history.

Be a source of inspiration for your teammates and you’ll ultimately shape the future!

Tags:

When dealing with annoying people, there are at least two possible choices: ignore them or contain them. The former is popular advice for combating Internet trolls. The question is: shall we slowly start to move towards the containment tactic?

Imagine a group of people wandering around your neighborhood. They are not criminals; they are not dangerous at all. They just tend to waste everyone’s time. They are the ultimate masters of the time sink. Of course, we can (and should) continue to ignore them (the “don’t feed the troll” mantra) until they go away.

An alternative would be to invite all of them to a basement, get some pizza and other communal food, and ask a very smart robot (with top-of-the-line artificial intelligence) to keep them busy for as long as they care. The idea is very simple: the more time they spend in that contained environment, the less time they have to waste someone else’s time.

While such a robot is not available yet, the strategies can still be developed and validated at a smaller scale, perhaps as an online bot. In many cases, we need to borrow elements from the trolls themselves. What follows is a random collection of ideas; I’m sure these can be improved further.

Timing is always important. Tracking past responses makes it possible to deduce the trolls’ likely schedule. The bot needs to keep the trolls busy at their peak times.

Asymmetric effort ensures effectiveness. Our bot should spend 5 seconds writing a response that keeps the trolls busy for half a day.

Provocation should be a recurring theme: the bot should ignore any bait from the trolls (wouldn’t that be easy?) and always try to get under their skin.

In our tech world, we have witnessed computers capable of understanding complex web pages, playing chess, winning a quiz show, and serving as a personal assistant. Will we see human trolls vs troll bots anytime soon?

Tags:

In the world of virtualization nowadays, Docker is the new kid on the block. It is almost trivial to set up and play with when you are running Linux. But what if, like many geeks out there, you use OS X as your primary development system? Two possible solutions are discussed here: using boot2docker, or running Docker inside a Linux virtual machine.

Let’s take a simple Go-based HTTP server and run it in a container. I have prepared a demo at bitbucket.org/ariya/docker-hellogo that you can follow along. To initiate, start with:

git clone https://bitbucket.org/ariya/docker-hellogo.git
cd docker-hellogo

The content of the Dockerfile in that repo is as follows (simplified):

FROM centos:centos6
ADD . /src
RUN yum -y install golang
EXPOSE  8200
CMD ["go", "run", "/src/serve.go"]

It sets CentOS 6 as the base image, copies the source tree into /src, installs Go, and exposes port 8200 (where the HTTP server will run). The final CMD line specifies what happens when the container is executed: it runs the said HTTP server.

Assuming that Docker is available (e.g. properly installed on Ubuntu), we can build the container:

sudo docker build -t hellogo .

The dot . refers to the current directory (i.e. the Git checkout), and the built image will be called hellogo. Note that this will pull the CentOS 6 base image if it is not yet available locally.

Once the build process is completed, running the image is as easy as:

sudo docker run -p 8200:8200 -t hellogo

The argument -p 8200:8200 specifies the port forwarding. Open your browser and go to http://localhost:8200 and you should see the famous Hello world! message.

For those who are using OS X, fortunately there are at least two possible ways to realize the above steps without creating a Linux VM manually and running it there.

The first choice is to use boot2docker, a super-lightweight Linux distro designed just to run Docker. Once boot2docker is installed, the setup looks like this (note the second line, which ensures the correct port forwarding):

boot2docker init
vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,8200,,8200"
boot2docker up
export DOCKER_HOST=tcp://localhost:4243

And that’s it! Now you can run docker build and docker run as described earlier (skip the sudo part). Rather straightforward, isn’t it?

The second choice is to have a virtual machine running Linux and use Docker from there. It is indeed an additional layer and some extra overhead, but in many cases it still works quite well. Obviously, creating a virtual machine manually is not something you normally do these days. We can leverage Vagrant and VirtualBox for that.

To illustrate this, there is a Vagrantfile in the Git repo:

VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "forwarded_port", guest: 8200, host: 8200
  config.vm.provision "shell",
    inline: "apt-get -y update && apt-get -y install docker.io"
end

It is based on the recent Ubuntu 14.04 (Trusty). The provisioning script is very simple: its job is to install Docker. Note also the forwarding of port 8200. Initialize the virtual machine by running:

vagrant up

Give it a minute or two and now the virtual machine should be ready. You can verify this by running VirtualBox Manager. If there is no problem whatsoever, we can connect to that virtual machine:

vagrant ssh

In this ssh session, you can run docker build and docker run as previously described. Since port 8200 is correctly forwarded, you could also visit http://localhost:8200 using e.g. Safari running on OS X (the host system).

With this setup, you can witness the power of virtualization. Your OS X machine runs an Ubuntu 14.04 system in a VirtualBox-based virtual machine. Within that Ubuntu system, a CentOS 6.5 system runs inside a container. And the simple Go-based HTTP server is executed in that container. Fun, isn’t it?

Last but not least, the fresh Vagrant 1.6 release has official support for Docker as a new provider. I haven’t tried it, but if you find that this official Docker provider streamlines the workflow even further, please do share.
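For the curious, based on the Vagrant documentation a provider-based setup would look roughly like this; treat it as an untested sketch (the option names come from the Docker provider docs):

```ruby
# Untested sketch: let Vagrant drive Docker directly, no explicit VM.
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.build_dir = "."          # build from the Dockerfile in this directory
    d.ports = ["8200:8200"]    # forward the HTTP server port
  end
end
```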

Contain all the things!

Tags:

NaN, not a number, is a special value used to denote an unrepresentable value. With JavaScript, NaN can cause some confusion, starting from its typeof all the way to how comparison is handled.

Several operations can lead to NaN as the result. Here are some examples (follow along on JSBin: jsbin.com/yulef):

Math.sqrt(-2)
Math.log(-1)
0/0
parseFloat('foo')

The first trap for many JavaScript beginners is usually the unexpected result of calling typeof:

console.log(typeof NaN);   // 'number'

In a way, while NaN isn’t supposed to be a number, its type is number. Got it?

Stay calm, as this will continue to lead to many confusing paths. Let’s compare two NaNs:

var x = Math.sqrt(-2);
var y = Math.log(-1);
console.log(x == y);      // false

Maybe that’s because we’re supposed to use strict equal (===) operator instead? Apparently not.

var x = Math.sqrt(-2);
var y = Math.log(-1);
console.log(x === y);      // false

Arrgh! Could it be because they are NaNs from two different operations? What about…

var x = Math.sqrt(-2);
var y = Math.sqrt(-2);
console.log(x == y);      // false

Even crazier:

var x = Math.sqrt(-2);
console.log(x == x);      // false

What about comparing two real NaNs?

console.log(NaN === NaN); // false

Because there are many ways to represent a NaN, it makes sense that one NaN will not be equal to another NaN. Still, this is the reason why I sometimes tweet:

To solve this, originally I intended to submit this proposal for ECMAScript 7:

nan

But of course, solutions (and workarounds) already exist today.

Let’s get to know the global function isNaN:

console.log(isNaN(NaN));      // true

Alas, isNaN() has its own well-known flaws:

console.log(isNaN('hello'));  // true
console.log(isNaN(['x']));    // true
console.log(isNaN({}));       // true

This often leads to a number of different workarounds. One example is to exploit the non-reflexive nature of NaN, i.e. the fact that NaN never equals itself (see e.g. Kit Cambridge’s note):

var My = {
  isNaN: function (x) { return x !== x; }
}
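This works because NaN is the only value in the language that compares unequal to itself. A quick sanity check (repeating the definition so the snippet stands alone):

```javascript
// NaN is the only JavaScript value for which x !== x holds,
// so self-comparison is a reliable NaN test.
var My = {
  isNaN: function (x) { return x !== x; }
};

console.log(My.isNaN(NaN));            // true
console.log(My.isNaN(Math.sqrt(-2)));  // true
console.log(My.isNaN('hello'));        // false (no coercion involved)
console.log(My.isNaN(42));             // false
```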

Another example is to check for the value’s type first (to prevent coercion):

My.isNaN = function(x) { return typeof x === 'number' && isNaN(x); };

Note: The coercion that is being blocked here is related to isNaN. As an exercise, compare the result of isNaN(2), isNaN('2') and isNaN('two').
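If you try the exercise, what drives the results is that the global isNaN converts its argument with Number() first:

```javascript
// The global isNaN coerces its argument before testing it:
console.log(Number('2'));    // 2, a valid number
console.log(Number('two'));  // NaN, the conversion fails

// Hence:
console.log(isNaN(2));       // false
console.log(isNaN('2'));     // false ('2' becomes the number 2)
console.log(isNaN('two'));   // true  ('two' becomes NaN)
```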

Fortunately, the upcoming ECMAScript 6 includes Number.isNaN(), which provides true NaN detection (by the way, you can already use this function in the latest versions of Chrome and Firefox). In the latest draft from April 2014 (Rev 24), it is specified in Section 20.1.2.4:

When the Number.isNaN is called with one argument number, the following steps are taken:
1. If Type(number) is not Number, return false.
2. If number is NaN, return true.
3. Otherwise, return false.
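Those three steps translate directly into a small polyfill sketch (only needed in environments lacking Number.isNaN; step 2 again relies on NaN being the sole self-unequal value):

```javascript
// Minimal polyfill sketch following the spec steps above.
if (typeof Number.isNaN !== 'function') {
  Number.isNaN = function (number) {
    // Step 1: if the argument is not a Number, return false.
    if (typeof number !== 'number') return false;
    // Steps 2 and 3: only NaN compares unequal to itself.
    return number !== number;
  };
}

console.log(Number.isNaN(NaN));      // true
console.log(Number.isNaN('hello'));  // false
```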

In other words, it returns true only if the argument is really NaN:

console.log(Number.isNaN(NaN));            // true
console.log(Number.isNaN(Math.sqrt(-2)));  // true
 
console.log(Number.isNaN('hello'));        // false
console.log(Number.isNaN(['x']));          // false
console.log(Number.isNaN({}));             // false

Next time you need to deal with NaN, be extremely careful!