Power users on OS X are familiar with Homebrew or MacPorts for conveniently installing and managing software packages. Yet those two well-known tools are not the only players. There is growing interest in Nix, particularly for its use on OS X.

Package management using Nix is quite simple and intuitive. It works quite well as a replacement for Homebrew and MacPorts. To get started, install Nix by following the instructions:

curl https://nixos.org/nix/install | sh

Nix only needs access to /nix; it does not touch any other top-level directories (Nix will never pollute your /usr or /usr/local). Hence, removing Nix is a matter of nuking that /nix directory.
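Uninstalling really is that simple. A minimal sketch (the per-user files below are the defaults written by the installer; adjust if your setup differs):

```shell
# Remove the store itself; this is everything Nix ever installed.
# (On a default install the store is owned by root, hence sudo.)
sudo rm -rf /nix 2>/dev/null || rm -rf /nix
# Remove the per-user state the installer leaves behind.
rm -rf ~/.nix-profile ~/.nix-defexpr ~/.nix-channels
```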

Once it is installed, the command-line tool you will interact with the most is nix-env. Try installing a trivial package like this:

$ nix-env -i hello
installing ‘hello-2.10’
these paths will be fetched (0.02 MiB download, 0.07 MiB unpacked):
fetching path ‘/nix/store/b6bxihaz9s5c79dsgbbxvjg8w44a036i-hello-2.10’...
$ hello --version
hello (GNU Hello) 2.10

Note the installation path, a peculiar subdirectory under /nix/store. The name contains the cryptographic hash of all inputs necessary to build the package, essentially capturing the complete build dependencies. This enables powerful Nix features such as easy handling of multiple package versions, atomic installation, and many more.
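A quick way to see this on disk (a sketch; it assumes the hello package from above is still installed):

```shell
# Each package lives in its own hash-prefixed directory, so different
# versions (or builds with different inputs) never collide.
if [ -d /nix/store ]; then
  ls -d /nix/store/*-hello-* 2>/dev/null || echo "no hello in the store"
else
  echo "/nix not present; skipping"
fi
```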

Nix also creates a profile for every user, which you will notice once you search for an executable (the importance of the Nix profile itself will become more obvious as you grow more familiar with Nix).

$ which hello

Removing a package is as easy as installing it:

$ nix-env -e hello
uninstalling ‘hello-2.10’

In many cases, Nix will install a package in its binary form (as built and cached by the Hydra-based build farm).


Wondering what you can install with Nix? Well, Nix’s collection of packages (around seven thousand on OS X) is not as extensive as Homebrew’s or MacPorts’. Yet you may find the common packages already available, from Git to Vim (and its plugins). To list all available packages:

$ nix-env -qa
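To narrow the list down, nix-env also accepts a name selector directly (the names below are just examples):

```shell
# -q queries packages, -a means "available" (not just installed ones).
if command -v nix-env >/dev/null 2>&1; then
  nix-env -qa 'git'    # every available version of git
  nix-env -qa 'vim*'   # vim and its variants
else
  echo "nix-env not installed; skipping"
fi
```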

Just like every package manager, Nix is also useful for upgrading your arsenal of tools. For instance, OS X El Capitan is armed with Git 2.6 by default. But perhaps you want to use the most recent Git 2.8 instead. This is not a difficult endeavor:

$ git --version
git version 2.6.4 (Apple Git-63)
$ nix-env -i git
warning: there are multiple derivations named ‘git-2.8.0’; using the first one
installing ‘git-2.8.0’
$ which git
$ git --version
git version 2.8.0

Later on, if you decide that you don’t like the latest version and prefer to stick with the default one, rolling back leaves no meaningful leftovers and returns the system to exactly the state it was in before you installed Git 2.8:

$ nix-env -e git
uninstalling ‘git-2.8.0’
$ which git
$ git --version
git version 2.6.4 (Apple Git-63)

These package management tasks are not unique to Nix. Wait for the sequel to this post, where we will learn the power of Nix for comfortably handling multiple environments (e.g. Python 2.7 vs Python 3.5).


When using a programming library, it is unfortunate that we often encounter function and property names in their negative form. Particularly when there is a choice between two values, using the positive form helps reduce confusion and ambiguity.


As a non-native speaker, understanding a request for help sometimes troubled me a little, especially in the first few years of using the language regularly. With English, my biggest challenge was "Do you mind if I borrow your pen?" and similar constructs. In the beginning, almost automatically, my answer was always "Yes" (of course, what I meant was "Yes, you can use my pen"). Note how my answer was semantically not in agreement with my intention. If it were expanded into "Yes, I do mind", that would definitely indicate that I did not want that person to borrow my pen.

If you wake me up in the middle of the night because you need my help to figure out whether A equals a when caseInsensitive is false, I will have a hard time. It would be much easier for me if that property carried the name caseSensitive instead.

In the web world, it is hard to avoid this form because disabled is a popular property name, especially for form elements. This also means that the use of disabled propagates to various libraries, from jQuery UI to AngularJS (ngDisabled).

The same problem applies to function names. Looking at any piece of code that invokes setHidden(false), I need to pause and think to make sure I don’t get it wrong ("is that component visible or not?"). That is less likely, of course, if it is simply setVisible(true). A similar case can be made for methods such as disallowThis() or disallowThat(). An alternative exists: simply use the allowThis() and allowThat() variants when it makes sense.

In the real world, we are more familiar with the presence of a thing than with its absence. You feel that the coffee shop is warm because of the heat, not due to the lack of cold. As you get ready to request a refill, your barista notices that your cup is a quarter full, not three-quarters empty.

Next time you want to expose a property or a method in a public API, think about its positive vs negative form!


We often hear stories of harmful last-mile content tampering, from YouTube video downgrading to JavaScript-injected advertisements. This prompted me to run an experiment with an always-on VPN on my phone (Nexus 5X running Android 6). Surprisingly, I came to the conclusion that it is definitely feasible to do so without affecting battery life. Even if you are not a road warrior, it is still worth giving it a try to see how it goes.

There are many ways to set up a VPN, from doing everything yourself to simply using a commercial service. For Android or iOS devices, there are numerous popular choices for a good VPN service. Comparing different VPN services is beyond the scope of this blog post; feel free to check out the popular ones such as Vypr, Express, Pure, and many more. One thing I discovered while doing this exercise is that there is no single "best VPN", as the service you choose will depend on the trade-offs you are willing to make.

One service that I liked a lot and thus I ran it for a while for this experiment is Private Tunnel (works well for both Android and iOS). It is developed and offered by OpenVPN Technologies, the folks behind the OpenVPN project. Unsurprisingly of course, it uses OpenVPN under the hood. What makes Private Tunnel very attractive to me is the pricing model. Unlike other services that use the usual monthly subscription model, Private Tunnel usage is based on volume. This is typically known as pay-as-you-go, e.g. $20 gives you access to 100 GB traffic. Since I do not use my phone for high-bandwidth activities (such as media streaming), it is going to take me forever to hit that 100 GB quota. For all intents and purposes, it is very affordable.

One does not need to have any technical expertise on VPN or OpenVPN to use Private Tunnel.
The app itself is almost trivial: launch it and press the Connect button. It could not be simpler than that. The only drawback is the lack of automatic reconnect. If your WiFi is flaky or your 3G/4G/LTE is spotty, occasionally you will find yourself off the tunneled connection. In a way, this is a good thing because it trains you to watch for the lock symbol in the status bar, hence building your natural sense of security.

There are advantages and disadvantages to using a VPN. If you subscribe to the school of thought of using a VPN continuously, at least now you can do so even as you consume the Internet from your wonderful smartphone.

In the next installment, we will take a look at some easy steps to set up an OpenVPN server manually and use an OpenVPN client from your phone.


MozJPEG, a JPEG encoder project from Mozilla, is a fantastic way to optimize your JPEG files. Setting it up, however, can be quite a hassle. Fortunately, a containerized environment such as Docker offers a much simpler way to use MozJPEG.

The important requirement is that you have Docker installed and ready to use. If you are on Linux, this should be easy. For OS X and Windows users, follow the steps in my previous blog post on Easy Docker on OS X.

First, we will grab MozJPEG source code:

git clone git://github.com/mozilla/mozjpeg.git 
cd mozjpeg 
git checkout v3.1

In the current directory, create a Dockerfile with the following content. As you can see, here we will base it on Alpine Linux since it is quite small (around 5 MB).

FROM alpine:3.3 
ADD . /source 
RUN apk --update add autoconf automake build-base libtool nasm 
RUN cd /source && autoreconf -fiv && ./configure --prefix=/usr && make install

Once this Dockerfile is ready, fire it up with:

docker build -t mozjpeg .

You can watch the progress as Docker grabs the Alpine 3.3 base image and lets Alpine’s package manager, apk, install a number of dependent packages. After that, MozJPEG is compiled and built from source. Once this is completed (it may take a while), we are ready to use this new image for optimizing JPEGs. From this point on, there is no need to stay in the current directory.

For the basics on using MozJPEG, I recommend reading the article Using mozjpeg to Create Efficient JPEGs. Let’s say we have a picture we want to optimize, e.g. photo.jpg. We can start a new Docker container containing the above compiled MozJPEG and use it as follows:

cd ~/Documents 
docker run -v $PWD:/img mozjpeg sh -c "cjpeg -quality 80
  /img/photo.jpg > /img/photo_small.jpg"

The command-line option -v $PWD:/img maps the current directory on your host machine to /img as seen from within the container. After that, a shell is invoked with the full command to start MozJPEG’s cjpeg at quality level 80. When I tried this on a simple photo, I was very happy with the optimized version and got a massive decrease in file size (from 158 KB to 45 KB). Of course, your mileage may vary, so make sure you read Kornel’s excellent article on fair image comparison.
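The same image can batch-process a whole directory of pictures. A sketch, assuming the mozjpeg image built earlier; the out/ subdirectory and the quality level are arbitrary choices, and cjpeg is on the PATH inside the container because the Dockerfile configured --prefix=/usr:

```shell
if command -v docker >/dev/null 2>&1; then
  mkdir -p out
  for f in *.jpg; do
    [ -e "$f" ] || continue   # skip the loop if no .jpg files match
    # One container invocation per file; output lands in out/.
    docker run -v "$PWD":/img mozjpeg \
      sh -c "cjpeg -quality 80 /img/$f > /img/out/$f"
  done
else
  echo "docker not installed; skipping"
fi
```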

Still not optimizing your JPEG files? Now you have no more excuse!


Using Docker on OS X is getting easier. Previously, it involved setting up boot2docker by hand. With the new Docker Toolbox (which wraps boot2docker and Kitematic, among others), installing Docker is almost trivial.

The first step is to download and install Docker Toolbox. In case it encounters a problem during the initial run, I recommend reinstalling or updating your VirtualBox first. The major component of Docker Toolbox is Kitematic, which serves as a gateway to a catalog of Docker images. You can launch a container based on a particular image. The first image, hello-world-nginx, is always a good start. It contains an instance of Nginx, a popular web server. Click on the CREATE button and wait for a moment while the image is downloaded from Docker Hub. The progress can be monitored from the container logs.


Once the container is running, we can test it. Go to the Settings tab and then Ports. It will show the IP address and port on the OS X system where Nginx is serving the page. Now all you need to do is to open your favorite web browser to go to that address and you will see the HTML content served by Nginx inside the container.
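The same check can be done from a terminal with curl. A sketch; the address below is a placeholder, so substitute the IP and port shown in the Ports tab (Docker Toolbox commonly uses 192.168.99.100 for its VM):

```shell
# Placeholder address: copy the real one from Kitematic's Ports tab.
NGINX_URL="http://192.168.99.100:32769"
# Fetch the page Nginx serves from inside the container.
curl -s --max-time 5 "$NGINX_URL" || echo "not reachable; check the address"
```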

To change the content of the HTML file, go to the Volumes setting and click on /website_files. Kitematic will map it to a directory on your OS X system. This is the fun part: edit the HTML file, save it, and refresh your web browser. It is fun to see this round-trip of bits and bytes: a file from the OS X system goes into the container, is served by Nginx running in Linux, and then appears as web content in a web browser.


If you want to play with the Docker command-line client, simply click the DOCKER CLI button in the bottom left corner of Kitematic. This will open the terminal app with access to the docker executable. Try it by running:

$ docker --version 
Docker version 1.9.1, build a34a1d5

or by running this minimalistic Node.js image (obviously, it is a complicated way to compute the square root of a number):

$ docker run iron/node node -e "console.log(Math.sqrt(2))" 

Still afraid of Docker hassle? Hopefully this post shows that the fear is not justified.


ChakraCore, the JavaScript engine that powers the new Edge browser, has been open-sourced by Microsoft. While it currently only runs on Windows, support for other operating systems is on the roadmap. It is fun to follow the progress of the Linux version of ChakraCore.

Obviously, nothing works on Linux yet since the porting is still in its early stages. For a start, ChakraCore on Windows is built using Visual Studio; for other platforms, the build process needs to use CMake.

On top of that, the Windows-specific APIs need to be abstracted away. The ChakraCore team decided to bring in the important bits and pieces from CoreCLR, another open-source .NET runtime from Microsoft. CoreCLR already contains the so-called PAL (Platform Adaptation Layer), which deals with file handling, memory management, threading, etc. It makes sense to leverage PAL since it already works quite well for multi-OS support in CoreCLR.


How do you play with ChakraCore on Linux? The easiest way is to use a virtual machine, which also lets you follow along if your host machine is OS X. In fact, even on a Linux host, it is quite beneficial to isolate the setup from your main working environment. I use Vagrant since it is the fastest to prepare. Since ChakraCore on Linux requires Clang version 3.5, a convenient choice is to base your system on Ubuntu 15.10 (Wily Werewolf):

vagrant init ubuntu/wily64
vagrant up && vagrant ssh

You need a couple of packages:

sudo apt-get -y update
sudo apt-get install -y build-essential git cmake clang

Switch from GCC to Clang:

export CC="$(which clang)"
export CXX="$(which clang++)"
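Before kicking off the long build, it is worth a quick sanity check that the right compilers are picked up (a sketch; run it in the same shell where CC and CXX were exported):

```shell
# Both should report a clang version string, not gcc.
if [ -n "$CC" ] && [ -n "$CXX" ]; then
  "$CC" --version | head -n 1
  "$CXX" --version | head -n 1
else
  echo "CC/CXX are not set; export them first"
fi
```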

Now the party begins. Clone the repository of ChakraCore, switch to the Linux branch, and have fun!

git clone https://github.com/Microsoft/ChakraCore.git
cd ChakraCore
git branch linux origin/linux
git checkout linux
cmake . && make