With so many great cross-platform libraries out there, there is hardly any need to reinvent the wheel. In many cases, it is even possible to extract a portion of a sophisticated multi-platform application's code and reuse it in a different application. In this example, a basic CPU detection class from the Chromium C++ code base is built into a simple command-line tool.

The challenge in such an extraction process is figuring out the dependencies. Chromium's src/base directory contains a lot of useful basic classes for application development. They are, however, targeted primarily at building Chromium as a whole. Often, that means it is not easy to use just one class without pulling in a series of other dependent classes. In the worst case, you will go down the rabbit hole.

For this exercise, we will be using the base::CPU class found in base/cpu.h. As expected, this class depends on some other classes. Fortunately for us, it does not go that far. The dependency graph looks like the following:

[Image: dependency graph]

Thus, what we need to do is grab all those files from the Chromium source tree. So that you can follow along, I have prepared a Git repository at bitbucket.org/ariya/cpu-detect. Once this base::CPU class is ready, using it is trivial:

base::CPU cpu;
std::cout << "Vendor: " << cpu.vendor_name() << std::endl;

For the complete demo app, here is the full output when running on my Chromebook Pixel:

[Image: output of the CPU detection demo on a Chromebook Pixel]

Since this is just an example, we omit a rather important detail. The class implementation actually needs to use StringPiece. In our extracted flavor above, string_piece.h is an empty file (to satisfy the include). We don't need any real code because StringPiece is only necessary on the ARM architecture. If we stick with Unix on x86, then we can live with that dummy header file. Obviously, in real life and with other classes, you may not be that lucky.

Isn’t it true that we don’t code today what we can’t reuse tomorrow?

[Image: autumn 2014 conferences]

After a short pause, I'll be giving tech talks again in a few weeks. The first one will be at the jQuery Conference in Chicago, and the other at the autumn edition of the HTML5 Developer Conference in San Francisco.

For the jQuery folks, I’d like to share my understanding as to how web browsers execute JavaScript code. The official title of the talk is JavaScript and the Browser: Under the Hood and the abstract looks like the following:

A browser's JavaScript engine can seem like a magical black box. During this session, we'll show you how it works from 10,000 feet and give you an understanding of the main building blocks of a JavaScript engine: the parser, the virtual machine, and the run-time libraries. In addition, the complicated relationship between the web browser and the JavaScript environment will be exposed. If you are curious as to what happens under the hood when the browser modifies the DOM, handles scripted user interaction, or executes a piece of jQuery code, don't miss this session!

In San Francisco, I will use the opportunity to demystify the secrets behind a smooth, animated user interface. The talk itself is basically a walkthrough of some bite-size examples, demonstrating, among other things, Cover Flow in JavaScript and CSS 3-D.

With support for buttery-smooth, GPU-accelerated 3-D effects in CSS, modern browsers allow the implementation of stunningly fluid and dynamic user interfaces. To ensure that the performance stays at the 60 fps level, certain best practices need to be followed: fast JavaScript code, usage of requestAnimationFrame, and optimized GPU compositing. This talk aims to give a step-by-step guide to implementing the famous Cover Flow effect with JavaScript and CSS 3-D. We will start with the basic principle of a custom touch-based scrolling technique, the math & physics behind the momentum effect, and the use of perspective transformation to build a slick interface. Don't worry, the final code is barely 200 lines!
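
As a small taste of one of those practices, here is a generic requestAnimationFrame loop (just a minimal sketch with hypothetical updateScrollPosition and render functions, not code taken from the talk):

var lastTimestamp = null;

function tick(timestamp) {
  // Elapsed time since the previous frame, so the motion stays frame-rate independent
  var delta = (lastTimestamp === null) ? 0 : timestamp - lastTimestamp;
  lastTimestamp = timestamp;

  updateScrollPosition(delta);  // hypothetical: advance the momentum scrolling
  render();                     // hypothetical: repaint the interface

  // Ask the browser for the next frame, ideally arriving at ~60 fps
  window.requestAnimationFrame(tick);
}

window.requestAnimationFrame(tick);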

If you will be around in Chicago or San Francisco, don’t miss these talks!

As I mentioned in my earlier blog post, we are now working toward stabilizing the development version of PhantomJS. One thing I would like to elaborate on here, with respect to the features of the forthcoming PhantomJS 2, is its improved JavaScript support.

With the fresher WebKit (thanks to Qt 5.3's QtWebKit module), PhantomJS 2 also benefits from a lot of JavaScript improvements in JavaScriptCore, the JavaScript engine of WebKit. This brings PhantomJS more or less in line with the set of features expected from a modern web browser.

[Image: test262]

In fact, if we run the official test suite for ECMA-262, the ECMAScript Language Specification version 5.1, at test262.ecmascript.org, we will see that PhantomJS (just like Safari 7) fails only 2 tests. If you compare it with other browsers, Chrome 36 has 4 failures and Firefox 31 has 42 failing tests. In a way, we can say that (by leveraging JavaScriptCore) PhantomJS is pretty much as standards-compliant as it can get.

[Image: test262 failure counts across browsers]

Among other things, many will rejoice at the availability of Function.prototype.bind (bug 10522), something that is missing from the old QtWebKit in Qt 4.8. There were various attempts to solve this in the past; see my previous post to the mailing-list. However, as I wrote there, sometimes our collective effort was not enough to move the mountain.
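
As a quick illustration (a made-up snippet, not taken from the PhantomJS test suite), code like the following now works out of the box:

function greet(greeting, name) {
  return greeting + ', ' + name + '!';
}

// bind creates a new function with `this` and the first argument preset
var sayHello = greet.bind(null, 'Hello');
console.log(sayHello('PhantomJS'));  // prints "Hello, PhantomJS!"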

Another related improvement is the handling of the special object arguments. Just like in other modern browsers, you can now JSON.stringify this object (bug 11845), use it in a for-in loop (bug 11558, bug 10315), and call Object.keys on it (bug 11746).
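
Here is a contrived snippet (again, just an illustration) that exercises all three of those improvements:

function inspect() {
  // Serialize the special arguments object (bug 11845)
  console.log(JSON.stringify(arguments));

  // Enumerate its indices in a for-in loop (bug 11558, bug 10315)
  for (var i in arguments) {
    console.log(i, arguments[i]);
  }

  // Obtain its keys via Object.keys (bug 11746)
  console.log(Object.keys(arguments));
}

inspect('foo', 42);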

PhantomJS 2 will be released whenever it is ready (monitor the mailing-list if you want to get notified). Meanwhile, if you want to compile it yourself and play with this bleeding-edge version, follow the step-by-step instructions. If you are on Linux or OS X (like typical geeks these days), building it yourself should take just 30 minutes on a fairly decent machine.

Good things come to those who wait!

[Image: cgdb screenshot]

Mastering the GNU Debugger (gdb) is an essential skill for many programmers these days. In many cases, debugging with gdb is carried out straight from your favorite editor or IDE. For a quick stand-alone debugging session, a nice alternative is a visual, terminal-based wrapper for gdb called cgdb.

cgdb is designed to appeal to vim users. When you launch cgdb, it shows a split-screen arrangement. The bottom half is the usual gdb console, while the top half shows the content of the file where the program currently stops. The size of each window can be easily adjusted; I often shrink the gdb console to a minimum since I like to see more of the file content.

Every time gdb single-steps or moves to the next source line, the corresponding line in the source view is highlighted. The marker is even customizable: either an arrow indicator in the gutter or a highlight of the entire line. The source view window is always synchronized; if gdb jumps to another file, you will immediately see the content of that file.

Opening another source file is also easy, thanks to its vim-like incremental filename search. Once a particular file is open, setting a breakpoint on a certain line is as simple as specifying the line number in the gdb console.
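
For example, with a hypothetical cpu.cc shown in the source window, a quick session in the gdb console at the bottom could look like this:

(gdb) break cpu.cc:42
(gdb) run
(gdb) next

The break command sets a breakpoint at line 42 of that file, run starts the program, and every next moves the highlighted line in the source window accordingly.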

There are many more things you can do with cgdb; refer to its documentation for the details. If you have been stuck with plain vanilla gdb until now, I highly recommend trying cgdb for fun and profit!

Recently, Google engineers landed a new optimizing JavaScript compiler for V8, codenamed TurboFan. As the name implies, this is supposed to further improve JavaScript execution speed, likely to be better than its predecessor, Crankshaft. While TurboFan is still in its early stage, that doesn't mean we can't take a look at it.

Playing with this TurboFan flavor of V8 is not difficult. First you need to build the bleeding-edge branch, where these 70,000 lines of code currently reside. After the new V8 shell is freshly baked, we can have some fun inspecting TurboFan's work.

I have not had the time to dig really deep yet, so for now we will just take a peek at the initial stage of the new optimizing compiler's pipeline.

Let’s have a simple test program:

function answer() {
  return 42;
}
print(answer());

If this is test.js, then we can play with TurboFan by running:

/path/to/v8/shell --always-opt \
  --trace-turbo \
  --turbo-types \
  --turbo-filter=answer \
  test.js

We use the --always-opt option so that the code is optimized immediately (otherwise, only e.g. hot loops will be optimized). In order to inspect TurboFan, the --trace-turbo and --turbo-types options are necessary. Last but not least, we are only interested in examining our own function answer(), hence the use of the --turbo-filter option. If we pass * instead, V8 will dump too much information, mostly about other internals that are irrelevant to this discussion.

We can see that TurboFan is doing its magic by looking at the first few lines of the output:

---------------------------------------------------
Begin compiling method answer using Turbofan

For further investigation, it is better to redirect the output to a log file. The log file will contain the data for 4 different graphs: the initial untyped graph, the context specialized graph, the lowered typed graph, and the lowered generic graph. This is the result of the TurboFan compilation pipeline. Every graph is easy to visualize since it is printed in the de facto standard dot format.

First, we need to separate each individual graph:

csplit -s -f graph log "/--/" {4}

Assuming GraphViz is installed, we can see the first graph, the initial untyped one, by running:

tail -n +2 graph01 | dot -T png > untyped.png

which is shown in the following screenshot:

[Image: initial untyped IR graph]

This is the intermediate representation (IR) directed graph. You may recognize some nodes in the graph resembling the original JavaScript code, such as the NumberConstant[42] and Return nodes. Each node has an operator and its associated IR opcode. This is very similar to the Sea of Nodes IR approach from Cliff Click (see Combining analyses, combining optimizations and A Simple Graph-Based Intermediate Representation) used by the Java HotSpot compiler.

The above graph is built in the first compiler pipeline stage by traversing the abstract syntax tree. There is hardly a surprise here. The Start (node #0) and End (node #10) nodes are self-explanatory. For every function, the information on its Parameter (node #4) is always mandatory. Checking the stack is an inherent part of V8 internals, hence the need for JSCallRuntime (node #5).

Inside the function body, every statement is visited by the AST builder. In our example, there is only one: a return statement. For this, the builder also needs to visit the argument, which happens to be a numeric literal. The final outcome is a node representing the opcode Return (node #7), which also refers to the constant (node #6).

The node in gray (Return, node #9) indicates that it is "dead", i.e. unused. This is actually a special return statement (returning undefined) which plays a role only if the function does not have an explicit return. Since that is not the case here, the node is not used or referred to anywhere, hence its dead status.
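
For comparison, in a function without an explicit return (a made-up example, not part of our test.js), that implicit Return node would be the one that is actually used:

function log(x) {
  print(x);
  // no explicit return here: the function implicitly returns undefined,
  // so the builder's extra Return node is live rather than dead
}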

After this initial graph is obtained, the next stages are context specialization, type analysis, and IR lowering. These three topics are outside the scope (pun intended) of what I want to cover right now, so we will have to discuss them some other time. However, note that our test.js is very simple: there are no assignments or complicated operations, and hence the subsequent compiler stages do not enhance the IR graph in any meaningful way. In fact, if you plot graph02 (using a similar dot command as before), you will see that the resulting image is exactly the same as in the previous screenshot.
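
For instance, assuming the same csplit output naming as above, something along these lines plots it:

tail -n +2 graph02 | dot -T png > specialized.png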

Ultimately, TurboFan needs to generate some machine code. Predictably, it has its own code generator (currently for x86 and ARM, both 32-bit and 64-bit); it does not reuse the existing Hydrogen and Lithium code generators from Crankshaft. The machine code is emitted from the instruction sequence. If you take a look at the log file, the relevant part of the sequence is:

      14: ArchRet v6(=rax)

If you trace what v6 is, you will find that it refers to the constant 42. On x86-64, this instruction can thus be turned into MOVQ RAX, 0x2A00000000 followed by RET 8. Straightforward, isn't it?

TurboFan is still very young and I'm sure there is still a lot of room to grow. In the most recent episode of JavaScript engine optimization, WebKit enjoys a speed boost thanks to the new FTL (a fourth-tier JIT compiler based on LLVM), while Firefox continues to refine its whole-method JIT compiler, IonMonkey. Will TurboFan become V8's answer to them?

Welcome to the world, TurboFan!

It has been a while since PhantomJS received a facelift. This is about to change: the current master branch is now running the unstable version of PhantomJS 2. Among other things, this brings the fresher Qt 5.3 and its updated QtWebKit module.

Thanks to the hard work of many contributors, in particular @Vitallium and KDAB, PhantomJS 2 is getting close to its final stage. There are still many things to do, from fixing the failing unit tests to running a thorough integration test, but at least the current master branch can be built on Linux, OS X, and Windows already.

A typical user will want to wait until the final release to get the official binaries. However, those who are brave enough to experiment with this bleeding-edge version are welcome to build it from source.

We still do not know when it is going to be ready for the final release; stay tuned and monitor the mailing-list.

With this new major version, we also have an opportunity to review and improve the development workflow. Several topics are already being discussed (feel free to participate; your feedback will be appreciated): removing CoffeeScript support, revamping the test system, searching for a better issue tracker, building a continuous integration system, and last but not least, a modularization approach for future distribution. These tasks are far from trivial; any kind of help is always welcome.

A journey of a thousand miles begins with a single step. And expect more rolling updates in the next few weeks!