In the last two days I have been busy with DevNexus 2014, the largest tech conference held in Atlanta. DevNexus has been going on for a while, and this year it got really big, with over 1,000 attendees and tons of speakers.

For this edition of DevNexus, I delivered two talks: Tweaking CSS3 for Hardware Acceleration (slide deck, PDF download of 4.2 MB) and JavaScript API Design Principles (slide deck, PDF download of 2.1 MB). I got a lot of good discussion and feedback after each talk. The sessions were recorded; expect to see the audio/video in the near future. Of course, I also enjoyed the opportunity to meet folks I’ve known only from our online interactions, as well as to make new friends in this space.

The conference itself has been fantastic, professionally organized without a single glitch. WiFi worked flawlessly, and the snacks and the food were amazing (obviously, there was Coca-Cola everywhere). Kudos to the sponsors and the organizers for such a memorable event!

This was my first visit to Atlanta and it has been quite impressive. I definitely would not mind coming back in the near future. Meanwhile, I have two more events to visit, Fluent and EclipseCon, to complete my winter/spring conference tour. Next stop: San Francisco.

The use of the graphics processing unit (GPU) in modern browsers, particularly for page rendering, means that there is no longer any excuse for laggy animation. In some cases, when the intended animation is not necessarily GPU friendly, a few tricks need to be employed so that the graphics operations do not cause a performance problem.

The fundamental principles of exploiting the GPU for popular CSS features were described in my previous blog post on Optimizing CSS3 for GPU Compositing. The key here is the compositing process: the browser uploads portions of the page as GPU textures, and subsequent animation frames involve only a small set of operations on those textures. Current browser rendering engines allow a few operations to be delegated to the GPU compositor: opacity, transform, and filter.
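As an illustration, an animation that only touches those compositor-friendly properties can be played back without any further texture uploads. This is only a sketch; the selector name and the timing values below are made up for this post, not taken from any demo.

.toast {
    /* Illustrative selector and timing: any transform/opacity animation behaves the same way. */
    animation: slide-in 300ms ease-out;
}

@keyframes slide-in {
    0%   { transform: translateX(-100px); opacity: 0; }
    100% { transform: translateX(0); opacity: 1; }
}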

Still, for a buttery-smooth 60 fps interface, web developers need to ensure that certain rendering operations are GPU friendly. As mentioned in my previous blog post, an easy way to verify that is by using Safari’s Show Compositing Border feature. The number in the top left corner of each rectangle counts, more or less, every content update that necessitates a texture upload to the GPU. Efficient compositing is indicated by that number staying unchanged during the course of the animation.

What about animations that are not easily handled by the compositor? Let’s take a look at the following example (also check the live demo at codepen.io/ariya/full/xuwgy):

/* In the live demo the box also moves horizontally back and forth. */
@keyframes box {
    0%   { background-color: green; }
    100% { background-color: blue; }
}

For clarity, the box also moves horizontally back and forth. When it is on the left, the color is green, and as it moves to the right, the color changes to blue. Safari (with its compositing border indicator enabled) reveals that there is a continuous re-rendering of the box onto a GPU texture. With more boxes, particularly when viewed on a mobile device, the animation can cause dropped frames or even an application crash.

To overcome this issue, we need to find a trick that avoids the continuous texture updates. In the simple color transition example above, we are lucky since we can opt to use a CSS filter instead. However, if we assume we can’t or won’t be using it, what would be a more generalized approach? For an animation with a short duration, where accuracy is not a big concern, we can approximate it by superimposing two elements, one representing the initial state and the other the final state, and tweening their opacity accordingly. To get a feeling for it, check out the demo at codepen.io/ariya/full/ofDIh.
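A minimal sketch of the idea looks like this. The class names, sizes, and timing are illustrative; this is not the exact code behind the demo, just the layering and the opacity cross-fade.

/* Sketch only: two boxes stacked in the same spot, cross-fading their opacity. */
.stack {
    position: relative;
}

.stack .box {
    position: absolute;
    top: 0;
    left: 0;
    width: 100px;
    height: 100px;
}

.box.initial {
    background-color: green;
    animation: fade-out 2s infinite alternate;
}

.box.final {
    background-color: blue;
    animation: fade-in 2s infinite alternate;
}

@keyframes fade-out {
    0%   { opacity: 1; }
    100% { opacity: 0; }
}

@keyframes fade-in {
    0%   { opacity: 0; }
    100% { opacity: 1; }
}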

In this arrangement, the user has the illusion that the box changes color. What actually happens is that the green box starts to disappear as the blue one slowly appears. Since changing the opacity is a very cheap operation for the GPU, the animation stays smooth. The following diagram shows the magic behind the scenes: viewed by a user facing north-west, it is as if there is only one opaque box with a gradual color transition.

[diagram: layers]

This technique can be applied to other properties as well. For example, take a look at this glowing effect: codepen.io/ariya/full/nFADe. While the glow can be achieved by varying the blur radius of the shadow, with this trick we instead tween the opacity between the glowing and the non-glowing version. Less accurate, more shortcut.
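Sketched with the same layering approach (the class name and shadow values here are mine, not taken from the demo), the shadow lives on a copy of the element and only that copy’s opacity is animated:

/* Illustrative class name and shadow values. */
.glow {
    position: absolute;
    top: 0;
    left: 0;
    /* The expensive blurred shadow is painted once... */
    box-shadow: 0 0 20px 5px rgba(255, 200, 0, 0.8);
    /* ...and only its opacity changes from frame to frame. */
    animation: glow 1s infinite alternate;
}

@keyframes glow {
    0%   { opacity: 0; }
    100% { opacity: 1; }
}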

As with any other kind of workaround, the opacity tweening trick has some drawbacks. The most important one is that it requires more memory, since we trade memory for fast animation frames. Thus, be judicious in employing the trick: you can’t blindly consume all the available GPU textures for user interface animations.

Last but not least, if you prefer to watch a video on this subject, take a look at my past presentation on Fluid User Interface with Hardware Acceleration (28-min video, slide deck).

The amazing weather in San Diego was witness to the awesome jQuery Conference held this week. The keynotes were entertaining, the talks were inspiring (the videos will be available in the near future), and of course nothing beats meeting folks from this vibrant jQuery community.

For my part, I presented a talk titled Dynamic Code Analysis for JavaScript (check the slide deck, or download it as a PDF, 3.3 MB) as the first one in the Code for Thought track. Those who follow this blog corner might already be familiar with the assorted topics I’ve covered, particularly from these past blog posts:

Besides that, of course, meeting old acquaintances and making new friends is fun! It’s also nice to have face-to-face meetings with folks I’ve hitherto known only from online interactions.

The conference itself was professionally organized and simply fantastic (check out some pictures by @gseguin on Flickr). Kudos to the sponsors, organizers, and everyone involved in making such a memorable event!

This jQuery Conference is the start of my winter/spring conference tour. My next stop will be a bit farther: DevNexus in Atlanta.

In JavaScript, mistaking slice for splice (or vice versa) is a common mistake among rookies and even experts. These two functions, although they have similar names, do two completely different things. In practice, such confusion can be avoided by choosing an API that telegraphs the const-correctness of the function.

Array’s slice (ECMAScript 5.1 Specification, Section 15.4.4.10) is quite similar to String’s slice. According to the specification, slice accepts two arguments, start and end. It returns a new array containing the elements from the given start index up to, but not including, the specified end index. It’s not very difficult to understand what slice does:

'abc'.slice(1,2)           // "b"
[14, 3, 77].slice(1, 2)    //  [3]

An important aspect of slice is that it does not change the array on which it is invoked. The following code fragment illustrates the behavior. As you can see, x keeps its elements and y gets the sliced version thereof.

var x = [14, 3, 77];
var y = x.slice(1, 2);
console.log(x);          // [14, 3, 77]
console.log(y);          // [3]

Although splice (Section 15.4.4.12) also takes two arguments (at minimum), the meaning is very different:

[14, 3, 77].slice(1, 2)     //  [3]
[14, 3, 77].splice(1, 2)    //  [3, 77]

On top of that, splice also mutates the array on which it is called. This is not supposed to be a surprise; after all, the name splice implies it.

var x = [14, 3, 77];
var y = x.splice(1, 2);
console.log(x);          // [14]
console.log(y);          // [3, 77]

When you build your own module, it is important to choose an API that minimizes this slice vs splice confusion. Ideally, the users of your module should not need to constantly consult the documentation to figure out which function they really want. What kind of naming convention should we follow?

A convention I’m familiar with (from my past involvement with Qt) is to choose the right form of the verb: present tense to indicate a possibly mutating action, and past participle to return a new version without mutating the object. If possible, provide such methods in pairs. The following example illustrates the concept.

var p = new Point(100, 75);
p.translate(25, 25);
console.log(p);       // { x: 125, y: 100 }
 
var q = new Point(200, 100);
var s = q.translated(10, 50);
console.log(q);       // { x: 200, y: 100 }
console.log(s);       // { x: 210, y: 150 }

Note the difference between translate(), which moves the point (in a 2-D Cartesian coordinate system), and translated(), which only creates a translated version. The point object p changes because it calls translate(). Meanwhile, the object q stays the same since translated() does not modify it; it only returns a fresh copy as the new object s.
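For completeness, here is a minimal sketch of how such a pair could be implemented. This Point is purely illustrative; it is not taken from Qt or from any particular library.

// Illustrative sketch of the translate/translated pair.
function Point(x, y) {
    this.x = x;
    this.y = y;
}

// Present tense: modifies the point in place.
Point.prototype.translate = function (dx, dy) {
    this.x += dx;
    this.y += dy;
};

// Past participle: leaves the point untouched and returns a new one.
Point.prototype.translated = function (dx, dy) {
    return new Point(this.x + dx, this.y + dy);
};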

If this convention is used consistently throughout your application, that kind of confusion will be massively reduced. And one day, you can let your users sing I Can See Clearly Now happily!

At the recent San Francisco JavaScript meetup, I gave a talk on the subject of Searching and Sorting without Loops as part of its Functional Monthly series. In that talk, I explained various higher-order functions of JavaScript’s Array and how to use them to replace explicit loops (for or while) in several different code patterns.


I have published the slide deck for the talk (as a PDF, 1.4 MB). The talk started with a quick refresher on five of Array’s functions: map, filter, reduce, some, and every. Those functions were then used in the subsequent three parts of the talk: sequences, searching, and finally sorting. The first part, building sequences without any explicit loop, sets the stage for the rest of the discussion.
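As a small taste of that first part, here is one way to build a sequence without any for or while. The function name sequence is mine, not from the slides; it is just a sketch of the trick.

// Creates [0, 1, 2, ..., n - 1] without an explicit loop.
function sequence(n) {
    // Array.apply turns the sparse Array(n) into n real (undefined) elements,
    // so map visits every index.
    return Array.apply(0, Array(n)).map(function (value, index) {
        return index;
    });
}

sequence(5);   // [0, 1, 2, 3, 4]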

If you are following this blog corner, you may notice that this is not a new topic. This talk is basically a detailed walkthrough of some posts I have written in the past:

It may seem that you need a lot of effort to understand all this complicated stuff. Nevertheless, think about the reward: being able to understand this code fragment (it fits in a single tweet) that constructs the first n numbers in the Fibonacci series is quite satisfying:

function fibo(n) {
  // Array.apply(0, Array(n)) yields n real (undefined) elements, so the
  // reduce callback runs for every index z.
  return Array.apply(0, Array(n)).
    reduce(function(x, y, z){ return x.concat((z < 2) ? z : x[z-1] + x[z-2]); }, []);
}
// fibo(8) => [0, 1, 1, 2, 3, 5, 8, 13]

The age of exploration is just beginning.

Many free/open-source projects suffer from a very specific kind of feedback, where it is assumed that a certain feature will not be implemented because of a philosophical reason. It is what I call the "flying car" problem.


As an illustration, with a lot of users and very few contributors, PhantomJS was bound to have this problem. In fact, it already does, and it will continue to have it. I don’t have a clear idea why it happens, but I suspect that it is caused by the practical impedance mismatch between the fundamental core implementation and its users. As you can imagine, most PhantomJS users are web developers who are not always exposed to the intricacies and challenges of what happens behind the scenes. This is pretty much like an automatic gearbox: my car mechanic and I might have completely different ideas of how such a gearbox should operate.

In a mailing-list thread over a year ago, I summarized the situation as follows:

Any "why PhantomJS can’t do this" situation should be (at first) treated the same way as
"why my car can’t fly" question. A car designer loves to have it, but the technology might not be there yet.

Another example is this remark on Hacker News (I don’t regularly follow HN, but from time to time certain comments are brought to my attention):

I believe phantom made a fundamental mistake of not being nodejs based in the first place.

This is despite the fact that the subject itself is mentioned in the FAQ and has been discussed in the mailing list (several times). I won’t go into the technical details (that’s not the point here), but surely you can’t help but notice a similar pattern here.

An engineering project is always handicapped by certain engineering constraints. Many times the developers simply want to be pragmatic; there is nothing opinionated or philosophical about it. The recent Cambrian explosion of social media forums amplifies the loudest noises and takes heated flame wars to a different level. It is easy to fall into the trap of assuming (by extrapolation) that every project owner is hotheaded and opinionated. The calm and reasonable ones fade into the background.

Every sports team welcomes useful and constructive feedback. An emotional knee-jerk reaction from a trigger-happy armchair quarterback, however, hardly makes it onto the list of prioritized items. As I have often expressed, we are not in kindergarten anymore; screaming does not make the solution appear faster. While it is an opportunity to polish the art of self-restraint, any kind of flying car problem unnecessarily drains the energy of every project maintainer. The optimal win-win compromise is for both sides to always practice the principle of Audi alteram partem. At the very least, give it five minutes.

Enough rambling already; I need to go back to my garage to fix that damn hoverboard…