Eclipse Orion released its latest version, Orion 5, right before the most recent EclipseCon. This new version packs several exciting features, everything from stylistic changes in the appearance to a streamlined cloud deployment. My favorite is the easy-to-use Node.js bundle.

With Orion 5, it is trivial to try out Orion (assuming you have Node.js and npm):

npm install orion

Then you can launch it by running:

node node_modules/orion/server.js /path/to/your/project

and then open your favorite web browser and point it to localhost:8081. Now you will be able to edit existing files and create new files and folders. This works even if you don’t have any Internet connection.

Alternatively, open the configuration file node_modules/orion/orion.conf and change the workspace variable to the location of the JavaScript project you want to edit (if you are crazy, just set it to your home directory). Then, start the Orion server by running npm start orion.
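For illustration, the relevant entry in orion.conf might look something like this (a sketch; point it to whichever project you want Orion to serve):

workspace=/path/to/your/project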

Let’s take a look at a quick Express example.
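Here is a minimal hello.js along these lines (a sketch of such an application, not the exact code from the screenshot):

// hello.js: a minimal Express application (illustrative sketch)
var express = require('express');
var app = express();

// One route that responds with a greeting.
app.get('/', function (req, res) {
    res.send('Hello World!');
});

app.listen(3000);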

[Image: autocomplete]

The above screenshot also demonstrates Orion’s new ability to provide autocomplete (or, in the Eclipse world, Content Assist) for Express-based JavaScript code. It is not limited to Express; there is also support for other libraries such as Postgres, MySQL, MongoDB, and a few more.

Once this simple application is written, we can launch it without leaving Orion, thanks to its shell feature. Switch to the Shell tab, run npm install followed by node start hello.js, and our simple Express app is up and running.

[Image: launch]

Orion now supports ESLint to validate your JavaScript code. Various rules for ESLint can be set visually.

[Image: validation]

Speaking of customization, you can choose from a number of different themes or even create your own:

[Image: theme]

It is also possible to try Orion via its online demos. If you would like to check the capabilities of Orion’s editing component only, there is the pure editor example. For testing the complete feature set, it is recommended to go to OrionHub, create an account, and enjoy the test drive.

Whether you are online or offline, web-based tools are just fun!


This week it’s all about the most recent Fluent Conference 2014 in San Francisco. It’s the third Fluent and boy, it’s getting more phenomenal than ever.

For my part, I presented a talk on the topic of Design Strategies for JavaScript API (slide deck, 2.2 MB PDF download). If you are a regular reader of this blog, you may be familiar with the topic, as a few past blog posts discuss the subject in more detail.

Obviously there were tons of very interesting presentations. To get a taste, you can watch the keynote videos (check this YouTube playlist). Brendan’s session, with bleeding-edge JavaScript features such as SIMD support and an asm.js-powered Unreal Engine 4 demo, will make you very excited.

The full video compilation will be sold once it is out in a few weeks. You may also wait for individual speakers to upload their own videos as they become available.

Kudos to the organizers and everyone involved for such a memorable event. See you next year!

Update: You can watch my presentation on YouTube (28 minutes).

While static polymorphism is often discussed in the context of C++ (in particular, related to CRTP and method overloading), we can generalize the concept to help us choose the best function and property names for a public interface. This also applies to JavaScript APIs, for which some examples and illustrations are given in this blog post.

[Image: buttons]

In his 2005 article Designing Qt-Style C++ API, Matthias Ettrich argued that a major benefit of static polymorphism is to make it easier to memorize APIs and programming patterns. This cannot be emphasized enough. Our memory has a limited capacity; related functionalities are easier to understand when they demonstrate enough similarity. In other words, static polymorphism is the answer to the (ultimate) search for consistency.

Take a look at the screenshot. It shows a user interface created by a hypothetical framework: some radio buttons, a check box, and a push button. Now imagine that setting the text label of each individual component involves a code fragment like this:

X1.value = 'Rare';
X2.value = 'Medium';
X3.value = 'Well done';
Y.option = 'Fries';
Z.caption = 'Order';

Because the code is aligned that way, it is easy to see why this is confusing. A radio button relies on the value property, a check box needs a property called option, and the text for the push button comes from caption. This demonstrates inconsistency. Once the problem is spotted, the fix is easy:

X1.value = 'Rare';
X2.value = 'Medium';
X3.value = 'Well done';
Y.value = 'Fries';
Z.value = 'Order';

This of course does not apply only to these UI elements. For example, a slider and a progress bar can have similar names for some of their properties, since each needs a set of values to define the range (minimum and maximum) and the current value. An example of inconsistency is if one calls it maximum and the other prefers maxValue. Check your favorite UI framework’s API documentation for a progress bar and a slider and see if those properties demonstrate the principle of static polymorphism.
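For illustration, here is what a consistent (hypothetical) API could look like, sharing the same property names across both components:

// Hypothetical slider and progressBar objects: the names for the
// range and the current value are identical for both components.
slider.minimum = 0;
slider.maximum = 100;
slider.value = 25;

progressBar.minimum = 0;
progressBar.maximum = 100;
progressBar.value = 25;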

Of course, this also applies to function names. Imagine if moving a point involves calling translate whereas moving a rectangle means calling translateBy. Such a case could simply be an honest mistake, yet it indicates that the inconsistency fell through the cracks and managed to escape any code review.
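A consistent variant settles on a single verb; here is a sketch with hypothetical point and rectangle objects:

// Consistent: the same function name moves both kinds of objects.
point.translate(5, 10);
rectangle.translate(5, 10);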

Static polymorphism does not stop at the practice of choosing function names. Imagine that we have a way to define a rectangular shape by its corner (top left position) and its dimension (width and height).

corner = new Point(10, 10);
dim = new Size(70, 50);
R = new Rect(corner, dim);

[Image: rectangle]

Since it is tedious to always create two objects for the constructor, we can have another shortcut constructor that takes four values. In this variant, the parameters of the constructor represent the x1, y1, x2, y2 coordinates of that rectangle.

Q = new Rect(10, 10, 80, 60);

It is likely that the second constructor was designed in isolation. The set of numbers in the above line of code has a major difference compared to that of the previous code fragment. In fact, if someone converts the Point and Size version to the shortcut version, they need to use different values. The spirit of the first constructor is a pair of (x, y) and (width, height); the second constructor, however, expects a pair of (x1, y1) and (x2, y2). Worse, if you are familiar with the first constructor and suddenly find code that uses the second form, you might not be alarmed that the meaning of the last two numbers is not what you have in mind.
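To make the mismatch concrete, here are the two constructions of the very same rectangle side by side:

// Both lines describe a 70x50 rectangle whose top left corner is at
// (10, 10), yet the last two numbers do not match.
R = new Rect(new Point(10, 10), new Size(70, 50));  // corner and dimension
Q = new Rect(10, 10, 80, 60);                       // two opposite corners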

It does not matter if you are a library author or an application developer. Next time you want to introduce a new property/function/object, scour your existing code and look for patterns that have been used again and again. Those are good data points and should not be ignored.

Now, aren’t you hungry after looking at the first example? BRB.

[Image: threemusketeers]

Extracting a portion of a string is a fairly well understood practice. With JavaScript, there are three different built-in functions which can perform that operation. Because of this, it is often confusing for beginners to decide which function should be used. Even worse, sometimes it is easy to fall into the trap and choose the wrong function.

String’s substring (ECMAScript 5.1 Specification Section 15.5.4.15) is the first logical choice to retrieve a part of the string. This substring function can accept two numbers, the start and the end (exclusive) character position, respectively. In case the end value is smaller than the start, substring is smart enough to swap the values before doing the string extraction. An example of substring is illustrated in this code snippet:

var a = 'The Three Musketeers';
a.substring(4, 9);     // 'Three'
a.substring(9, 4);     // 'Three'

Many JavaScript environments (including most modern web browsers) also implement a variant of substring called substr (Section B.2.3). However, the parameters for substr are the start character position and the number of characters to be extracted, respectively. This is shown in the following fragment:

var b = 'The Three Musketeers';
b.substr(4, 9);     // 'Three Mus'
b.substr(9, 4);     // ' Mus'

This pair of functions, when both are available, can be really confusing. It is easy to mistake one for the other, thereby leading to an unexpected outcome. It also does not help that the names, substring and substr, are so similar. Without looking at the documentation or the specification, there is a chance of picking the wrong one.

To add more confusion to this mixture, a String object also supports slice (Section 15.5.4.13), just like Array’s slice. For all intents and purposes, slice behaves very much like substring (accepting start and end positions). However, there is a minor difference: if the end value is smaller than the start, slice will not internally swap the values. In other words, it follows what is expected of Array’s slice in the same situation and thus returns an empty string instead.

var c = 'The Three Musketeers';
c.slice(4, 9);       // 'Three'
c.slice(9, 4);       // ''

Each of these three functions accepts two parameters and performs the string extraction based on those parameter values. The result, however, can be different. Again, it is just like the confusing case of Array methods (see my previous blog post on JavaScript Array: slice vs splice).
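To summarize, here are all three side by side on the same input:

var s = 'The Three Musketeers';
s.substring(9, 4);   // 'Three' (start and end, swapped when reversed)
s.substr(9, 4);      // ' Mus'  (start and length)
s.slice(9, 4);       // ''      (start and end, never swapped)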

When we write our own JavaScript library, how can we minimize such confusion? The solution is of course to avoid an API which leads to this situation in the first place. Whenever a new public function needs to be introduced, search for existing ones to ensure that there will not be a similar confusion. It is even better if such a step is included in the API review checklist.

Prevention is the best cure. Be mindful of your function names!

[Image: devnexus]

In the last two days I have been busy with DevNexus 2014, the largest tech conference held in Atlanta. DevNexus has been going on for a while and this year it’s getting really big with over 1,000 attendees and tons of speakers.

For this edition of DevNexus, I delivered two talks: Tweaking CSS3 for Hardware Acceleration (slide deck, PDF download of 4.2 MB) and JavaScript API Design Principles (slide deck, PDF download of 2.1 MB). I got a lot of good discussions and feedback after each talk. The sessions were recorded; expect to see the audio/video in the near future. Of course, I also enjoyed the good opportunity to meet folks I’ve known only from our online interactions, as well as to make new friends in this space.

The conference itself has been fantastic, professionally organized without any glitches at all. WiFi worked flawlessly, and the snacks and the food were amazing (obviously, there was Coca-Cola everywhere). Kudos to the sponsors and the organizers for such a memorable event!

This is my first visit to Atlanta and it has been quite impressive. I definitely would not mind coming back in the near future. Meanwhile, I have two more events to visit, Fluent and EclipseCon, to complete my winter/spring conference tour. Next stop: San Francisco.

[Image: ops]

The use of graphics processing unit (GPU) in modern browsers, particularly for page rendering, means that there is no more excuse for laggy animation. In some cases, when the intended animation is not necessarily GPU friendly, some tricks need to be employed so that the graphics operations do not cause a potential performance problem.

The fundamental principles of exploiting the GPU for popular CSS features have been described in my previous blog post on Optimizing CSS3 for GPU Compositing. The key here is the compositing process, where the browser uploads portions of the page as GPU textures and subsequent animation frames involve only a small set of operations on those textures. Current browser rendering engines allow a few different operations to be delegated to the GPU compositor: opacity, transformation, and filter.

Still, for a buttery smooth 60 fps interface, web developers need to ensure that certain rendering operations are GPU friendly. As mentioned in my previous blog post, an easy way to verify that is by using Safari’s Show Compositing Borders feature. The number in the top left corner of each rectangle represents more or less every content update operation which necessitates a texture upload to the GPU. Efficient compositing is indicated by that number staying unchanged during the course of the animation.

What about animations that are not easily handled by the compositor? Let’s take a look at the following example (also check the live demo at codepen.io/ariya/full/xuwgy):

@keyframes box {
    0% { background-color: green; }
    100% { background-color: blue; }
}

[Image: uploads]

For clarity, the box is also moving horizontally back and forth. When it is on the left, the color is green, and as it moves to the right, the color changes to blue. Safari (with its compositing border indicator enabled) reveals that there is a continuous rendering of the box onto a GPU texture. With more boxes, especially while viewing on a mobile device, the animation will cause frame dropping or even an application crash.

To overcome this issue, we need a trick that avoids a continuous texture update. In the simple color transition example above, we are lucky, since we can opt to use a CSS filter instead. However, assuming we can’t or won’t use it, what would be a more generalized approach? For such an animation with a short duration, where accuracy is not a big concern, we can approximate it by superimposing two states, one representing the initial state and the other the final state, and tweening the opacity accordingly. To get a feeling for it, check out the demo at codepen.io/ariya/full/ofDIh.
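The linked demo uses pure CSS animations; just to make the mechanics explicit, here is the same idea sketched in script form (the element id and the duration are hypothetical):

// Two boxes are stacked on top of each other: a green one underneath,
// a blue one on top. Only the opacity of the top box changes on every
// frame, which is a very cheap operation for the GPU compositor.
var blueBox = document.getElementById('blue');  // hypothetical id
var duration = 2000;                            // illustrative, in ms
var start = null;

function step(timestamp) {
    if (start === null) start = timestamp;
    var progress = Math.min((timestamp - start) / duration, 1);
    blueBox.style.opacity = progress;
    if (progress < 1) requestAnimationFrame(step);
}

requestAnimationFrame(step);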

In this arrangement, the user has the illusion that the box changes color. What actually happens is that the green box starts to disappear as the blue one slowly appears. Since changing the opacity is a very cheap operation for the GPU, the animation will be smooth. The following diagram shows the magic behind the scenes. Viewed from the user’s angle, facing north west, it is as if there is only one opaque box with a gradual color transition.

[Image: layers]

This technique can be applied to other properties as well. For example, take a look at this glowing effect: codepen.io/ariya/full/nFADe. While glowing can be achieved by varying the blur radius of the shadow, tweening between the glowing version and the non-glowing version is our kind of cheat with this trick. Less accurate, more shortcut.

As with any other type of workaround, the opacity tweening trick has some drawbacks. Most important is that it requires more memory, since we trade memory for a fast animation frame. Thus, be judicious in employing the trick, since you can’t blindly consume all the available GPU textures for user interface animations.

Last but not least, if you prefer to watch a video on this subject, take a look at my past presentation on Fluid User Interface with Hardware Acceleration (28-min video, slide deck).