Iterating over an array to search for an item is a pretty common task. With JavaScript, Array.prototype.forEach is often the popular choice. In some cases, however, Array.prototype.some can be a better fit for the job, since there is no need to keep searching the entire array once the condition is fulfilled.

There are at least three different ways to iterate over arrays and objects in JavaScript: for loops, array methods, and listing property keys. If the iteration is carried out to perform a search operation, it may lead to something like the following example:

function findEmployee(id) {
  for (var i in employees)
    if (employees[i].id === id)
      return employees[i];
}

Without any guards and error checks, the code above is rather self-explanatory. Now, since JavaScript Array has some powerful higher-order functions, one can certainly tweak the above code to look like this:

function findEmployee(id) {
    var employee;
    employees.forEach(function (e) {
        if (e.id === id) employee = e;
    });
    return employee;
}

Let us review what the specification says about Array.prototype.forEach (Section 15.4.4.18):

callbackfn should be a function that accepts three arguments. forEach calls callbackfn once for each element present in the array, in ascending order.

In our example problem, suppose that the employee id is unique (ideally, of course, this would be an associative container; see also the blog post Determining Objects in a Set, but that is a different story). The search via forEach is rather inefficient: it will continue even after there is already a hit, since the above specification mandates invoking the callback once for each element present.
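
As an aside, a minimal sketch of that associative-container idea (assuming unique ids, with the lookup table built once up front) could look like:

var employeeById = {};
employees.forEach(function (e) { employeeById[e.id] = e; });

function findEmployee(id) {
    // Constant-time lookup instead of a linear search.
    return employeeById[id];
}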

An improvement to the search would be to stop right away once the condition is fulfilled. Note that the condition need not be a single match; there are problems like searching for the first 3 free spaces, locating enough suitable rooms, etc. Fortunately, there are Array methods which can do that: every and some. I have covered the use of every to implement a primality test (see the previous blog post on Prime Numbers, Factorial, and Fibonacci Series with JavaScript Array), so let us take a look at its sibling.

Section 15.4.4.17 of ECMAScript 5.1 specification reveals that for Array.prototype.some:

callbackfn should be a function that accepts three arguments and returns a value that is coercible to the Boolean value true or false. some calls callbackfn once for each element present in the array, in ascending order, until it finds one where callbackfn returns true.

This is exactly what we need! Rewriting the previous code fragment gives the following version. The iteration will not continue if there is a match.

function findEmployee(id) {
    var employee;
    employees.some(function (e) {
        if (e.id === id) {
            employee = e;
            return true;
        }
    });
    return employee;
}
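
To see it in action, here is a quick check against a hypothetical employees array (the data is made up purely for illustration):

var employees = [
    { id: 11, name: 'Joe' },
    { id: 12, name: 'Jane' },
    { id: 14, name: 'Jill' }
];

findEmployee(12); // { id: 12, name: 'Jane' }
findEmployee(13); // undefined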

Did you ever encounter a situation where some can be used instead of forEach? Tell us!


Flick list, with its momentum effect and elastic edge, became a common user-interface pattern after it was popularized by the Apple iPhone a few years ago. Implementing this pattern using HTML and JavaScript seems to be a daunting task for many web developers. In this series, I will uncover the mystery of kinetic scrolling via several easy-to-digest code examples.

Before we go crazy and apply those fancy effects, it is important to set a solid foundation. Hence, the first part will deal only with an exemplary implementation of a basic drag-to-scroll technique (no momentum, no edge bouncing). The concept and also some parts of the code will be reused in the rest of the series. If you want to follow along and get the full code, check out the repository github.com/ariya/kinetic.


Here is the game plan. Let us assume that the view (a DOM element) we want to scroll is quite large. Viewed on the limited device screen, it is as if the screen acts as a viewport window. Scrolling the content is a matter of translating the view while keeping the viewport fixed. In order to translate the view correctly (using CSS3 transform), we need to capture the user's interaction with the view. As the user taps and then drags the view up or down, we need to control the offset accordingly to give the illusion that the view follows the finger's movement.

For this tutorial, the view contains the common Lorem ipsum text (generated via lipsum.com). Also notice that the scrolling is only in the vertical direction. To give an idea, try to load the demo ariya.github.io/kinetic/1 on your modern smartphone. It has been tested on Android 4.3 (Chrome, Firefox), Android 2.3 (Kindle Fire), and iOS 6 (Mobile Safari).


Since we do not want the browser to handle and interpret user gestures natively, we need to hijack them. This is achieved by installing the right event listeners (for both mouse and touch events), as illustrated below. The handlers tap, drag, and release play the most important role in the implementation of this scrolling technique.

view = document.getElementById('view');
if (typeof window.ontouchstart !== 'undefined') {
    view.addEventListener('touchstart', tap);
    view.addEventListener('touchmove', drag);
    view.addEventListener('touchend', release);
}
view.addEventListener('mousedown', tap);
view.addEventListener('mousemove', drag);
view.addEventListener('mouseup', release);

Initializing some state variables is also an important step. In particular, we need to find the right bounds (max and min) for the view’s scrolling offset. Because our view will occupy the whole screen, innerHeight is used here. In a real-world application, you might want to use the computed (style) height of the view’s parent element instead. As you will see shortly, pressed state is necessary to know when the user drags the list.

max = parseInt(getComputedStyle(view).height, 10) - innerHeight;
offset = min = 0;
pressed = false;

If you try the demo, you will notice a subtle scroll indicator. It is intentionally placed on the left side of the view; this way, you will notice immediately that it is not the browser's native scrollbar. The indicator needs to be placed anywhere between the top and the bottom of the screen, hence the need for a relative factor (to be used later). Bonus point: where does that hardcoded value 30 come from?

indicator = document.getElementById('indicator');
relative = (innerHeight - 30) / max;

Since we want to adjust the view’s position using CSS3 transform, we need to figure out the right style property to use. A comprehensive detection can be employed, but the following simple approach already works quite reliably.

xform = 'transform';
['webkit', 'Moz', 'O', 'ms'].every(function (prefix) {
    var e = prefix + 'Transform';
    if (typeof view.style[e] !== 'undefined') {
        xform = e;
        return false;
    }
    return true;
});

Before we see the actual event handlers, let us take a look at two important helper functions.

Since both mouse and touch events need to be supported, the following ypos function abstracts the retrieval of the vertical position associated with the event.

function ypos(e) {
    // touch event
    if (e.targetTouches && (e.targetTouches.length >= 1)) {
        return e.targetTouches[0].clientY;
    }
 
    // mouse event
    return e.clientY;
}

Another important function is scroll, which moves the view and the scroll indicator to the right place. Note that we need to clamp the scroll offset so that it does not go outside the computed bounds.

function scroll(y) {
    offset = (y > max) ? max : (y < min) ? min : y;
    view.style[xform] = 'translateY(' + (-offset) + 'px)';
    indicator.style[xform] = 'translateY(' + (offset * relative) + 'px)';
}

All good things come in threes. The functions tap, release, and drag are essential to the core logic of the scrolling. Surprisingly, they are all simple and concise!

The first one, tap, is triggered when the user touches the list for the first time. This is where we need to mark it as pressed.

function tap(e) {
    pressed = true;
    reference = ypos(e);
    e.preventDefault();
    e.stopPropagation();
    return false;
}

Later on, when the user releases his grip, we need to undo the marking via release.

function release(e) {
    pressed = false;
    e.preventDefault();
    e.stopPropagation();
    return false;
}

Every time the user moves his finger, we know exactly how many pixels it has moved (since we always track the last point). This way, the view's offset also receives the same amount of relative scrolling movement. A simple threshold of 2 pixels is used to prevent jittering due to some micromovements.

function drag(e) {
    var y, delta;
    if (pressed) {
        y = ypos(e);
        delta = reference - y;
        if (delta > 2 || delta < -2) {
            reference = y;
            scroll(offset + delta);
        }
    }
    e.preventDefault();
    e.stopPropagation();
    return false;
}

And that's all the scrolling code! Overall, it weighs just around 80 lines.

Feel brave and want an exercise? Tweak the scroll indicator so that it fades in and fades out at the right time (synchronized with the pressed state). Its opacity can be animated with CSS3 transition.
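
If you want a head start, here is one possible sketch (just an assumption of how it could be wired, not the reference solution): set up a transition on the indicator's opacity and toggle it from tap and release.

// One possible approach: let a CSS3 transition animate the opacity.
// A vendor-prefixed transition property may be needed on older browsers.
indicator.style.transition = 'opacity 300ms ease-out';
indicator.style.opacity = 0;

function showIndicator() {  // call this from tap()
    indicator.style.opacity = 1;
}

function hideIndicator() {  // call this from release()
    indicator.style.opacity = 0;
}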

Also, keep in mind that the code presented here serves mostly as an inspiration. It is optimized for readability and not for performance (extreme JavaScript optimizations, GPU compositing, etc). Your real-world implementation needs to be more structured, robust, and well-tested.

[Screenshot: frame statistics HUD on a Nexus 4 running Chrome 28]

How about the performance? Well, at this stage, there is hardly anything computationally expensive. The scrolling performance can be checked using either Chrome's frame rate HUD or its painting time display. A more detailed timeline is also available via the Chrome Developer Tools. The above capture shows the frame statistics on a Nexus 4 running Chrome 28. We are well within the limit of 60 fps!

In the next installment, watch how the momentum effect gets implemented. Stay tuned.
Update: I already published the second part, covering the physics behind the inertial deceleration.


Getting a consistent 60 fps experience on the mobile web means that we need to monitor the performance criteria. One important metric is the painting time, particularly important on a resource-constrained device. Fortunately, with Chrome's continuous painting mode, it is easy to check whether the performance budget is fulfilled or not.

To set this up on an Android phone, remote debugging must be enabled (follow the detailed steps). On the desktop side, install the ADB Plugin from the Chrome Web Store. If everything works well, remote debugging is just one click away. Once the session starts, look for the Settings icon (bottom right corner), which will open the Settings pane. From this pane, Enable continuous page repainting is easy to locate.

[Screenshot: the Enable continuous page repainting option in the Settings pane]

Once it is activated, there will be a simple, greenish head-up display (HUD) which shows the painting time chart (as a function of time), as well as the GPU memory consumption. The chart has a threshold line: the 16.7 ms painting time corresponding to the target frame rate of 60 fps.

The following screenshot demonstrates the HUD for a simple web page, ariya.github.io/css/glowingtext/. This page was designed to simulate an animated glowing text by varying the blur radius of the text shadow.

For this example, we can see how the painting time gets modulated. This is because the glow effect increases and decreases the blur radius, and hence varies the number of pixels processed for every frame. That explains why the painting time chart has those periodic peaks and valleys.
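
As an illustration (this is a sketch, not necessarily how the demo page itself is written), such an effect can be driven by modulating the blur radius on every animation frame:

// Sketch: pulse the blur radius of the text shadow over time.
// The element id, color, and timing constants are made up for illustration.
var text = document.getElementById('text');
function glow(timestamp) {
    var blur = 10 + 8 * Math.sin(timestamp / 500); // blur radius, in pixels
    text.style.textShadow = '0 0 ' + blur + 'px #2fa3fc';
    requestAnimationFrame(glow);
}
requestAnimationFrame(glow);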

Overall, this HUD is quite similar to the FPS counter (see my previous blog post Frame Rate HUD on Chrome for Android). However, since what is being displayed here is the painting time, it is more convenient to spot the spikes indicating "too slow" painting.

[Screenshot: Timeline panel in the Chrome Developer Tools]

Arguably, while you are there, you might as well use the full performance analysis via the Timeline feature (shown above). In particular, detailed rendering and painting records can be easily obtained; refer to the Chrome Developer Tools documentation for more info.

To the glory of 60 fps!


Attending technical events, from local after-hours meetups to high-caliber, well-known conferences, has become a usual part of a developer's life. Generally, those events are packed with 45-minute talks, often stretched to a full hour. I argue that there are more benefits in limiting such tech talks to a shorter duration, say 20 minutes (or even 18 minutes, in the style of TED talks). The most important benefit is that it leads to more thoughtful, lean, and balanced content.

First of all, the presenter needs to cut to the chase. A long-winded introduction is out of the question. In the last few years, we have seen tremendous improvement in the way self-introductions are conducted. There are no more wasted minutes of credential flashing (a meticulous list of projects, certificates, affiliations), irrelevant funny stories about someone's Twitter handle, or any other conversational icebreakers longer than necessary. But of course it is even better if there is simply no room for those. After all, the audience can use their favorite search engine to look for more information about the speaker.

As a corollary, the speaker also needs to filter the content and condense it. During the preparation, careful thought needs to be exercised to pick only the most relevant topics, given the time constraint. It becomes less of "How do I fill these 40 minutes?" and more of "What are the three important takeaways for the audience?". Remember that the speech is for the benefit of the audience; it is not a vehicle to show off the presenter's domain expertise.

A shorter allocated time naturally promotes a more balanced composition. If the presentation is in the format of problem-solving (typical for a lot of tech talks), the speaker will not ramble too long on the initial sales pitch. How many times have you witnessed a talk about web performance in which the first 15 minutes were spent solely on explaining the benefits of performant web sites? Let's get straight down to business; no need to convince us that we are in the right room.

Last but not least, packing different talks into an hour means more diversity and variety for the attendees. One hour is not enough anyway to make someone really competent in a particular field. That time is better spent giving the audience some use cases, inspiration, and pointers for further self-exploration. Even one convincing story will be more memorable than a long checklist of tips and tricks. Now, imagine getting three inspirational and memorable stories in one hour!

What if the topic cannot be packed into a 20-minute session? Well, there is always the choice of splitting it into two parts, conveniently mapped (among others) onto beginner vs intermediate levels. Those who are no longer novices but still want to enrich their knowledge can choose to participate only in the second part. Other curious minds who just want to get a taste of the field can focus on the first part, possibly even skipping the sequel. It is better value for everyone.

There will always be the need for longer talks. However, such in-depth coverage is better handled in the form of workshops. Many conferences have an extra day, usually before the regular technical sessions start, dedicated to those long, lecture-style tutorials.

These days, conference organizers often offer some extras beyond the standard technical sessions, usually in the form of lightning talks, unconferences, Ignite, and other similar variants. In some cases, like JSConf, many talks finish in 30 minutes or less. At the last O'Reilly Fluent, we saw two 20-minute talks packed into one usual slot. In the near future, hopefully more and more organizers will consider and give priority to those short, high-quality talks.

Remember Pericles: "Time is the wisest counselor of all."


Profile-guided optimization (PGO) is a well-known compiler technique to produce optimized code. The generated code is biased towards the common data set which will be fed to the code. It is definitely possible to apply the same technique to web applications written in JavaScript.

Various compilers, from Microsoft Visual C++ to Intel Fortran, leverage profile data to make judgements on how to hit the right compromise. For example, small functions which get executed quite often should be in close proximity so that the locality helps the CPU cache. The execution statistics can also help decide which functions should be inlined, which branches should be pushed to the top, and many other trade-offs.

A key to a successful profile-guided optimization is a set of representative data. It could be a sample obtained during a test run. However, we need to ensure that the data really resembles, to a certain extent, the actual real-world distribution. For example, a JSON parser should be tested with a set of valid (albeit synthetic) stringified objects, and not just a stream of random characters.

Let us take a look at the following example. Suppose the JSON parser mentioned above (or any other kind of parsing function) needs to check whether a character represents a decimal digit (0-9) or not. A possible implementation, among a billion possible variants, is as simple as:

function isDigit(ch) {
  return '0123456789'.indexOf(ch) >= 0;
}

[Figure: pie chart of the input character distribution]

The next question is: what does the input data distribution look like? For this discussion, let us assume that the characters fed into this function are mostly whitespaces, letters, and actual digits, as depicted in the pie chart.

If isDigit will receive a space 65% of the time, should we tackle that first? We may save some time since there is no need to check against the 10 different digits if the character turns out to be a simple whitespace. With this in mind, we can tweak the implementation to become:

function isDigit(ch) {
  return (ch !== ' ') && '0123456789'.indexOf(ch) >= 0;
}

Thanks to the short-circuit behavior of the logical AND expression (see Section 11.11 Binary Logical Operators), every time ch is a space the function will bail out immediately. This forms the special fast path for the code. If this condition is not fulfilled, we fall back to the slow path.

Generalizing this approach, the pair of slow and fast paths is illustrated as follows:

[Figure: the fast path shortcut bypassing the slow path]
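
In code form, the general pattern might look like this minimal sketch (the predicate and the two handlers are placeholders):

// Try the cheap, common case first; fall back to the complete slow path.
function process(input) {
    if (isCommonCase(input)) {
        return handleCommonCase(input);   // fast path
    }
    return handleGeneralCase(input);      // slow path
}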

Given the right profile data, it should not be too difficult to estimate the performance benefit. For the given example isDigit, it likely does not matter much since the function is so simple and executes in a split second. When measuring the time, make sure you pay attention to both its accuracy and precision.
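
As a rough illustration of such a measurement (a crude micro-benchmark sketch; careful measurements deserve much more rigor), each variant can be timed against the same sample input:

var sampleText = ' var answer = 42; '; // hypothetical stand-in for representative input

function time(fn, input, iterations) {
    var start = Date.now();
    for (var i = 0; i < iterations; ++i)
        for (var j = 0; j < input.length; ++j)
            fn(input.charAt(j));
    return Date.now() - start; // elapsed milliseconds
}

time(isDigit, sampleText, 10000); // run once for each isDigit variant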

In a general situation where you can avoid the slow path under certain circumstances, the extra performance should be noticeable. For example, if the generic slow path takes 2 ms to process an item and you have 100 items in the data set, the total processing takes 200 ms. Now, assume that the fast path is twice as fast, i.e. 1 ms. According to the given distribution, the fast path is executed on 65 items of that data set. Also, because the slow path now sits behind the fallback check and is not executed immediately, it incurs an extra 20% overhead. This means the total processing becomes 142 ms (65 × 1 ms + 35 × 2.2 ms). That is roughly a 40% speed-up.

Last but not least, there are two things you need to consider before doing profile-guided optimization. First, make sure that this is where the bottleneck is. Nothing is more disastrous than premature optimization; it is not worth any engineer's time to speed up something which does not matter much. Second, pay attention to the data set for the test runs, as it must resemble the real-world data which will be faced by your application. A wrong profile will cause an incorrect optimization and may lead to a performance penalty. Also, keep in mind that the data set may evolve over time. After all, optimization is a journey, not a destination.

Like they used to say: profile responsibly.


Instead of using loops, the JavaScript Array object can be quite powerful for creating sequences. What about more complex series, and not just a list of consecutive numbers or letters? Fortunately, we still have other Array functions such as filter, map, every, and reduce at our disposal. These can be used to generate a list of prime numbers, compute the factorial, and produce the Fibonacci series.

Prime Numbers

Prime numbers are fascinating, in particular because the concept is so simple and yet there are multiple ways of dealing with them, so the tasks are often employed in problem-solving interviews. Every JavaScript programmer worth her salt should be able to come up with a simple way to check whether a given integer i is a prime number or not. One possible loop-based solution, for i >= 2, is given as follows:

function isPrime(i) {
  for (var c = 2; c <= Math.sqrt(i); ++c)
    if (i % c === 0) return false;
  return true;
}

The above primality test implementation is probably not the most optimal one, but it is enough to illustrate the concept.

From this function, the next task is to print all prime numbers between 0 and N:

function primeList(N) {
  var list = [];
  for (var i = 2; i < N; ++i)
    if (isPrime(i)) list.push(i);
  return list;
}

We already learned that some loop-based iterations can be turned into Array one-liners. Can we apply the same technique to the above problem?

Let us tackle isPrime() first. The suitable solution here is to use Array.prototype.every. In Section 15.4.4.16, the ECMAScript 5.1 specification says:

every calls callbackfn once for each element present in the array, in ascending order, until it finds one where callbackfn returns false. If such an element is found, every immediately returns false. Otherwise, if callbackfn returned true for all elements, every will return true.

In other words, we can use every to check every potential divisor and see if one of them divides the candidate. If yes, obviously the candidate is not a prime number and we just bail out immediately. If there is no suitable divisor after the exhaustive search, it means we have found our prime number.

Surprisingly, the code is shorter than the above explanation. Also, the ~~ trick is used instead of Math.floor, for some additional mystery.

function isPrime(i) {
  return (i > 1) && Array.apply(0, Array(1 + ~~Math.sqrt(i))).
    every(function (x, y) { return (y < 2) || (i % y !== 0) });
}
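
A quick sanity check of this version:

isPrime(1);  // false
isPrime(2);  // true
isPrime(9);  // false, 9 is divisible by 3
isPrime(23); // true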

The other function, primeList(), is rather easy to refactor. Recalling again the approach of combining the Array constructor, apply, and map, we end up with the following final solution. Note that the primality test is now inlined via the use of Array.prototype.filter.

function primeList(N) {
  return Array.apply(0, Array(N)).map(function (x, y) { return y }).
    filter(function (i) {
      return (i > 1) && Array.apply(0, Array(1 + ~~Math.sqrt(i))).
        every(function (x, y) { return (y < 2) || (i % y !== 0) });
    });
}
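
Trying it out:

primeList(30); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]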

The (deadly) combination of map, filter, and every is apparently enough to avoid the loops!

If you care about the upcoming ECMAScript 6, the use of arrow functions and array comprehensions permits the above construct to be written as:

function primeList(N) {
  return [for (i of Array.apply(0, Array(N)).map((x, y) => y))
    if ((i > 1) && Array.apply(0, Array(1 + ~~Math.sqrt(i))).
      every((x, y) => (y < 2) || (i % y !== 0) ))
    i];
}

Factorial

Computing the factorial is another intriguing math-related activity. In fact, due to its nature, it is the usual exercise for an introduction to recursive programming. For now, let us rather focus on the non-recursive implementation, something along the lines of:

function factorial(n) {
  var f = 1;
  for (var c = 1; c <= n; ++c) f *= c;
  return f;
}

Here, if we want to avoid the imperative style of using a loop, the strategy will be different. Because computing the factorial of n depends on all the previous values, we cannot simply use map or even every as in the previous example. For this, we need the help of Array.prototype.reduce. The specification, in Section 15.4.4.21, has something to say about this higher-order function:

callbackfn is called with four arguments: the previousValue (or value from the previous call to callbackfn), the currentValue (value of the current element), the currentIndex, and the object being traversed.

It is likely easier to understand reduce from the following simple illustration:

[1, 2, 3, 4, 5].reduce(function (x, y) { return x + y });       //  15
[1, 2, 3, 4, 5].reduce(function (x, y) { return x + y }, 100);  // 115

Because of x + y, the above code essentially produces the sum of every number in the array. Here x corresponds to the previous value and y is the current value (which will go from 1 to 5). In this context, imagine that x acts like an accumulator. Note also that reduce can accept an optional second argument which becomes the initial value; in the above example it is 100, which offsets the computed total.

We can also do things like the following. Not only is the array element (the current value, y) being used; the callback also uses the element index, passed as the third argument z.

[14, 3, 77].reduce(function(x, y, z) { return x + y * z }, 0);   // 0*14 + 1*3 + 2*77

If you made it this far, you have probably already come up with a solution to the factorial problem. Here is an exemplary implementation. Note the return value from the callback; it is not straightforward because the element index z starts from 0, not from 1.

function factorial(n) {
  return Array.apply(0, Array(n)).reduce(function(x, y, z) { return x + x * z; }, 1);
}

As usual, here is the ECMAScript 6 version with an arrow function:

function factorial(n) {
  return Array.apply(0, Array(n)).reduce((x, y, z) => x + x * z, 1);
}
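
A quick check, tracing the accumulator x as the index z runs from 0 to 4:

factorial(5); // 120
// x: 1 -> 1 (z=0) -> 2 (z=1) -> 6 (z=2) -> 24 (z=3) -> 120 (z=4)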

Finally, for comparison, see also this tweet from Angus Croll (@angustweets). For a different take, check out the version from Jed Schmidt (@jedschmidt), where the function itself is also reused as the callback for reduce.

Fibonacci Series

This short treatise would be incomplete without the Fibonacci series, often associated with the growth of a rabbit population. The idea behind Fibonacci numbers is relatively simple, and yet there can be dozens of puzzling programming tasks associated with it.

If one is asked to show the first n Fibonacci numbers, her implementation could look like:

function fibo(n) {
  var f = [];
  for (var c = 0; c < n; ++c) {
    f.push((c < 2) ? c : f[c-1] + f[c-2]);
  }
  return f;
}

If we want to switch the implementation to leverage the JavaScript Array object, the dependency on the two previous values becomes a real problem. Relying on reduce would be tricky since its callback only receives one previous value. Nor can we peek at the last two values from the callback's last argument, the array being traversed, because that array is immutable (this is functional programming, after all).

Fortunately, the programming world is full of magic tricks and secret passages. Among other possibilities, one trick that lets us keep using reduce is to use an (initially empty) array as the initial value. In fact, this array will contain the final Fibonacci series. Since we continue to stash more numbers into that array, looking at the two previous numbers becomes very easy. The full code for that is not long-winded at all.

function fibo(n) {
  return Array.apply(0, Array(n)).
    reduce(function(x, y, z){ return x.concat((z < 2) ? z : x[z-1] + x[z-2]); }, []);
}
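
For example:

fibo(10); // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]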

Update: The original version contained map, but Jed Schmidt (@jedschmidt) kindly pointed out that it is not necessary because we only want to use the element index and do not care about the element value itself.

Rewriting the above function to use ECMAScript 6's array comprehension and arrow function is left as an exercise for the reader.

Finally, if you still doubt the power of the JavaScript Array object, think about it again. Among mortals, second thoughts are the wisest.