The recent Java 8 release came with lots of new features; one of them is a brand-new JavaScript engine that replaces the aging Rhino. This new engine, called Nashorn (German for rhinoceros), is high-performance and specification-compliant. It is definitely useful whenever you want to mix and match your Java and JavaScript code.

To check the performance of Nashorn, I used it to run Esprima and parse a large code base (unminified jQuery, 268 KB). This quick, rather non-scientific test exercises two aspects of a JavaScript run-time environment: continuous memory allocation (for the syntax nodes) and non-linear code execution (recursive-descent parsing).

If you want to follow along, check the repository bitbucket.org/ariya/nashorn-speedtest. Assuming you have JDK 8 installed properly, run the following:

javac -cp rhino.jar speedtest.java
java -cp .:rhino.jar speedtest

This test app executes the Esprima parser and tokenizer on the content of the test file. Rhino gets the first chance, and Nashorn follows right after (each engine gets 30 runs). Rhino’s first run takes 2607 ms; it slowly speeds up until the parsing finally completes in just 824 ms. Nashorn’s timings have a different characteristic. When it is cold, Nashorn initially takes 5328 ms to carry out the operation, but it quickly picks up the pace and, before you know it, is moving full steam ahead, reaching 208 ms per run.

Behind the scenes, Nashorn compiles your JavaScript code into Java bytecode and runs it on the JVM itself. On top of that, Nashorn takes advantage of the invokedynamic instruction (from the Da Vinci Machine Project, part of Java 7) to permit "efficient and flexible execution" in a dynamic environment such as JavaScript. Other JVM languages, most notably JRuby, also benefit from this invokedynamic feature.
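For readers who want to embed Nashorn themselves, here is a minimal sketch using the standard javax.script API. The `parser` object and its `parse` method are made-up stand-ins for the real Esprima calls in the speedtest, and the null check matters because Nashorn, which shipped with JDK 8, was removed again in JDK 15:

```java
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class NashornSketch {
    // Returns the parse result, or "unavailable" when no Nashorn engine
    // is present (Nashorn shipped with JDK 8 and was removed in JDK 15).
    static String runParse() throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        if (engine == null) {
            return "unavailable";
        }
        // Define a JS object with a method, then call it through Invocable,
        // the same pattern the speedtest uses for esprima.parse/tokenize.
        engine.eval("var parser = { parse: function (src) { return 'parsed:' + src; } };");
        Invocable inv = (Invocable) engine;
        Object tree = inv.invokeMethod(engine.get("parser"), "parse", "var answer = 42;");
        return String.valueOf(tree);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runParse());
    }
}
```

The same `Invocable.invokeMethod` call is what lets the benchmark invoke `esprima.parse` repeatedly without re-evaluating the Esprima source each time.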


What about Nashorn vs V8? It is not terribly fair to compare the two: V8 is designed specifically for JavaScript, while Nashorn leverages the battle-hardened, multi-language JVM. But for the fun of it, the Git repo also includes a JavaScript implementation of the test app which can be executed with the V8 shell. On the same machine, V8 completes the task in about 110 ms. While Nashorn is not yet as mature as V8, it is quite an achievement that it is only about twice as slow as V8 in this specific test.

As a high-performance JavaScript engine on the JVM, Nashorn has many possible applications. It serves as a nice platform to play and experiment with JavaFX. Yet it is powerful enough to be part of a web-service stack (Vert.x, Project Avatar). On a less ambitious level, having another fast implementation of JavaScript is always good, as it provides an alternative for running various JavaScript tools in environments where Node.js is not available or not supported. As an illustration, check my previous blog post on a simple Ant task to validate JavaScript code.

Next time you have the urge to herd some rhinos, consider Nashorn!

  • MOB_i_L

    Can you use the Rhino-debugger with Nashorn?

    • aplatypus

      Just a guess, but it sounds like you ought to be able to use the Java debugger. Worth checking the Nashorn specs to see.

  • Avatar1337

    You can’t compare Nashorn with V8. V8 is implemented in C++. Nashorn becomes completely platform independent. V8 is not, if you use it to run modules. You can run Java modules from Nashorn, keeping it completely platform independent. They are designed for different things.

    • It is perfectly valid to compare them. They are JavaScript engines, and we compare their behavior as JavaScript engines (in the above case, with respect to their JavaScript execution speed).

      • Avatar1337

        But V8 should have a better performance, not because it is more mature, but because it is implemented in machine code (C++). Thus, I do not think it is fair to compare the two.

        • That’s incorrect. In both cases, the execution speed depends on the code generated by the JIT compiler. Both V8 and the JVM are very capable of generating optimized compiled code.

          • Avatar1337

            Nashorn is compiled to Java binary which is in turn compiled to machine code with the built in JIT of the VM. V8 is compiled with a JIT to machine code directly. There is an extra step with Nashorn.

          • That’s a perfectly valid statement. And yet, we’re not interested in the code generation time, only the code execution time.

          • Avatar1337

            When you run java -cp .:rhino.jar speedtest, you let the VM load a JS file. That JS file is generated to Java binary during runtime, because JavaScript is a dynamic language. So Nashorn’s code generation to Java binary occurs during runtime, not compile time. That is why code generation time matters during execution.

          • Yeah but check the code. We’re not measuring that at all.

          • Avatar1337


            Rhino:

            tree = context.evaluateString(scope, "esprima.parse($code)", source, line, null);
            tokens = context.evaluateString(scope, "esprima.tokenize($code)", source, line, null);

            Nashorn:

            tree = inv.invokeMethod(esprima, "parse", code);
            tokens = inv.invokeMethod(esprima, "tokenize", code);

            You are executing JS and measuring the time. But it is done dynamically, so you will inevitably measure the code generation time as well. Java has one more abstraction level; that is what makes it platform independent. This time it is the JS interpreter that is platform independent. The V8 JS interpreter is not; you have to write a new one for each platform.

          • The code generation time only matters in the first few runs. After that, both engines just execute the cached generated code.

          • Avatar1337

            Yeah, if it is using cached Java binary then it would be the same amount of interpretation layers compared to V8 running in realtime, however, wouldn’t V8 use cached machine code then? Then you would have cached Java binary compared to cached machine code. Once again, an unfair advantage.

          • The JVM will turn those Java bytecodes into machine code.

          • Avatar1337

            Yes, in real time, I know, but you have “turning Java bytecode into machine code during runtime” vs “just running cached machine code”. Which do you think will be faster?

          • Avatar1337

            The only way for them to be comparable is if JVM caches machine code and not Java binary from the Nashorn script. Then V8 and Nashorn would be on the same level. In the end, it still becomes a Java application vs a native application.

          • sandos

            When a JIT has warmed up, there is no intrinsic difference between compiled C/C++ and JIT-compiled code. The JIT simply compiles during runtime. There are other aspects that will, most of the time, make JIT-compiled code slower, but not necessarily so. A JIT also has more information available than an ahead-of-time compiler for some optimizations.

          • Avatar1337

            Yes, but if you compile during runtime then it becomes slower than already-compiled code from, say, C++, just because you have an extra step. However, if the JVM caches compiled code, then I can see how it can have almost the same performance once it has “warmed up”.

  • Emanon

    I am a JavaScript developer and our stack never included Java until now… So, I am COMPLETELY new to the idea of Nashorn. What exactly does it enable you to do on the “front end”? I am completely missing the power of Nashorn here. Would you edit the HTML or edit the Hello.class file? From a complete beginner standpoint, how can I understand the usage of Nashorn?

    • Jason

      As far as I understand, this isn’t for front-end scripting stuff (which, you’re right, is what you’d typically use JavaScript for). The idea here is that you could write “system” back-end code in JavaScript by leveraging Nashorn, which takes your JS file, turns it into Java bytecode, and runs it on the JVM just as if you’d written the equivalent program in Java. So you’d write a little JS script, invoke it with Nashorn from a command prompt or within a Java program, and it’d run that for you (no DOM involved). I think you also have access to Java objects from within your JS script using this.
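      To make that last point concrete, here is a small sketch of a JS snippet reaching back into Java through Nashorn’s Java.type binding (the list contents are arbitrary; the null guard is there because Nashorn is absent from JDK 15 and later):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class JavaFromJs {
    // Builds a java.util.ArrayList from inside a Nashorn script and returns
    // its string form, or "unavailable" when the engine does not exist.
    static String buildListViaJs() throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        if (engine == null) {
            return "unavailable"; // Nashorn was removed in JDK 15
        }
        // Java.type exposes a Java class to the JavaScript code.
        String script =
            "var ArrayList = Java.type('java.util.ArrayList');" +
            "var list = new ArrayList();" +
            "list.add('vert.x'); list.add('avatar');" +
            "list.toString();"; // value of the last expression is returned by eval
        return String.valueOf(engine.eval(script));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildListViaJs());
    }
}
```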

  • mrft

    For some reason inserting items by key in a javascript object is a lot slower on Nashorn than in other javascript engines.

    When measuring the time it takes to add items to the map, and to read them afterwards, when I use JDK1.8 I get a very low number of inserts per second.

    var m = {};
    for (i = 0; i <= nrOfCycles; i++) {
        m["key" + i] = "value" + i;
    }

    and afterwards

    var x;
    for (i = 0; i <= nrOfCycles; i++) {
        x = m["key" + i];
    }

    JDK1.8 (Nashorn) -> put is very slow, get is fast
    PUT 100000 times took 6955 milliseconds. That’s 14378 per second, and 0.06955 ms per event.
    GET 100000 times took 13 milliseconds. That’s 7692307 per second, and 0.00013 ms per event.

    JDK1.7 (Rhino?) -> everything is ‘not so fast’, but put is faster than Nashorn

    PUT 100000 times took 606 milliseconds. That’s 165016 per second, and 0.00606 ms per event.
    GET 100000 times took 481 milliseconds. That’s 207900 per second, and 0.00481 ms per event.

    OpenJDK7 (?) -> more balanced results
    Hashmap.put 100000 times took 312 milliseconds. That’s 320512 per second, and 0.00312 ms per event.
    Hashmap.get 100000 times took 133 milliseconds. That’s 751879 per second, and 0.00133 ms per event.

    NodeJS -> the fastest, as expected
    PUT 1000000 times took 1321 milliseconds. That’s 757002 per second, and 0.001321 ms per event.
    GET 1000000 times took 121 milliseconds. That’s 8264462 per second, and 0.000121 ms per event.

  • Jason

    Twice as slow *but* thread friendly. I can see many places where Nashorn may significantly outperform V8 once you start using threads.

  • Tim Fox

    Interesting article.
    So.. for a 100% *pure* JS application V8 is slightly faster than Nashorn, but don’t forget, if you’re using a toolkit such as Vert.x (which is written in Java) but lets you write your application using JavaScript (using Nashorn), you may well find your application is faster overall on the JVM than using V8.
    An example would be writing a simple webserver, using JavaScript in both Vert.x and Node.js. Although V8 itself is slightly faster than Nashorn, when running your application in Vert.x only a small fraction of the CPU cycles are spent in your JavaScript code, most of the time and grunt work is spent in the Vert.x (or Netty or JDK) compiled classes, which are faster than the equivalent in Node.js.
    So… even if V8 is faster than Nashorn, it doesn’t matter if 95% of CPU cycles are spent in compiled Java classes which are faster than their Node.js equivalents.

  • Sébastien Lorber

    Do you think that Java code compiled to JS and then executed in Nashorn would be faster than plain Java code? haha 🙂