This repository has been archived by the owner on Mar 29, 2019. It is now read-only.

Benchmarks

Hannes Verschore edited this page Apr 5, 2017 · 6 revisions
  1. Add a new benchmark
  2. Current benchmarks
    1. Sunspider
    2. V8
    3. Octane
    4. Dromaeo
    5. Massive
    6. JetStream
    7. Speedometer
    8. Assorted
    9. AsmjsApps
    10. AsmjsMicro
    11. Sixspeed
    12. Emberperf

Add a new benchmark

Add a shell benchmark

  1. Add resources to benchmark folder

The shell benchmarks live in benchmarks/. Each benchmark has its own directory containing the resources needed to run it. There is only one requirement for that directory: it must contain a VERSION file holding the current version of the benchmark. Every time the resources change, the version needs to change too.
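As a quick sanity check, a small script along these lines (hypothetical, not part of the repository) can verify that every benchmark directory carries the required VERSION file:

```python
import os

def check_benchmark_dirs(root):
    """Return the names of benchmark directories missing a VERSION file."""
    missing = []
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path) and not os.path.isfile(os.path.join(path, "VERSION")):
            missing.append(entry)
    return missing
```

Running this against benchmarks/ before committing catches a forgotten VERSION file early.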

  2. Update the python script

Next, update slave/benchmarks_shell.py. This file contains the logic to execute the benchmark and interpret its results. Extend the Benchmark class and implement the getCommand and processResults methods:

    class Octane(Benchmark):
        def __init__(self):
            super(Octane, self).__init__('benchmark-name', 'benchmark-directory/')

        def getCommand(self, shell, args):
            """
            This runs the 'run.js' file in the corresponding benchmark directory.
            """
            command = [shell]
            if args:
                command.extend(args)
            return command + ['run.js']

        def processResults(self, output):
            """
            Interpret the execution output and translate it into the benchmark result.
            The output is an array of dictionaries containing 'name' and 'score'.
            The total score of the benchmark has the name '__total__'.
            """
            return [
                {"name": "subtest1", "score": "0.6"},
                {"name": "subtest2", "score": "1.0"},
                {"name": "__total__", "score": "0.1"}
            ]
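For context, the runner invokes these two methods roughly as follows. This is a simplified sketch: the attribute name `folder` and the surrounding logic are assumptions, and the real code in slave/benchmarks_shell.py does more (timeouts, retries, multiple runs):

```python
import subprocess

def run_benchmark(benchmark, shell, args=None):
    """Build the command, run it in the benchmark's directory, parse the output."""
    command = benchmark.getCommand(shell, args or [])
    # 'folder' is a hypothetical attribute for the benchmark directory.
    output = subprocess.check_output(command, cwd=benchmark.folder).decode()
    return benchmark.processResults(output)
```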
  3. Add an entry for this benchmark in the database.

This is done by writing a migration named migration-XX.php in /database, where XX is the next free number. It has to contain the following content:

    <?php
    global $name, $pretty_name, $lower_is_better;
    
    $name = ""; // Corresponds to the name in benchmarks_shell.py. A unique name without spaces.
    $pretty_name = ""; // The name that will be visible on the AWFY site.
    $lower_is_better = true; // True if the benchmark measures execution time (lower is better); false if it reports a score (higher is better).
    
    $migrate = function() {
        global $name, $pretty_name, $lower_is_better;
        mysql_query("INSERT INTO `awfy_suite` (`name`, `description`, `better_direction`, `visible`) VALUES ('$name', '$pretty_name', '".(($lower_is_better)?"-1":"1")."', '1')");
    };
    $rollback = function() {
        global $name;
        mysql_query("DELETE FROM `awfy_suite` WHERE `name` = '$name'");
    };

Add a browser benchmark

  1. Update the python script

You need to update slave/benchmarks_remote.py and add a new class for your benchmark.

    class Octane(Benchmark):
        def __init__(self):
            Benchmark.__init__(self, "2.0.1") # Set the version number
    
        def processResults(self, results):
            # Convert the results into the desired format (if it isn't already):
            # return [
            #           {"name": "subtest1", "score": "0.6"},
            #           {"name": "subtest2", "score": "1.0"},
            #           {"name": "__total__", "score": "0.1"}
            #        ]
            return results
    
        @staticmethod
        def translatePath(path):
            # Return the actual url it needs to fetch.
            if path == "" or path == "/":
                path = "/octane/index.html"
            return "http", "chromium.github.io", path
    
        @staticmethod
        def injectData(path, data):
            if path == "/octane/index.html":
                # Update the page so the result gets sent to:
                # 'http://localhost:8000/submit?results=' + encodeURIComponent(JSON.stringify(results))
                return data.replace("...", "...")
            return data
    
        @staticmethod
        def name():
            return "octane"
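The remote runner acts as a small proxy: it uses translatePath to resolve a requested path into the real URL, fetches the page, and passes the body through injectData so the benchmark reports back to the local /submit endpoint. A rough sketch of that flow (the names `rewrite_page` and `fetch` are illustrative, not the actual harness code; `fetch` stands in for the harness's HTTP fetch):

```python
def rewrite_page(benchmark, path, fetch):
    """Resolve the requested path to a real URL and rewrite the fetched body.

    'fetch' takes (scheme, host, path) and returns the page body as a string.
    """
    scheme, host, real_path = benchmark.translatePath(path)
    body = fetch(scheme, host, real_path)
    return benchmark.injectData(real_path, body)
```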
  2. Add an entry for this benchmark in the database.

This is done by writing a migration named migration-XX.php in /database, where XX is the next free number. It has to contain the following content:

    <?php
    global $name, $pretty_name, $lower_is_better;
    
    $name = ""; // Corresponds to the name in benchmarks_remote.py. A unique name without spaces.
    $pretty_name = ""; // The name that will be visible on the AWFY site.
    $lower_is_better = true; // True if the benchmark measures execution time (lower is better); false if it reports a score (higher is better).
    
    $migrate = function() {
        global $name, $pretty_name, $lower_is_better;
        mysql_query("INSERT INTO `awfy_suite` (`name`, `description`, `better_direction`, `visible`) VALUES ('$name', '$pretty_name', '".(($lower_is_better)?"-1":"1")."', '1')");
    };
    $rollback = function() {
        global $name;
        mysql_query("DELETE FROM `awfy_suite` WHERE `name` = '$name'");
    };

Current benchmarks

Sunspider

This is SunSpider, a JavaScript benchmark. It tests the core JavaScript language only, not the DOM or other browser APIs, and is designed to compare different versions of the same browser and different browsers to each other.

  • Url: https://webkit.org/perf/sunspider/sunspider.html
  • Lower is better: Yes
  • Result: Runs 10 iterations of every subtest and reports the mean execution time in ms. The total is the sum of the subtest times.
  • Status: Small regressions are allowed; big regressions need approval.
  • Contact person: @h4writer, @jandem
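The scoring scheme above (a mean over the iterations of each subtest, with the total as the sum of those means) can be sketched like this; the function below is illustrative, not the harness code:

```python
def sunspider_scores(iterations):
    """iterations maps each subtest name to a list of per-run times in ms."""
    results = []
    total = 0.0
    for name, times in iterations.items():
        mean = sum(times) / len(times)
        results.append({"name": name, "score": mean})
        total += mean
    results.append({"name": "__total__", "score": total})
    return results
```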

V8

This benchmark contains a suite of pure JavaScript benchmarks that we have used to tune V8.

  • Url: https://developers.google.com/octane/
  • Lower is better: No
  • Result: The final score is computed as the geometric mean of the individual results, making it independent of the running times of the individual benchmarks and of a reference system (score 100). Scores are not comparable across benchmark suite versions, and higher scores mean better performance: bigger is better!
  • Status: Superseded by the Octane benchmark.
  • Contact person:

Octane

Octane 2.0 is a benchmark that measures a JavaScript engine’s performance by running a suite of tests representative of certain use cases in JavaScript applications.

  • Url: https://developers.google.com/octane/
  • Lower is better: No
  • Result: In a nutshell: bigger is better. Octane measures the time a test takes to complete and then assigns a score that is inversely proportional to the run time (historically, Firefox 2 produced a score of 100 on an old benchmark rig the V8 team used). The final score is computed as the geometric mean of the individual results to make it independent of the running times.
  • Status: Any regression needs approval
  • Contact person: @h4writer, @jandem
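The geometric-mean aggregation used by V8 and Octane can be written out as follows; this is an illustrative sketch, not code from the benchmark itself:

```python
import math

def geometric_mean(scores):
    """Geometric mean of subtest scores: the n-th root of their product.

    Computed via logs to avoid overflow when multiplying many scores.
    """
    return math.exp(sum(math.log(s) for s in scores) / len(scores))
```

Because the mean is geometric rather than arithmetic, doubling any one subtest score moves the total by the same factor regardless of that subtest's absolute magnitude.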

Dromaeo

Mozilla JavaScript performance test suite.

  • Url: http://dromaeo.com/
  • Lower is better: Yes
  • Result: Runs the recommended tests and returns the mean of 5 runs.
  • Status: Actively trying to improve results.
  • Contact person: @h4writer, @jandem

Massive

Massive is a benchmark that measures asm.js performance specifically. It contains several large, real-world codebases: Poppler, SQLite, Lua and Box2D.

  • Url: https://kripken.github.io/Massive/
  • Lower is better: No
  • Result: Massive reports an overall score summarizing its individual measurements. In addition to throughput, Massive tests how long it takes the browser to load a large codebase, how responsive it is while doing so, and how consistent performance is.
  • Status: Not actively being improved anymore since WebAssembly, but until WebAssembly takes over, no major regressions are allowed.
  • Contact person: @luke

JetStream

JetStream is a JavaScript benchmark suite focused on the most advanced web applications.

  • Url: http://browserbench.org/JetStream/
  • Lower is better: No
  • Result: Bigger scores are better.
  • Status: Actively trying to improve performance.
  • Contact person: @jandem, @h4writer

Speedometer

Speedometer is a browser benchmark that measures the responsiveness of Web applications. It uses demo web applications to simulate user actions such as adding to-do items.

Assorted

A list of small scripts the SpiderMonkey team wants to keep tracking.

AsmjsApps

Asm.js performance

AsmjsMicro

Asm.js performance

Sixspeed

Performance of ES6 features relative to their ES5 equivalents, in operations per second.

  • Url: https://kpdecker.github.io/six-speed/
  • Lower is better: Yes
  • Result:
  • Status: Trying to improve the worst offenders. Don't use this for benchmarketing, since it is a bad benchmark. Only look at the performance in 1000ms, 100ms or 10ms.
  • Contact person: @h4writer

Emberperf

A benchmark created out of using Ember.

  • Url: http://emberperf.eviltrout.com/
  • Lower is better: Yes
  • Result: Runs all tests for Ember 2.11.1, except link-to get('active'), Ember.LinkView.create, link-to get('active') – rendered, Ember.Component.create and Render bind-attr, because those tests don't run to completion.
  • Status: Trying to improve performance on libraries like Ember; fixing the worst offenders.
  • Contact person: @h4writer