Changeset 185014 in WebKit


Timestamp:
May 29, 2015, 4:03:45 PM (10 years ago)
Author:
rniwa@webkit.org
Message:

run-benchmark should print out the results
https://bugs.webkit.org/show_bug.cgi?id=145398

Reviewed by Antti Koivisto.

Added BenchmarkResults to compute and format the aggregated values. It also performs syntax and semantic checks
on the output to catch errors early.

  • Scripts/webkitpy/benchmark_runner/benchmark_results.py: Added.

(BenchmarkResults): Added.
(BenchmarkResults.__init__): Added.
(BenchmarkResults.format): Added.
(BenchmarkResults._format_tests): Added. Used by BenchmarkResults.format.
(BenchmarkResults._format_values): Formats a list of values measured for a given metric on a given test.
Uses the sample standard deviation to compute the significant figures for the value.
(BenchmarkResults._unit_from_metric): Added.
(BenchmarkResults._aggregate_results): Added.
(BenchmarkResults._aggregate_results_for_test): Added.
(BenchmarkResults._flatten_list): Added.
(BenchmarkResults._subtest_values_by_config_iteration): Added. Organizes values measured for subtests
by the iteration number so that i-th array contains values for all subtests at i-th iteration.
(BenchmarkResults._aggregate_values): Added.
(BenchmarkResults._lint_results): Added.
(BenchmarkResults._lint_subtest_results): Added.
(BenchmarkResults._lint_aggregator_list): Added.
(BenchmarkResults._lint_configuration): Added.
(BenchmarkResults._lint_values): Added.
(BenchmarkResults._is_numeric): Added.
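The significant-figures rule described for _format_values can be sketched as follows. This is a hypothetical illustration, not the actual BenchmarkResults code: the sample standard deviation bounds which digits of the mean are meaningful, and everything beyond that is noise.

```python
import math

def format_value(values):
    # Hypothetical sketch: format the mean of a sample, keeping only
    # digits that are significant relative to the sample standard
    # deviation. Not the actual _format_values implementation.
    n = len(values)
    mean = sum(values) / n
    # Sample standard deviation (Bessel's correction, n - 1).
    delta = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    if not delta:
        return '%g' % mean
    # Keep the digits of the mean down to the magnitude of the error.
    sig_fig = max(1, int(math.floor(math.log10(abs(mean))))
                     - int(math.floor(math.log10(delta))) + 1)
    return '%.*g +- %.1f%%' % (sig_fig, mean, delta / mean * 100)
```

For example, a sample of [100.0, 102.0, 98.0] has mean 100 and sample standard deviation 2, so three digits of the mean are significant and the sketch prints "100 +- 2.0%".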

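The transposition performed by _subtest_values_by_config_iteration can be illustrated with a small sketch (hypothetical helper name and simplified input; the real method also walks configurations):

```python
def values_by_iteration(subtest_values):
    # Transpose {subtest name: per-iteration values} so that the i-th
    # entry of the result holds every subtest's value at iteration i.
    # Hypothetical sketch of the grouping described above.
    names = sorted(subtest_values)
    iterations = max(len(v) for v in subtest_values.values())
    return [[subtest_values[name][i]
             for name in names if i < len(subtest_values[name])]
            for i in range(iterations)]
```

For instance, values_by_iteration({'a': [1, 2], 'b': [3, 4]}) yields [[1, 3], [2, 4]], so an aggregator such as a total can combine all subtests within each iteration before statistics are computed across iterations.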
  • Scripts/webkitpy/benchmark_runner/benchmark_results_unittest.py: Added.

(BenchmarkResultsTest):
(BenchmarkResultsTest.test_init):
(BenchmarkResultsTest.test_format):
(test_format_values_with_large_error):
(test_format_values_with_small_error):
(test_format_values_with_time):
(test_format_values_with_no_error):
(test_format_values_with_small_difference):
(test_aggregate_results):
(test_aggregate_results_with_gropus):
(test_aggregate_nested_results):
(test_lint_results):

  • Scripts/webkitpy/benchmark_runner/benchmark_runner.py:

(BenchmarkRunner.execute): Added a call to show_results.
(BenchmarkRunner.wrap): Only dump the merged JSON when debugging.
(BenchmarkRunner.show_results): Added.

Location:
trunk/Tools
Files:
2 added
2 edited

  • trunk/Tools/ChangeLog

    r184980 r185014  
     12015-05-29  Ryosuke Niwa  <rniwa@webkit.org>
     2
     3        run-benchmark should print out the results
     4        https://bugs.webkit.org/show_bug.cgi?id=145398
     5
     6        Reviewed by Antti Koivisto.
     7
     8        Added BenchmarkResults to compute and format the aggregated values. It also does the syntax/semantic check
     9        of the output to catch early errors.
     10
     11        * Scripts/webkitpy/benchmark_runner/benchmark_results.py: Added.
     12        (BenchmarkResults): Added.
     13        (BenchmarkResults.__init__): Added.
     14        (BenchmarkResults.format): Added.
     15        (BenchmarkResults._format_tests): Added. Used by BenchmarkResults.format.
     16        (BenchmarkResults._format_values): Formats a list of values measured for a given metric on a given test.
     17        Uses the sample standard deviation to compute the significant figures for the value.
     18        (BenchmarkResults._unit_from_metric): Added.
     19        (BenchmarkResults._aggregate_results): Added.
     20        (BenchmarkResults._aggregate_results_for_test): Added.
     21        (BenchmarkResults._flatten_list): Added.
     22        (BenchmarkResults._subtest_values_by_config_iteration): Added. Organizes values measured for subtests
     23        by the iteration number so that i-th array contains values for all subtests at i-th iteration.
     24        (BenchmarkResults._aggregate_values): Added.
     25        (BenchmarkResults._lint_results): Added.
     26        (BenchmarkResults._lint_subtest_results): Added.
     27        (BenchmarkResults._lint_aggregator_list): Added.
     28        (BenchmarkResults._lint_configuration): Added.
     29        (BenchmarkResults._lint_values): Added.
     30        (BenchmarkResults._is_numeric): Added.
     31        * Scripts/webkitpy/benchmark_runner/benchmark_results_unittest.py: Added.
     32        (BenchmarkResultsTest):
     33        (BenchmarkResultsTest.test_init):
     34        (BenchmarkResultsTest.test_format):
     35        (test_format_values_with_large_error):
     36        (test_format_values_with_small_error):
     37        (test_format_values_with_time):
     38        (test_format_values_with_no_error):
     39        (test_format_values_with_small_difference):
     40        (test_aggregate_results):
     41        (test_aggregate_results_with_gropus):
     42        (test_aggregate_nested_results):
     43        (test_lint_results):
     44        * Scripts/webkitpy/benchmark_runner/benchmark_runner.py:
     45        (BenchmarkRunner.execute): Added a call to show_results
     46        (BenchmarkRunner.wrap): Only dump the merged JSON when debugging.
     47        (BenchmarkRunner.show_results): Added.
     48
     492015-05-15  Ryosuke Niwa  <rniwa@webkit.org>
     50
     51        run_benchmark should have an option to specify the number of runs
     52        https://bugs.webkit.org/show_bug.cgi?id=145091
     53
     54        Reviewed by Antti Koivisto.
     55
     56        Added --count option.
     57
     58        * Scripts/run-benchmark:
     59        (main):
     60        * Scripts/webkitpy/benchmark_runner/benchmark_runner.py:
     61        (BenchmarkRunner.__init__):
     62
    1632015-05-28  Alexey Proskuryakov  <ap@apple.com>
    264
  • trunk/Tools/Scripts/webkitpy/benchmark_runner/benchmark_runner.py

    r184431 r185014  
    1313
    1414from benchmark_builder.benchmark_builder_factory import BenchmarkBuilderFactory
     15from benchmark_results import BenchmarkResults
    1516from browser_driver.browser_driver_factory import BrowserDriverFactory
    1617from http_server_driver.http_server_driver_factory import HTTPServerDriverFactory
     
    9293        results = self.wrap(results)
    9394        self.dump(results, self.outputFile if self.outputFile else self.plan['output_file'])
     95        self.show_results(results)
    9496        benchmarkBuilder.clean()
    9597        return 0
     
    107109    @classmethod
    108110    def wrap(cls, dicts):
    109         _log.info('Merging following results:\n%s', json.dumps(dicts))
     111        _log.debug('Merging following results:\n%s', json.dumps(dicts))
    110112        if not dicts:
    111113            return None
     
    113115        for dic in dicts:
    114116            ret = cls.merge(ret, dic)
    115         _log.info('Results after merging:\n%s', json.dumps(ret))
     117        _log.debug('Results after merging:\n%s', json.dumps(ret))
    116118        return ret
    117119
     
    136138        # for other types
    137139        return a + b
     140
     141    @classmethod
     142    def show_results(cls, results):
     143        results = BenchmarkResults(results)
     144        print results.format()