Changeset 129091 in webkit


Timestamp:
Sep 19, 2012 10:14:16 PM
Author:
rniwa@webkit.org
Message:

run-perf-tests should record individual values instead of statistics
https://bugs.webkit.org/show_bug.cgi?id=97155

Reviewed by Hajime Morita.

PerformanceTests:

Report the list of values as "values" so that run-perf-tests can parse them.

  • resources/runner.js:

(PerfTestRunner.computeStatistics):
(PerfTestRunner.printStatistics):

Tools:

Parse the list of individual values reported by tests and include them as "values".
We strip "values" from the output JSON when uploading it to the perf-o-matic
since it knows neither how to parse "values" nor how to ignore it.

  • Scripts/webkitpy/performance_tests/perftest.py:

(PerfTest):
(PerfTest.parse_output): Parse and report "values".
(PageLoadingPerfTest.run): Report individual page loading times in "values".

  • Scripts/webkitpy/performance_tests/perftest_unittest.py:

(MainTest.test_parse_output):
(MainTest.test_parse_output_with_failing_line):
(TestPageLoadingPerfTest.test_run):

  • Scripts/webkitpy/performance_tests/perftestsrunner.py:

(PerfTestsRunner._generate_and_show_results): Strip "values" from each result
until we update perf-o-matic.

  • Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py:

(test_run_memory_test):
(test_run_with_json_output):
(test_run_with_description):
(test_run_with_slave_config_json):
(test_run_with_multiple_repositories):

LayoutTests:

The expected result now contains individual values.

  • fast/harness/perftests/runs-per-second-log-expected.txt:
Location:
trunk
Files:
9 edited

  • trunk/LayoutTests/ChangeLog

    r129090 → r129091

    + 2012-09-19  Ryosuke Niwa  <rniwa@webkit.org>
    +
    +         run-perf-tests should record individual values instead of statistics
    +         https://bugs.webkit.org/show_bug.cgi?id=97155
    +
    +         Reviewed by Hajime Morita.
    +
    +         The expected result now contains individual values.
    +
    +         * fast/harness/perftests/runs-per-second-log-expected.txt:
    +
      2012-09-19  David Grogan  <dgrogan@chromium.org>
  • trunk/LayoutTests/fast/harness/perftests/runs-per-second-log-expected.txt

    r125194 → r129091

      Time:
    + values 1, 2, 3, 4, 5 runs/s
      avg 3 runs/s
      median 3 runs/s
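The updated expectation is easy to sanity-check: for the five logged samples the harness prints avg 3 and median 3. A minimal Python sketch of that arithmetic (the `summarize` helper is illustrative, not part of WebKit; it mirrors the even/odd median handling in perftest.py):

```python
def summarize(values):
    """Compute the avg/median pair printed under a "values" line."""
    sorted_values = sorted(values)
    middle = len(sorted_values) // 2
    avg = sum(sorted_values) / float(len(sorted_values))
    # Even-length runs average the two middle samples, as perftest.py does.
    if len(sorted_values) % 2:
        median = sorted_values[middle]
    else:
        median = (sorted_values[middle - 1] + sorted_values[middle]) / 2.0
    return avg, median

print(summarize([1, 2, 3, 4, 5]))  # (3.0, 3)
```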
  • trunk/PerformanceTests/ChangeLog

    r128779 → r129091

    + 2012-09-19  Ryosuke Niwa  <rniwa@webkit.org>
    +
    +         run-perf-tests should record individual values instead of statistics
    +         https://bugs.webkit.org/show_bug.cgi?id=97155
    +
    +         Reviewed by Hajime Morita.
    +
    +         Report the list of values as "values" so that run-perf-tests can parse them.
    +
    +         * resources/runner.js:
    +         (PerfTestRunner.computeStatistics):
    +         (PerfTestRunner.printStatistics):
    +
      2012-09-17  Ryosuke Niwa  <rniwa@webkit.org>
  • trunk/PerformanceTests/resources/runner.js

    r128649 → r129091

          // Compute the mean and variance using a numerically stable algorithm.
          var squareSum = 0;
    +     result.values = times;
          result.mean = data[0];
          result.sum = data[0];
    …
          this.log("");
          this.log(title);
    +     this.log("values " + statistics.values.join(', ') + " " + statistics.unit);
          this.log("avg " + statistics.mean + " " + statistics.unit);
          this.log("median " + statistics.median + " " + statistics.unit);
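Both runner.js and the perftest.py hunk further down mention a "numerically stable algorithm" for the mean and variance. A Python sketch of that single-pass (Welford-style) update; the loop body past `sweep = i + 1.0` is cut off in the hunks, so the continuation shown here is an assumption consistent with the final `'stdev': math.sqrt(squareSum)` line in perftest.py:

```python
import math

def online_mean_and_square_sum(values):
    """Single-pass mean and summed squared deviations (Welford-style).

    The update past `sweep = i + 1.0` is not visible in the diff hunks;
    this standard continuation is an assumption.
    """
    square_sum = 0.0
    mean = 0.0
    for i, value in enumerate(values):
        delta = value - mean
        sweep = i + 1.0
        mean += delta / sweep
        square_sum += delta * (value - mean)
    return mean, square_sum

mean, square_sum = online_mean_and_square_sum([2, 4, 4, 4, 5, 5, 7, 9])
print(mean, math.sqrt(square_sum / 8))  # mean ~ 5.0, population stdev ~ 2.0
```

Note that perftest.py reports `stdev` as `math.sqrt(squareSum)` without dividing by the sample count, so its figure is the root of the summed squared deviations rather than a conventional standard deviation.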
  • trunk/Tools/ChangeLog

    r129083 → r129091

    + 2012-09-19  Ryosuke Niwa  <rniwa@webkit.org>
    +
    +         run-perf-tests should record individual values instead of statistics
    +         https://bugs.webkit.org/show_bug.cgi?id=97155
    +
    +         Reviewed by Hajime Morita.
    +
    +         Parse the list of individual values reported by tests and include them as "values".
    +         We strip "values" from the output JSON when uploading it to the perf-o-matic
    +         since it knows neither how to parse "values" nor how to ignore it.
    +
    +         * Scripts/webkitpy/performance_tests/perftest.py:
    +         (PerfTest):
    +         (PerfTest.parse_output): Parse and report "values".
    +         (PageLoadingPerfTest.run): Report individual page loading times in "values".
    +         * Scripts/webkitpy/performance_tests/perftest_unittest.py:
    +         (MainTest.test_parse_output):
    +         (MainTest.test_parse_output_with_failing_line):
    +         (TestPageLoadingPerfTest.test_run):
    +         * Scripts/webkitpy/performance_tests/perftestsrunner.py:
    +         (PerfTestsRunner._generate_and_show_results): Strip "values" from each result
    +         until we update perf-o-matic.
    +         * Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py:
    +         (test_run_memory_test):
    +         (test_run_with_json_output):
    +         (test_run_with_description):
    +         (test_run_with_slave_config_json):
    +         (test_run_with_multiple_repositories):
    +
      2012-09-19  Dirk Pranke  <dpranke@chromium.org>
  • trunk/Tools/Scripts/webkitpy/performance_tests/perftest.py

    r126512 → r129091

          _result_classes = ['Time', 'JS Heap', 'Malloc']
          _result_class_regex = re.compile(r'^(?P<resultclass>' + r'|'.join(_result_classes) + '):')
    -     _statistics_keys = ['avg', 'median', 'stdev', 'min', 'max', 'unit']
    -     _score_regex = re.compile(r'^(?P<key>' + r'|'.join(_statistics_keys) + r')\s+(?P<value>[0-9\.]+)\s*(?P<unit>.*)')
    +     _statistics_keys = ['avg', 'median', 'stdev', 'min', 'max', 'unit', 'values']
    +     _score_regex = re.compile(r'^(?P<key>' + r'|'.join(_statistics_keys) + r')\s+(?P<value>([0-9\.]+(,\s+)?)+)\s*(?P<unit>.*)')

          def parse_output(self, output):
    …
              if score:
                  key = score.group('key')
    -             value = float(score.group('value'))
    +             if ', ' in score.group('value'):
    +                 value = [float(number) for number in score.group('value').split(', ')]
    +             else:
    +                 value = float(score.group('value'))
                  unit = score.group('unit')
                  name = test_name
    …
              test_times.append(output.test_time * 1000)

    -     test_times = sorted(test_times)
    +     sorted_test_times = sorted(test_times)

          # Compute the mean and variance using a numerically stable algorithm.
          squareSum = 0
          mean = 0
    -     valueSum = sum(test_times)
    -     for i, time in enumerate(test_times):
    +     valueSum = sum(sorted_test_times)
    +     for i, time in enumerate(sorted_test_times):
              delta = time - mean
              sweep = i + 1.0
    …
          middle = int(len(test_times) / 2)
    -     results = {'avg': mean,
    -         'min': min(test_times),
    -         'max': max(test_times),
    -         'median': test_times[middle] if len(test_times) % 2 else (test_times[middle - 1] + test_times[middle]) / 2,
    +     results = {'values': test_times,
    +         'avg': mean,
    +         'min': sorted_test_times[0],
    +         'max': sorted_test_times[-1],
    +         'median': sorted_test_times[middle] if len(sorted_test_times) % 2 else (sorted_test_times[middle - 1] + sorted_test_times[middle]) / 2,
          'stdev': math.sqrt(squareSum),
          'unit': 'ms'}
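The widened `_score_regex` now accepts a comma-separated list after the key, and the new branch in `parse_output` turns it into a list of floats while scalar lines still yield a single float. A self-contained sketch of just that parsing step (the `parse_line` helper is illustrative, not WebKit API; the regex and the branch are taken from the diff above):

```python
import re

_statistics_keys = ['avg', 'median', 'stdev', 'min', 'max', 'unit', 'values']
_score_regex = re.compile(r'^(?P<key>' + r'|'.join(_statistics_keys)
                          + r')\s+(?P<value>([0-9\.]+(,\s+)?)+)\s*(?P<unit>.*)')

def parse_line(line):
    # Mirrors the branch added to PerfTest.parse_output: a comma-separated
    # "values" payload becomes a list of floats, anything else one float.
    score = _score_regex.match(line)
    if not score:
        return None
    if ', ' in score.group('value'):
        value = [float(number) for number in score.group('value').split(', ')]
    else:
        value = float(score.group('value'))
    return score.group('key'), value, score.group('unit')

print(parse_line('avg 1100 ms'))        # ('avg', 1100.0, 'ms')
print(parse_line('values 1, 2, 3 ms'))  # ('values', [1.0, 2.0, 3.0], 'ms')
```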
  • trunk/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py

    r126512 → r129091

              '',
              'Time:',
    +         'values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ms',
              'avg 1100 ms',
              'median 1101 ms',
    …
              test = PerfTest(None, 'some-test', '/path/some-dir/some-test')
              self.assertEqual(test.parse_output(output),
    -             {'some-test': {'avg': 1100.0, 'median': 1101.0, 'min': 1080.0, 'max': 1120.0, 'stdev': 11.0, 'unit': 'ms'}})
    +             {'some-test': {'avg': 1100.0, 'median': 1101.0, 'min': 1080.0, 'max': 1120.0, 'stdev': 11.0, 'unit': 'ms',
    +                 'values': [i for i in range(1, 20)]}})
          finally:
              pass
    …
              '',
              'Time:'
    +         'values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ms',
              'avg 1100 ms',
              'median 1101 ms',
    …
      def test_run(self):
          test = PageLoadingPerfTest(None, 'some-test', '/path/some-dir/some-test')
    -     driver = TestPageLoadingPerfTest.MockDriver([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20])
    +     driver = TestPageLoadingPerfTest.MockDriver(range(1, 21))
          output_capture = OutputCapture()
          output_capture.capture_output()
          try:
              self.assertEqual(test.run(driver, None),
    -             {'some-test': {'max': 20000, 'avg': 11000.0, 'median': 11000, 'stdev': math.sqrt(570 * 1000 * 1000), 'min': 2000, 'unit': 'ms'}})
    +             {'some-test': {'max': 20000, 'avg': 11000.0, 'median': 11000, 'stdev': math.sqrt(570 * 1000 * 1000), 'min': 2000, 'unit': 'ms',
    +                 'values': [i * 1000 for i in range(2, 21)]}})
          finally:
              actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
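The expected numbers in `TestPageLoadingPerfTest.test_run` can be checked by hand: the mock driver supplies loads of 1–20 s, the reported `values` are 2000–20000 ms (the first sample is absent from "values", which suggests the first run is discarded as a warm-up; that reading is an assumption), and `stdev` here is the square root of the summed squared deviations, matching `math.sqrt(570 * 1000 * 1000)`. A quick check of that arithmetic:

```python
import math

# Page-load times that appear in "values": 2 s through 20 s, in ms.
values = [i * 1000 for i in range(2, 21)]

mean = sum(values) / float(len(values))
square_sum = sum((v - mean) ** 2 for v in values)

print(mean)                      # 11000.0
print(values[len(values) // 2])  # median of 19 sorted samples: 11000
print(square_sum)                # 570000000.0 == 570 * 1000 * 1000
```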
  • trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py

    r129055 → r129091

                  return self.EXIT_CODE_BAD_MERGE
              results_page_path = self._host.filesystem.splitext(output_json_path)[0] + '.html'
    +     else:
    +         # FIXME: Remove this code once webkit-perf.appspot.com supports "values".
    +         for result in output['results'].values():
    +             if isinstance(result, dict) and 'values' in result:
    +                 del result['values']

          self._generate_output_files(output_json_path, results_page_path, output)
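The stripping step added above is small enough to sketch in isolation: walk each per-test result dict and delete "values" before the JSON goes to perf-o-matic, leaving non-dict entries (plain scores) untouched. The sample data below is illustrative:

```python
def strip_values(output):
    """Drop the raw "values" lists that perf-o-matic cannot parse.

    Mirrors the loop added to PerfTestsRunner._generate_and_show_results.
    """
    for result in output['results'].values():
        if isinstance(result, dict) and 'values' in result:
            del result['values']
    return output

# Illustrative result shapes, modeled on the unit-test fixtures.
output = {'results': {
    'some-test': {'avg': 1100.0, 'unit': 'ms', 'values': [1.0, 2.0, 3.0]},
    'inspector/pass.html:group_name:test_name': 42,
}}
strip_values(output)
print(output['results']['some-test'])  # 'values' key removed, stats kept
```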
  • trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py

    r129055 → r129091

      Time:
    + values 1504, 1505, 1510, 1504, 1507, 1509, 1510, 1487, 1488, 1472, 1472, 1488, 1473, 1472, 1475, 1487, 1486, 1486, 1475, 1471 ms
      avg 1489.05 ms
      median 1487 ms
    …
      Time:
    + values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ms
      avg 1100 ms
      median 1101 ms
    …
      Time:
    + values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ms
      avg 1100 ms
      median 1101 ms
    …
      JS Heap:
    + values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 bytes
      avg 832000 bytes
      median 829000 bytes
    …
      Malloc:
    + values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 bytes
      avg 532000 bytes
      median 529000 bytes
    …
              '', '']))
          results = runner.load_output_json()[0]['results']
    -     self.assertEqual(results['Parser/memory-test'], {'min': 1080.0, 'max': 1120.0, 'median': 1101.0, 'stdev': 11.0, 'avg': 1100.0, 'unit': 'ms'})
    -     self.assertEqual(results['Parser/memory-test:JSHeap'], {'min': 811000.0, 'max': 848000.0, 'median': 829000.0, 'stdev': 15000.0, 'avg': 832000.0, 'unit': 'bytes'})
    -     self.assertEqual(results['Parser/memory-test:Malloc'], {'min': 511000.0, 'max': 548000.0, 'median': 529000.0, 'stdev': 13000.0, 'avg': 532000.0, 'unit': 'bytes'})
    +     values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
    +     self.assertEqual(results['Parser/memory-test'], {'min': 1080.0, 'max': 1120.0, 'median': 1101.0, 'stdev': 11.0, 'avg': 1100.0, 'unit': 'ms', 'values': values})
    +     self.assertEqual(results['Parser/memory-test:JSHeap'], {'min': 811000.0, 'max': 848000.0, 'median': 829000.0, 'stdev': 15000.0, 'avg': 832000.0, 'unit': 'bytes', 'values': values})
    +     self.assertEqual(results['Parser/memory-test:Malloc'], {'min': 511000.0, 'max': 548000.0, 'median': 529000.0, 'stdev': 13000.0, 'avg': 532000.0, 'unit': 'bytes', 'values': values})

      def _test_run_with_json_output(self, runner, filesystem, upload_suceeds=False, expected_exit_code=0):
    …
      _event_target_wrapper_and_inspector_results = {
    +     "Bindings/event-target-wrapper": {"max": 1510, "avg": 1489.05, "median": 1487, "min": 1471, "stdev": 14.46, "unit": "ms",
    +         "values": [1504, 1505, 1510, 1504, 1507, 1509, 1510, 1487, 1488, 1472, 1472, 1488, 1473, 1472, 1475, 1487, 1486, 1486, 1475, 1471]},
    +     "inspector/pass.html:group_name:test_name": 42}
    +
    + # FIXME: Remove this variable once perf-o-matic supports "values".
    + _event_target_wrapper_and_inspector_results_without_values = {
          "Bindings/event-target-wrapper": {"max": 1510, "avg": 1489.05, "median": 1487, "min": 1471, "stdev": 14.46, "unit": "ms"},
          "inspector/pass.html:group_name:test_name": 42}
    …
          self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
          self.assertEqual(runner.load_output_json(), {
    -         "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
    +         "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results_without_values,
              "webkit-revision": "5678", "branch": "webkit-trunk"})
    …
          self.assertEqual(runner.load_output_json(), {
              "timestamp": 123456789, "description": "some description",
    -         "results": self._event_target_wrapper_and_inspector_results,
    +         "results": self._event_target_wrapper_and_inspector_results_without_values,
              "webkit-revision": "5678", "branch": "webkit-trunk"})
    …
          self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
          self.assertEqual(runner.load_output_json(), {
    -         "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
    +         "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results_without_values,
              "webkit-revision": "5678", "branch": "webkit-trunk", "key": "value"})
    …
          self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
          self.assertEqual(runner.load_output_json(), {
    -         "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
    +         "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results_without_values,
              "webkit-revision": "5678", "some-revision": "5678", "branch": "webkit-trunk"})