Changeset 119188 in webkit


Timestamp:
May 31, 2012 8:19:31 PM
Author:
rniwa@webkit.org
Message:

Add public page loading performance tests using web-page-replay
https://bugs.webkit.org/show_bug.cgi?id=84008

Reviewed by Dirk Pranke.

PerformanceTests:

Add replay tests for google.com and youtube.com as examples.

  • Replay: Added.
  • Replay/www.google.com.replay: Added.
  • Replay/www.youtube.com.replay: Added.

Tools:

Add a primitive implementation of replay performance tests. We use web-page-replay (http://code.google.com/p/web-page-replay/)
to cache page data locally. Each replay test is represented by a text file with a .replay extension containing a single URL.
To flush out bugs and isolate replay tests from the rest of the performance tests, they are hidden behind the --replay flag.
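
For example, a .replay file contains nothing but the URL on its first line; the contents of the files added in this patch are not shown in this changeset, so the one-line example below is illustrative:

    http://www.google.com/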

Run "run-perf-tests --replay PerformanceTests/Replay" after changing the system network preference to forward HTTP and HTTPS requests
to localhost:8080 and localhost:8443 respectively (i.e. configure the system as if there are HTTP proxies at ports 8080 and 8443)
excluding: *.webkit.org, *.googlecode.com, *.sourceforge.net, pypi.python.org, and www.adambarth.com for thirdparty Python dependencies.
run-perf-tests starts web-page-replay, which provides HTTP proxies at ports 8080 and 8443 to replay pages.
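
As one concrete way to set this up (a sketch for Mac OS X using networksetup; the network service name "Wi-Fi" is illustrative, and other platforms have equivalent proxy settings):

    networksetup -setwebproxy "Wi-Fi" localhost 8080
    networksetup -setsecurewebproxy "Wi-Fi" localhost 8443
    networksetup -setproxybypassdomains "Wi-Fi" "*.webkit.org" "*.googlecode.com" "*.sourceforge.net" pypi.python.org www.adambarth.com
    run-perf-tests --replay PerformanceTests/Replay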

  • Scripts/webkitpy/layout_tests/port/driver.py:

(Driver.is_external_http_test): Added.
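
The driver.py hunk is not reproduced below; judging from the webkit.py change in this patch, the new predicate is presumably a one-liner along these lines (a sketch, not the verbatim patch):

    def is_external_http_test(self, test_name):
        return test_name.startswith('http://') or test_name.startswith('https://')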

  • Scripts/webkitpy/layout_tests/port/webkit.py:

(WebKitDriver._command_from_driver_input): Allow test names that start with http:// or https://.

  • Scripts/webkitpy/performance_tests/perftest.py:

(PerfTest.__init__): Takes a port.
(PerfTest.prepare): Added. Overridden by ReplayPerfTest.
(PerfTest):
(PerfTest.run): Calls run_single.
(PerfTest.run_single): Extracted from PageLoadingPerfTest.run.
(ChromiumStylePerfTest.__init__):
(PageLoadingPerfTest.__init__):
(PageLoadingPerfTest.run):
(ReplayServer): Added. Responsible for starting and stopping replay.py in web-page-replay.
(ReplayServer.__init__):
(ReplayServer.wait_until_ready): Wait until port 8080 is ready. I tried reading the piped output from web-page-replay
instead, but that deadlocked on some web pages.
(ReplayServer.stop):
(ReplayServer.__del__):
(ReplayPerfTest):
(ReplayPerfTest.__init__):
(ReplayPerfTest._start_replay_server):
(ReplayPerfTest.prepare): Creates test.wpr and test-expected.png to cache the page when a replay test is run for the first time.
Subsequent runs of the same test just reuse test.wpr.
(ReplayPerfTest.run_single):
(PerfTestFactory):
(PerfTestFactory.create_perf_test):
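
Putting these pieces together, the intended flow for a single replay test looks roughly like this (a sketch using the names this patch introduces; the timeout value is arbitrary):

    test = PerfTestFactory.create_perf_test(port, 'Replay/www.google.com.replay',
        '/PerformanceTests/Replay/www.google.com.replay')
    if test.prepare(time_out_ms=60000):  # the first run records test.wpr and test-expected.png
        results = test.run(driver, time_out_ms=60000)  # later runs replay from test.wpr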

  • Scripts/webkitpy/performance_tests/perftest_unittest.py:

(MainTest.test_parse_output):
(MainTest.test_parse_output_with_failing_line):
(TestPageLoadingPerfTest.test_run):
(TestPageLoadingPerfTest.test_run_with_bad_output):
(TestReplayPerfTest):
(TestReplayPerfTest.ReplayTestPort):
(TestReplayPerfTest.ReplayTestPort.__init__):
(TestReplayPerfTest.ReplayTestPort.__init__.ReplayTestDriver):
(TestReplayPerfTest.ReplayTestPort.__init__.ReplayTestDriver.run_test):
(TestReplayPerfTest.ReplayTestPort._driver_class):
(TestReplayPerfTest.MockReplayServer):
(TestReplayPerfTest.MockReplayServer.__init__):
(TestReplayPerfTest.MockReplayServer.stop):
(TestReplayPerfTest._add_file):
(TestReplayPerfTest._setup_test):
(TestReplayPerfTest.test_run_single):
(TestReplayPerfTest.test_run_single.run_test):
(TestReplayPerfTest.test_run_single_fails_without_webpagereplay):
(TestReplayPerfTest.test_prepare_fails_when_wait_until_ready_fails):
(TestReplayPerfTest.test_run_single_fails_when_output_has_error):
(TestReplayPerfTest.test_run_single_fails_when_output_has_error.run_test):
(TestReplayPerfTest.test_prepare):
(TestReplayPerfTest.test_prepare.run_test):
(TestReplayPerfTest.test_prepare_calls_run_single):
(TestReplayPerfTest.test_prepare_calls_run_single.run_single):
(TestPerfTestFactory.test_regular_test):
(TestPerfTestFactory.test_inspector_test):
(TestPerfTestFactory.test_page_loading_test):

  • Scripts/webkitpy/performance_tests/perftestsrunner.py:

(PerfTestsRunner):
(PerfTestsRunner._parse_args): Added --replay flag to enable replay tests.
(PerfTestsRunner._collect_tests): Collect .replay files when replay tests are enabled.
(PerfTestsRunner._collect_tests._is_test_file):
(PerfTestsRunner.run): Exit early if any of the calls to prepare() fails.

  • Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py:

(create_runner):
(run_test):
(_tests_for_runner):
(test_run_test_set):
(test_run_test_set_kills_drt_per_run):
(test_run_test_pause_before_testing):
(test_run_test_set_for_parser_tests):
(test_run_test_set_with_json_output):
(test_run_test_set_with_json_source):
(test_run_test_set_with_multiple_repositories):
(test_run_with_upload_json):
(test_upload_json):
(test_upload_json.MockFileUploader.upload_single_text_file):
(_add_file):
(test_collect_tests):
(test_collect_tests_with_multile_files):
(test_collect_tests_with_multile_files.add_file):
(test_collect_tests_with_skipped_list):
(test_collect_tests_with_page_load_svg):
(test_collect_tests_should_ignore_replay_tests_by_default):
(test_collect_tests_with_replay_tests):
(test_parse_args):

  • Scripts/webkitpy/thirdparty/__init__.py: Added the dependency on web-page-replay version 1.1.1.

(AutoinstallImportHook.find_module):
(AutoinstallImportHook._install_webpagereplay):
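
The dependency is pulled in lazily: as the perftest.py hunk below shows, merely importing the module triggers this import hook, which downloads and unpacks the tarball on first use:

    # Import for auto-install; fetches and unpacks webpagereplay-1.1.1.tar.gz the first time it runs.
    import webkitpy.thirdparty.autoinstalled.webpagereplay.replay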

Location:
trunk
Files:
3 added
8 edited

  • trunk/PerformanceTests/ChangeLog

    r118899 r119188  
     1  2012-06-01  Ryosuke Niwa  <rniwa@webkit.org>
     2
     3        Add public page loading performance tests using web-page-replay
     4        https://bugs.webkit.org/show_bug.cgi?id=84008
     5
     6        Reviewed by Dirk Pranke.
     7
     8        Add replay tests for google.com and youtube.com as examples.
     9
     10        * Replay: Added.
     11        * Replay/www.google.com.replay: Added.
     12        * Replay/www.youtube.com.replay: Added.
     13
    1 14  2012-05-30  Kentaro Hara  <haraken@chromium.org>
    2 15
  • trunk/Tools/ChangeLog

    r119183 r119188  
     1  2012-06-01  Ryosuke Niwa  <rniwa@webkit.org>
     2
     3        Add public page loading performance tests using web-page-replay
     4        https://bugs.webkit.org/show_bug.cgi?id=84008
     5
     6        Reviewed by Dirk Pranke.
     7
     8        Add the primitive implementation of replay performance tests. We use web-page-replay (http://code.google.com/p/web-page-replay/)
     9        to cache data locally. Each replay test is represented by a text file with .replay extension containing a single URL.
     10        To hash out bugs and isolate them from the rest of performance tests, replay tests are hidden behind --replay flag.
     11
     12        Run "run-perf-tests --replay PerformanceTests/Replay" after changing the system network preference to forward HTTP and HTTPS requests
     13        to localhost:8080 and localhost:8443 respectively (i.e. configure the system as if there are HTTP proxies at ports 8080 and 8443)
     14        excluding: *.webkit.org, *.googlecode.com, *.sourceforge.net, pypi.python.org, and www.adambarth.com for thirdparty Python dependencies.
     15        run-perf-tests starts web-page-replay, which provides HTTP proxies at ports 8080 and 8443 to replay pages.
     16
     17        * Scripts/webkitpy/layout_tests/port/driver.py:
     18        (Driver.is_external_http_test): Added.
     19        * Scripts/webkitpy/layout_tests/port/webkit.py:
     20        (WebKitDriver._command_from_driver_input): Allow test names that starts with http:// or https://.
     21        * Scripts/webkitpy/performance_tests/perftest.py:
     22        (PerfTest.__init__): Takes port.
     23        (PerfTest.prepare): Added. Overridden by ReplayPerfTest.
     24        (PerfTest):
     25        (PerfTest.run): Calls run_single.
     26        (PerfTest.run_single): Extracted from PageLoadingPerfTest.run.
     27        (ChromiumStylePerfTest.__init__):
     28        (PageLoadingPerfTest.__init__):
     29        (PageLoadingPerfTest.run):
     30        (ReplayServer): Added. Responsible for starting and stopping replay.py in the web-page-replay.
     31        (ReplayServer.__init__):
     32        (ReplayServer.wait_until_ready): Wait until port 8080 is ready. I have tried looking at the piped output from web-page-replay
     33        but it caused a dead lock on some web pages.
     34        (ReplayServer.stop):
     35        (ReplayServer.__del__):
     36        (ReplayPerfTest):
     37        (ReplayPerfTest.__init__):
     38        (ReplayPerfTest._start_replay_server):
     39        (ReplayPerfTest.prepare): Creates test.wpr and test-expected.png to cache the page when a replay test is ran for the first time.
     40        The subsequent runs of the same test will just use test.wpr.
     41        (ReplayPerfTest.run_single):
     42        (PerfTestFactory):
     43        (PerfTestFactory.create_perf_test):
     44        * Scripts/webkitpy/performance_tests/perftest_unittest.py:
     45        (MainTest.test_parse_output):
     46        (MainTest.test_parse_output_with_failing_line):
     47        (TestPageLoadingPerfTest.test_run):
     48        (TestPageLoadingPerfTest.test_run_with_bad_output):
     49        (TestReplayPerfTest):
     50        (TestReplayPerfTest.ReplayTestPort):
     51        (TestReplayPerfTest.ReplayTestPort.__init__):
     52        (TestReplayPerfTest.ReplayTestPort.__init__.ReplayTestDriver):
     53        (TestReplayPerfTest.ReplayTestPort.__init__.ReplayTestDriver.run_test):
     54        (TestReplayPerfTest.ReplayTestPort._driver_class):
     55        (TestReplayPerfTest.MockReplayServer):
     56        (TestReplayPerfTest.MockReplayServer.__init__):
     57        (TestReplayPerfTest.MockReplayServer.stop):
     58        (TestReplayPerfTest._add_file):
     59        (TestReplayPerfTest._setup_test):
     60        (TestReplayPerfTest.test_run_single):
     61        (TestReplayPerfTest.test_run_single.run_test):
     62        (TestReplayPerfTest.test_run_single_fails_without_webpagereplay):
     63        (TestReplayPerfTest.test_prepare_fails_when_wait_until_ready_fails):
     64        (TestReplayPerfTest.test_run_single_fails_when_output_has_error):
     65        (TestReplayPerfTest.test_run_single_fails_when_output_has_error.run_test):
     66        (TestReplayPerfTest.test_prepare):
     67        (TestReplayPerfTest.test_prepare.run_test):
     68        (TestReplayPerfTest.test_prepare_calls_run_single):
     69        (TestReplayPerfTest.test_prepare_calls_run_single.run_single):
     70        (TestPerfTestFactory.test_regular_test):
     71        (TestPerfTestFactory.test_inspector_test):
     72        (TestPerfTestFactory.test_page_loading_test):
     73        * Scripts/webkitpy/performance_tests/perftestsrunner.py:
     74        (PerfTestsRunner):
     75        (PerfTestsRunner._parse_args): Added --replay flag to enable replay tests.
     76        (PerfTestsRunner._collect_tests): Collect .replay files when replay tests are enabled.
     77        (PerfTestsRunner._collect_tests._is_test_file):
     78        (PerfTestsRunner.run): Exit early if one of calls to prepare() fails.
     79        * Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py:
     80        (create_runner):
     81        (run_test):
     82        (_tests_for_runner):
     83        (test_run_test_set):
     84        (test_run_test_set_kills_drt_per_run):
     85        (test_run_test_pause_before_testing):
     86        (test_run_test_set_for_parser_tests):
     87        (test_run_test_set_with_json_output):
     88        (test_run_test_set_with_json_source):
     89        (test_run_test_set_with_multiple_repositories):
     90        (test_run_with_upload_json):
     91        (test_upload_json):
     92        (test_upload_json.MockFileUploader.upload_single_text_file):
     93        (_add_file):
     94        (test_collect_tests):
     95        (test_collect_tests_with_multile_files):
     96        (test_collect_tests_with_multile_files.add_file):
     97        (test_collect_tests_with_skipped_list):
     98        (test_collect_tests_with_page_load_svg):
     99        (test_collect_tests_should_ignore_replay_tests_by_default):
     100        (test_collect_tests_with_replay_tests):
     101        (test_parse_args):
     102        * Scripts/webkitpy/thirdparty/__init__.py: Added the dependency for web-page-replay version 1.1.1.
     103        (AutoinstallImportHook.find_module):
     104        (AutoinstallImportHook._install_webpagereplay):
     105
    1 106  2012-05-31  Yaron Friedman  <yfriedman@chromium.org>
    2 107
  • trunk/Tools/Scripts/webkitpy/layout_tests/port/webkit.py

    r119018 r119188
    535 535
    536 536    def _command_from_driver_input(self, driver_input):
    537            if self.is_http_test(driver_input.test_name):
        537        # FIXME: performance tests pass in full URLs instead of test names.
        538        if driver_input.test_name.startswith('http://') or driver_input.test_name.startswith('https://'):
        539            command = driver_input.test_name
        540        elif self.is_http_test(driver_input.test_name):
    538 541            command = self.test_to_uri(driver_input.test_name)
    539 542        else:
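
For readability, the method after this change reads as follows (the else branch is truncated in the hunk above and elided here):

    def _command_from_driver_input(self, driver_input):
        # FIXME: performance tests pass in full URLs instead of test names.
        if driver_input.test_name.startswith('http://') or driver_input.test_name.startswith('https://'):
            command = driver_input.test_name
        elif self.is_http_test(driver_input.test_name):
            command = self.test_to_uri(driver_input.test_name)
        # ... the else branch and the rest of the method are unchanged and not shown in this hunk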
  • trunk/Tools/Scripts/webkitpy/performance_tests/perftest.py

    r117422 r119188  
    2929
    3030
     31import errno
    3132import logging
    3233import math
    3334import re
    34 
     35import os
     36import signal
     37import socket
     38import subprocess
     39import time
     40
     41# Import for auto-install
     42import webkitpy.thirdparty.autoinstalled.webpagereplay.replay
     43
     44from webkitpy.layout_tests.controllers.test_result_writer import TestResultWriter
    3545from webkitpy.layout_tests.port.driver import DriverInput
     46from webkitpy.layout_tests.port.driver import DriverOutput
    3647
    3748
     
    4051
    4152class PerfTest(object):
    42     def __init__(self, test_name, path_or_url):
     53    def __init__(self, port, test_name, path_or_url):
     54        self._port = port
    4355        self._test_name = test_name
    4456        self._path_or_url = path_or_url
     
    5062        return self._path_or_url
    5163
    52     def run(self, driver, timeout_ms):
    53         output = driver.run_test(DriverInput(self.path_or_url(), timeout_ms, None, False))
     64    def prepare(self, time_out_ms):
     65        return True
     66
     67    def run(self, driver, time_out_ms):
     68        output = self.run_single(driver, self.path_or_url(), time_out_ms)
    5469        if self.run_failed(output):
    5570            return None
    5671        return self.parse_output(output)
     72
     73    def run_single(self, driver, path_or_url, time_out_ms, should_run_pixel_test=False):
     74        return driver.run_test(DriverInput(path_or_url, time_out_ms, image_hash=None, should_run_pixel_test=should_run_pixel_test))
    5775
    5876    def run_failed(self, output):
     
    138156    _chromium_style_result_regex = re.compile(r'^RESULT\s+(?P<name>[^=]+)\s*=\s+(?P<value>\d+(\.\d+)?)\s*(?P<unit>\w+)$')
    139157
    140     def __init__(self, test_name, path_or_url):
    141         super(ChromiumStylePerfTest, self).__init__(test_name, path_or_url)
     158    def __init__(self, port, test_name, path_or_url):
     159        super(ChromiumStylePerfTest, self).__init__(port, test_name, path_or_url)
    142160
    143161    def parse_output(self, output):
     
    158176
    159177class PageLoadingPerfTest(PerfTest):
    160     def __init__(self, test_name, path_or_url):
    161         super(PageLoadingPerfTest, self).__init__(test_name, path_or_url)
    162 
    163     def run(self, driver, timeout_ms):
     178    def __init__(self, port, test_name, path_or_url):
     179        super(PageLoadingPerfTest, self).__init__(port, test_name, path_or_url)
     180
     181    def run(self, driver, time_out_ms):
    164182        test_times = []
    165183
    166184        for i in range(0, 20):
    167             output = driver.run_test(DriverInput(self.path_or_url(), timeout_ms, None, False))
    168             if self.run_failed(output):
     185            output = self.run_single(driver, self.path_or_url(), time_out_ms)
     186            if not output or self.run_failed(output):
    169187                return None
    170188            if i == 0:
     
    195213
    196214
     215class ReplayServer(object):
     216    def __init__(self, archive, record):
     217        self._process = None
     218
     219        # FIXME: Should error if local proxy isn't set to forward requests to localhost:8080 and localhost:8413
     220
     221        replay_path = webkitpy.thirdparty.autoinstalled.webpagereplay.replay.__file__
     222        args = ['python', replay_path, '--no-dns_forwarding', '--port', '8080', '--ssl_port', '8413', '--use_closest_match', '--log_level', 'warning']
     223        if record:
     224            args.append('--record')
     225        args.append(archive)
     226
     227        self._process = subprocess.Popen(args)
     228
     229    def wait_until_ready(self):
     230        for i in range(0, 10):
     231            try:
     232                connection = socket.create_connection(('localhost', '8080'), timeout=1)
     233                connection.close()
     234                return True
     235            except socket.error:
     236                time.sleep(1)
     237                continue
     238        return False
     239
     240    def stop(self):
     241        if self._process:
     242            self._process.send_signal(signal.SIGINT)
     243            self._process.wait()
     244        self._process = None
     245
     246    def __del__(self):
     247        self.stop()
     248
     249
     250class ReplayPerfTest(PageLoadingPerfTest):
     251    def __init__(self, port, test_name, path_or_url):
     252        super(ReplayPerfTest, self).__init__(port, test_name, path_or_url)
     253
     254    def _start_replay_server(self, archive, record):
     255        try:
     256            return ReplayServer(archive, record)
     257        except OSError as error:
     258            if error.errno == errno.ENOENT:
     259                _log.error("Replay tests require web-page-replay.")
     260            else:
     261                raise error
     262
     263    def prepare(self, time_out_ms):
     264        filesystem = self._port.host.filesystem
     265        path_without_ext = filesystem.splitext(self.path_or_url())[0]
     266
     267        self._archive_path = filesystem.join(path_without_ext + '.wpr')
     268        self._expected_image_path = filesystem.join(path_without_ext + '-expected.png')
     269        self._url = filesystem.read_text_file(self.path_or_url()).split('\n')[0]
     270
     271        if filesystem.isfile(self._archive_path) and filesystem.isfile(self._expected_image_path):
     272            _log.info("Replay ready for %s" % self._archive_path)
     273            return True
     274
     275        _log.info("Preparing replay for %s" % self.test_name())
     276
     277        driver = self._port.create_driver(worker_number=1, no_timeout=True)
     278        try:
     279            output = self.run_single(driver, self._url, time_out_ms, record=True)
     280        finally:
     281            driver.stop()
     282
     283        if not output or not filesystem.isfile(self._archive_path):
     284            _log.error("Failed to prepare a replay for %s" % self.test_name())
     285            return False
     286
     287        _log.info("Prepared replay for %s" % self.test_name())
     288
     289        return True
     290
     291    def run_single(self, driver, url, time_out_ms, record=False):
     292        server = self._start_replay_server(self._archive_path, record)
     293        if not server:
     294            _log.error("Web page replay didn't start.")
     295            return None
     296
     297        try:
     298            if not server.wait_until_ready():
     299                _log.error("Web page replay didn't start.")
     300                return None
     301
     302            super(ReplayPerfTest, self).run_single(driver, "about:blank", time_out_ms)
     303            _log.debug("Loading the page")
     304
     305            output = super(ReplayPerfTest, self).run_single(driver, self._url, time_out_ms, should_run_pixel_test=True)
     306            if self.run_failed(output):
     307                return None
     308
     309            if not output.image:
     310                _log.error("Loading the page did not generate image results")
     311                _log.error(output.text)
     312                return None
     313
     314            filesystem = self._port.host.filesystem
     315            dirname = filesystem.dirname(url)
     316            filename = filesystem.split(url)[1]
     317            writer = TestResultWriter(filesystem, self._port, dirname, filename)
     318            if record:
     319                writer.write_image_files(actual_image=None, expected_image=output.image)
     320            else:
     321                writer.write_image_files(actual_image=output.image, expected_image=None)
     322
     323            return output
     324        finally:
     325            server.stop()
     326
     327
    197328class PerfTestFactory(object):
    198329
    199330    _pattern_map = [
    200         (re.compile('^inspector/'), ChromiumStylePerfTest),
    201         (re.compile('^PageLoad/'), PageLoadingPerfTest),
     331        (re.compile(r'^inspector/'), ChromiumStylePerfTest),
     332        (re.compile(r'^PageLoad/'), PageLoadingPerfTest),
     333        (re.compile(r'(.+)\.replay$'), ReplayPerfTest),
    202334    ]
    203335
    204336    @classmethod
    205     def create_perf_test(cls, test_name, path):
     337    def create_perf_test(cls, port, test_name, path):
    206338        for (pattern, test_class) in cls._pattern_map:
    207339            if pattern.match(test_name):
    208                 return test_class(test_name, path)
    209         return PerfTest(test_name, path)
     340                return test_class(port, test_name, path)
     341        return PerfTest(port, test_name, path)
  • trunk/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py

    r115466 r119188  
    3232import unittest
    3333
     34from webkitpy.common.host_mock import MockHost
    3435from webkitpy.common.system.outputcapture import OutputCapture
    3536from webkitpy.layout_tests.port.driver import DriverOutput
     37from webkitpy.layout_tests.port.test import TestDriver
     38from webkitpy.layout_tests.port.test import TestPort
    3639from webkitpy.performance_tests.perftest import ChromiumStylePerfTest
    3740from webkitpy.performance_tests.perftest import PageLoadingPerfTest
    3841from webkitpy.performance_tests.perftest import PerfTest
    3942from webkitpy.performance_tests.perftest import PerfTestFactory
     43from webkitpy.performance_tests.perftest import ReplayPerfTest
    4044
    4145
     
    5458        output_capture.capture_output()
    5559        try:
    56             test = PerfTest('some-test', '/path/some-dir/some-test')
     60            test = PerfTest(None, 'some-test', '/path/some-dir/some-test')
    5761            self.assertEqual(test.parse_output(output),
    5862                {'some-test': {'avg': 1100.0, 'median': 1101.0, 'min': 1080.0, 'max': 1120.0, 'stdev': 11.0, 'unit': 'ms'}})
     
    7882        output_capture.capture_output()
    7983        try:
    80             test = PerfTest('some-test', '/path/some-dir/some-test')
     84            test = PerfTest(None, 'some-test', '/path/some-dir/some-test')
    8185            self.assertEqual(test.parse_output(output), None)
    8286        finally:
     
    102106
    103107    def test_run(self):
    104         test = PageLoadingPerfTest('some-test', '/path/some-dir/some-test')
     108        test = PageLoadingPerfTest(None, 'some-test', '/path/some-dir/some-test')
    105109        driver = TestPageLoadingPerfTest.MockDriver([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20])
    106110        output_capture = OutputCapture()
     
    119123        output_capture.capture_output()
    120124        try:
    121             test = PageLoadingPerfTest('some-test', '/path/some-dir/some-test')
     125            test = PageLoadingPerfTest(None, 'some-test', '/path/some-dir/some-test')
    122126            driver = TestPageLoadingPerfTest.MockDriver([1, 2, 3, 4, 5, 6, 7, 'some error', 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20])
    123127            self.assertEqual(test.run(driver, None), None)
     
    129133
    130134
     135class TestReplayPerfTest(unittest.TestCase):
     136
     137    class ReplayTestPort(TestPort):
     138        def __init__(self, custom_run_test=None):
     139
     140            class ReplayTestDriver(TestDriver):
     141                def run_test(self, text_input):
     142                    return custom_run_test(text_input) if custom_run_test else None
     143
     144            self._custom_driver_class = ReplayTestDriver
     145            super(self.__class__, self).__init__(host=MockHost())
     146
     147        def _driver_class(self):
     148            return self._custom_driver_class
     149
     150    class MockReplayServer(object):
     151        def __init__(self, wait_until_ready=True):
     152            self.wait_until_ready = lambda: wait_until_ready
     153
     154        def stop(self):
     155            pass
     156
     157    def _add_file(self, port, dirname, filename, content=True):
     158        port.host.filesystem.maybe_make_directory(dirname)
     159        port.host.filesystem.files[port.host.filesystem.join(dirname, filename)] = content
     160
     161    def _setup_test(self, run_test=None):
     162        test_port = self.ReplayTestPort(run_test)
     163        self._add_file(test_port, '/path/some-dir', 'some-test.replay', 'http://some-test/')
     164        test = ReplayPerfTest(test_port, 'some-test.replay', '/path/some-dir/some-test.replay')
     165        test._start_replay_server = lambda archive, record: self.__class__.MockReplayServer()
     166        return test, test_port
     167
     168    def test_run_single(self):
     169        output_capture = OutputCapture()
     170        output_capture.capture_output()
     171
     172        loaded_pages = []
     173
     174        def run_test(test_input):
     175            if test_input.test_name != "about:blank":
     176                self.assertEqual(test_input.test_name, 'http://some-test/')
     177            loaded_pages.append(test_input)
     178            self._add_file(port, '/path/some-dir', 'some-test.wpr', 'wpr content')
     179            return DriverOutput('actual text', 'actual image', 'actual checksum',
     180                audio=None, crash=False, timeout=False, error=False)
     181
     182        test, port = self._setup_test(run_test)
     183        test._archive_path = '/path/some-dir/some-test.wpr'
     184        test._url = 'http://some-test/'
     185
     186        try:
     187            driver = port.create_driver(worker_number=1, no_timeout=True)
     188            self.assertTrue(test.run_single(driver, '/path/some-dir/some-test.replay', time_out_ms=100))
     189        finally:
     190            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
     191
     192        self.assertEqual(len(loaded_pages), 2)
     193        self.assertEqual(loaded_pages[0].test_name, 'about:blank')
     194        self.assertEqual(loaded_pages[1].test_name, 'http://some-test/')
     195        self.assertEqual(actual_stdout, '')
     196        self.assertEqual(actual_stderr, '')
     197        self.assertEqual(actual_logs, '')
     198
     199    def test_run_single_fails_without_webpagereplay(self):
     200        output_capture = OutputCapture()
     201        output_capture.capture_output()
     202
     203        test, port = self._setup_test()
     204        test._start_replay_server = lambda archive, record: None
     205        test._archive_path = '/path/some-dir.wpr'
     206        test._url = 'http://some-test/'
     207
     208        try:
     209            driver = port.create_driver(worker_number=1, no_timeout=True)
     210            self.assertEqual(test.run_single(driver, '/path/some-dir/some-test.replay', time_out_ms=100), None)
     211        finally:
     212            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
     213        self.assertEqual(actual_stdout, '')
     214        self.assertEqual(actual_stderr, '')
     215        self.assertEqual(actual_logs, "Web page replay didn't start.\n")
     216
     217    def test_prepare_fails_when_wait_until_ready_fails(self):
     218        output_capture = OutputCapture()
     219        output_capture.capture_output()
     220
     221        test, port = self._setup_test()
     222        test._start_replay_server = lambda archive, record: self.__class__.MockReplayServer(wait_until_ready=False)
     223        test._archive_path = '/path/some-dir.wpr'
     224        test._url = 'http://some-test/'
     225
     226        try:
     227            driver = port.create_driver(worker_number=1, no_timeout=True)
     228            self.assertEqual(test.run_single(driver, '/path/some-dir/some-test.replay', time_out_ms=100), None)
     229        finally:
     230            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
     231
     232        self.assertEqual(actual_stdout, '')
     233        self.assertEqual(actual_stderr, '')
     234        self.assertEqual(actual_logs, "Web page replay didn't start.\n")
     235
     236    def test_run_single_fails_when_output_has_error(self):
     237        output_capture = OutputCapture()
     238        output_capture.capture_output()
     239
     240        loaded_pages = []
     241
     242        def run_test(test_input):
     243            loaded_pages.append(test_input)
     244            self._add_file(port, '/path/some-dir', 'some-test.wpr', 'wpr content')
     245            return DriverOutput('actual text', 'actual image', 'actual checksum',
     246                audio=None, crash=False, timeout=False, error='some error')
     247
     248        test, port = self._setup_test(run_test)
     249        test._archive_path = '/path/some-dir.wpr'
     250        test._url = 'http://some-test/'
     251
     252        try:
     253            driver = port.create_driver(worker_number=1, no_timeout=True)
     254            self.assertEqual(test.run_single(driver, '/path/some-dir/some-test.replay', time_out_ms=100), None)
     255        finally:
     256            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
     257
     258        self.assertEqual(len(loaded_pages), 2)
     259        self.assertEqual(loaded_pages[0].test_name, 'about:blank')
     260        self.assertEqual(loaded_pages[1].test_name, 'http://some-test/')
     261        self.assertEqual(actual_stdout, '')
     262        self.assertEqual(actual_stderr, '')
     263        self.assertEqual(actual_logs, 'error: some-test.replay\nsome error\n')
     264
     265    def test_prepare(self):
     266        output_capture = OutputCapture()
     267        output_capture.capture_output()
     268
     269        def run_test(test_input):
     270            self._add_file(port, '/path/some-dir', 'some-test.wpr', 'wpr content')
     271            return DriverOutput('actual text', 'actual image', 'actual checksum',
     272                audio=None, crash=False, timeout=False, error=False)
     273
     274        test, port = self._setup_test(run_test)
     275
     276        try:
     277            self.assertEqual(test.prepare(time_out_ms=100), True)
     278        finally:
     279            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
     280
     281        self.assertEqual(actual_stdout, '')
     282        self.assertEqual(actual_stderr, '')
     283        self.assertEqual(actual_logs, 'Preparing replay for some-test.replay\nPrepared replay for some-test.replay\n')
     284
     285    def test_prepare_calls_run_single(self):
     286        output_capture = OutputCapture()
     287        output_capture.capture_output()
     288        called = [False]
     289
     290        def run_single(driver, url, time_out_ms, record):
     291            self.assertTrue(record)
     292            self.assertEqual(url, 'http://some-test/')
     293            called[0] = True
     294            return False
     295
     296        test, port = self._setup_test()
     297        test.run_single = run_single
     298
     299        try:
     300            self.assertEqual(test.prepare(time_out_ms=100), False)
     301        finally:
     302            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
     303        self.assertTrue(called[0])
     304        self.assertEqual(test._archive_path, '/path/some-dir/some-test.wpr')
     305        self.assertEqual(test._url, 'http://some-test/')
     306        self.assertEqual(actual_stdout, '')
     307        self.assertEqual(actual_stderr, '')
     308        self.assertEqual(actual_logs, "Preparing replay for some-test.replay\nFailed to prepare a replay for some-test.replay\n")
     309
    131310class TestPerfTestFactory(unittest.TestCase):
    132311    def test_regular_test(self):
    133         test = PerfTestFactory.create_perf_test('some-dir/some-test', '/path/some-dir/some-test')
     312        test = PerfTestFactory.create_perf_test(None, 'some-dir/some-test', '/path/some-dir/some-test')
    134313        self.assertEqual(test.__class__, PerfTest)
    135314
    136315    def test_inspector_test(self):
    137         test = PerfTestFactory.create_perf_test('inspector/some-test', '/path/inspector/some-test')
     316        test = PerfTestFactory.create_perf_test(None, 'inspector/some-test', '/path/inspector/some-test')
    138317        self.assertEqual(test.__class__, ChromiumStylePerfTest)
    139318
    140319    def test_page_loading_test(self):
    141         test = PerfTestFactory.create_perf_test('PageLoad/some-test', '/path/PageLoad/some-test')
     320        test = PerfTestFactory.create_perf_test(None, 'PageLoad/some-test', '/path/PageLoad/some-test')
    142321        self.assertEqual(test.__class__, PageLoadingPerfTest)
    143322
  • trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py

    r115466 r119188  
    4242from webkitpy.layout_tests.views import printing
    4343from webkitpy.performance_tests.perftest import PerfTestFactory
     44from webkitpy.performance_tests.perftest import ReplayPerfTest
    4445
    4546
     
    5253    _EXIT_CODE_BAD_JSON = -2
    5354    _EXIT_CODE_FAILED_UPLOADING = -3
     55    _EXIT_CODE_BAD_PREPARATION = -4
    5456
    5557    def __init__(self, args=None, port=None):
     
    9193                help="Pause before running the tests to let user attach a performance monitor."),
    9294            optparse.make_option("--output-json-path",
    93                 help="Filename of the JSON file that summaries the results"),
     95                help="Filename of the JSON file that summaries the results."),
    9496            optparse.make_option("--source-json-path",
    95                 help="Path to a JSON file to be merged into the JSON file when --output-json-path is present"),
     97                help="Path to a JSON file to be merged into the JSON file when --output-json-path is present."),
    9698            optparse.make_option("--test-results-server",
    97                 help="Upload the generated JSON file to the specified server when --output-json-path is present"),
     99                help="Upload the generated JSON file to the specified server when --output-json-path is present."),
    98100            optparse.make_option("--webkit-test-runner", "-2", action="store_true",
    99101                help="Use WebKitTestRunner rather than DumpRenderTree."),
     102            optparse.make_option("--replay", dest="replay", action="store_true", default=False,
     103                help="Run replay tests."),
    100104            ]
    101105        return optparse.OptionParser(option_list=(perf_option_list)).parse_args(args)
     
    104108        """Return the list of tests found."""
    105109
     110        test_extensions = ['.html', '.svg']
     111        if self._options.replay:
     112            test_extensions.append('.replay')
     113
    106114        def _is_test_file(filesystem, dirname, filename):
    107             return filesystem.splitext(filename)[1] in ['.html', '.svg']
     115            return filesystem.splitext(filename)[1] in test_extensions
    108116
    109117        filesystem = self._host.filesystem
     
    123131            if self._port.skips_perf_test(relative_path):
    124132                continue
    125             tests.append(PerfTestFactory.create_perf_test(relative_path, path))
     133            test = PerfTestFactory.create_perf_test(self._port, relative_path, path)
     134            tests.append(test)
    126135
    127136        return tests
     
    132141            return self._EXIT_CODE_BAD_BUILD
    133142
    134         # We wrap any parts of the run that are slow or likely to raise exceptions
    135         # in a try/finally to ensure that we clean up the logging configuration.
    136         unexpected = -1
    137143        tests = self._collect_tests()
     144        _log.info("Running %d tests" % len(tests))
     145
     146        for test in tests:
     147            if not test.prepare(self._options.time_out_ms):
     148                return self._EXIT_CODE_BAD_PREPARATION
     149
    138150        unexpected = self._run_tests_set(sorted(list(tests), key=lambda test: test.test_name()), self._port)
    139151
  • trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py

    r115466 r119188  
    121121        runner._host.filesystem.maybe_make_directory(runner._base_path, 'Bindings')
    122122        runner._host.filesystem.maybe_make_directory(runner._base_path, 'Parser')
    123         return runner
     123        return runner, test_port
    124124
    125125    def run_test(self, test_name):
    126         runner = self.create_runner()
     126        runner, port = self.create_runner()
    127127        driver = MainTest.TestDriver()
    128         return runner._run_single_test(ChromiumStylePerfTest(test_name, runner._host.filesystem.join('some-dir', test_name)), driver)
     128        return runner._run_single_test(ChromiumStylePerfTest(port, test_name, runner._host.filesystem.join('some-dir', test_name)), driver)
    129129
    130130    def test_run_passing_test(self):
     
    153153            dirname = filesystem.dirname(path)
    154154            if test.startswith('inspector/'):
    155                 tests.append(ChromiumStylePerfTest(test, path))
     155                tests.append(ChromiumStylePerfTest(runner._port, test, path))
    156156            else:
    157                 tests.append(PerfTest(test, path))
     157                tests.append(PerfTest(runner._port, test, path))
    158158        return tests
    159159
    160160    def test_run_test_set(self):
    161         runner = self.create_runner()
     161        runner, port = self.create_runner()
    162162        tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
    163163            'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
     
    165165        output.capture_output()
    166166        try:
    167             unexpected_result_count = runner._run_tests_set(tests, runner._port)
     167            unexpected_result_count = runner._run_tests_set(tests, port)
    168168        finally:
    169169            stdout, stderr, log = output.restore_output()
     
    179179                TestDriverWithStopCount.stop_count += 1
    180180
    181         runner = self.create_runner(driver_class=TestDriverWithStopCount)
     181        runner, port = self.create_runner(driver_class=TestDriverWithStopCount)
    182182
    183183        tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
    184184            'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
    185         unexpected_result_count = runner._run_tests_set(tests, runner._port)
     185        unexpected_result_count = runner._run_tests_set(tests, port)
    186186
    187187        self.assertEqual(TestDriverWithStopCount.stop_count, 6)
     
    194194                TestDriverWithStartCount.start_count += 1
    195195
    196         runner = self.create_runner(args=["--pause-before-testing"], driver_class=TestDriverWithStartCount)
     196        runner, port = self.create_runner(args=["--pause-before-testing"], driver_class=TestDriverWithStartCount)
    197197        tests = self._tests_for_runner(runner, ['inspector/pass.html'])
    198198
     
    200200        output.capture_output()
    201201        try:
    202             unexpected_result_count = runner._run_tests_set(tests, runner._port)
     202            unexpected_result_count = runner._run_tests_set(tests, port)
    203203            self.assertEqual(TestDriverWithStartCount.start_count, 1)
    204204        finally:
     
    208208
    209209    def test_run_test_set_for_parser_tests(self):
    210         runner = self.create_runner()
     210        runner, port = self.create_runner()
    211211        tests = self._tests_for_runner(runner, ['Bindings/event-target-wrapper.html', 'Parser/some-parser.html'])
    212212        output = OutputCapture()
    213213        output.capture_output()
    214214        try:
    215             unexpected_result_count = runner._run_tests_set(tests, runner._port)
     215            unexpected_result_count = runner._run_tests_set(tests, port)
    216216        finally:
    217217            stdout, stderr, log = output.restore_output()
     
    227227
    228228    def test_run_test_set_with_json_output(self):
    229         runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json'])
    230         runner._host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
    231         runner._host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
     229        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json'])
     230        port.host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
     231        port.host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
    232232        runner._timestamp = 123456789
    233233        output_capture = OutputCapture()
     
    239239
    240240        self.assertEqual(logs,
    241             '\n'.join(['Running Bindings/event-target-wrapper.html (1 of 2)',
     241            '\n'.join(['Running 2 tests',
     242                       'Running Bindings/event-target-wrapper.html (1 of 2)',
    242243                       'RESULT Bindings: event-target-wrapper= 1489.05 ms',
    243244                       'median= 1487.0 ms, stdev= 14.46 ms, min= 1471.0 ms, max= 1510.0 ms',
     
    247248                       '', '']))
    248249
    249         self.assertEqual(json.loads(runner._host.filesystem.files['/mock-checkout/output.json']), {
     250        self.assertEqual(json.loads(port.host.filesystem.files['/mock-checkout/output.json']), {
    250251            "timestamp": 123456789, "results":
    251252            {"Bindings/event-target-wrapper": {"max": 1510, "avg": 1489.05, "median": 1487, "min": 1471, "stdev": 14.46, "unit": "ms"},
     
    254255
    255256    def test_run_test_set_with_json_source(self):
    256         runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json', '--source-json-path=/mock-checkout/source.json'])
    257         runner._host.filesystem.files['/mock-checkout/source.json'] = '{"key": "value"}'
    258         runner._host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
    259         runner._host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
     257        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json', '--source-json-path=/mock-checkout/source.json'])
     258        port.host.filesystem.files['/mock-checkout/source.json'] = '{"key": "value"}'
     259        port.host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
     260        port.host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
    260261        runner._timestamp = 123456789
    261262        output_capture = OutputCapture()
     
    266267            stdout, stderr, logs = output_capture.restore_output()
    267268
    268         self.assertEqual(logs, '\n'.join(['Running Bindings/event-target-wrapper.html (1 of 2)',
     269        self.assertEqual(logs, '\n'.join(['Running 2 tests',
     270            'Running Bindings/event-target-wrapper.html (1 of 2)',
    269271            'RESULT Bindings: event-target-wrapper= 1489.05 ms',
    270272            'median= 1487.0 ms, stdev= 14.46 ms, min= 1471.0 ms, max= 1510.0 ms',
     
    274276            '', '']))
    275277
    276         self.assertEqual(json.loads(runner._host.filesystem.files['/mock-checkout/output.json']), {
     278        self.assertEqual(json.loads(port.host.filesystem.files['/mock-checkout/output.json']), {
    277279            "timestamp": 123456789, "results":
    278280            {"Bindings/event-target-wrapper": {"max": 1510, "avg": 1489.05, "median": 1487, "min": 1471, "stdev": 14.46, "unit": "ms"},
     
    282284
    283285    def test_run_test_set_with_multiple_repositories(self):
    284         runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json'])
    285         runner._host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
     286        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json'])
     287        port.host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
    286288        runner._timestamp = 123456789
    287         runner._port.repository_paths = lambda: [('webkit', '/mock-checkout'), ('some', '/mock-checkout/some')]
     289        port.repository_paths = lambda: [('webkit', '/mock-checkout'), ('some', '/mock-checkout/some')]
    288290        self.assertEqual(runner.run(), 0)
    289         self.assertEqual(json.loads(runner._host.filesystem.files['/mock-checkout/output.json']), {
     291        self.assertEqual(json.loads(port.host.filesystem.files['/mock-checkout/output.json']), {
    290292            "timestamp": 123456789, "results": {"inspector/pass.html:group_name:test_name": 42.0}, "webkit-revision": 5678, "some-revision": 5678})
    291293
    292294    def test_run_with_upload_json(self):
    293         runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
     295        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
    294296            '--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123'])
    295297        upload_json_is_called = [False]
     
    303305
    304306        runner._upload_json = mock_upload_json
    305         runner._host.filesystem.files['/mock-checkout/source.json'] = '{"key": "value"}'
    306         runner._host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
    307         runner._host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
     307        port.host.filesystem.files['/mock-checkout/source.json'] = '{"key": "value"}'
     308        port.host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
     309        port.host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
    308310        runner._timestamp = 123456789
    309311        self.assertEqual(runner.run(), 0)
    310312        self.assertEqual(upload_json_is_called[0], True)
    311         generated_json = json.loads(runner._host.filesystem.files['/mock-checkout/output.json'])
     313        generated_json = json.loads(port.host.filesystem.files['/mock-checkout/output.json'])
    312314        self.assertEqual(generated_json['platform'], 'platform1')
    313315        self.assertEqual(generated_json['builder-name'], 'builder1')
     
    315317        upload_json_returns_true = False
    316318
    317         runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
     319        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
    318320            '--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123'])
    319321        runner._upload_json = mock_upload_json
     
    321323
    322324    def test_upload_json(self):
    323         runner = self.create_runner()
    324         runner._host.filesystem.files['/mock-checkout/some.json'] = 'some content'
     325        runner, port = self.create_runner()
     326        port.host.filesystem.files['/mock-checkout/some.json'] = 'some content'
    325327
    326328        called = []
     
    335337
    336338            def upload_single_text_file(mock, filesystem, content_type, filename):
    337                 self.assertEqual(filesystem, runner._host.filesystem)
     339                self.assertEqual(filesystem, port.host.filesystem)
    338340                self.assertEqual(content_type, 'application/json')
    339341                self.assertEqual(filename, 'some.json')
     
    359361        self.assertEqual(called, ['FileUploader', 'upload_single_text_file'])
    360362
     363    def _add_file(self, runner, dirname, filename, content=True):
     364        dirname = runner._host.filesystem.join(runner._base_path, dirname) if dirname else runner._base_path
     365        runner._host.filesystem.maybe_make_directory(dirname)
     366        runner._host.filesystem.files[runner._host.filesystem.join(dirname, filename)] = content
     367
    361368    def test_collect_tests(self):
    362         runner = self.create_runner()
    363         filename = runner._host.filesystem.join(runner._base_path, 'inspector', 'a_file.html')
    364         runner._host.filesystem.files[filename] = 'a content'
     369        runner, port = self.create_runner()
     370        self._add_file(runner, 'inspector', 'a_file.html', 'a content')
    365371        tests = runner._collect_tests()
    366372        self.assertEqual(len(tests), 1)
     
    369375        return sorted([test.test_name() for test in runner._collect_tests()])
    370376
    371     def test_collect_tests(self):
    372         runner = self.create_runner(args=['PerformanceTests/test1.html', 'test2.html'])
     377    def test_collect_tests_with_multile_files(self):
     378        runner, port = self.create_runner(args=['PerformanceTests/test1.html', 'test2.html'])
    373379
    374380        def add_file(filename):
    375             runner._host.filesystem.files[runner._host.filesystem.join(runner._base_path, filename)] = 'some content'
     381            port.host.filesystem.files[runner._host.filesystem.join(runner._base_path, filename)] = 'some content'
    376382
    377383        add_file('test1.html')
    378384        add_file('test2.html')
    379385        add_file('test3.html')
    380         runner._host.filesystem.chdir(runner._port.perf_tests_dir()[:runner._port.perf_tests_dir().rfind(runner._host.filesystem.sep)])
     386        port.host.filesystem.chdir(runner._port.perf_tests_dir()[:runner._port.perf_tests_dir().rfind(runner._host.filesystem.sep)])
    381387        self.assertEqual(self._collect_tests_and_sort_test_name(runner), ['test1.html', 'test2.html'])
    382388
    383389    def test_collect_tests_with_skipped_list(self):
    384         runner = self.create_runner()
    385 
    386         def add_file(dirname, filename, content=True):
    387             dirname = runner._host.filesystem.join(runner._base_path, dirname) if dirname else runner._base_path
    388             runner._host.filesystem.maybe_make_directory(dirname)
    389             runner._host.filesystem.files[runner._host.filesystem.join(dirname, filename)] = content
    390 
    391         add_file('inspector', 'test1.html')
    392         add_file('inspector', 'unsupported_test1.html')
    393         add_file('inspector', 'test2.html')
    394         add_file('inspector/resources', 'resource_file.html')
    395         add_file('unsupported', 'unsupported_test2.html')
    396         runner._port.skipped_perf_tests = lambda: ['inspector/unsupported_test1.html', 'unsupported']
     390        runner, port = self.create_runner()
     391
     392        self._add_file(runner, 'inspector', 'test1.html')
     393        self._add_file(runner, 'inspector', 'unsupported_test1.html')
     394        self._add_file(runner, 'inspector', 'test2.html')
     395        self._add_file(runner, 'inspector/resources', 'resource_file.html')
     396        self._add_file(runner, 'unsupported', 'unsupported_test2.html')
     397        port.skipped_perf_tests = lambda: ['inspector/unsupported_test1.html', 'unsupported']
    397398        self.assertEqual(self._collect_tests_and_sort_test_name(runner), ['inspector/test1.html', 'inspector/test2.html'])
    398399
    399400    def test_collect_tests_with_page_load_svg(self):
    400         runner = self.create_runner()
    401 
    402         def add_file(dirname, filename, content=True):
    403             dirname = runner._host.filesystem.join(runner._base_path, dirname) if dirname else runner._base_path
    404             runner._host.filesystem.maybe_make_directory(dirname)
    405             runner._host.filesystem.files[runner._host.filesystem.join(dirname, filename)] = content
    406 
    407         add_file('PageLoad', 'some-svg-test.svg')
     401        runner, port = self.create_runner()
     402        self._add_file(runner, 'PageLoad', 'some-svg-test.svg')
    408403        tests = runner._collect_tests()
    409404        self.assertEqual(len(tests), 1)
    410405        self.assertEqual(tests[0].__class__.__name__, 'PageLoadingPerfTest')
    411406
     407    def test_collect_tests_should_ignore_replay_tests_by_default(self):
     408        runner, port = self.create_runner()
     409        self._add_file(runner, 'Replay', 'www.webkit.org.replay')
     410        self.assertEqual(runner._collect_tests(), [])
     411
     412    def test_collect_tests_with_replay_tests(self):
     413        runner, port = self.create_runner(args=['--replay'])
     414        self._add_file(runner, 'Replay', 'www.webkit.org.replay')
     415        tests = runner._collect_tests()
     416        self.assertEqual(len(tests), 1)
     417        self.assertEqual(tests[0].__class__.__name__, 'ReplayPerfTest')
     418
    412419    def test_parse_args(self):
    413         runner = self.create_runner()
     420        runner, port = self.create_runner()
    414421        options, args = PerfTestsRunner._parse_args([
    415422                '--build-directory=folder42',
  • trunk/Tools/Scripts/webkitpy/thirdparty/__init__.py

    r116668 r119188  
    8383        elif '.buildbot' in fullname:
    8484            self._install_buildbot()
     85        elif '.webpagereplay' in fullname:
     86            self._install_webpagereplay()
    8587
    8688    def _install_mechanize(self):
     
    127129                          url_subpath="ircbot.py")
    128130
     131    def _install_webpagereplay(self):
     132        if not self._fs.exists(self._fs.join(_AUTOINSTALLED_DIR, "webpagereplay")):
     133            self._install("http://web-page-replay.googlecode.com/files/webpagereplay-1.1.1.tar.gz", "webpagereplay-1.1.1")
     134            self._fs.move(self._fs.join(_AUTOINSTALLED_DIR, "webpagereplay-1.1.1"), self._fs.join(_AUTOINSTALLED_DIR, "webpagereplay"))
     135
     136        init_path = self._fs.join(_AUTOINSTALLED_DIR, "webpagereplay", "__init__.py")
     137        if not self._fs.exists(init_path):
     138            self._fs.write_text_file(init_path, "")
     139
    129140    def _install(self, url, url_subpath):
    130141        installer = AutoInstaller(target_dir=_AUTOINSTALLED_DIR)