Changeset 119188 in webkit
- Timestamp: May 31, 2012, 8:19:31 PM
- Location: trunk
- Files: 3 added, 8 edited
trunk/PerformanceTests/ChangeLog
(r118899 → r119188; new entry added at the top)

2012-06-01  Ryosuke Niwa  <rniwa@webkit.org>

        Add public page loading performance tests using web-page-replay
        https://bugs.webkit.org/show_bug.cgi?id=84008

        Reviewed by Dirk Pranke.

        Add replay tests for google.com and youtube.com as examples.

        * Replay: Added.
        * Replay/www.google.com.replay: Added.
        * Replay/www.youtube.com.replay: Added.
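Each of the new .replay files is a plain text file whose first line is the URL to record and replay (see the Tools ChangeLog below). The contents of the new files are not reproduced in this changeset; www.google.com.replay presumably holds a single line along these lines:

    http://www.google.com/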
trunk/Tools/ChangeLog
(r119183 → r119188; new entry added at the top)

2012-06-01  Ryosuke Niwa  <rniwa@webkit.org>

        Add public page loading performance tests using web-page-replay
        https://bugs.webkit.org/show_bug.cgi?id=84008

        Reviewed by Dirk Pranke.

        Add the primitive implementation of replay performance tests. We use web-page-replay
        (http://code.google.com/p/web-page-replay/) to cache data locally. Each replay test is
        represented by a text file with the .replay extension containing a single URL. To hash
        out bugs and isolate them from the rest of the performance tests, replay tests are
        hidden behind the --replay flag.

        Run "run-perf-tests --replay PerformanceTests/Replay" after changing the system network
        preferences to forward HTTP and HTTPS requests to localhost:8080 and localhost:8443
        respectively (i.e. configure the system as if there were HTTP proxies at ports 8080 and
        8443), excluding *.webkit.org, *.googlecode.com, *.sourceforge.net, pypi.python.org,
        and www.adambarth.com for third-party Python dependencies. run-perf-tests starts
        web-page-replay, which provides HTTP proxies at ports 8080 and 8443 to replay pages.

        * Scripts/webkitpy/layout_tests/port/driver.py:
        (Driver.is_external_http_test): Added.
        * Scripts/webkitpy/layout_tests/port/webkit.py:
        (WebKitDriver._command_from_driver_input): Allow test names that start with http:// or https://.
        * Scripts/webkitpy/performance_tests/perftest.py:
        (PerfTest.__init__): Takes port.
        (PerfTest.prepare): Added. Overridden by ReplayPerfTest.
        (PerfTest):
        (PerfTest.run): Calls run_single.
        (PerfTest.run_single): Extracted from PageLoadingPerfTest.run.
        (ChromiumStylePerfTest.__init__):
        (PageLoadingPerfTest.__init__):
        (PageLoadingPerfTest.run):
        (ReplayServer): Added. Responsible for starting and stopping replay.py in web-page-replay.
        (ReplayServer.__init__):
        (ReplayServer.wait_until_ready): Wait until port 8080 is ready. I have tried looking at
        the piped output from web-page-replay, but it caused a deadlock on some web pages.
        (ReplayServer.stop):
        (ReplayServer.__del__):
        (ReplayPerfTest):
        (ReplayPerfTest.__init__):
        (ReplayPerfTest._start_replay_server):
        (ReplayPerfTest.prepare): Creates test.wpr and test-expected.png to cache the page when
        a replay test is run for the first time. Subsequent runs of the same test will just use
        test.wpr.
        (ReplayPerfTest.run_single):
        (PerfTestFactory):
        (PerfTestFactory.create_perf_test):
        * Scripts/webkitpy/performance_tests/perftest_unittest.py:
        (MainTest.test_parse_output):
        (MainTest.test_parse_output_with_failing_line):
        (TestPageLoadingPerfTest.test_run):
        (TestPageLoadingPerfTest.test_run_with_bad_output):
        (TestReplayPerfTest):
        (TestReplayPerfTest.ReplayTestPort):
        (TestReplayPerfTest.ReplayTestPort.__init__):
        (TestReplayPerfTest.ReplayTestPort.__init__.ReplayTestDriver):
        (TestReplayPerfTest.ReplayTestPort.__init__.ReplayTestDriver.run_test):
        (TestReplayPerfTest.ReplayTestPort._driver_class):
        (TestReplayPerfTest.MockReplayServer):
        (TestReplayPerfTest.MockReplayServer.__init__):
        (TestReplayPerfTest.MockReplayServer.stop):
        (TestReplayPerfTest._add_file):
        (TestReplayPerfTest._setup_test):
        (TestReplayPerfTest.test_run_single):
        (TestReplayPerfTest.test_run_single.run_test):
        (TestReplayPerfTest.test_run_single_fails_without_webpagereplay):
        (TestReplayPerfTest.test_prepare_fails_when_wait_until_ready_fails):
        (TestReplayPerfTest.test_run_single_fails_when_output_has_error):
        (TestReplayPerfTest.test_run_single_fails_when_output_has_error.run_test):
        (TestReplayPerfTest.test_prepare):
        (TestReplayPerfTest.test_prepare.run_test):
        (TestReplayPerfTest.test_prepare_calls_run_single):
        (TestReplayPerfTest.test_prepare_calls_run_single.run_single):
        (TestPerfTestFactory.test_regular_test):
        (TestPerfTestFactory.test_inspector_test):
        (TestPerfTestFactory.test_page_loading_test):
        * Scripts/webkitpy/performance_tests/perftestsrunner.py:
        (PerfTestsRunner):
        (PerfTestsRunner._parse_args): Added --replay flag to enable replay tests.
        (PerfTestsRunner._collect_tests): Collect .replay files when replay tests are enabled.
        (PerfTestsRunner._collect_tests._is_test_file):
        (PerfTestsRunner.run): Exit early if one of the calls to prepare() fails.
        * Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py:
        (create_runner):
        (run_test):
        (_tests_for_runner):
        (test_run_test_set):
        (test_run_test_set_kills_drt_per_run):
        (test_run_test_pause_before_testing):
        (test_run_test_set_for_parser_tests):
        (test_run_test_set_with_json_output):
        (test_run_test_set_with_json_source):
        (test_run_test_set_with_multiple_repositories):
        (test_run_with_upload_json):
        (test_upload_json):
        (test_upload_json.MockFileUploader.upload_single_text_file):
        (_add_file):
        (test_collect_tests):
        (test_collect_tests_with_multile_files):
        (test_collect_tests_with_multile_files.add_file):
        (test_collect_tests_with_skipped_list):
        (test_collect_tests_with_page_load_svg):
        (test_collect_tests_should_ignore_replay_tests_by_default):
        (test_collect_tests_with_replay_tests):
        (test_parse_args):
        * Scripts/webkitpy/thirdparty/__init__.py: Added the dependency on web-page-replay version 1.1.1.
        (AutoinstallImportHook.find_module):
        (AutoinstallImportHook._install_webpagereplay):
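Concretely, once the system proxy settings described above are in place, the intended invocation should look like this (path relative to a WebKit checkout; run-perf-tests lives under Tools/Scripts):

    Tools/Scripts/run-perf-tests --replay PerformanceTests/Replay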
trunk/Tools/Scripts/webkitpy/layout_tests/port/webkit.py
(r119018 → r119188)

     def _command_from_driver_input(self, driver_input):
-        if self.is_http_test(driver_input.test_name):
+        # FIXME: performance tests pass in full URLs instead of test names.
+        if driver_input.test_name.startswith('http://') or driver_input.test_name.startswith('https://'):
+            command = driver_input.test_name
+        elif self.is_http_test(driver_input.test_name):
             command = self.test_to_uri(driver_input.test_name)
         else:
...
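This branch is what lets a replay test hand the driver a full URL as its "test name". For illustration, a DriverInput constructed the way the new perftest.py code constructs one (the URL here is hypothetical):

    from webkitpy.layout_tests.port.driver import DriverInput

    # A full URL passes straight through _command_from_driver_input above,
    # so the driver loads it via the web-page-replay proxy instead of
    # mapping it onto the local test web server.
    driver_input = DriverInput('http://www.example.com/', 100,
                               image_hash=None, should_run_pixel_test=True)
    # output = driver.run_test(driver_input)  # 'driver' comes from port.create_driver(...)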
trunk/Tools/Scripts/webkitpy/performance_tests/perftest.py
(r117422 → r119188)

+import errno
 import logging
 import math
 import re
 import os
+import signal
+import socket
+import subprocess
+import time
+
+# Import for auto-install
+import webkitpy.thirdparty.autoinstalled.webpagereplay.replay
+
+from webkitpy.layout_tests.controllers.test_result_writer import TestResultWriter
 from webkitpy.layout_tests.port.driver import DriverInput
+from webkitpy.layout_tests.port.driver import DriverOutput
...
 class PerfTest(object):
-    def __init__(self, test_name, path_or_url):
+    def __init__(self, port, test_name, path_or_url):
+        self._port = port
         self._test_name = test_name
         self._path_or_url = path_or_url
...
         return self._path_or_url

-    def run(self, driver, timeout_ms):
-        output = driver.run_test(DriverInput(self.path_or_url(), timeout_ms, None, False))
+    def prepare(self, time_out_ms):
+        return True
+
+    def run(self, driver, time_out_ms):
+        output = self.run_single(driver, self.path_or_url(), time_out_ms)
         if self.run_failed(output):
             return None
         return self.parse_output(output)
+
+    def run_single(self, driver, path_or_url, time_out_ms, should_run_pixel_test=False):
+        return driver.run_test(DriverInput(path_or_url, time_out_ms, image_hash=None, should_run_pixel_test=should_run_pixel_test))

     def run_failed(self, output):
...
     _chromium_style_result_regex = re.compile(r'^RESULT\s+(?P<name>[^=]+)\s*=\s+(?P<value>\d+(\.\d+)?)\s*(?P<unit>\w+)$')

-    def __init__(self, test_name, path_or_url):
-        super(ChromiumStylePerfTest, self).__init__(test_name, path_or_url)
+    def __init__(self, port, test_name, path_or_url):
+        super(ChromiumStylePerfTest, self).__init__(port, test_name, path_or_url)

     def parse_output(self, output):
...
 class PageLoadingPerfTest(PerfTest):
-    def __init__(self, test_name, path_or_url):
-        super(PageLoadingPerfTest, self).__init__(test_name, path_or_url)
-
-    def run(self, driver, timeout_ms):
+    def __init__(self, port, test_name, path_or_url):
+        super(PageLoadingPerfTest, self).__init__(port, test_name, path_or_url)
+
+    def run(self, driver, time_out_ms):
         test_times = []

         for i in range(0, 20):
-            output = driver.run_test(DriverInput(self.path_or_url(), timeout_ms, None, False))
-            if self.run_failed(output):
+            output = self.run_single(driver, self.path_or_url(), time_out_ms)
+            if not output or self.run_failed(output):
                 return None
             if i == 0:
...
+class ReplayServer(object):
+    def __init__(self, archive, record):
+        self._process = None
+
+        # FIXME: Should error if local proxy isn't set to forward requests to localhost:8080 and localhost:8413
+
+        replay_path = webkitpy.thirdparty.autoinstalled.webpagereplay.replay.__file__
+        args = ['python', replay_path, '--no-dns_forwarding', '--port', '8080', '--ssl_port', '8413', '--use_closest_match', '--log_level', 'warning']
+        if record:
+            args.append('--record')
+        args.append(archive)
+
+        self._process = subprocess.Popen(args)
+
+    def wait_until_ready(self):
+        for i in range(0, 10):
+            try:
+                connection = socket.create_connection(('localhost', '8080'), timeout=1)
+                connection.close()
+                return True
+            except socket.error:
+                time.sleep(1)
+                continue
+        return False
+
+    def stop(self):
+        if self._process:
+            self._process.send_signal(signal.SIGINT)
+            self._process.wait()
+        self._process = None
+
+    def __del__(self):
+        self.stop()
+
+
+class ReplayPerfTest(PageLoadingPerfTest):
+    def __init__(self, port, test_name, path_or_url):
+        super(ReplayPerfTest, self).__init__(port, test_name, path_or_url)
+
+    def _start_replay_server(self, archive, record):
+        try:
+            return ReplayServer(archive, record)
+        except OSError as error:
+            if error.errno == errno.ENOENT:
+                _log.error("Replay tests require web-page-replay.")
+            else:
+                raise error
+
+    def prepare(self, time_out_ms):
+        filesystem = self._port.host.filesystem
+        path_without_ext = filesystem.splitext(self.path_or_url())[0]
+
+        self._archive_path = filesystem.join(path_without_ext + '.wpr')
+        self._expected_image_path = filesystem.join(path_without_ext + '-expected.png')
+        self._url = filesystem.read_text_file(self.path_or_url()).split('\n')[0]
+
+        if filesystem.isfile(self._archive_path) and filesystem.isfile(self._expected_image_path):
+            _log.info("Replay ready for %s" % self._archive_path)
+            return True
+
+        _log.info("Preparing replay for %s" % self.test_name())
+
+        driver = self._port.create_driver(worker_number=1, no_timeout=True)
+        try:
+            output = self.run_single(driver, self._url, time_out_ms, record=True)
+        finally:
+            driver.stop()
+
+        if not output or not filesystem.isfile(self._archive_path):
+            _log.error("Failed to prepare a replay for %s" % self.test_name())
+            return False
+
+        _log.info("Prepared replay for %s" % self.test_name())
+
+        return True
+
+    def run_single(self, driver, url, time_out_ms, record=False):
+        server = self._start_replay_server(self._archive_path, record)
+        if not server:
+            _log.error("Web page replay didn't start.")
+            return None
+
+        try:
+            if not server.wait_until_ready():
+                _log.error("Web page replay didn't start.")
+                return None
+
+            super(ReplayPerfTest, self).run_single(driver, "about:blank", time_out_ms)
+            _log.debug("Loading the page")
+
+            output = super(ReplayPerfTest, self).run_single(driver, self._url, time_out_ms, should_run_pixel_test=True)
+            if self.run_failed(output):
+                return None
+
+            if not output.image:
+                _log.error("Loading the page did not generate image results")
+                _log.error(output.text)
+                return None
+
+            filesystem = self._port.host.filesystem
+            dirname = filesystem.dirname(url)
+            filename = filesystem.split(url)[1]
+            writer = TestResultWriter(filesystem, self._port, dirname, filename)
+            if record:
+                writer.write_image_files(actual_image=None, expected_image=output.image)
+            else:
+                writer.write_image_files(actual_image=output.image, expected_image=None)
+
+            return output
+        finally:
+            server.stop()
+
+
 class PerfTestFactory(object):

     _pattern_map = [
-        (re.compile('^inspector/'), ChromiumStylePerfTest),
-        (re.compile('^PageLoad/'), PageLoadingPerfTest),
+        (re.compile(r'^inspector/'), ChromiumStylePerfTest),
+        (re.compile(r'^PageLoad/'), PageLoadingPerfTest),
+        (re.compile(r'(.+)\.replay$'), ReplayPerfTest),
     ]

     @classmethod
-    def create_perf_test(cls, test_name, path):
+    def create_perf_test(cls, port, test_name, path):
         for (pattern, test_class) in cls._pattern_map:
             if pattern.match(test_name):
-                return test_class(test_name, path)
-        return PerfTest(test_name, path)
+                return test_class(port, test_name, path)
+        return PerfTest(port, test_name, path)
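The new ReplayServer wraps the replay.py subprocess as a start / wait / stop resource, and readiness is detected by polling port 8080 rather than by reading the child's piped output (which, per the ChangeLog, could deadlock). A minimal sketch of the lifecycle as run_single drives it, with an illustrative archive path:

    from webkitpy.performance_tests.perftest import ReplayServer

    # Launch replay.py in record mode against a .wpr archive (hypothetical path).
    server = ReplayServer('/tmp/www.example.com.wpr', record=True)
    try:
        # Polls localhost:8080 once a second, up to ten times.
        if server.wait_until_ready():
            pass  # drive the browser here; HTTP/HTTPS goes through ports 8080/8413
    finally:
        server.stop()  # SIGINT lets web-page-replay write the archive out cleanly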
trunk/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py
(r115466 → r119188)

 import unittest

+from webkitpy.common.host_mock import MockHost
 from webkitpy.common.system.outputcapture import OutputCapture
 from webkitpy.layout_tests.port.driver import DriverOutput
+from webkitpy.layout_tests.port.test import TestDriver
+from webkitpy.layout_tests.port.test import TestPort
 from webkitpy.performance_tests.perftest import ChromiumStylePerfTest
 from webkitpy.performance_tests.perftest import PageLoadingPerfTest
 from webkitpy.performance_tests.perftest import PerfTest
 from webkitpy.performance_tests.perftest import PerfTestFactory
+from webkitpy.performance_tests.perftest import ReplayPerfTest
...
         output_capture.capture_output()
         try:
-            test = PerfTest('some-test', '/path/some-dir/some-test')
+            test = PerfTest(None, 'some-test', '/path/some-dir/some-test')
             self.assertEqual(test.parse_output(output),
                 {'some-test': {'avg': 1100.0, 'median': 1101.0, 'min': 1080.0, 'max': 1120.0, 'stdev': 11.0, 'unit': 'ms'}})
...
         output_capture.capture_output()
         try:
-            test = PerfTest('some-test', '/path/some-dir/some-test')
+            test = PerfTest(None, 'some-test', '/path/some-dir/some-test')
             self.assertEqual(test.parse_output(output), None)
         finally:
...
     def test_run(self):
-        test = PageLoadingPerfTest('some-test', '/path/some-dir/some-test')
+        test = PageLoadingPerfTest(None, 'some-test', '/path/some-dir/some-test')
         driver = TestPageLoadingPerfTest.MockDriver([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20])
         output_capture = OutputCapture()
...
         output_capture.capture_output()
         try:
-            test = PageLoadingPerfTest('some-test', '/path/some-dir/some-test')
+            test = PageLoadingPerfTest(None, 'some-test', '/path/some-dir/some-test')
             driver = TestPageLoadingPerfTest.MockDriver([1, 2, 3, 4, 5, 6, 7, 'some error', 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20])
             self.assertEqual(test.run(driver, None), None)
...
+class TestReplayPerfTest(unittest.TestCase):
+
+    class ReplayTestPort(TestPort):
+        def __init__(self, custom_run_test=None):
+
+            class ReplayTestDriver(TestDriver):
+                def run_test(self, text_input):
+                    return custom_run_test(text_input) if custom_run_test else None
+
+            self._custom_driver_class = ReplayTestDriver
+            super(self.__class__, self).__init__(host=MockHost())
+
+        def _driver_class(self):
+            return self._custom_driver_class
+
+    class MockReplayServer(object):
+        def __init__(self, wait_until_ready=True):
+            self.wait_until_ready = lambda: wait_until_ready
+
+        def stop(self):
+            pass
+
+    def _add_file(self, port, dirname, filename, content=True):
+        port.host.filesystem.maybe_make_directory(dirname)
+        port.host.filesystem.files[port.host.filesystem.join(dirname, filename)] = content
+
+    def _setup_test(self, run_test=None):
+        test_port = self.ReplayTestPort(run_test)
+        self._add_file(test_port, '/path/some-dir', 'some-test.replay', 'http://some-test/')
+        test = ReplayPerfTest(test_port, 'some-test.replay', '/path/some-dir/some-test.replay')
+        test._start_replay_server = lambda archive, record: self.__class__.MockReplayServer()
+        return test, test_port
+
+    def test_run_single(self):
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+
+        loaded_pages = []
+
+        def run_test(test_input):
+            if test_input.test_name != "about:blank":
+                self.assertEqual(test_input.test_name, 'http://some-test/')
+            loaded_pages.append(test_input)
+            self._add_file(port, '/path/some-dir', 'some-test.wpr', 'wpr content')
+            return DriverOutput('actual text', 'actual image', 'actual checksum',
+                audio=None, crash=False, timeout=False, error=False)
+
+        test, port = self._setup_test(run_test)
+        test._archive_path = '/path/some-dir/some-test.wpr'
+        test._url = 'http://some-test/'
+
+        try:
+            driver = port.create_driver(worker_number=1, no_timeout=True)
+            self.assertTrue(test.run_single(driver, '/path/some-dir/some-test.replay', time_out_ms=100))
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
+
+        self.assertEqual(len(loaded_pages), 2)
+        self.assertEqual(loaded_pages[0].test_name, 'about:blank')
+        self.assertEqual(loaded_pages[1].test_name, 'http://some-test/')
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, '')
+
+    def test_run_single_fails_without_webpagereplay(self):
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+
+        test, port = self._setup_test()
+        test._start_replay_server = lambda archive, record: None
+        test._archive_path = '/path/some-dir.wpr'
+        test._url = 'http://some-test/'
+
+        try:
+            driver = port.create_driver(worker_number=1, no_timeout=True)
+            self.assertEqual(test.run_single(driver, '/path/some-dir/some-test.replay', time_out_ms=100), None)
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, "Web page replay didn't start.\n")
+
+    def test_prepare_fails_when_wait_until_ready_fails(self):
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+
+        test, port = self._setup_test()
+        test._start_replay_server = lambda archive, record: self.__class__.MockReplayServer(wait_until_ready=False)
+        test._archive_path = '/path/some-dir.wpr'
+        test._url = 'http://some-test/'
+
+        try:
+            driver = port.create_driver(worker_number=1, no_timeout=True)
+            self.assertEqual(test.run_single(driver, '/path/some-dir/some-test.replay', time_out_ms=100), None)
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
+
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, "Web page replay didn't start.\n")
+
+    def test_run_single_fails_when_output_has_error(self):
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+
+        loaded_pages = []
+
+        def run_test(test_input):
+            loaded_pages.append(test_input)
+            self._add_file(port, '/path/some-dir', 'some-test.wpr', 'wpr content')
+            return DriverOutput('actual text', 'actual image', 'actual checksum',
+                audio=None, crash=False, timeout=False, error='some error')
+
+        test, port = self._setup_test(run_test)
+        test._archive_path = '/path/some-dir.wpr'
+        test._url = 'http://some-test/'
+
+        try:
+            driver = port.create_driver(worker_number=1, no_timeout=True)
+            self.assertEqual(test.run_single(driver, '/path/some-dir/some-test.replay', time_out_ms=100), None)
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
+
+        self.assertEqual(len(loaded_pages), 2)
+        self.assertEqual(loaded_pages[0].test_name, 'about:blank')
+        self.assertEqual(loaded_pages[1].test_name, 'http://some-test/')
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, 'error: some-test.replay\nsome error\n')
+
+    def test_prepare(self):
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+
+        def run_test(test_input):
+            self._add_file(port, '/path/some-dir', 'some-test.wpr', 'wpr content')
+            return DriverOutput('actual text', 'actual image', 'actual checksum',
+                audio=None, crash=False, timeout=False, error=False)
+
+        test, port = self._setup_test(run_test)
+
+        try:
+            self.assertEqual(test.prepare(time_out_ms=100), True)
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
+
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, 'Preparing replay for some-test.replay\nPrepared replay for some-test.replay\n')
+
+    def test_prepare_calls_run_single(self):
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+        called = [False]
+
+        def run_single(driver, url, time_out_ms, record):
+            self.assertTrue(record)
+            self.assertEqual(url, 'http://some-test/')
+            called[0] = True
+            return False
+
+        test, port = self._setup_test()
+        test.run_single = run_single
+
+        try:
+            self.assertEqual(test.prepare(time_out_ms=100), False)
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
+        self.assertTrue(called[0])
+        self.assertEqual(test._archive_path, '/path/some-dir/some-test.wpr')
+        self.assertEqual(test._url, 'http://some-test/')
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, "Preparing replay for some-test.replay\nFailed to prepare a replay for some-test.replay\n")
+
 class TestPerfTestFactory(unittest.TestCase):
     def test_regular_test(self):
-        test = PerfTestFactory.create_perf_test('some-dir/some-test', '/path/some-dir/some-test')
+        test = PerfTestFactory.create_perf_test(None, 'some-dir/some-test', '/path/some-dir/some-test')
         self.assertEqual(test.__class__, PerfTest)

     def test_inspector_test(self):
-        test = PerfTestFactory.create_perf_test('inspector/some-test', '/path/inspector/some-test')
+        test = PerfTestFactory.create_perf_test(None, 'inspector/some-test', '/path/inspector/some-test')
         self.assertEqual(test.__class__, ChromiumStylePerfTest)

     def test_page_loading_test(self):
-        test = PerfTestFactory.create_perf_test('PageLoad/some-test', '/path/PageLoad/some-test')
+        test = PerfTestFactory.create_perf_test(None, 'PageLoad/some-test', '/path/PageLoad/some-test')
         self.assertEqual(test.__class__, PageLoadingPerfTest)
trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py
(r115466 → r119188)

 from webkitpy.layout_tests.views import printing
 from webkitpy.performance_tests.perftest import PerfTestFactory
+from webkitpy.performance_tests.perftest import ReplayPerfTest
...
     _EXIT_CODE_BAD_JSON = -2
     _EXIT_CODE_FAILED_UPLOADING = -3
+    _EXIT_CODE_BAD_PREPARATION = -4

     def __init__(self, args=None, port=None):
...
                 help="Pause before running the tests to let user attach a performance monitor."),
             optparse.make_option("--output-json-path",
-                help="Filename of the JSON file that summaries the results"),
+                help="Filename of the JSON file that summaries the results."),
             optparse.make_option("--source-json-path",
-                help="Path to a JSON file to be merged into the JSON file when --output-json-path is present"),
+                help="Path to a JSON file to be merged into the JSON file when --output-json-path is present."),
             optparse.make_option("--test-results-server",
-                help="Upload the generated JSON file to the specified server when --output-json-path is present"),
+                help="Upload the generated JSON file to the specified server when --output-json-path is present."),
             optparse.make_option("--webkit-test-runner", "-2", action="store_true",
                 help="Use WebKitTestRunner rather than DumpRenderTree."),
+            optparse.make_option("--replay", dest="replay", action="store_true", default=False,
+                help="Run replay tests."),
         ]
         return optparse.OptionParser(option_list=(perf_option_list)).parse_args(args)
...
         """Return the list of tests found."""

+        test_extensions = ['.html', '.svg']
+        if self._options.replay:
+            test_extensions.append('.replay')
+
         def _is_test_file(filesystem, dirname, filename):
-            return filesystem.splitext(filename)[1] in ['.html', '.svg']
+            return filesystem.splitext(filename)[1] in test_extensions

         filesystem = self._host.filesystem
...
             if self._port.skips_perf_test(relative_path):
                 continue
-            tests.append(PerfTestFactory.create_perf_test(relative_path, path))
+            test = PerfTestFactory.create_perf_test(self._port, relative_path, path)
+            tests.append(test)

         return tests
...
             return self._EXIT_CODE_BAD_BUILD

-        # We wrap any parts of the run that are slow or likely to raise exceptions
-        # in a try/finally to ensure that we clean up the logging configuration.
-        unexpected = -1
         tests = self._collect_tests()
+        _log.info("Running %d tests" % len(tests))
+
+        for test in tests:
+            if not test.prepare(self._options.time_out_ms):
+                return self._EXIT_CODE_BAD_PREPARATION
+
         unexpected = self._run_tests_set(sorted(list(tests), key=lambda test: test.test_name()), self._port)
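The net effect of the run() change is a new phase: every collected test's prepare() completes before any timing pass begins, and for replay tests this is where the .wpr archive gets recorded. A condensed restatement of the new control flow (build check and result reporting elided):

    def run(self):
        # ... build check elided; returns _EXIT_CODE_BAD_BUILD on failure ...
        tests = self._collect_tests()  # includes .replay files only when --replay was given
        _log.info("Running %d tests" % len(tests))
        for test in tests:
            # ReplayPerfTest.prepare() records the archive on first use.
            if not test.prepare(self._options.time_out_ms):
                return self._EXIT_CODE_BAD_PREPARATION  # exit early on the first failure
        unexpected = self._run_tests_set(sorted(tests, key=lambda test: test.test_name()), self._port)
        # ... JSON output and upload handling elided ...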
trunk/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py
(r115466 → r119188)

         runner._host.filesystem.maybe_make_directory(runner._base_path, 'Bindings')
         runner._host.filesystem.maybe_make_directory(runner._base_path, 'Parser')
-        return runner
+        return runner, test_port

     def run_test(self, test_name):
-        runner = self.create_runner()
+        runner, port = self.create_runner()
         driver = MainTest.TestDriver()
-        return runner._run_single_test(ChromiumStylePerfTest(test_name, runner._host.filesystem.join('some-dir', test_name)), driver)
+        return runner._run_single_test(ChromiumStylePerfTest(port, test_name, runner._host.filesystem.join('some-dir', test_name)), driver)

     def test_run_passing_test(self):
...
         dirname = filesystem.dirname(path)
         if test.startswith('inspector/'):
-            tests.append(ChromiumStylePerfTest(test, path))
+            tests.append(ChromiumStylePerfTest(runner._port, test, path))
         else:
-            tests.append(PerfTest(test, path))
+            tests.append(PerfTest(runner._port, test, path))
         return tests

     def test_run_test_set(self):
-        runner = self.create_runner()
+        runner, port = self.create_runner()
         tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
             'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
         output = OutputCapture()
         output.capture_output()
         try:
-            unexpected_result_count = runner._run_tests_set(tests, runner._port)
+            unexpected_result_count = runner._run_tests_set(tests, port)
         finally:
             stdout, stderr, log = output.restore_output()
...
             TestDriverWithStopCount.stop_count += 1

-        runner = self.create_runner(driver_class=TestDriverWithStopCount)
+        runner, port = self.create_runner(driver_class=TestDriverWithStopCount)

         tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
             'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
-        unexpected_result_count = runner._run_tests_set(tests, runner._port)
+        unexpected_result_count = runner._run_tests_set(tests, port)

         self.assertEqual(TestDriverWithStopCount.stop_count, 6)
...
             TestDriverWithStartCount.start_count += 1

-        runner = self.create_runner(args=["--pause-before-testing"], driver_class=TestDriverWithStartCount)
+        runner, port = self.create_runner(args=["--pause-before-testing"], driver_class=TestDriverWithStartCount)
         tests = self._tests_for_runner(runner, ['inspector/pass.html'])

         output = OutputCapture()
         output.capture_output()
         try:
-            unexpected_result_count = runner._run_tests_set(tests, runner._port)
+            unexpected_result_count = runner._run_tests_set(tests, port)
             self.assertEqual(TestDriverWithStartCount.start_count, 1)
         finally:
...
     def test_run_test_set_for_parser_tests(self):
-        runner = self.create_runner()
+        runner, port = self.create_runner()
         tests = self._tests_for_runner(runner, ['Bindings/event-target-wrapper.html', 'Parser/some-parser.html'])
         output = OutputCapture()
         output.capture_output()
         try:
-            unexpected_result_count = runner._run_tests_set(tests, runner._port)
+            unexpected_result_count = runner._run_tests_set(tests, port)
         finally:
             stdout, stderr, log = output.restore_output()
...
     def test_run_test_set_with_json_output(self):
-        runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json'])
-        runner._host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
-        runner._host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
+        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json'])
+        port.host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
+        port.host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
         runner._timestamp = 123456789
         output_capture = OutputCapture()
...
         self.assertEqual(logs,
-            '\n'.join(['Running Bindings/event-target-wrapper.html (1 of 2)',
+            '\n'.join(['Running 2 tests',
+                'Running Bindings/event-target-wrapper.html (1 of 2)',
                 'RESULT Bindings: event-target-wrapper= 1489.05 ms',
                 'median= 1487.0 ms, stdev= 14.46 ms, min= 1471.0 ms, max= 1510.0 ms',
...
                 '', '']))

-        self.assertEqual(json.loads(runner._host.filesystem.files['/mock-checkout/output.json']), {
+        self.assertEqual(json.loads(port.host.filesystem.files['/mock-checkout/output.json']), {
             "timestamp": 123456789, "results":
             {"Bindings/event-target-wrapper": {"max": 1510, "avg": 1489.05, "median": 1487, "min": 1471, "stdev": 14.46, "unit": "ms"},
...
     def test_run_test_set_with_json_source(self):
-        runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json', '--source-json-path=/mock-checkout/source.json'])
-        runner._host.filesystem.files['/mock-checkout/source.json'] = '{"key": "value"}'
-        runner._host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
-        runner._host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
+        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json', '--source-json-path=/mock-checkout/source.json'])
+        port.host.filesystem.files['/mock-checkout/source.json'] = '{"key": "value"}'
+        port.host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
+        port.host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
         runner._timestamp = 123456789
         output_capture = OutputCapture()
...
         stdout, stderr, logs = output_capture.restore_output()

-        self.assertEqual(logs, '\n'.join(['Running Bindings/event-target-wrapper.html (1 of 2)',
+        self.assertEqual(logs, '\n'.join(['Running 2 tests',
+            'Running Bindings/event-target-wrapper.html (1 of 2)',
             'RESULT Bindings: event-target-wrapper= 1489.05 ms',
             'median= 1487.0 ms, stdev= 14.46 ms, min= 1471.0 ms, max= 1510.0 ms',
...
             '', '']))

-        self.assertEqual(json.loads(runner._host.filesystem.files['/mock-checkout/output.json']), {
+        self.assertEqual(json.loads(port.host.filesystem.files['/mock-checkout/output.json']), {
             "timestamp": 123456789, "results":
             {"Bindings/event-target-wrapper": {"max": 1510, "avg": 1489.05, "median": 1487, "min": 1471, "stdev": 14.46, "unit": "ms"},
...
     def test_run_test_set_with_multiple_repositories(self):
-        runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json'])
-        runner._host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
+        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json'])
+        port.host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
         runner._timestamp = 123456789
-        runner._port.repository_paths = lambda: [('webkit', '/mock-checkout'), ('some', '/mock-checkout/some')]
+        port.repository_paths = lambda: [('webkit', '/mock-checkout'), ('some', '/mock-checkout/some')]
         self.assertEqual(runner.run(), 0)
-        self.assertEqual(json.loads(runner._host.filesystem.files['/mock-checkout/output.json']), {
+        self.assertEqual(json.loads(port.host.filesystem.files['/mock-checkout/output.json']), {
             "timestamp": 123456789, "results": {"inspector/pass.html:group_name:test_name": 42.0}, "webkit-revision": 5678, "some-revision": 5678})

     def test_run_with_upload_json(self):
-        runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
+        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
             '--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123'])
         upload_json_is_called = [False]
...
         runner._upload_json = mock_upload_json
-        runner._host.filesystem.files['/mock-checkout/source.json'] = '{"key": "value"}'
-        runner._host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
-        runner._host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
+        port.host.filesystem.files['/mock-checkout/source.json'] = '{"key": "value"}'
+        port.host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
+        port.host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
         runner._timestamp = 123456789
         self.assertEqual(runner.run(), 0)
         self.assertEqual(upload_json_is_called[0], True)
-        generated_json = json.loads(runner._host.filesystem.files['/mock-checkout/output.json'])
+        generated_json = json.loads(port.host.filesystem.files['/mock-checkout/output.json'])
         self.assertEqual(generated_json['platform'], 'platform1')
         self.assertEqual(generated_json['builder-name'], 'builder1')
...
         upload_json_returns_true = False

-        runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
+        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
             '--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123'])
         runner._upload_json = mock_upload_json
...
     def test_upload_json(self):
-        runner = self.create_runner()
-        runner._host.filesystem.files['/mock-checkout/some.json'] = 'some content'
+        runner, port = self.create_runner()
+        port.host.filesystem.files['/mock-checkout/some.json'] = 'some content'

         called = []
...
         def upload_single_text_file(mock, filesystem, content_type, filename):
-            self.assertEqual(filesystem, runner._host.filesystem)
+            self.assertEqual(filesystem, port.host.filesystem)
             self.assertEqual(content_type, 'application/json')
             self.assertEqual(filename, 'some.json')
...
         self.assertEqual(called, ['FileUploader', 'upload_single_text_file'])

+    def _add_file(self, runner, dirname, filename, content=True):
+        dirname = runner._host.filesystem.join(runner._base_path, dirname) if dirname else runner._base_path
+        runner._host.filesystem.maybe_make_directory(dirname)
+        runner._host.filesystem.files[runner._host.filesystem.join(dirname, filename)] = content
+
     def test_collect_tests(self):
-        runner = self.create_runner()
-        filename = runner._host.filesystem.join(runner._base_path, 'inspector', 'a_file.html')
-        runner._host.filesystem.files[filename] = 'a content'
+        runner, port = self.create_runner()
+        self._add_file(runner, 'inspector', 'a_file.html', 'a content')
         tests = runner._collect_tests()
         self.assertEqual(len(tests), 1)
...
         return sorted([test.test_name() for test in runner._collect_tests()])

-    def test_collect_tests(self):
-        runner = self.create_runner(args=['PerformanceTests/test1.html', 'test2.html'])
+    def test_collect_tests_with_multile_files(self):
+        runner, port = self.create_runner(args=['PerformanceTests/test1.html', 'test2.html'])

         def add_file(filename):
-            runner._host.filesystem.files[runner._host.filesystem.join(runner._base_path, filename)] = 'some content'
+            port.host.filesystem.files[runner._host.filesystem.join(runner._base_path, filename)] = 'some content'

         add_file('test1.html')
         add_file('test2.html')
         add_file('test3.html')
-        runner._host.filesystem.chdir(runner._port.perf_tests_dir()[:runner._port.perf_tests_dir().rfind(runner._host.filesystem.sep)])
+        port.host.filesystem.chdir(runner._port.perf_tests_dir()[:runner._port.perf_tests_dir().rfind(runner._host.filesystem.sep)])
         self.assertEqual(self._collect_tests_and_sort_test_name(runner), ['test1.html', 'test2.html'])

     def test_collect_tests_with_skipped_list(self):
-        runner = self.create_runner()
-
-        def add_file(dirname, filename, content=True):
-            dirname = runner._host.filesystem.join(runner._base_path, dirname) if dirname else runner._base_path
-            runner._host.filesystem.maybe_make_directory(dirname)
-            runner._host.filesystem.files[runner._host.filesystem.join(dirname, filename)] = content
-
-        add_file('inspector', 'test1.html')
-        add_file('inspector', 'unsupported_test1.html')
-        add_file('inspector', 'test2.html')
-        add_file('inspector/resources', 'resource_file.html')
-        add_file('unsupported', 'unsupported_test2.html')
-        runner._port.skipped_perf_tests = lambda: ['inspector/unsupported_test1.html', 'unsupported']
+        runner, port = self.create_runner()
+
+        self._add_file(runner, 'inspector', 'test1.html')
+        self._add_file(runner, 'inspector', 'unsupported_test1.html')
+        self._add_file(runner, 'inspector', 'test2.html')
+        self._add_file(runner, 'inspector/resources', 'resource_file.html')
+        self._add_file(runner, 'unsupported', 'unsupported_test2.html')
+        port.skipped_perf_tests = lambda: ['inspector/unsupported_test1.html', 'unsupported']
         self.assertEqual(self._collect_tests_and_sort_test_name(runner), ['inspector/test1.html', 'inspector/test2.html'])

     def test_collect_tests_with_page_load_svg(self):
-        runner = self.create_runner()
-
-        def add_file(dirname, filename, content=True):
-            dirname = runner._host.filesystem.join(runner._base_path, dirname) if dirname else runner._base_path
-            runner._host.filesystem.maybe_make_directory(dirname)
-            runner._host.filesystem.files[runner._host.filesystem.join(dirname, filename)] = content
-
-        add_file('PageLoad', 'some-svg-test.svg')
+        runner, port = self.create_runner()
+        self._add_file(runner, 'PageLoad', 'some-svg-test.svg')
         tests = runner._collect_tests()
         self.assertEqual(len(tests), 1)
         self.assertEqual(tests[0].__class__.__name__, 'PageLoadingPerfTest')

+    def test_collect_tests_should_ignore_replay_tests_by_default(self):
+        runner, port = self.create_runner()
+        self._add_file(runner, 'Replay', 'www.webkit.org.replay')
+        self.assertEqual(runner._collect_tests(), [])
+
+    def test_collect_tests_with_replay_tests(self):
+        runner, port = self.create_runner(args=['--replay'])
+        self._add_file(runner, 'Replay', 'www.webkit.org.replay')
+        tests = runner._collect_tests()
+        self.assertEqual(len(tests), 1)
+        self.assertEqual(tests[0].__class__.__name__, 'ReplayPerfTest')
+
     def test_parse_args(self):
-        runner = self.create_runner()
+        runner, port = self.create_runner()
         options, args = PerfTestsRunner._parse_args([
             '--build-directory=folder42',
trunk/Tools/Scripts/webkitpy/thirdparty/__init__.py
(r116668 → r119188)

         elif '.buildbot' in fullname:
             self._install_buildbot()
+        elif '.webpagereplay' in fullname:
+            self._install_webpagereplay()

     def _install_mechanize(self):
...
             url_subpath="ircbot.py")

+    def _install_webpagereplay(self):
+        if not self._fs.exists(self._fs.join(_AUTOINSTALLED_DIR, "webpagereplay")):
+            self._install("http://web-page-replay.googlecode.com/files/webpagereplay-1.1.1.tar.gz", "webpagereplay-1.1.1")
+            self._fs.move(self._fs.join(_AUTOINSTALLED_DIR, "webpagereplay-1.1.1"), self._fs.join(_AUTOINSTALLED_DIR, "webpagereplay"))
+
+        init_path = self._fs.join(_AUTOINSTALLED_DIR, "webpagereplay", "__init__.py")
+        if not self._fs.exists(init_path):
+            self._fs.write_text_file(init_path, "")
+
     def _install(self, url, url_subpath):
         installer = AutoInstaller(target_dir=_AUTOINSTALLED_DIR)
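With this hook in place, the dependency is self-installing: the first import below downloads and unpacks webpagereplay-1.1.1, and the generated __init__.py makes the unpacked directory importable as a package. This is exactly what the "Import for auto-install" line in perftest.py relies on, both to trigger installation and to locate the replay.py script that ReplayServer launches:

    # The import itself fires AutoinstallImportHook.find_module, which calls
    # _install_webpagereplay() the first time the package is missing.
    import webkitpy.thirdparty.autoinstalled.webpagereplay.replay

    # ReplayServer then resolves the script to run from the module's file path.
    replay_path = webkitpy.thirdparty.autoinstalled.webpagereplay.replay.__file__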