Changes between Version 5 and Version 6 of TestExpectations


Timestamp: Jul 7, 2011, 3:55:12 PM
Author: dpranke@chromium.org

 * Triaging test failures

== How we manage tests that fail ==

The primary function of the LayoutTests is as a *regression test suite*. This means that, while we care about whether a page is being rendered correctly, we care more about whether the page is being rendered the way we expect it to. In other words, we look more for changes in behavior than we do for correctness.

All layout tests have "expected results", which may take one of several forms. A test may produce a text file containing JavaScript log messages, or a text rendering of the Render Tree. It may also produce a screen capture of the rendered page as a PNG file (if you are running with {{{--pixel-tests}}} enabled). For WebAudio tests, we can produce WAV files instead of either text or PNG files. For all of these types of tests, the expected output is checked into the LayoutTests directory in files named "-expected.{txt,png,wav}". In many (most?) cases, the output is expected to be generic and match on any WebKit port.
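
For example, a single test (the name here is hypothetical) might be accompanied by any of the following baseline files, depending on which kinds of output it produces:

{{{
LayoutTests/fast/example/test.html
LayoutTests/fast/example/test-expected.txt
LayoutTests/fast/example/test-expected.png
}}}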

When the output doesn't match, there are two potential reasons for it:

 1. The port is performing "correctly", but the output simply won't match the generic version. The usual reasons are things like form controls, which are rendered differently on each platform.
 2. The port is performing "incorrectly" (i.e., the test is failing).

In the former case, the convention is to check in a platform-specific "-expected" file that overrides the generic one.
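
For instance, a port's baseline can shadow the generic one simply by living in that port's platform directory (the exact paths here are illustrative):

{{{
LayoutTests/fast/forms/button-expected.txt
LayoutTests/platform/mac/fast/forms/button-expected.txt
}}}
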
In the latter case, this is dealt with differently on different ports.

In all ports except for the Chromium ones, the convention is to check in the incorrect output as a platform-specific file, and then file a bug to track the incorrectness. For some tests, on some ports, the test is *never* expected to pass, in which case the test is added to the {{{Skipped}}} files instead. We will also add tests to the Skipped files if running them would affect the rest of the test run or cause NRWT itself to crash.

In the Chromium ports, the convention is to add a line of text to the test_expectations.txt file (see below).

Lastly, we also support the concept of "reference tests", which check that two pages are rendered identically (pixel-by-pixel). As long as the two pages' output matches, the test passes. For more on reference tests, see [wiki:RefTests].
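
For example, a reference test is typically paired with a reference page whose name ends in "-expected.html" (the test name here is hypothetical; see [wiki:RefTests] for the full naming rules):

{{{
LayoutTests/fast/example/ref-test.html
LayoutTests/fast/example/ref-test-expected.html
}}}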

== Suppressing failures using NRWT: the test_expectations.txt file ==

The test expectations file is found in a platform-specific directory under LayoutTests. I will use the Chromium version as an example.

=== Syntax ===

The file contains roughly one expectation per line. An expectation can apply to either a directory of tests or a specific test. Lines prefixed with "// " are treated as comments, and blank lines are allowed as well.

The syntax of a line is roughly:

{{{
<modifier> <modifier>* ":" <test-name> "=" <expected result>+
}}}

For example:

{{{
BUGWK12345 WIN DEBUG : fast/html/keygen.html = CRASH
}}}

which indicates that the "fast/html/keygen.html" test file is expected to crash when run in the Debug configuration on Windows, and that the tracking bug for this crash is bug #12345 in the WebKit bug repository. Note that the test will still be run, so that we can notice if it doesn't actually crash.

==== Expected results ====

The expected result can be one of PASS, FAIL, TEXT, IMAGE, CRASH, TIMEOUT, IMAGE+TEXT, or AUDIO. These should be fairly self-explanatory. Note that IMAGE+TEXT means that we expect *both* the text output and the image output to be different. In other words, this is an AND, not an OR.

The "FAIL" expectation, on the other hand, is an OR, and is equivalent to saying that any one of TEXT, IMAGE, AUDIO, or IMAGE+TEXT might happen, but not TIMEOUT or CRASH.

Multiple expected results are allowed, for tests that are flaky and may produce different results in different runs.

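For example (hypothetical test name), a test that sometimes passes and sometimes produces a text diff could be listed as:

{{{
BUGWK12345 : fast/example/flaky.html = PASS TEXT
}}}
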
==== Modifiers ====

The allowed modifiers are a bit more complicated; they include bug identifiers, configuration parameters, and miscellaneous options.

Bug identifiers allow you to identify the tracking bugs. They must start with a "BUG" prefix. There are three supported forms:

 1. BUGWK12345 indicates a bug in the WebKit bug database, and is equivalent to https://bugs.webkit.org/show_bug.cgi?id=12345
 2. BUGCR12345 indicates a bug in the Chromium bug database, and is equivalent to https://bugs.chromium.org/12345
 3. BUGDPRANKE is a "placeholder" that indicates that no bug has been filed yet, but that you should bug that individual about the status.

Configuration parameters describe which variations of your port the test expectation should apply to. Typically each test_expectations.txt file is used for multiple variations of a test run, because each variation usually has a lot of failures in common, and it's easier to manage all of the failures in one place. The exact set of configuration parameters will vary from port to port. Here are the options supported for Chromium:

 * LEOPARD SNOWLEOPARD XP VISTA WIN7 LUCID : these indicate particular versions of particular operating systems. Multiple options may appear on a single line, but duplicates are not allowed.
   * MAC WIN LINUX : these are "macros" that expand out to all of the relevant versions of each operating system. It is a syntax error to specify both a macro and one of its expanded options, e.g. "MAC LEOPARD".
 * DEBUG RELEASE : the different build types.
 * GPU CPU : whether we are testing the "GPU-accelerated" code paths or the regular software-only (CPU) code paths.
 * X86 X86_64 : whether we are testing the 32-bit or 64-bit versions of the code.

Different ports may not support all of these options, since they may not be relevant.

Note that not all parameters need to be listed, and if no parameters in a particular category are listed, the parser assumes that any combination applies.

To figure out if a configuration matches, we take the logical OR of the modifiers in each category, and the AND of modifiers in different categories, so a line containing "LEOPARD VISTA DEBUG GPU" will apply to tests run on either Mac Leopard or Windows Vista, in Debug mode, using the GPU acceleration code paths.

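For example, following the rules above, this line (the test name is hypothetical) applies only to Debug GPU runs on Mac Leopard or Windows Vista:

{{{
BUGWK12345 LEOPARD VISTA DEBUG GPU : fast/example/test.html = TEXT
}}}
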
There are also "miscellaneous" modifiers:

 * SLOW : This indicates that the test is expected to be slow, and we apply three times the normal timeout for the test. Note that SLOW tests cannot also be expected to TIMEOUT.
 * SKIP : This indicates that the test will never pass and there's no point in running it. This is equivalent to listing the test in the Skipped files (see below).
 * WONTFIX : This modifier does not affect anything in the test run itself, but can be used for reporting; it indicates that we don't ever expect the test to pass.
 * NOW : This modifier has never been used and should probably be removed.
 * REBASELINE : This modifier is used by the rebaseline-chromium-webkit-tests script, and is not allowed to exist in a checked-in version of the file.

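For example, a port that never intends to support webarchives might combine SKIP and WONTFIX to suppress that whole directory (the bug number here is a placeholder):

{{{
BUGWK12345 SKIP WONTFIX : webarchive = PASS FAIL
}}}
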
=== Semantics ===

When parsing the file, we use two rules to figure out if an expectation line applies to the current run:

 1. If the configuration parameters don't match the configuration of the current run, the expectation is ignored.
 2. Expectations that match more of a test name are used before expectations that match less of a test name.

For example, if you had the following lines in your file, and you were running a debug build on Mac SnowLeopard:

{{{
BUGWK12345 SNOWLEOPARD : fast/html = TEXT
BUGWK12345 SNOWLEOPARD : fast/html/keygen.html = PASS
BUGWK12345 VISTA : fast/forms/submit.html = IMAGE
}}}

You'd expect:

 * {{{fast/html/article-element.html}}} to fail with a text diff (since it is in the fast/html directory)
 * {{{fast/html/keygen.html}}} to pass (since the exact match on the test name takes precedence over the directory match)
 * {{{fast/forms/submit.html}}} to pass (since the configuration parameters don't apply)

Again, *duplicate expectations are not allowed*.

== Suppressing failures using ORWT: Skipped files and checked-in failures ==

ORWT has a much simpler mechanism. Tests are either expected to pass, or can be skipped. To skip a test, list it in the Skipped file for your platform. If your test produces output different from the "expected" version, check in the new (possibly incorrect) version in your platform-specific directory. See [wiki:LayoutTestSearchPath] to figure out where that directory is.
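
A Skipped file is just a list of tests or directories to skip, one per line. A minimal sketch (the entries are hypothetical, and this assumes "#" starts a comment, as in the checked-in Skipped files):

{{{
# Crashes the test runner; tracked in a bug.
fast/example/crashing-test.html
# Entire directories can be skipped, too.
webarchive
}}}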