Changes between Version 19 and Version 20 of TestExpectations


Timestamp: Sep 27, 2012 2:50:44 PM (12 years ago)
Author: dpranke@chromium.org
Comment: remove comments about ORWT and Skipped files

Legend: unchanged lines show both their v19 and v20 line numbers; lines prefixed with "-" appear only in v19 (removed), lines prefixed with "+" appear only in v20 (added), and "…" marks unchanged lines omitted from the diff.
  • TestExpectations

    v19 v20
    12  12  The primary function of the LayoutTests is as a ''regression test suite''. This means that, while we care about whether a page is being rendered correctly, we care more about whether the page is being rendered the way we expect it to. In other words, we look more for changes in behavior than we do for correctness.
    13  13
  - 14      All layout tests have "expected results", which may take one of several forms. The test may produce a text file containing JavaScript log messages, or a text rendering of the Render Tree. It may also produce a screen capture of the rendered page as a PNG file (if you are running with {{{--pixel-tests}}} enabled). For WebAudio tests, we can produce WAV files instead of either text or PNG files. For any of these types of tests, there are files checked into the LayoutTests directory named "-expected.{txt,png,wav}". In many (most?) cases, the output is expected to be generic and match on any WebKit port.
  +     14  All layout tests have "expected results", which may take one of several forms. The test may produce a text file containing JavaScript log messages, or a text rendering of the Render Tree. It may also produce a screen capture of the rendered page as a PNG file (if you are running with {{{--pixel-tests}}} enabled). For WebAudio tests, we can produce WAV files instead of either text or PNG files. For any of these types of tests, there are files checked into the LayoutTests directory named "-expected.{txt,png,wav}". In many (most?) cases, the output is expected to be generic and match on any WebKit port. Lastly, we also support the concept of "reference tests", which check that two pages are rendered identically (pixel-by-pixel). As long as the two tests' output match, the tests pass. For more on reference tests, see [wiki:RefTests].
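As an illustration, a hypothetical test and its checked-in expectations might be laid out like this (the paths are illustrative only; the platform/mac file overrides the generic one on the Mac port, per the convention described below):

{{{
LayoutTests/fast/dom/example.html
LayoutTests/fast/dom/example-expected.txt
LayoutTests/platform/mac/fast/dom/example-expected.txt
}}}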
    15  15
    16  16  When the output doesn't match, there are two potential reasons for it:
    …
    19  19   2. The port is performing "incorrectly" (i.e., the test is failing).
    20  20
  - 21      In the former case, the convention is to check in a platform-specific "-expected" file that overrides the generic one.
  +     21  In the former case, the convention is to check in a platform-specific "-expected" file that overrides the generic one. In the latter case, you have one of two options:
    22  22
  - 23      In the latter case, this is dealt with differently on different ports.
  - 24
  - 25      In all ports except for the Chromium ones, the convention is to check in the incorrect output as a platform-specific file, and then file a bug to track the incorrectness. For some tests, on some ports, the test is *never* expected to pass, in which case the test is added to the {{{Skipped}}} files instead. We will also add tests to the Skipped files if they would affect the rest of the test run or cause NRWT itself to crash.
  - 26
  - 27      In the Chromium ports, the convention is to add a line of text to the TestExpectations file (see below).
  - 28
  - 29      Lastly, we also support the concept of "reference tests", which check that two pages are rendered identically (pixel-by-pixel). As long as the two tests' output match, the tests pass. For more on reference tests, see [wiki:RefTests].
  +     23   1. Check in a new baseline as a platform-specific file and file a bug to track the incorrectness. Some types of failures (like crashes and timeouts) can't be handled this way, of course.
  +     24   2. Add an entry to the TestExpectations file (see below).
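A minimal sketch of what such a TestExpectations entry might look like, assuming the syntax documented in the section below (the bug URL, platform modifier, test path, and expectation here are all hypothetical):

{{{
# Hypothetical entry: expect this test to fail on Mac until the bug is fixed.
webkit.org/b/12345 [ Mac ] fast/dom/example.html [ Failure ]
}}}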
    30  25
    31  26  == Suppressing failures using the TestExpectations file ==
    …
    117 112   * Try to specify platforms and configs as accurately as possible. If a test passes on all but one platform, it should only have that platform listed.
    118 113   * If a test fails intermittently, use multiple expectations.
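For instance, a test that fails only intermittently, and only on one platform, might be listed with just that platform and with multiple expectations (a hypothetical entry, again assuming the syntax described in this section):

{{{
# Hypothetical entry: the test sometimes fails on Windows but passes elsewhere.
webkit.org/b/12345 [ Win ] fast/dom/example.html [ Failure Pass ]
}}}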
  - 119
  - 120     == Suppressing failures using ORWT: Skipped files and checked-in failures ==
  - 121
  - 122     ORWT has a much simpler mechanism. Tests are either expected to pass, or can be skipped. To skip a test, list it in the Skipped file for your platform. If your test produces output different from the "expected" version, check in the new (possibly incorrect) version in your platform-specific directory. See [wiki:LayoutTestSearchPath] for figuring out where that directory is.
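The Skipped files referred to here are plain text lists of tests or directories to skip, one entry per line; a hypothetical excerpt (assuming "#" starts a comment, and with made-up paths):

{{{
# Hypothetical entries: skip a single test and an entire directory.
fast/dom/example.html
http/tests/websocket
}}}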