Managing Test Expectations

How we manage tests that fail

The primary function of the LayoutTests is as a regression test suite. This means that, while we care about whether a page is being rendered correctly, we care more about whether the page is being rendered the way we expect it to. In other words, we look more for changes in behavior than we do for correctness.

All layout tests have "expected results", which may take one of several forms. A test may produce a text file containing JavaScript log messages or a text rendering of the Render Tree. It may also produce a screen capture of the rendered page as a PNG file (if you are running with --pixel-tests enabled). WebAudio tests can produce WAV files instead of text or PNG files. For each of these kinds of output, files named "-expected.{txt,png,wav}" are checked into the LayoutTests directory. In many (most?) cases, the output is expected to be generic and to match on any WebKit port.
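For example, a text-based test and its checked-in results sit side by side in the LayoutTests tree (illustrative layout; the PNG is only present for tests with pixel results, and it may live in a platform-specific directory instead):

LayoutTests/fast/html/keygen.html
LayoutTests/fast/html/keygen-expected.txt
LayoutTests/fast/html/keygen-expected.png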

When the output doesn't match, there are two potential reasons for it:

  1. The port is performing "correctly", but the output simply won't match the generic version. The usual reason for this is things like form controls, which are rendered differently on each platform.
  2. The port is performing "incorrectly" (i.e., the test is failing).

In the former case, the convention is to check in a platform-specific "-expected" file that overrides the generic one.
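For example, a Mac-specific rendering of a form control could be handled by checking in an override next to the generic result (illustrative paths; see LayoutTestSearchPath for the directories each port actually searches):

LayoutTests/fast/html/keygen-expected.txt
LayoutTests/platform/mac/fast/html/keygen-expected.txt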

In the latter case, this is dealt with differently on different ports.

In all ports except the Chromium ones, the convention is to check in the incorrect output as a platform-specific file and then file a bug to track the incorrectness. For some tests, on some ports, the test is *never* expected to pass, in which case the test is added to the Skipped files instead. We also add tests to the Skipped files if they would affect the rest of the test run or cause NRWT itself to crash.

In the Chromium ports, the convention is to add a line of text to the TestExpectations file (see below).

Lastly, we also support the concept of "reference tests", which check that two pages are rendered identically (pixel-by-pixel). As long as the two pages' rendered output matches, the test passes. For more on reference tests, see RefTests.
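For example, a reference test is an ordinary test paired with an "-expected.html" reference page that must render identically (hypothetical file names):

LayoutTests/fast/html/details-marker.html
LayoutTests/fast/html/details-marker-expected.html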

Suppressing failures using the TestExpectations file

The test expectations files are found in platform-specific directories under LayoutTests. Ports may use one or more files which are used in order, with later files overriding earlier ones.
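For example, a port's expectations file typically lives at a path like one of the following (illustrative; the exact set of files and their names vary by port):

LayoutTests/platform/chromium/TestExpectations
LayoutTests/platform/mac/TestExpectations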

Syntax

The syntax of the file is roughly one expectation per line. An expectation can apply either to a directory of tests or to a specific test. Lines prefixed with "# " are treated as comments, and blank lines are allowed as well.

The syntax of a line is roughly:

[ bugs ] [ "[" modifiers "]" ] test_name [ "[" expectations "]" ]

  • Tokens are separated by whitespace
  • The brackets delimiting the modifiers and expectations from the bugs and the test_name are not optional, although the bugs, modifiers, and expectations themselves are. In other words, if you want to specify modifiers or expectations, you must enclose them in brackets.
  • Lines are expected to have one or more bug identifiers, and the linter will complain about lines missing them. Bug identifiers are of the form "webkit.org/b/12345", "crbug.com/12345", "code.google.com/p/v8/issues/detail?id=12345" or "Bug(username)"
  • If no modifiers are specified, the test applies to all of the configurations applicable to that file
  • Modifiers can be one or more of Mac, SnowLeopard, Lion, MountainLion, Win, XP, Vista, Win7, Win7SP0, Linux, Lucid, x86_64, x86, Release, Debug. Not all modifiers make sense on all ports or in all lines.
  • Expectations can be one or more of Crash, Failure, ImageOnlyFailure, Pass, Rebaseline, Slow, Skip, Timeout, WontFix. If multiple expectations are listed, the test is considered "flaky" and any of those results will be considered expected.

For example:

webkit.org/b/12345 [ Win Debug ] fast/html/keygen.html [ Crash ]

which indicates that the "fast/html/keygen.html" test file is expected to crash when run in the Debug configuration on Windows, and the tracking bug for this crash is bug #12345 in the webkit bug repository. Note that the test will still be run, so that we can notice if it doesn't actually crash.

Assuming you're running a debug build on Mac Lion, the following lines are all equivalent:

fast/html/keygen.html
Bug(darin) fast/html/keygen.html
[ Lion Debug ] fast/html/keygen.html
fast/html/keygen.html [ Skip ]
fast/html/keygen.html [ WontFix ]
Bug(darin) [ Lion Debug ] fast/html/keygen.html [ Skip ]

Semantics

  • WontFix implies Skip and also indicates that we don't have any plans to make the test pass
  • WontFix and Skip must be the only expectation and cannot be specified alongside Crash or anything else; since the tests will be skipped, the other expectations will likely become stale.
  • Slow means that we expect the test to run slowly and will use a longer, port-specific timeout. A given line cannot have both Slow and Timeout
  • Rebaseline is an old expectation used in conjunction with webkit-patch rebaseline-expectations and is not allowed to be checked in
  • If no expectations are specified we default to Skip for compatibility with the Skipped files
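To make these rules concrete, here are a few illustrative lines (the bug numbers and test names are hypothetical). The Slow test is still run, just with a longer timeout; the WontFix and Skip lines stand alone, and those tests are not run at all:

webkit.org/b/12345 fast/js/long-running-test.html [ Slow ]
webkit.org/b/12346 [ Win ] fast/forms/unsupported-control.html [ WontFix ]
webkit.org/b/12347 [ Lion ] fast/dom/hangs-the-harness.html [ Skip ]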

Also, when parsing the file, we use two rules to figure out if an expectation line applies to the current run:

  1. If the configuration parameters don't match the configuration of the current run, the expectation is ignored.
  2. Expectations that match more of a test name are used before expectations that match less of a test name.

For example, if you had the following lines in your file, and you were running a debug build on Mac SnowLeopard:

webkit.org/b/12345 [ SnowLeopard ] fast/html [ Failure ]
webkit.org/b/12345 [ SnowLeopard ] fast/html/keygen.html [ Pass ]
webkit.org/b/12345 [ Vista ] fast/forms/submit.html [ ImageOnlyFailure ]
webkit.org/b/12345 fast/html/section-element.html [ Failure Crash ]

You'd expect:

  • fast/html/article-element.html to fail with a text diff (since it is in the fast/html directory)
  • fast/html/keygen.html to pass (since the exact match on the test name takes precedence)
  • fast/forms/submit.html to pass (since the configuration parameters don't apply)
  • fast/html/section-element.html to either crash or produce a text (or image and text) failure, but not to do anything else, i.e., it is not expected to time out or pass.

Note that duplicate expectations are not allowed within a single file and will generate warnings. Ports may use multiple TestExpectations files, and entries in a later file override entries in an earlier file. The list of files used by a port is determined by the port's implementation of expectation_files in Tools/Scripts/webkitpy/layout_tests/port/{mac,win,qt,gtk,etc.}.py
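For example, suppose a port lists a more general file followed by a more specific one (the file split and entries here are illustrative). On that port the later entry wins, and the test is expected to pass:

# In the earlier (more general) TestExpectations file:
webkit.org/b/12345 fast/html/keygen.html [ Failure ]
# In the later (more specific) TestExpectations file, which overrides the line above:
webkit.org/b/12345 fast/html/keygen.html [ Pass ]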

You can verify that any changes you've made to an expectations file are correct by running:

% new-run-webkit-tests --lint-test-files

which will cycle through all of the possible combinations of configurations looking for problems.

Rules of Thumb for Suppressing Failures

Here are some rules-of-thumb that you could apply when adding new expectations to the file:

  • Only use WontFix when you know for sure we will never, ever implement the capability tested by the test
  • Use Skip when the test:
    • throws a JavaScript exception and makes a text-only test manifest as a pixel test. This usually shows up as a "Missing test expectations" failure.
    • disrupts the running of the other tests. Although this is not typical, it is possible. Please make sure to give Pri-1 to the associated bug.
  • Try to specify platforms and configurations as accurately as possible. If a test passes on all but one platform, it should only have that platform listed
  • If a test fails intermittently, list multiple expectations, as in the example below.
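For example, a test that sometimes passes and sometimes times out on Lion might be listed like this (the bug number and test name are hypothetical); the harness will then treat either result as expected:

webkit.org/b/12345 [ Lion ] fast/dom/flaky-test.html [ Pass Timeout ]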

Suppressing failures using ORWT: Skipped files and checked-in failures

ORWT has a much simpler mechanism: tests are either expected to pass, or they can be skipped. To skip a test, list it in the Skipped file for your platform. If your test produces output different from the "expected" version, check in the new (possibly incorrect) version in your platform-specific directory. See LayoutTestSearchPath to figure out where that directory is.
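A Skipped file is simply a list of tests or directories to skip, one per line; for example (illustrative entries, skipping one test and one whole directory):

fast/forms/submit.html
fast/html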
