=== Semantics ===

When parsing the file, we use two rules to figure out if an expectation line applies to the current run:

1. If the configuration parameters don't match the configuration of the current run, the expectation is ignored.
2. Expectations that match more of a test name are used before expectations that match less of a test name.

For example, if you had the following lines in your file, and you were running a debug build on Mac SnowLeopard:

{{{
webkit.org/b/12345 [ SnowLeopard ] fast/html [ Failure ]
webkit.org/b/12345 [ SnowLeopard ] fast/html/keygen.html [ Pass ]
webkit.org/b/12345 [ Vista ] fast/forms/submit.html [ ImageOnlyFailure ]
}}}

You'd expect:

* {{{fast/html/article-element.html}}} to fail (since it is in the fast/html directory covered by the first line)
* {{{fast/html/keygen.html}}} to pass (since the exact match on the test name takes precedence over the directory match)
* {{{fast/forms/submit.html}}} to pass (since the Vista configuration doesn't apply to this run)

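To make the two rules concrete, here is a minimal Python sketch of the matching logic. This is not the actual webkitpy implementation; the pre-parsed tuples and the {{{expectations_for}}} helper below are made up purely for illustration:

{{{
# Hypothetical expectation lines, pre-parsed as (modifiers, path, expectations).
EXPECTATIONS = [
    ({"SnowLeopard"}, "fast/html", {"Failure"}),
    ({"SnowLeopard"}, "fast/html/keygen.html", {"Pass"}),
    ({"Vista"}, "fast/forms/submit.html", {"ImageOnlyFailure"}),
]

def expectations_for(test_name, current_config):
    """Return the expected results for test_name under current_config."""
    best_match, best_length = None, -1
    for modifiers, path, expectations in EXPECTATIONS:
        # Rule 1: ignore lines whose configuration doesn't match this run.
        if not modifiers <= current_config:
            continue
        # Rule 2: prefer the line that matches more of the test name.
        if test_name == path or test_name.startswith(path + "/"):
            if len(path) > best_length:
                best_match, best_length = expectations, len(path)
    # A test with no matching line is simply expected to pass.
    return best_match or {"Pass"}

config = {"SnowLeopard", "Debug"}
print(expectations_for("fast/html/article-element.html", config))  # {'Failure'}
print(expectations_for("fast/html/keygen.html", config))           # {'Pass'}
print(expectations_for("fast/forms/submit.html", config))          # {'Pass'}
}}}
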
Again, *duplicate expectations are not allowed* within a single file and will generate warnings. Ports may use multiple TestExpectations files, and entries in a later file override entries in an earlier file.
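
For example, if a port's generic file contains:

{{{
webkit.org/b/12345 fast/html/keygen.html [ Failure ]
}}}

and a later, port-specific file contains:

{{{
webkit.org/b/12345 fast/html/keygen.html [ Pass ]
}}}

then the test is expected to pass on that port. (Which files a port reads, and in what order, is port-specific; the entries above are purely illustrative.)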

You can verify that any changes you've made to an expectations file are correct by running:

{{{
% new-run-webkit-tests --lint-test-files
}}}

which will cycle through all of the possible combinations of configurations looking for problems.

== Rules of Thumb for Suppressing Failures ==

Here are some rules of thumb that you could apply when adding new expectations to the file:

* Only use WontFix when you know for sure we will never, ever implement the capability tested by the test
* Use Skip when the test:
  * throws a JavaScript exception and makes a text-only test manifest as a pixel test. This usually shows up as a "Missing test expectations" failure.
  * disrupts the running of other tests. Although this is not typical, it may still be possible. Please make sure to give Pri-1 to the associated bug.
* Try to specify platforms and configurations as accurately as possible. If a test passes on all but one platform, it should only have that platform listed.
* If a test fails intermittently, list multiple expectations so that either result counts as expected (see the example below).
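
For example, a flaky test can be listed with more than one expected result; the test name below is purely illustrative:

{{{
webkit.org/b/12345 [ SnowLeopard ] fast/dom/flaky-test.html [ Failure Pass ]
}}}

With this entry, the test passing or failing are both treated as meeting expectations on SnowLeopard.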