* Otherwise, do one of the following and note what you did in the bug you filed.
  * If the test fails every time and the test's output is the same every time, check in new results for the test and include the bug URL in your ChangeLog.
    * You should do this even if the test's output includes failure messages or incorrect rendering. By running the test against these "expected failure" results rather than skipping the test entirely, we can discover when new regressions are introduced.
  * If the test fails intermittently, or crashes, or hangs, add the test to the appropriate Skipped file (e.g., `LayoutTests/platform/mac-leopard/Skipped`). Include a comment in the Skipped file with the bug URL and a brief description of how it fails, e.g.: