Changes between Version 20 and Version 21 of TriagingTestFailures
Timestamp: Feb 28, 2011, 7:49:22 AM
 1. When prompted, specify which builder you're interested in.
 1. Press Enter to continue. `webkit-patch` will look back through the recent builds for that builder until it has found when all current failures were introduced. (This search is sketched below.)

= Find out whether the failures are known =

Bugs that are known to be making the bots red are marked with the `MakingBotsRed` keyword. Look through [https://bugs.webkit.org/buglist.cgi?keywords=MakingBotsRed the list of bugs with the MakingBotsRed keyword] to see if the failures are already known. If so, you might be able to skip ahead to [#Getthebotsgreenagain Get the bots green again].

= Find out when each test started failing =

…

 * If multiple tests had incorrect output, or if the failure is a hang, or if the failure is a crash with no crash log available, link to the results.html for that build (e.g., http://build.webkit.org/results/Windows%20Release%20(Tests)/r71112%20(5886)/results.html).
 1. Apply keywords:
   * `MakingBotsRed`
   * `LayoutTestFailure`
   * `Regression`, if the failure is due to a regression in WebKit

…

 * If you know why the test is failing and know how to fix it (e.g., by making a change to WebKit or checking in new correct results), then fix it and close the bug!
 * Otherwise, do one of the following and note what you did in the bug you filed, then remove the `MakingBotsRed` keyword:
   * If the test fails every time and its output is the same every time, check in new results for the test and include the bug URL in your ChangeLog. (See more discussion of this policy [http://article.gmane.org/gmane.os.opendarwin.webkit.devel/10017 here].)
     * You should do this even if the test's output includes failure messages or incorrect rendering. By running the test against these "expected failure" results rather than skipping the test entirely, we can discover when new regressions are introduced.
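= Illustrative sketches =

The `webkit-patch` step above searches backwards through a builder's recent builds until every current failure has been attributed to the build that introduced it. Below is a minimal sketch of that search logic only; `recent_builds` and `failing_tests` are hypothetical stand-ins for the buildbot queries the real tool performs.

{{{#!python
def find_first_failures(recent_builds, failing_tests):
    """recent_builds: build identifiers, newest first.
    failing_tests(build): names of the tests that failed in that build.
    Returns {test_name: build in which the test started failing}."""
    unexplained = set(failing_tests(recent_builds[0]))  # current failures
    introduced_in = {}
    previous = recent_builds[0]
    for build in recent_builds[1:]:
        if not unexplained:
            break  # every current failure has been attributed
        # A test that passed in this older build must have started
        # failing in the next-newer build we examined.
        passed_here = unexplained - set(failing_tests(build))
        for test in passed_here:
            introduced_in[test] = previous
        unexplained -= passed_here
        previous = build
    # Tests still unexplained were already failing in the oldest build
    # we looked at; the search would need to go back further.
    return introduced_in
}}}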
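To check the `MakingBotsRed` list from a script instead of a browser, Bugzilla's `buglist.cgi` can return the same query as CSV via `ctype=csv`. A minimal sketch, assuming the column names (`bug_id`, `short_desc`) that Bugzilla installations of this era emit; they may differ on other versions.

{{{#!python
import csv
import io
import urllib.request

BUGLIST_URL = ("https://bugs.webkit.org/buglist.cgi"
               "?keywords=MakingBotsRed&ctype=csv")

def known_red_bot_bugs():
    """Return (bug id, summary) pairs for bugs marked MakingBotsRed."""
    with urllib.request.urlopen(BUGLIST_URL) as response:
        text = response.read().decode("utf-8", errors="replace")
    reader = csv.DictReader(io.StringIO(text))
    return [(row["bug_id"], row.get("short_desc", "")) for row in reader]

if __name__ == "__main__":
    for bug_id, summary in known_red_bot_bugs():
        print(f"https://bugs.webkit.org/show_bug.cgi?id={bug_id}  {summary}")
}}}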
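The results.html link in the bug-filing step follows a predictable pattern: builder name, then `r<revision> (<build number>)`, URL-escaped. A hypothetical helper, inferred only from the example URL on this page:

{{{#!python
from urllib.parse import quote

def results_url(builder, revision, build_number):
    """results_url("Windows Release (Tests)", 71112, 5886) reproduces
    the example link above; the path pattern is inferred from it."""
    directory = quote(f"{builder}/r{revision} ({build_number})", safe="/()")
    return f"http://build.webkit.org/results/{directory}/results.html"
}}}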
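For the "check in new results" step, the usual manual approach is to copy the test's `-actual` output from a test run over the checked-in `-expected` file. A sketch for plain text results only, assuming the standard `LayoutTests` layout; pixel results and any rebaseline tooling are out of scope here.

{{{#!python
import shutil
from pathlib import Path

def promote_actual_to_expected(results_dir, test_name):
    """Copy <results_dir>/<test_name>-actual.txt over
    LayoutTests/<test_name>-expected.txt, making the current output the
    expected baseline. test_name is the test path without its extension,
    e.g. "fast/dom/some-test". Include the bug URL in your ChangeLog."""
    actual = Path(results_dir) / f"{test_name}-actual.txt"
    expected = Path("LayoutTests") / f"{test_name}-expected.txt"
    shutil.copyfile(actual, expected)
    return expected
}}}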