Changes between Version 20 and Version 21 of TriagingTestFailures


Timestamp: Feb 28, 2011 7:49:22 AM
Author: Adam Roben
Comment: Add information about the new MakingBotsRed keyword


  1. When prompted, specify which builder you're interested in.
  1. Press Enter to continue. `webkit-patch` will look back through the recent builds for that builder until it has found when all current failures were introduced.
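The walk-back described above can be sketched as follows. This is a minimal illustration of the idea, not `webkit-patch`'s actual implementation; the function name and build data are hypothetical:

```python
def find_introducing_builds(builds, current_failures):
    """For each currently failing test, walk back through builds
    (newest first) until the test passes; the last build in which it
    still failed is where the failure was introduced."""
    introduced_in = {}
    for test in current_failures:
        first_failing = None
        for build in builds:  # ordered newest to oldest
            if test in build["failures"]:
                first_failing = build
            else:
                break  # the test passed in this build, so stop walking back
        introduced_in[test] = first_failing["revision"] if first_failing else None
    return introduced_in

# Hypothetical build history, newest first
builds = [
    {"revision": 71114, "failures": {"fast/a.html", "fast/b.html"}},
    {"revision": 71113, "failures": {"fast/a.html"}},
    {"revision": 71112, "failures": set()},
]
result = find_introducing_builds(builds, {"fast/a.html", "fast/b.html"})
```

With this data, `fast/a.html` is traced to r71113 and `fast/b.html` to r71114, which is the kind of answer `webkit-patch` reports for each current failure.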
+
+ = Find out whether the failures are known =
+
+ Bugs that are known to be making the bots red are marked with the `MakingBotsRed` keyword. Look through [https://bugs.webkit.org/buglist.cgi?keywords=MakingBotsRed the list of bugs with the MakingBotsRed keyword] to see if the failures are already known. If so, you might be able to skip ahead to [#Getthebotsgreenagain Get the bots green again].

= Find out when each test started failing =
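If you want to check the keyword list from a script rather than the web UI, the same query can be built programmatically. `keywords` is the parameter used in the URL above, and `ctype=csv` is a standard Bugzilla `buglist.cgi` option for machine-readable output; this sketch only constructs the URL and does not fetch it:

```python
from urllib.parse import urlencode

# Build the MakingBotsRed keyword query from the page as a URL.
base = "https://bugs.webkit.org/buglist.cgi"
url = base + "?" + urlencode({"keywords": "MakingBotsRed", "ctype": "csv"})
print(url)
```

Fetching that URL (e.g. with `curl`) returns the bug list as CSV instead of HTML.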
     
    * If multiple tests had incorrect output, or if the failure is a hang, or if the failure is a crash with no crash log available, link to the results.html for that build (e.g., http://build.webkit.org/results/Windows%20Release%20(Tests)/r71112%20(5886)/results.html).
  1. Apply keywords
+   * `MakingBotsRed`
    * `LayoutTestFailure`
    * `Regression`, if the failure is due to a regression in WebKit
     

  * If you know why the test is failing, and know how to fix it (e.g., by making a change to WebKit or checking in new correct results), then fix it and close the bug!
-  * Otherwise, do one of the following and note what you did in the bug you filed.
+  * Otherwise, do one of the following and note what you did in the bug you filed, then remove the `MakingBotsRed` keyword.
    * If the test fails every time and the test's output is the same every time, check in new results for the test and include the bug URL in your ChangeLog. (See more discussion of this policy [http://article.gmane.org/gmane.os.opendarwin.webkit.devel/10017 here].)
      * You should do this even if the test's output includes failure messages or incorrect rendering. By running the test against these "expected failure" results rather than skipping the test entirely, we can discover when new regressions are introduced.
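Mechanically, checking in "expected failure" results amounts to promoting the test's actual output to its expected file. WebKit layout test runs write a `<test>-actual.txt` next to the checked-in `<test>-expected.txt`; the paths below are illustrative stand-ins, not real tests:

```shell
# Stand in for a results directory produced by a test run (hypothetical test name)
mkdir -p results/fast/example LayoutTests/fast/example
echo "FAIL: expected failure text" > results/fast/example/test-actual.txt

# Promote the actual output to the new expected results, then commit it
# with a ChangeLog entry that links to the bug you filed.
cp results/fast/example/test-actual.txt LayoutTests/fast/example/test-expected.txt
cat LayoutTests/fast/example/test-expected.txt
```

Note that the committed expected file deliberately contains the failure text; the bots go green, and any future change to the output is flagged as a new regression.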