Introduction
The buildbots are most useful when they are "green" (i.e., the build isn't broken and there are no unexpected test failures). When the bots are green it's very easy to notice when new regressions are introduced, but when they're red it's nearly impossible.
This guide walks you through a process to get the bots green again when they are red: triaging test failures, filing bugs on them, and checking in new results or skipping tests.
Find out what is failing
There are two main ways to do this:
- Browse build.webkit.org (you should probably start with this one)
  - Find recent builds that have failed:
    - Windows:
      - Go to http://build.webkit.org/buildslaves.
      - Click on the name of a slave you're interested in to see a summary of its recent builds.
    - Other platforms:
      - Go to http://build.webkit.org/builders.
      - Click on the name of a builder you're interested in to see a summary of its recent builds.
  - The Info column will tell you if any tests failed for that build.
  - To see the test output for a particular build, click on the link in the Build # column, then on "view results", then on results.html.
- Use webkit-patch
  - Run this command: webkit-patch failure-reason
  - When prompted, specify which builder you're interested in, then press Enter to continue.
  - webkit-patch will look back through the recent builds for that builder until it has found when all current failures were introduced.
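The look-back that webkit-patch failure-reason performs can be sketched roughly as follows. This is a simplified illustration of the idea, not the actual webkit-patch code, and the build data and test names are made up:

```python
# Simplified sketch of the kind of search webkit-patch failure-reason does:
# walk backwards through recent builds until every currently-failing test
# has been traced to the build where it first started failing.
# NOT the real implementation; builds and test names here are hypothetical.

def find_failure_origins(builds):
    """builds: list of (revision, set_of_failing_tests), newest first.
    Returns {test_name: revision where it started failing, or None if
    it was already failing before the oldest build we looked at}."""
    if not builds:
        return {}
    unexplained = set(builds[0][1])  # failures in the newest build
    origins = {}
    for newer, older in zip(builds, builds[1:]):
        # Tests failing in the newer build but not the older one
        # started failing somewhere in the range (older, newer].
        for test in unexplained & (newer[1] - older[1]):
            origins[test] = newer[0]
        unexplained -= newer[1] - older[1]
        if not unexplained:
            break
    for test in unexplained:
        origins[test] = None  # predates the builds we examined
    return origins

builds = [
    (71112, {"fast/js/a.html", "fast/css/b.html"}),
    (71100, {"fast/css/b.html"}),
    (71090, set()),
]
print(find_failure_origins(builds))
```

The real tool has to fetch build results over the network and handle builds whose results have expired, which is why it can take a while to run.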
Find out when each test started failing
You can either:
- Look back through old builds on build.webkit.org
- Use the output from webkit-patch failure-reason
- Use svn log/git log to find out when the test or its results were last changed
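For the last option, you restrict the log to the test and its expected results (for example, `git log -- LayoutTests/fast/js/some-test.html`). What that search gives you can be sketched as a scan over commit history; the commit data below is hypothetical:

```python
# Sketch of what `git log -- <path>` / `svn log <path>` answer: the most
# recent commit that touched a test or its expected results.
# The revisions and paths here are made up for illustration.

def last_change(history, paths):
    """history: list of (revision, set_of_changed_paths), newest first.
    Returns the newest revision that touched any of `paths`, or None."""
    for revision, changed in history:
        if any(p in changed for p in paths):
            return revision
    return None

history = [
    (71110, {"Source/WebCore/dom/Node.cpp"}),
    (71095, {"LayoutTests/fast/js/some-test-expected.txt"}),
    (71080, {"LayoutTests/fast/js/some-test.html"}),
]
print(last_change(history, {
    "LayoutTests/fast/js/some-test.html",
    "LayoutTests/fast/js/some-test-expected.txt",
}))
```

If the last change to the test or its results falls inside the regression range from the previous step, that commit is a strong suspect.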
Try to figure out why each test is failing
(You probably won't be able to figure out exactly why every test is failing, but the more information you can get now, the better.)
Look at the revision range where the failure was introduced. If you find that:
- The test and/or its expected output was modified
  - The test might need new results for the failing platform(s). Are the test's results platform-specific (i.e., are they beneath LayoutTests/platform/)?
    - Yes: the failing platforms might just need new results checked in. You'll have to verify that the current output from those platforms is correct.
    - No: the failing platforms might have some missing functionality in WebKit or DumpRenderTree.
- Related areas of WebKit were modified
  - Were the modifications platform-specific?
    - Yes: the failing platforms might need similar modifications made.
    - No: there might be some existing platform-specific code that is responsible for the different results.
File bugs for the failures
If multiple tests are failing for the same reason, you should group them together into a single bug. If a test fails on multiple platforms and those platforms will need separate fixes, you should file one bug for each failing platform.
- Go to http://webkit.org/new-bug
- Include in your report:
- The name(s) of the failing test(s)
- The reason you determined for the failure, if any
- What platform(s) the failures occur on
- When the failures began, if known
- A link to the failing output
- If a single test had incorrect output, link to the pretty diff (e.g., http://build.webkit.org/results/Windows%20Release%20(Tests)/r71112%20(5886)/fast/blockflow/border-vertical-lr-pretty-diff.html).
- If multiple tests had incorrect output, or if the failure is a crash or hang, link to the results.html for that build (e.g., http://build.webkit.org/results/Windows%20Release%20(Tests)/r71112%20(5886)/results.html).
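The links above follow a consistent URL pattern, so you can construct them from the builder name, WebKit revision, and build number. This is a rough sketch with the scheme inferred from the two example URLs (only spaces appear percent-encoded there), so verify against a real build:

```python
# Build links to a build's results pages, following the pattern seen in the
# example URLs above. The URL layout is inferred from those examples, not
# from buildbot documentation, so double-check against a live build.

BASE = "http://build.webkit.org/results"

def _run_segment(builder, revision, build_number):
    # The example URLs encode only the spaces (%20), leaving parentheses as-is.
    return f"{builder}/r{revision} ({build_number})".replace(" ", "%20")

def results_url(builder, revision, build_number):
    run = _run_segment(builder, revision, build_number)
    return f"{BASE}/{run}/results.html"

def pretty_diff_url(builder, revision, build_number, test_path):
    # test_path like "fast/blockflow/border-vertical-lr.html"
    run = _run_segment(builder, revision, build_number)
    stem = test_path.rsplit(".", 1)[0]
    return f"{BASE}/{run}/{stem}-pretty-diff.html"

print(results_url("Windows Release (Tests)", 71112, 5886))
print(pretty_diff_url("Windows Release (Tests)", 71112, 5886,
                      "fast/blockflow/border-vertical-lr.html"))
```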
- Apply keywords:
  - LayoutTestFailure
  - Regression, if the failure is due to a regression in WebKit
  - PlatformOnly, if the test only fails on one platform
- If the test affects one of Apple's ports, and you work for Apple, you should migrate the bug into Radar.
Get the bots green again
- If you know what the root cause is, and know how to fix it (e.g., by making a change to WebKit or checking in new correct results), then fix it and close the bug!
- If the tests fail every time and the tests' output is the same every time, check in new results for the tests and include the bug URL in your ChangeLog.
- You should do this even if the test is "failing". By running the test against these "expected failure" results rather than skipping the test entirely, we can discover when new regressions are introduced.
- If the tests fail intermittently, or crash, or hang, add the tests to the appropriate Skipped file (e.g., LayoutTests/platform/mac-leopard/Skipped). Include a comment in the Skipped file with the bug URL and a brief description of how it fails, e.g.:

  # Sometimes times out http://webkit.org/b/12345
  fast/js/some-cool-test.html
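A Skipped file is just a list of test (or directory) paths with #-comments. A minimal parser sketch, with the line format assumed from the example above:

```python
# Parse Skipped-file text: one test (or directory) path per line,
# '#' starts a comment. Format assumed from the example entry above,
# not from the actual run-webkit-tests parser.

def parse_skipped(text):
    skipped = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if line:
            skipped.append(line)
    return skipped

example = """\
# Sometimes times out http://webkit.org/b/12345
fast/js/some-cool-test.html
"""
print(parse_skipped(example))  # ['fast/js/some-cool-test.html']
```

Keeping the bug URL in the comment makes it easy to find and unskip the test once the bug is fixed.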