One of the ongoing tasks of the Web QA department is managing xfails. This document explains what xfails are and describes the steps you can take to investigate and update them.
XFail stands for expected failure and that’s exactly what it is: a test that is expected to fail. Why would we have a test that is expected to fail, you may ask? Usually it is because there is a known bug in the software that we are testing. The test exposes the bug and therefore will fail until the bug is fixed. When we find such a bug, and it is not expected to be fixed within a few hours, we mark the test as xfailed (we generally refer to this as xfailing the test).
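To make this concrete: our automation projects use pytest, so xfailing a test means applying pytest’s xfail marker. A minimal sketch, in which the test, the reason string, and the bug number are all hypothetical:

  import pytest

  # The reason string carries the bug reference, so anyone looking at the
  # failing test (or at the dashboard) can see why it is expected to fail.
  @pytest.mark.xfail(reason='Bug 123456 - search results are not sorted by relevance')
  def test_search_results_are_sorted():
      results = ['zebra', 'apple']  # stand-in for data read from the page
      assert results == sorted(results)

When this test runs, the assertion fails and pytest reports the result as xfailed rather than as a plain failure.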
XPass does not, as you might expect, stand for expected pass. It actually stands for unexpected pass, and it is essentially what an xfail becomes once the bug is fixed and the test starts passing again. We are alerted to the fact that we can un-xfail a test when it starts xpassing. When managing xfails we need to look at both xfails and xpasses, and for the remainder of this document the term xfails will often cover both, as in “managing xfails”.
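A quick way to spot both kinds of result when running tests locally is pytest’s short test summary. A small sketch, with a hypothetical file name:

  pytest -r xX tests/test_search.py

The x asks for xfailed tests to be listed at the end of the run and the X asks for xpassed ones, each with its reason, so anything that has started xpassing stands out immediately.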
When managing xfails the first task is to find and identify them. To make that very easy we have developed a Web QA XFails Dashboard. When you visit the dashboard you will see all of our automation projects listed, along with every test in every project that is currently xfailed or skipped. For the purposes of this discussion you may ignore the skipped tests. Each test that is either skipped or xfailed will appear as a row in a table similar to this:
As you can see, the filename links to the test’s source code in the GitHub repo, and any test that is xfailed has a red box in the Type column.
Now that you’ve found an xfail to check, what’s next?
It would be counterproductive if everyone looked into the same xfails at the same time, so when you start investigating xfails it is a good idea to record that fact. There is a document, called an etherpad, which we use to keep track of this information, and it can be found at etherpad.mozilla.org/webqa-xfails. Please access that etherpad and update it according to the instructions near the top. You can also drop into #mozwebqa on irc.mozilla.org and let someone know you’re doing the work, so they can help you if you run into trouble.
When investigating an xfail, you should complete the following steps:
Every xfailed test should have either a Bugzilla bug number or a GitHub issue number associated with it. For the remainder of this document we will use the word bug to mean either the Bugzilla bug or the GitHub issue associated with the xfailed test. You might be able to see this bug number right on the dashboard, as with the example above. If no bug number is displayed, click the name of the xfailed test and scroll to the error to see if a bug number is listed in the source code of the test. If a test is xfailed and does not have a bug associated with it, that is a problem that should be addressed. Please open a GitHub issue in the repo in which the test resides, explaining the problem. For example, “Test test_name in file_name.py is xfailed without a corresponding bug”. Someone will then investigate the issue you just opened.
We can learn a lot about an xfailed test by running it. Remember that an xfailed test is one that is expected to fail, but for a specific reason. We need to verify that the test is still failing for that reason. To do this you need to read through the associated bug to understand what behaviour the test exposes. Then run the test and watch it.
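One convenient way to do that locally is to tell pytest to ignore the xfail marker, so the test reports a normal failure with a full traceback you can compare against the bug. The file and test names below are hypothetical, and any project-specific options the suite needs (a base URL, credentials, and so on) still apply:

  pytest --runxfail tests/test_search.py::test_search_results_are_sorted

With --runxfail the marker is ignored for reporting purposes, so the test’s real error is shown instead of a quiet xfailed result.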
Does it appear to fail because of the bug? If so, you are done with this test. The xfail is valid and you can move on to the next test. If it fails for another reason, that is an indication that the current xfail may not be valid. The xfail reason for the test should then be updated: either replace the current reason with the new one, or, if the test may still be failing for the original reason as well, add the new reason alongside it (a sketch of both options follows below). Depending on how comfortable you are with the following steps, and how much time you have, you may choose to simply open a new GitHub issue in the test’s repo explaining that the test is now failing for another reason. Please be sure to include details about what you did to determine this.
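For reference, changing or adding to the reason is normally just an edit to the marker’s reason string; only the marker line is shown in this hypothetical before-and-after sketch:

  # Before: the original reason
  @pytest.mark.xfail(reason='Bug 123456 - search results are not sorted by relevance')

  # Option 1: the original bug no longer applies, so replace the reason
  @pytest.mark.xfail(reason='Bug 234567 - search results page intermittently times out')

  # Option 2: the test may still fail for the original reason too, so keep it and add the new one
  @pytest.mark.xfail(reason='Bug 123456 - search results are not sorted by relevance, '
                            'Bug 234567 - search results page intermittently times out')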
You may also choose to open a new bug which describes the new behaviour that is causing the test to fail, and then change or add to the xfail reason for the test by doing one of two things:
If the test passes then it might be an xpass, but we need to be sure. Check whether it is also currently passing on our CI server. You can do that yourself if you have access, or you can ask in #mozwebqa on irc.mozilla.org if you do not have access to the CI results. You should also check the bug to see if it is marked as resolved. If it is, that is another good indication that the test is likely xpassing. If it is not, perhaps something else is now allowing the test to pass, or maybe the bug has been fixed but not yet updated to reflect that. Add a comment on the bug to explain what you’ve done and that the test now seems to be passing. If you believe that the test is now passing reliably and should be un-xfailed, you can do one of two things:
Continue checking tests from the current project until you have checked them all. Then report that you have completed the xfail review of the tests for Project ‘X’, and choose another project to look at.