You've just written a feature and (hopefully!) want to test it. Or you've decided that an existing feature doesn't have enough tests and want to contribute some. But where do you start? You've looked around and found references to things like "xpcshell" or "web-platform-tests" or "talos". What code, features or platforms do they all test? Where do their feature sets overlap? In short, where should your new tests go? This document is a starting point for those who want to learn about Mozilla's automated testing tools and procedures. Below you'll find a short summary of each framework we use, and some questions to help you pick the framework(s) you need for your purposes.
Note: If you read this article and still have questions, try reading some of the further links provided below for more information, and hop on to the #ateam or #qa IRC channels or Mozilla Forums to ask questions.
These tests are found within the mozilla-central tree, along with the product code. They are all run when a changeset is pushed to mozilla-central, mozilla-inbound, or try, with the results showing up on Treeherder. They can also be run on their own.
The tests are run on machines in a very large pool. For the most part, all tests of a particular type run on the same pool of machines, regardless of branch. One substantial exception is that try builds are performed on a pool isolated from all other builds.
Note: refer also to how test harnesses work for more information on how these tests are run.
Lint tests help to ensure better quality, less error-prone code by analysing the code with a linter.
| Symbol | Name | File type | Desktop | Mobile | Process | Environment | Browser profile | Privilege | What is tested |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ES | ESLint | JS | Yes | Yes | N/A | Terminal | – | N/A | JavaScript is analysed for correctness. |
| mocha(EPM) | ESLint-plugin-mozilla | JS | Yes | No | N/A | Terminal | – | N/A | The ESLint plugin rules. |
| f8 | flake8 | Python | Yes | Yes | N/A | Terminal | – | N/A | Python analysed for style and correctness. |
| W | wpt lint | All | Yes | No | N/A | Terminal | – | N/A | web-platform-tests analysed for style and manifest correctness. |
| android-checkstyle | android-checkstyle | Java | No | Yes | N/A | Terminal | – | N/A | Java analysed for coding style consistency. |
| android-lint | android-lint | Java | No | Yes | N/A | Terminal | – | N/A | Java analysed for common Android coding errors. |
| android-findbugs | android-findbugs | Java | No | Yes | N/A | Terminal | – | N/A | Java analysed for common Java coding errors. |
| WR(tidy) | WebRender servo-tidy | All | Yes | No | N/A | Terminal | – | N/A | Code in gfx/wr is run through servo-tidy. |
| Symbol | Name | File type | Desktop | Mobile | Parent process | Child process | Environment | Browser profile | Low privilege | High privilege | What is tested |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R(J) | JS Reftest | JS | Yes | Yes | N/A | N/A | JSShell | – | N/A | N/A | The JavaScript engine's implementation of the JavaScript language. |
| R(C) | Crashtest | HTML, XHTML, SVG | Yes | Yes | – | Yes | Content | Yes | Yes | – | That pages load without crashing, asserting, or leaking. |
| R(R) | Reftest | HTML, XHTML, SVG | Yes | Yes | – | Yes | Content | Yes | Yes | – | That pages are rendered (and thus also laid out) correctly. |
| R(Rs) | Reftest sanity | ? | – | – | ? | ? | ? | ? | ? | ? | ? |
| Cpp | Compiled code | C++, Python | Yes | Yes | N/A | N/A | Terminal | – | N/A | N/A | Code not exposed to JavaScript (using C++ executables), and the state of the source tree (using Python scripts). |
| X | xpcshell | JS | Yes | Yes | Yes | Allow | XPCShell [Note 1] | Allow | – | Yes | Low-level code exposed to JavaScript, such as XPCOM components. |
| M(oth) | IPC | ? | – | – | ? | ? | ? | ? | ? | ? | Plugin APIs, particularly out-of-process plugins. |
| M(oth) | Accessibility (mochitest-a11y) | ? | Yes | – | ? | Yes | Content | Yes | ? | ? | Accessibility interfaces. |
|  | Mochitest plain | HTML, XHTML, XUL | Yes | Yes | – | Yes | Content | Yes | Yes | Allow | Features exposed to JavaScript in web content, like DOM and other Web APIs, where the APIs do not require elevated permissions to test. |
| M(oth) | Mochitest chrome | HTML, XHTML, XUL | Yes | – | Allow | Yes | Content | Yes | – | Yes | Code requiring UI or JavaScript interactions with privileged objects. |
| M(bc) | Mochitest browser-chrome | JS | Yes | – | Yes | Allow | Browser | Yes | – | Yes | How the browser UI interacts with itself and with content. |
| M(dt) | Mochitest devtools | JS | Yes | – | Yes | Allow | Browser | Yes | – | Yes | Firefox developer tools. Based on Mochitest browser-chrome. |
| M(rc) | Mochitest robocop | Java | – | Yes | ? | ? | ? | ? | ? | ? | Native UI of Android devices (sends events to the front end). Some use a combination of Java and JavaScript. |
| M(gl) | Mochitest WebGL | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? |
| M(remote) | Mochitest Remote Protocol | JS | Yes | No | Yes | Allow | Browser | Yes | – | Yes | Firefox Remote Protocol (implements parts of the Chrome DevTools protocol). Based on Mochitest browser-chrome. |
| SM(...) | SpiderMonkey automation | JS, C++ | Yes | No | N/A | N/A | JSShell | – | Yes | – | SpiderMonkey engine shell tests and JSAPI tests. |
| W | web-platform-tests | HTML, XHTML | Yes | – | – | Yes | Content | Yes | Yes | – | Standardized features exposed to ECMAScript in web content; tests are shared with other vendors. |
| Wr | web-platform-tests reftests | HTML, XHTML | Yes | – | – | Yes | Content | Yes | Yes | – | Layout and graphic correctness for standardized features; tests are shared with other vendors. |
| Mn | Marionette | Python | Yes | – | ? | ? | Content, Browser | ? | – | Yes | Large out-of-process functional integration tests and tests that communicate with multiple remote Gecko processes. |
| Mn-h | Marionette harness tests | Python | – | – | – | – | – | – | – | – | The Marionette Python runner. |
| JP | Jetpack | JS | ? | ? | ? | ? | ? | ? | ? | ? | Add-on SDK code included in Firefox. If you need help interpreting test failures or fixing bugs in the SDK code, talk to the SDK team in #jetpack. |
| V | Valgrind | ? | Linux only | – | ? | ? | ? | ? | ? | ? | Memory-related errors, such as heap block overflows/underflows, use of uninitialized memory, bad frees, and memory leaks. |
| Fxfn | Firefox UI Tests | Python | Yes | – | ? | ? | Content, Browser | Yes | – | Yes | Integration tests with a focus on the user interface and localization. |
| tt(c) | telemetry-tests-client | Python | Yes | No | – | – | Content, Browser | Yes | – | Yes | Integration tests for the Firefox Telemetry client. |
| TV | Test Verification (test-verify) | JS | Yes | Yes | ? | ? | ? | ? | ? | ? | Uses other test harnesses (mochitest, reftest, xpcshell) to perform extra testing on new/modified tests. |
| TVw | Test Verification for web-platform-tests (test-verify-wpt) | JS, HTML, Python | Yes | No | ? | ? | ? | ? | ? | ? | Uses wpt test harnesses to perform extra testing on new/modified web-platform-tests. |
| A(...) | Autophone | HTML | – | Yes | ? | ? | Content | Yes | ? | ? | Performance and selected unit tests on Android devices. |
| android-test | android-test | Java | – | Yes | N/A | N/A | Java | No | N/A | N/A | Runs Android local unit tests. |
| WR(wrench) | WebRender standalone tests | YAML, Rust | Yes | No | N/A | N/A | Terminal | N/A | N/A | N/A | WebRender Rust code (as a standalone module, without Gecko integration). |
| Symbol | Name | File type | Desktop | Mobile | B2G | Parent process | Child process | Environment | Browser profile | Low privilege | High privilege | What is tested |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Gb | gaia-build-integration | ? | – | – | Yes | ? | ? | ? | ? | ? | ? | The Gaia build system. |
| Gip | gaia-ui-tests | Python | – | – | Yes | ? | ? | ? | ? | ? | ? | Gaia integration. Marionette- and Python-based. |
| Gij | gaia-integration | JS | – | – | Yes | ? | ? | ? | ? | ? | ? | Gaia integration. Marionette- and JavaScript-based. |
| Li | gaia-linter | ? | – | – | Yes | ? | ? | ? | ? | ? | ? | Gaia JavaScript code formatting. |
| Mnw | Marionette WebAPI | JS | – | – | Emul. only | ? | ? | ? | ? | ? | ? | WebAPIs. This takes advantage of the Firefox OS emulator's ability to virtualize much of the hardware on a B2G device. |
Most test suites are supported only on a subset of the available platforms and operating systems, unless otherwise noted.
Talos is the framework used for performance testing. Up-to-date information on the set of tests and what they do can be found at https://wiki.mozilla.org/Performance_sheriffing/Talos/Tests
Generally, you should pick the lowest-level framework that you can. If you are testing JavaScript but don't need a window, use XPCShell or even JSShell. If you're testing page layout, try to use Reftest. The advantage in lower level testing is that you don't drag in a lot of other components that might have their own problems, so you can home in quickly on any bugs in what you are specifically testing.
Here's a series of questions to ask about your work when you want to write some tests.
If the functionality is exposed to JavaScript, and you don't need a window, consider XPCShell. If not, you'll probably have to use compiled-code tests, which can test pretty much anything but are difficult to write properly and are often less suitable than a domain-specific harness. In general, this should be your last option for a new test, unless you have to test something that is not exposed to JavaScript.
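For example, a minimal xpcshell test is just a JS file (listed in an xpcshell.ini manifest) that uses the harness-provided add_task and Assert helpers. The sketch below is purely illustrative; the file name and pref name are made up:

```js
// test_example.js - a hypothetical xpcshell test.
// xpcshell tests run in a bare XPCOM shell: no window, no browser UI.
"use strict";

add_task(async function test_pref_roundtrip() {
  // Exercise a low-level API exposed to JavaScript (here, the pref service).
  Services.prefs.setBoolPref("test.example.enabled", true);
  Assert.ok(
    Services.prefs.getBoolPref("test.example.enabled"),
    "the pref value round-trips"
  );
  Services.prefs.clearUserPref("test.example.enabled");
});
```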
If you've found pages that crash Firefox, add a crashtest to make sure future versions don't experience this crash (assertion or leak) again. Note that this may lead to more tests once the core problem is found.
Reftest is your best bet, if possible.
Use Talos!
If it's mobile UI, look into Robocop. For desktop, try browser chrome tests (soon to be split out of Mochitest), or Marionette if the application also needs to be restarted or tested with localized builds.
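As a rough sketch of a desktop browser-chrome test (the file name, URL, and assertion are placeholders; the helpers shown are the ones the browser-chrome harness normally provides):

```js
// browser_example.js - a hypothetical browser-chrome (Mochitest) test.
// These tests run with full chrome privileges inside the browser window.
"use strict";

add_task(async function test_open_and_close_tab() {
  // BrowserTestUtils and gBrowser are available in browser-chrome tests.
  const tab = await BrowserTestUtils.openNewForegroundTab(
    gBrowser,
    "https://example.com/"
  );
  is(gBrowser.selectedTab, tab, "the newly opened tab is selected");
  BrowserTestUtils.removeTab(tab);
});
```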
For mobile UI, look at Robocop. For mobile features that are purely Java, look at android-test. There are some specific features that Mochitest or Reftest can cover. Browser-chrome tests do not run on Android. If you want to test performance, Talos runs fine with a few limitations (use the --noChrome option) and smaller cycles (e.g. 10 iterations instead of 20).
Most test jobs now expose an environment variable named $MOZ_UPLOAD_DIR. If this variable is set during automated test runs, you can drop additional files into this directory, and they will be uploaded to a web server when the test finishes. The URLs to retrieve the files will be output in the test log.
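As a rough sketch, privileged test code (for example an xpcshell or browser-chrome test) could save an extra artifact like this. The file name and its contents are made up, and the PathUtils/IOUtils helpers assume a reasonably recent Gecko:

```js
// Save an extra artifact next to the test logs, if the upload dir is set.
add_task(async function save_debug_artifact() {
  const env = Cc["@mozilla.org/process/environment;1"].getService(
    Ci.nsIEnvironment
  );
  const uploadDir = env.get("MOZ_UPLOAD_DIR");
  if (uploadDir) {
    // Anything written here is uploaded when the job finishes.
    const path = PathUtils.join(uploadDir, "my-debug-dump.txt");
    await IOUtils.writeUTF8(path, "some diagnostic output\n");
  }
});
```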
First ask yourself whether these prefs need to be enabled for all tests or just a subset of tests (e.g. to enable a feature). If the answer is the latter, try to set the pref as locally as possible to the tests that need it. Here are some options:
All variants of mochitest can set prefs in their manifests. For example, to set a pref for all tests in a manifest:
```ini
[DEFAULT]
prefs =
  my.awesome.pref=foo,
  my.other.awesome.pref=bar,

[test_foo.js]
[test_bar.js]
```
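If a pref is only needed by a single test, it can instead be set from within the test itself using SpecialPowers.pushPrefEnv, which the harness reverts automatically when the test finishes. The pref name below is illustrative:

```js
add_task(async function test_with_pref() {
  // The pref applies only for the remainder of this test,
  // then is popped automatically.
  await SpecialPowers.pushPrefEnv({
    set: [["my.awesome.pref", "foo"]],
  });
  // ... exercise the code that reads my.awesome.pref ...
});
```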
Most test suites define prefs in user.js files that live under testing/profiles. Each directory is a profile that contains a user.js file with a number of prefs defined in it. Test suites will then merge one or more of these basic profiles into their own profile at runtime. To see which profiles apply to which test suites, you can inspect testing/profiles/profiles.json. Profiles at the beginning of the list get overridden by profiles at the end of the list.
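For reference, each of these user.js files is simply a list of user_pref() calls, along these lines (the pref names here are invented for illustration):

```js
// testing/profiles/<some-profile>/user.js (illustrative entries)
user_pref("my.awesome.pref", "foo");
user_pref("dom.example.feature.enabled", true);
```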
Because this system makes it hard to get an overall view of which profiles are set for any given test suite, a handy profile utility was created:

```
$ cd testing/profiles
$ ./profile -- --help
usage: profile [-h] {diff,sort,show,rm} ...
$ ./profile show mochitest          # prints all prefs that will be set in mochitest
$ ./profile diff mochitest reftest  # prints differences between the mochitest and reftest suites
```
Note: JS engine tests do not use testing/profiles yet; instead, set prefs here.
Note: The previous page was largely out of date. The new page is intended as a guide to the testing frameworks at Mozilla; some of the old page should be split out to a "Running Automated Tests" page to complement the "Developing Tests" page (which itself needs to be updated).
Also see the "Continuous Integration" page.