This documentation is for Mozilla developers who need to work on Mozilla's build system. The contents below will explain the basic concepts and terminology of the build system and how to do common tasks such as compiling components and creating jar files.

For many developers, typing mach build to build the tree is sufficient to work with the source tree. This document explains how the build system works.


This document is not intended for developers who just want to build Mozilla. For that, see the Build Documentation.


When you type mach build to build the tree, three high-level phases occur within the build system:

  1. System detection and validation
  2. Preparation of the build backend
  3. Invocation of the build backend

Phase 1: configure

Phase 1 centers around the configure script. The configure script is a bash shell script. It is generated from a file called configure.in, which is written in M4 and processed using Autoconf 2.13 to create the final configure script. You don't have to worry about how you obtain a configure file: the build system does this for you.

The primary job of configure is to determine characteristics of the system and compiler, apply options passed into it, and validate that everything looks OK to build. The primary output of the configure script is an executable file in the object directory called config.status. configure also produces some additional files. However, the most important file in terms of architecture is config.status.

The existence of a config.status file may be familiar to those who have worked with Autoconf before. However, Mozilla's config.status is different from many other config.status files, as it's written in Python! Instead of our configure script producing a shell script, it generates a Python script.

Python is prevalent in Mozilla's build system. If we need to write code for the build system, we do it in Python instead of editing a makefile.

config.status contains two parts:

  1. Data structures describing the build configuration.
  2. Code that consumes those data structures and prepares an appropriate build backend.

These data structures describe the current state of the system and what the existing build configuration looks like. For example, a data structure defines which compiler to use, how to invoke it, which application features are enabled, and so on. You are encouraged to open config.status to have a look!
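
As a rough illustration, the data structures in config.status have roughly the following shape. This is a hypothetical sketch: the variable names substs, defines, topsrcdir, and topobjdir, and all of the values, are examples, not the contents of any real config.status.

```python
# Hypothetical sketch of the kinds of data structures a config.status
# script holds. The real file is generated by configure; these names
# and values are illustrative only.

# Substitutions: how to find and invoke the toolchain.
substs = {
    "CC": "/usr/bin/clang",
    "CXXFLAGS": ["-O2", "-g"],
}

# Defines: preprocessor-level feature switches.
defines = {
    "MOZ_DEBUG": False,
}

# Where the source and object directories live.
topsrcdir = "/home/user/mozilla-central"
topobjdir = "/home/user/mozilla-central/obj-x86_64"
```

Because config.status is plain Python, any consumer that can import or execute it gets the whole configuration as ordinary dictionaries and strings.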

After we have emitted a config.status file, we proceed to Phase 2.

Phase 2: Build Backend Preparation and the Build Definition

Once configure has determined what the current build configuration is, we need to apply this to the source tree so we can actually build.

What essentially happens is the automatically produced config.status Python script is executed as soon as configure has generated it. config.status is charged with the task of telling a tool how to build the tree. To do this, config.status must first scan the build system definition.

The build system definition consists of moz.build files in the tree. There is roughly one moz.build file per directory or per set of related directories. Each moz.build file defines how its part of the build configuration works. For example, it says "I want these C++ files compiled" or "look for additional information in these directories". config.status starts with the moz.build file in the root of the source tree and then recurses into all referenced files and directories. As the moz.build files are read, data structures describing the overall build system definition are emitted. These data structures are then read by a build backend generator, which converts them into files, function calls, and so on. In the case of a make backend, the generator writes out makefiles.

When config.status runs, you'll see the following output:

Reticulating splines...
Finished reading 1096 files into 1276 descriptors in 2.40s
Backend executed in 2.39s
2188 total backend files. 0 created; 1 updated; 2187 unchanged
Total wall time: 5.03s; CPU time: 3.79s; Efficiency: 75%

What this is saying is that a total of 1,096 files were read. Altogether, 1,276 data structures describing the build configuration were derived from them. It took 2.40s wall time just to read these files and produce the data structures. The 1,276 data structures were fed into the build backend, which then determined it had to manage 2,188 files derived from those data structures. Most of the files already existed and didn't need to be changed. However, one was updated as a result of the new configuration. The whole process took 5.03s. Of this, only 3.79s were in CPU time. This means we spent roughly 25% of the time waiting on I/O.
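
The efficiency figure in that output is simply CPU time divided by wall time, which we can verify against the sample numbers above:

```python
# Reproduce the efficiency figure from the sample config.status output.
wall_time = 5.03  # total elapsed seconds
cpu_time = 3.79   # seconds the CPU was actually busy

efficiency = cpu_time / wall_time  # fraction of elapsed time doing work
io_wait = 1 - efficiency           # rough share spent waiting (e.g. on I/O)

print(f"Efficiency: {efficiency:.0%}")  # prints "Efficiency: 75%"
```

A low efficiency number is a hint that the backend generation step is I/O bound rather than CPU bound.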

Phase 3: Invocation of the Build Backend

When most people think of the build system, they think of Phase 3. This is where we take all the code in the tree and produce the Firefox binary program file or the application you are creating. Phase 3 effectively takes whatever was generated by Phase 2 and runs it. Since the dawn of Mozilla, this has been done with the make tool, consuming makefiles. However, with the transition to moz.build files, you may eventually see non-make build backends, such as Tup or Visual Studio.

When building the tree, most of the time is spent in Phase 3. This is when header files are installed, C++ files are compiled, files are preprocessed, and so on.

Recursive Make Backend

The recursive make backend is the tried-and-true backend used to build the tree. It's what's been used since the dawn of Mozilla. Essentially, there are makefiles in each directory. make starts processing the makefile in the root directory and then recursively descends into child directories until it's done. But there's more to the process than that.

The recursive make backend divides the source tree into tiers. A tier is a grouping of related directories containing makefiles of their own. For example, there is a tier for the Netscape Portable Runtime (nspr), one for the JavaScript engine, one for the core Gecko platform, one for the XUL app being built, and so on.

The main moz.build file defines the tiers and the directories in each tier. In reality, the main moz.build files include other files, such as /toolkit/toolkit.mozbuild, which define the tiers. They do this via the add_tier_dir() function.

At build time, the tiers are traversed in the order they are defined. Typically, the traversal order looks something like base, nspr, nss, js, platform, app.

Each tier consists of three sub-tiers: export, libs, and tools. These sub-tiers roughly correspond to the actions of pre-build, main-build, and post-build. This naming, however, can be misleading, because all three sub-tiers are part of the build.

When make is invoked, it starts at the export sub-tier of the first tier, and traverses all the directories in that tier. Then, it does the same thing for the libs sub-tier and, subsequently, the tools sub-tier. It then moves on to the next tier and continues until no tiers remain.
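
The traversal just described can be sketched as a pair of nested loops. The tier and directory names below are illustrative, not the real build definition:

```python
# Simplified sketch of recursive-make tier traversal.
# Tier and directory names are made up for illustration.
tiers = {
    "nspr": ["nsprpub"],
    "js": ["js/src"],
    "platform": ["xpcom", "netwerk", "dom"],
}
subtiers = ("export", "libs", "tools")

order = []
for tier, dirs in tiers.items():   # tiers, in definition order
    for subtier in subtiers:       # export, then libs, then tools
        for d in dirs:             # every directory in the tier
            order.append((tier, subtier, d))

# Note: a tier's export sub-tier finishes across all of its directories
# before its libs sub-tier starts, and a whole tier finishes before the
# next tier begins.
```

This is why a change in an early tier (like nspr) is visible to everything built in later tiers.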

To view information about the tiers, you can execute the following special make targets:

  Command                          Effect
  make echo-tiers                  Show the final list of tiers.
  make echo-dirs                   Show the list of non-static source directories to iterate over, as determined by the tier list.
  make echo-variable-STATIC_DIRS   Show the list of static source directories to iterate over, as determined by the tier list.

moz.build Files

moz.build files are how each part of the source tree defines how it is integrated with the build system. Think of each moz.build file as a data structure telling the build system what to do.

During build backend generation, all moz.build files relevant to the current build configuration are read and converted into files and actions used to build the tree (such as makefiles). In this section, we'll talk about how moz.build files actually work.

An individual moz.build file is actually a Python script. However, moz.build files are unlike most Python scripts: the execution environment is strictly controlled, so moz.build files can only perform a limited set of operations. moz.build files are limited to performing the following actions:

  1. Calling functions that are explicitly made available to the environment.
  2. Assigning to a well-defined set of variables whose names are UPPERCASE.
  3. Creating new variables whose names are not UPPERCASE (this includes defining functions).

Anything else, such as importing modules, is disallowed by the sandbox.

The most important actions of moz.build files are #1 and #2 from the above list. These are how the execution of a moz.build file tells the build system what to do. For example, you can assign to the DIRS list to define which directories to traverse into when looking for additional moz.build files.
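
For instance, a minimal moz.build file is just ordinary Python assignment. The directory names below are hypothetical:

```python
# A minimal, hypothetical moz.build file.
# Assigning to the UPPERCASE variable DIRS tells the build system
# which subdirectories to recurse into.
DIRS = [
    "public",
    "src",
]
```

Because this is plain Python, you can also compute values (say, append to DIRS conditionally) as long as you stay within the sandbox's rules.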

The output of the execution of an individual moz.build file is a Python dictionary. This dictionary contains the UPPERCASE variables directly assigned to and the special variables indirectly assigned to by calling functions exported to the execution environment. This is what we were referring to when we said you can think of moz.build files as data structures.

UPPERCASE Variables and Functions

The set of special symbols available to moz.build files is centrally defined and is under the purview of the build configuration module. To view the variables and functions available in your checkout of the tree, run the following:

mach mozbuild-reference

Or, you can view the raw definitions in the source tree under /python/mozbuild/mozbuild/frontend/.

How Processing Works

For most developers, knowing that moz.build files are Python scripts that are executed and emit Python dictionaries describing the build configuration is enough. If you insist on knowing more, this section is for you.

All the code for reading moz.build files lives under /python/mozbuild/mozbuild/frontend/. mozbuild is the name of the Python package that contains most of the code defining how the build system works. moz.build files and the mozbuild package are different things, so be careful not to confuse the two. One module in this package contains code for a generic Python sandbox; this code is used to restrict the environment moz.build files are executed under. Another contains the code that defines the sandbox proper (the MozbuildSandbox class) and the code for traversing a tree of moz.build files (the BuildReader class) by following DIRS and TIERS variables. A BuildReader is instantiated with a configuration, is told to read the source tree, and then emits a stream of MozbuildSandbox instances corresponding to the executed moz.build files.
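
As a toy illustration of the sandbox-and-reader idea, the sketch below evaluates moz.build-style content in a restricted namespace and recurses into DIRS. This is not the real MozbuildSandbox/BuildReader API: the in-memory "tree", the helper names, and the CPP_SOURCES variable are all invented for the example.

```python
# Toy sketch of sandboxed moz.build evaluation with DIRS-driven recursion.
# The real classes live in the mozbuild package; this is a simplification.

# An in-memory "source tree": relative path -> moz.build contents.
TREE = {
    "": "DIRS = ['app', 'lib']",
    "app": "CPP_SOURCES = ['main.cpp']",
    "lib": "CPP_SOURCES = ['util.cpp']",
}

def read_mozbuild(path):
    """Execute one file in a fresh namespace; keep only UPPERCASE names."""
    sandbox = {}
    # Empty __builtins__ is a crude stand-in for the real sandboxing.
    exec(TREE[path], {"__builtins__": {}}, sandbox)
    return {k: v for k, v in sandbox.items() if k.isupper()}

def read_tree(path=""):
    """Yield (path, result) for this file and, recursively, its DIRS."""
    result = read_mozbuild(path)
    yield path, result
    for child in result.get("DIRS", []):
        yield from read_tree(f"{path}/{child}".lstrip("/"))

results = dict(read_tree())
```

The stream of per-file dictionaries produced here corresponds, loosely, to the stream of MozbuildSandbox instances the real BuildReader emits.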

The MozbuildSandbox stream produced by the BuildReader is typically fed into the TreeMetadataEmitter class. The role of TreeMetadataEmitter is to convert the low-level MozbuildSandbox dictionaries into higher-level, function-specific data structures. These data structures are instances of classes defined in the mozbuild.frontend package; each class defines a specific aspect of the build system, such as directories to traverse, C++ files to compile, and so on. The TreeMetadataEmitter output is a stream of instances of these classes.

The stream of class instances emitted from TreeMetadataEmitter is then fed into a build backend. A build backend is an instance of a child class of BuildBackend (defined in the mozbuild.backend package, not mozbuild.frontend). The child class implements methods for processing individual class instances, as well as common hook points, such as when processing has finished. The recursive make backend is one such implementation of a BuildBackend.

Although we call the base class BuildBackend, the class doesn't need to be focused on building at all. If you wanted to create a consumer that performed a line count of all C++ files or generated a Clang compilation database, for example, this would be an acceptable use of a BuildBackend.
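
A consumer in that spirit might look like the following sketch. The class and method names (CppSourceCounter, consume, finish) and the dictionary-shaped input objects are invented for illustration; the real BuildBackend interface lives in the mozbuild.backend package.

```python
# Sketch of a non-building "backend": count C++ sources per directory.
# Names and object shapes are hypothetical, not the mozbuild API.

class CppSourceCounter:
    def __init__(self):
        self.counts = {}

    def consume(self, obj):
        """Process one emitted data object (here: a dict with a dir and sources)."""
        sources = [s for s in obj.get("CPP_SOURCES", []) if s.endswith(".cpp")]
        if sources:
            self.counts[obj["dir"]] = len(sources)

    def finish(self):
        """Hook point called once the stream is exhausted."""
        return sum(self.counts.values())

counter = CppSourceCounter()
for obj in ({"dir": "app", "CPP_SOURCES": ["main.cpp"]},
            {"dir": "lib", "CPP_SOURCES": ["util.cpp", "io.cpp"]}):
    counter.consume(obj)
total = counter.finish()  # 3 C++ files across 2 directories
```

The point is that anything able to consume the emitted stream, one object at a time plus a completion hook, can reuse the frontend without producing build files at all.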

Technically, we don't need to feed the TreeMetadataEmitter output into a BuildBackend: it's possible to create your own consumer. However, a BuildBackend provides a common framework from which to author consumers. Along the same vein, you don't need to use TreeMetadataEmitter to consume MozbuildSandbox instances. Nor do you need to use BuildReader to traverse the files. This is just the default framework we've established for our build system.

Legacy Content


Makefile basics

Makefiles can be quite complicated, but Mozilla provides a number of built-in rules that should enable most makefiles to be simpler. Complete documentation for make is beyond the scope of this document, but is available in the GNU make manual.

One concept you will need to be familiar with is variables in make. Variables are defined by the syntax VARIABLE = VALUE, and the value of a variable is referenced by writing $(VARIABLE). All variables are strings.

All makefiles in Mozilla have the same basic format:

# ... Standard makefile header goes here (see Standard_Makefile_Header) ...

# ... Main body of makefile goes here ...

include $(topsrcdir)/config/rules.mk

# ... Additional rules go here ...

One other frequently used variable not specific to a particular build target is DIRS. DIRS is a list of subdirectories of the current directory to recursively build in. Subdirectories are traversed after their parent directories. For example, you could have:

DIRS = \
  public \
  resources \
  src \
  $(NULL)
This example demonstrates another concept called continuation lines. A backslash as the last character on a line allows the variable definition to be continued on the next line. The extra whitespace is compressed. The terminating $(NULL) is there for consistency; it allows you to add and remove lines without worrying about whether the last line has a trailing backslash or not.

Makefile examples

Building libraries

There are three main types of libraries that are built in Mozilla: components (shared libraries loaded through XPCOM), non-component shared libraries, and static libraries.

Non-component shared libraries

A non-component shared library is useful when there is common code that several components need to share and sharing it through XPCOM is not appropriate or not possible. As an example, below is a portion of the makefile for libmsgbaseutil, which is linked against by all of the main news components:

 DEPTH           = ../../..
 topsrcdir       = @top_srcdir@
 srcdir          = @srcdir@
 VPATH           = @srcdir@

 include $(DEPTH)/config/autoconf.mk

 MODULE          = msgbaseutil
 LIBRARY_NAME    = msgbaseutil
 SHORT_LIBNAME   = msgbsutl

Notice that IS_COMPONENT is not set here, as it would be for a component. When IS_COMPONENT is not set, a shared library will be created and installed to dist/bin.

Static libraries

As mentioned above, static libraries are most commonly used as intermediate steps to building a larger library (usually a component). This lets you spread out the source files in multiple subdirectories. Static libraries may also be linked into an executable. As an example, below is a portion of the makefile from layout/base/src:

 DEPTH           = ../../..
 topsrcdir       = @top_srcdir@
 srcdir          = @srcdir@
 VPATH           = @srcdir@

 include $(DEPTH)/config/autoconf.mk

 MODULE          = layout
 LIBRARY_NAME    = gkbase_s

 # REQUIRES and CPPSRCS omitted here for brevity #

 # we don't want the shared lib, but we want to force the creation of a static lib.
 FORCE_STATIC_LIB = 1

 include $(topsrcdir)/config/rules.mk

The key here is setting FORCE_STATIC_LIB = 1. This creates libgkbase_s.a on UNIX and gkbase_s.lib on Windows, and copies it to dist/lib. Now, let's take a look at how to link several static libraries together to create a component:

 DEPTH           = ../..
 topsrcdir       = @top_srcdir@
 srcdir          = @srcdir@
 VPATH           = @srcdir@

 include $(DEPTH)/config/autoconf.mk

 MODULE          = layout
 LIBRARY_NAME    = gklayout
 MODULE_NAME     = nsLayoutModule

 CPPSRCS         = \
                 nsLayoutModule.cpp \
                 $(NULL)

 SHARED_LIBRARY_LIBS = \
                 $(DIST)/lib/$(LIB_PREFIX)gkhtmlbase_s.$(LIB_SUFFIX) \
                 $(DIST)/lib/$(LIB_PREFIX)gkhtmldoc_s.$(LIB_SUFFIX) \
                 $(DIST)/lib/$(LIB_PREFIX)gkhtmlforms_s.$(LIB_SUFFIX) \
                 $(DIST)/lib/$(LIB_PREFIX)gkhtmlstyle_s.$(LIB_SUFFIX) \
                 $(DIST)/lib/$(LIB_PREFIX)gkhtmltable_s.$(LIB_SUFFIX) \
                 $(DIST)/lib/$(LIB_PREFIX)gkxulbase_s.$(LIB_SUFFIX) \
                 $(DIST)/lib/$(LIB_PREFIX)gkbase_s.$(LIB_SUFFIX) \
                 $(DIST)/lib/$(LIB_PREFIX)gkconshared_s.$(LIB_SUFFIX) \
                 $(DIST)/lib/$(LIB_PREFIX)gkxultree_s.$(LIB_SUFFIX) \
                 $(DIST)/lib/$(LIB_PREFIX)gkxulgrid_s.$(LIB_SUFFIX) \
                 $(NULL)

 include $(topsrcdir)/config/rules.mk

SHARED_LIBRARY_LIBS is set to a list of static libraries, which should be linked into this shared library. Note the use of LIB_PREFIX and LIB_SUFFIX to make this work on all platforms.
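
The reason LIB_PREFIX and LIB_SUFFIX exist is that static library file names differ by platform. Conceptually (the table and function below are an illustrative sketch; the real values come from configure):

```python
# Why LIB_PREFIX / LIB_SUFFIX: static-library naming differs by platform.
# A conceptual sketch; configure supplies the real values per target.
PLATFORM_NAMING = {
    "unix": ("lib", "a"),     # gkbase_s -> libgkbase_s.a
    "windows": ("", "lib"),   # gkbase_s -> gkbase_s.lib
}

def static_lib_filename(name, platform):
    """Compose a platform-appropriate static library file name."""
    prefix, suffix = PLATFORM_NAMING[platform]
    return f"{prefix}{name}.{suffix}"
```

Writing $(DIST)/lib/$(LIB_PREFIX)gkbase_s.$(LIB_SUFFIX) in the makefile performs exactly this composition, so the same rule works everywhere.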

Building jar files

jar files are used for packaging chrome files (XUL, JavaScript, and CSS). For more information on jar packaging, see the JAR Manifests documentation. Here, we will only cover how to set up a makefile to package jars. Below is an example:

 DEPTH           = ../../../..
 topsrcdir       = @top_srcdir@
 srcdir          = @srcdir@
 VPATH           = @srcdir@

 include $(DEPTH)/config/autoconf.mk

 include $(topsrcdir)/config/rules.mk

As you can see, there are no extra variables to define. If a jar.mn file exists in the same directory as this makefile, it will automatically be processed. Although the common practice is to have a resources directory that contains the jar.mn and chrome files, you may also put a jar.mn file in a directory that creates a library, in which case it will be processed.

See Makefile - variables for information about specific variables and how to use them.
