=======================
The Genode build system
=======================
Norman Feske
Abstract
########
The Genode OS Framework comes with a custom build system that is designed for
the creation of highly modular and portable systems software. Understanding
its basic concepts is pivotal for using the full potential of the framework.
This document introduces those concepts and the best practices of putting them
to good use. Besides building software components from source code, common
and repetitive development tasks are the testing of individual components
and the integration of those components into complex system scenarios. To
streamline such tasks, the build system is accompanied by special tooling
support. This document introduces those tools.
Build directories and repositories
##################################
The build system is supposed to never touch the source tree. The procedure of
building components and integrating them into system scenarios is performed
within a distinct build directory. One build directory targets a specific
platform, i.e., a kernel and hardware architecture. Because the source tree is
decoupled from the build directory, one source tree can have many different
build directories associated with it, each targeting a different platform.
The recommended way for creating a build directory is the use of the
'create_builddir' tool located at '<genode-dir>/tool/'. By starting the tool
without arguments, its usage information will be printed. For creating a new
build directory, one of the listed target platforms must be specified.
Furthermore, the location of the new build directory has to be specified via
the 'BUILD_DIR=' argument. For example:
! cd <genode-dir>
! ./tool/create_builddir linux_x86 BUILD_DIR=/tmp/build.linux_x86
This command will create a new build directory for the Linux/x86 platform
at _/tmp/build.linux_x86/_.
Build-directory configuration via 'build.conf'
==============================================
The fresh build directory will contain a 'Makefile', which is a symlink to
_tool/builddir/build.mk_. This makefile is the front end of the build system
and is not supposed to be edited. Besides the makefile, there is an _etc/_
subdirectory that contains the build-directory configuration. For most
platforms, there is only a single _build.conf_ file, which defines the parts of
the Genode source tree incorporated in the build process. Those parts are
called _repositories_.
The repository concept allows for keeping the source code well separated for
different concerns. For example, the platform-specific code for each target
platform is located in a dedicated _base-<platform>_ repository. Also, different
abstraction levels and features of the system are residing in different
repositories. The _etc/build.conf_ file defines the set of repositories to
consider in the build process. At build time, the build system overlays the
directory structures of all repositories specified via the 'REPOSITORIES'
declaration to form a single logical source tree. By changing the list of
'REPOSITORIES', the view of the build system on the source tree can be altered.
The _etc/build.conf_ as found in a freshly created build directory will list the
_base-<platform>_ repository of the platform selected at the 'create_builddir'
command line as well as the 'base', 'os', and 'demo' repositories needed for
compiling Genode's default demonstration scenario. Furthermore, there are a
number of commented-out lines that can be uncommented for enabling additional
repositories.
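For illustration, the 'REPOSITORIES' declaration of a Linux/x86 build directory
may look similar to the following sketch, assuming that 'GENODE_DIR' points to
the root of the source tree (the exact paths depend on your Genode version and
the repositories you wish to enable):
! REPOSITORIES  = $(GENODE_DIR)/base-linux
! REPOSITORIES += $(GENODE_DIR)/base
! REPOSITORIES += $(GENODE_DIR)/os
! REPOSITORIES += $(GENODE_DIR)/demo
! #REPOSITORIES += $(GENODE_DIR)/libports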
Note that the order of the repositories listed in the 'REPOSITORIES' declaration
is important. Front-most repositories shadow subsequent repositories. This
makes the repository mechanism a powerful tool for tweaking existing repositories:
By adding a custom repository in front of another one, customized versions of
single files (e.g., header files or target description files) can be supplied to
the build system without changing the original repository.
Building targets
================
To build all targets contained in the list of 'REPOSITORIES' as defined in
_etc/build.conf_, simply issue 'make'. This way, all components that are
compatible with the build directory's base platform will be built. In practice,
however, only some of those components may be of interest. Hence, the build
can be tailored to those components which are of actual interest by specifying
source-code subtrees. For example, using the following command
! make core server/nitpicker
the build system builds all targets found in the 'core' and 'server/nitpicker'
source directories. You may specify any number of subtrees to the build
system. As indicated by the build output, the build system revisits
each library that is used by each target found in the specified subtrees.
This is very handy for developing libraries because instead of re-building
your library and then your library-using program, you just build your program
and that's it. This concept even works recursively, which means that libraries
may depend on other libraries.
In practice, you won't ever need to build the _whole tree_ but only the
targets that you are interested in.
Cleaning the build directory
============================
To remove all but kernel-related generated files, use
! make clean
To remove all generated files, use
! make cleanall
Neither 'clean' nor 'cleanall' removes any files from the _bin/_
subdirectory. This makes _bin/_ a safe place for files that are
unrelated to the build process, yet required for the integration stage, e.g.,
binary data.
Controlling the verbosity of the build process
==============================================
To understand the inner workings of the build process in more detail, you can
tell the build system to display each directory change by specifying
! make VERBOSE_DIR=
If you are interested in the arguments that are passed to each invocation of
'make', you can make them visible via
! make VERBOSE_MK=
Furthermore, you can observe each single shell-command invocation by specifying
! make VERBOSE=
Of course, you can combine these verbosity toggles for maximizing the noise.
Enabling parallel builds
========================
To utilize multiple CPU cores during the build process, you may invoke 'make'
with the '-j' argument. If manually specifying this argument becomes an
inconvenience, you may add the following line to your _etc/build.conf_ file:
! MAKE += -j<N>
This way, the build system will always use '<N>' CPUs for building.
Caching inter-library dependencies
==================================
The build system allows you to repeat the last build without performing any
library-dependency checks by using:
! make again
The use of this feature can significantly improve the work flow during
development because in contrast to source code, library dependencies rarely
change. So the time needed for re-creating inter-library dependencies at each
build can be saved.
Repository directory layout
###########################
Each Genode repository has the following layout:
Directory  | Description
------------------------------------------------------------
'doc/'     | Documentation specific to the repository
------------------------------------------------------------
'etc/'     | Default configuration of the build process
------------------------------------------------------------
'mk/'      | The build system
------------------------------------------------------------
'include/' | Globally visible header files
------------------------------------------------------------
'src/'     | Source code and target build descriptions
------------------------------------------------------------
'lib/mk/'  | Library build descriptions
Creating targets and libraries
##############################
Target descriptions
===================
A good starting point is to look at the init target. The source code of init is
located at _os/src/init/_. In this directory, you will find a target description
file named _target.mk_. This file contains the build instructions and is
usually very simple. The build process is controlled by defining the following
variables.
Build variables to be defined by you
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:'TARGET': is the name of the binary to be created. This is the
only *mandatory variable* to be defined in a _target.mk_ file.
:'REQUIRES': expresses the requirements that must be satisfied in order to
build the target. You find more details about the underlying mechanism in
Section [Specializations].
:'LIBS': is the list of libraries that are used by the target.
:'SRC_CC': contains the list of '.cc' source files. The default search location
for source files is the directory where the _target.mk_ file resides.
:'SRC_C': contains the list of '.c' source files.
:'SRC_S': contains the list of assembly '.s' source files.
:'SRC_BIN': contains binary data files to be linked to the target.
:'INC_DIR': is the list of include search locations. Directories should
always be appended by using +=. Never use an assignment!
:'EXT_OBJECTS': is a list of Genode-external objects or libraries. This
variable is mostly used for interfacing Genode with legacy software
components.
Rarely used variables
---------------------
:'CC_OPT': contains additional compiler options to be used for '.c' as
well as for '.cc' files.
:'CC_CXX_OPT': contains additional compiler options to be used for the
C++ compiler only.
:'CC_C_OPT': contains additional compiler options to be used for the
C compiler only.
Specifying search locations
~~~~~~~~~~~~~~~~~~~~~~~~~~~
When specifying search locations for header files via the 'INC_DIR' variable or
for source files via 'vpath', relative pathnames must not be used. Instead,
you can use the following variables to reference locations within the
source-code repository, where your target lives:
:'REP_DIR': is the base directory of the current source-code repository.
Specifying locations relative to the base of the repository is normally
not needed in _target.mk_ files but is commonly used in library descriptions.
:'PRG_DIR': is the directory where your _target.mk_ file resides. This
variable is always to be used when specifying a relative path.
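To illustrate how these declarations play together, the following sketch shows
what a simple _target.mk_ file might look like. The target name, source files,
library name, and include location are made-up placeholders:
! TARGET   = hello_server
! SRC_CC   = main.cc service.cc
! LIBS     = base
! INC_DIR += $(PRG_DIR)/include
The source files are expected to reside next to the _target.mk_ file because
this directory is the default search location for source code.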
Library descriptions
====================
In contrast to target descriptions that are scattered across the whole source
tree, library descriptions are located at the central place _lib/mk_. Each
library corresponds to a _<libname>.mk_ file. The base name of the description
file is the name of the library. Therefore, no 'TARGET' variable needs to be set.
The source-code locations are expressed as '$(REP_DIR)'-relative 'vpath'
commands.
Library-description files support the following additional declarations:
:'SHARED_LIB = yes': declares that the library should be built as a shared
object rather than a static library. The resulting object will be called
_<libname>.lib.so_.
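As an illustration, a hypothetical _lib/mk/greeting.mk_ file with its source
code located at _src/lib/greeting/_ within the same repository might look like
this:
! SRC_CC   = greeting.cc
! INC_DIR += $(REP_DIR)/include
! vpath %.cc $(REP_DIR)/src/lib/greeting
Adding 'SHARED_LIB = yes' to such a file would turn the library into a shared
object named _greeting.lib.so_.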
Specializations
===============
Building components for different platforms typically involves portions of code
that are tied to certain aspects of the target platform. For example, a target
platform may be characterized by
* A kernel API such as L4v2, Linux, L4.sec,
* A hardware architecture such as x86, ARM, Coldfire,
* A certain hardware facility such as a custom device, or
* Other properties such as software license requirements.
Each of these attributes expresses a specialization of the build process. The
build system provides a generic mechanism to handle such specializations.
The _programmer_ of a software component knows the properties on which his
software relies and thus specifies these requirements in his build description
file.
The _user/customer/builder_ decides to build software for a specific platform
and defines the platform specifics via the 'SPECS' variable per build
directory in _etc/specs.conf_. In addition to an (optional) _etc/specs.conf_
file within the build directory, the build system incorporates the first
_etc/specs.conf_ file found in the repositories as configured for the
build directory. For example, for a 'linux_x86' build directory, the
_base-linux/etc/specs.conf_ file is used by default. The build directory's
'specs.conf' file can still be used to extend the 'SPECS' declarations, for
example to enable special features.
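For example, to enable an additional feature-specific spec value in a given
build directory, a line like the following could be appended to the build
directory's _etc/specs.conf_ file ('my_feature' being a placeholder for an
actual spec name):
! SPECS += my_feature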
Each '<specname>' in the 'SPECS' variable instructs the build system to
* Include the 'make'-rules of a corresponding _base/mk/spec-<specname>.mk_
file. This enables the customization of the build process for each platform.
* Search for _<libname>.mk_ files in the _lib/mk/<specname>/_ subdirectory.
This way, we can provide alternative implementations of one and the same
library interface for different platforms.
Before a target or library gets built, the build system checks if the 'REQUIRES'
entries of the build description file are satisfied by entries of the 'SPECS'
variable. The compilation is executed only if each entry in the 'REQUIRES'
variable is present in the 'SPECS' variable as supplied by the build directory
configuration.
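For example, a component that can only be built for x86-based platforms could
state this constraint in its _target.mk_ file as follows. If 'x86' is not
listed in the build directory's 'SPECS', the target is skipped:
! REQUIRES = x86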
Building tools to be executed on the host platform
===================================================
Sometimes, software requires custom tools that are used to generate source
code or other ingredients for the build process, for example IDL compilers.
Such tools won't be executed on top of Genode but on the host platform
during the build process. Hence, they must be compiled with the tool chain
installed on the host, not the Genode tool chain.
The Genode build system accommodates the building of such host tools as a side
effect of building a library or a target. Even though it is possible to add
the tool compilation step to a regular build description file, it is
recommended to introduce a dedicated pseudo library for building such tools.
This way, the rules for building host tools are kept separate from rules that
refer to Genode programs. By convention, the pseudo library should be named
_<package>_host_tools_ and the host tools should be built at
_<build-dir>/tool/<package>/_. With _<package>_, we refer to the name of the
software package the tool belongs to, e.g., qt5 or mupdf. To build a tool
named _<tool>_, the pseudo library contains a custom make rule like the
following:
! $(BUILD_BASE_DIR)/tool/<package>/<tool>:
! 	$(MSG_BUILD)$(notdir $@)
! 	$(VERBOSE)mkdir -p $(dir $@)
! 	$(VERBOSE)...build commands...
To let the build system trigger the rule, add the custom target to the
'HOST_TOOLS' variable:
! HOST_TOOLS += $(BUILD_BASE_DIR)/tool/<package>/<tool>
Once the pseudo library for building the host tools is in place, it can be
referenced by each target or library that relies on the respective tools via
the 'LIBS' declaration. The tool can be invoked by referring to
'$(BUILD_BASE_DIR)/tool/<package>/<tool>'.
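For example, a description file that needs a generated header could carry a
custom rule along the lines of the following sketch, where the file name and
the tool invocation are mere placeholders:
! generated.h: $(BUILD_BASE_DIR)/tool/<package>/<tool>
! 	$(VERBOSE)$(BUILD_BASE_DIR)/tool/<package>/<tool> > $@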
For an example of using custom host tools, please refer to the mupdf package
found within the libports repository. During the build of the mupdf library,
two custom tools, fontdump and cmapdump, are invoked. The tools are built via
the _lib/mk/mupdf_host_tools.mk_ library description file. The actual mupdf
library (_lib/mk/mupdf.mk_) has the pseudo library 'mupdf_host_tools' listed
in its 'LIBS' declaration and refers to the tools relative to
'$(BUILD_BASE_DIR)'.
Automated integration and testing
#################################
Genode's cross-kernel portability is one of the prime features of the
framework. However, each kernel takes a different route when it comes to
configuring, integrating, and booting the system. Hence, for using a particular
kernel, profound knowledge about the boot concept and the kernel-specific tools
is required. To streamline the testing of Genode-based systems across the many
different supported kernels, the framework comes equipped with tools that
relieve you from these peculiarities.
Run scripts
===========
Using so-called run scripts, complete Genode systems can be described in a
concise and kernel-independent way. Once created, a run script can be used
to integrate and test-drive a system scenario directly from the build directory.
The best way to get acquainted with the concept is reviewing the run script
for the 'hello_tutorial' located at _hello_tutorial/run/hello.run_.
Let's revisit each step expressed in the _hello.run_ script (a condensed
sketch of such a script follows the list):
* Building the components needed for the system using the 'build' command.
This command instructs the build system to compile the targets listed in
the brace block. It has the same effect as manually invoking 'make' with
the specified argument from within the build directory.
* Creating a new boot directory using the 'create_boot_directory' command.
The integration of the scenario is performed in a dedicated directory at
_<build-dir>/var/run/<run-script-name>/_. When the run script is finished,
this directory will contain all components of the final system. In the
following, we will refer to this directory as run directory.
* Installing the Genode 'config' file into the run directory using the
'install_config' command. The argument to this command will be written
to a file called 'config' in the run directory, which is picked up by
Genode's init process.
* Creating a bootable system image using the 'build_boot_image' command.
This command copies the specified list of files from the _<build-dir>/bin/_
directory to the run directory and executes the platform-specific steps
needed to transform the content of the run directory into a bootable
form. This form depends on the actual base platform and may be an ISO
image or a bootable ELF image.
* Executing the system image using the 'run_genode_until' command. Depending
on the base platform, the system image will be executed using an emulator.
For most platforms, Qemu is the tool of choice used by default. On Linux,
the scenario is executed by starting 'core' directly from the run
directory. The 'run_genode_until' command takes a regular expression
as argument. If the log output of the scenario matches the specified
pattern, the 'run_genode_until' command returns. If specifying 'forever'
as argument (as done in 'hello.run'), this command will never return.
If a regular expression is specified, an additional argument determines
a timeout in seconds. If the regular expression does not match until
the timeout is reached, the run script will abort.
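Condensed to its essence, a run script in the spirit of _hello.run_ combines
these commands roughly as follows. The target names are abbreviated and the
init configuration is elided:
! build { core init hello }
! create_boot_directory
! install_config {
! 	<config>
! 		...
! 	</config>
! }
! build_boot_image { core init hello }
! run_genode_until forever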
Please note that the _hello.run_ script does not contain kernel-specific
information. Therefore it can be executed from the build directory of any base
platform by using:
! make run/hello
When invoking 'make' with an argument of the form 'run/*', the build system
will look in all repositories for a run script with the specified name. The run
script must be located in the 'run/' subdirectory of one of the repositories
and have the file extension '.run'.
For a more comprehensive run script, _os/run/demo.run_ serves as a good
example. This run script describes Genode's default demo scenario. As seen in
'demo.run', parts of init's configuration can be made dependent on the
platform's properties expressed as spec values. For example, the PCI driver
gets included in init's configuration only on platforms with a PCI bus. For
appending conditional snippets to the _config_ file, there exists the 'append_if'
command, which appends a snippet to the configuration only if a given condition
holds. To test for a spec value, the command '[have_spec <spec-value>]' is used
as condition. Analogously to how 'append_if' appends strings, there exists
'lappend_if' to conditionally append list items. The latter command is used to
conditionally include binaries in the list of boot modules passed to the
'build_boot_image' command.
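The following lines, paraphrased from _demo.run_, illustrate the pattern. The
configuration snippet is accumulated in a Tcl variable (here named 'config')
that is later handed to 'install_config', and the driver binary is added to
the list of boot modules only if the 'pci' spec value is present:
! append_if [have_spec pci] config {
! 	<start name="pci_drv">
! 		<resource name="RAM" quantum="1M"/>
! 		<provides><service name="PCI"/></provides>
! 	</start> }
! lappend_if [have_spec pci] boot_modules pci_drv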
The run mechanism explained
===========================
Under the hood, run scripts are executed by an expect interpreter. When the
user invokes a run script via _make run/<run-script>_, the build system invokes
the run tool at _<genode-dir>/tool/run_ with the run script as argument. The
run tool is an expect script that does nothing more than define several
commands used by run scripts, include a platform-specific script snippet
called the run environment ('env'), and finally include the actual run script.
Whereas _tool/run_ provides the implementations of generic and largely
platform-independent commands, the _env_ snippet included from the platform's
respective _base-<platform>/run/env_ file contains all platform-specific
commands. For reference, the most simplistic run environment is the one at
_base-linux/run/env_, which implements the 'create_boot_directory',
'install_config', 'build_boot_image', and 'run_genode_until' commands for Linux
as base platform. For the other platforms, the run environments are far more
elaborate and document precisely how the integration and boot concept works
on each platform. Hence, the _base-<platform>/run/env_ files are not only
necessary parts of Genode's tooling support but also serve as a resource for
understanding the peculiarities of each kernel.
Using run scripts to implement test cases
=========================================
Because run scripts are actually expect scripts, the whole arsenal of
language features of the Tcl scripting language is available to them. This
turns run scripts into powerful tools for the automated execution of test
cases. A good example is the run script at _libports/run/lwip.run_, which tests
the lwIP stack by running a simple Genode-based HTTP server on Qemu. It fetches
and validates an HTML page from this server. The run script makes use of a
regular expression as argument to the 'run_genode_until' command to detect the
state when the web server becomes ready, subsequently executes the 'lynx' shell
command to fetch the web site, and employs Tcl's support for regular
expressions to validate the result. The run script works across base platforms
that use Qemu as execution environment.
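As a sketch of this technique, a test-driving run script may wait for a
readiness message in the log output and bound the wait by a timeout, for
example (the log pattern is hypothetical):
! run_genode_until {.*ready to accept connections.*} 30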
To get the most out of the run mechanism, a basic understanding of the Tcl
scripting language is required. Furthermore, the functions provided by
_tool/run_ and _base-<platform>/run/env_ should be studied.
Automated testing across base platforms
=======================================
To execute one or multiple test cases on more than one base platform, there
exists a dedicated tool at _tool/autopilot_. Its primary purpose is the
nightly execution of test cases. The tool takes a list of platforms and run
scripts as arguments and executes each run script on each platform. The
build directory for each platform is created at
_/tmp/autopilot.<username>/<platform>_ and the output of each run script is
written to a file called _<platform>.<run-script>.log_. On stderr, autopilot
prints statistics about whether or not each run script executed
successfully on each platform. If at least one run script failed, autopilot
returns a non-zero exit code, which makes it straightforward to include
autopilot in an automated build-and-test environment.