Testing FPC
Developing a compiler and a corresponding runtime library requires an extensive set of tests to ensure that adding a new feature or fixing a bug doesn't break any existing code. In FPC this task is handled by the testsuite located in the tests subdirectory of the top-level directory (SVN WebView).
Overview
The testsuite consists of the makefiles and utilities that execute the tests as well as the tests themselves. For this article the makefiles and utilities will be used as-is, so they won't be looked at any further.
Detailed information about the testsuite and its parameters can be found in the ReadMe.
Directory Structure
The tests themselves are separated into multiple directories:
- test: systematic tests, usually developed by test-driven development
- tbs: tests derived from non-tracker reports or from ideas that came up while fixing something, requiring successful compilation and (optionally) a successful run
- tbf: tests derived from non-tracker reports or from ideas that came up while fixing something, requiring the compilation to fail
- webtbs: tests derived from bug tracker reports, requiring successful compilation and (optionally) a successful run
- webtbf: tests derived from bug tracker reports, requiring the compilation to fail
Additionally, the tests subdirectories of selected directories in packages are searched. These are specified in $fpcdir/tests/Makefile.fpc with the variable TESTPACKAGESDIRECTDIRS (the directories are simply separated by spaces). If you change this, you need to regenerate the Makefile using fpcmake -Tall.
However, not all files in these directories are eligible for the testsuite, as they need to fulfill two formal criteria:
- correct prefix (t for test, tb for tbs and tbf, tw for webtbs and webtbf)
- correct file extension (only '.pp' is accepted)
The remainder of the test's name can be freely chosen; however, there are some general guidelines for this as well:
- tests in test have a short name describing the feature they test (e.g. rhlp for record helper tests and genfunc for generic function tests)
- tests in tbf and tbs have a unique, four-digit index
- tests in webtbs and webtbf are named after their corresponding bug ID from Mantis, possibly followed by a small letter as suffix if there are multiple tests for the same report
- units that are used by some test have the prefix u, ub or uw, depending on the test directory, followed by the name of the first test they are used for (e.g. if both thlp3.pp and thlp7.pp use a unit, the unit is named uhlp3.pp); see the sketch after this list
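To illustrate the naming conventions, here is a hedged sketch of a test together with a helper unit. The file names thlp3.pp and uhlp3.pp follow the rules above, but the contents are made up and do not correspond to the actual tests in the repository:

{ uhlp3.pp -- helper unit, named after the first test that uses it }
unit uhlp3;
interface
function DoubleIt(X: LongInt): LongInt;
implementation
function DoubleIt(X: LongInt): LongInt;
begin
  DoubleIt := X * 2;
end;
end.

{ thlp3.pp -- a test program using the unit }
program thlp3;
uses
  uhlp3;
begin
  if DoubleIt(21) <> 42 then
    Halt(1);  { a non-zero exit code reports failure, see Test Results }
end.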
Test Results
The execution of a test essentially consists of two stages, namely compilation and run. The run stage is optional (see Test Options) or not applicable at all (e.g. if the test is already expected to fail in the compilation stage).
The compilation stage can have the following outcomes:
- successful compilation
- compilation failed with a compiler error
- compilation failed with an internal error
- compilation failed with another exception
If the test is expected to succeed, the first outcome is treated as a success and the other three as failures. If the test is expected to fail, the second outcome is treated as a success and the other three as failures.
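For example, a test that is expected to fail in the compilation stage is marked with the FAIL directive (see Test Options below). A minimal sketch, with a hypothetical file name:

{ %FAIL }
{ hypothetical webtbf/tw99999.pp: the compilation must end with a
  compiler error for the test to be counted as a success }
program tw99999;
var
  I: LongInt;
begin
  I := 'not a number';  { type error: a string cannot be assigned to a LongInt }
end.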
The run stage can have the following outcomes:
- successful run
- failed run
The result of the run is determined by the exit code of the test's execution: an exit code of 0 counts as a successful run, any other value as a failed one.
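A minimal sketch of how a test reports its run result through the exit code (the file name is hypothetical):

{ hypothetical test/texit1.pp }
program texit1;
begin
  if 2 + 2 <> 4 then
    Halt(1);  { a non-zero exit code marks the run stage as failed }
  { reaching the end of the program yields exit code 0, i.e. a successful run }
end.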
Test Files
A test file is either a program, a library or a unit file. Library and unit tests are never run, only compiled, while a program is always run unless told otherwise with the {%NORUN} option.
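A hedged sketch of a compile-only program test using {%NORUN} (again with a hypothetical file name):

{ %NORUN }
{ hypothetical test/tcomp1.pp: only the compilation of the declarations matters }
program tcomp1;
type
  TPoint = record
    X, Y: LongInt;
  end;
var
  P: TPoint;
begin
  P.X := 0;
  P.Y := P.X;
end.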
Test Options
Various options can be used to influence the behavior of the testsuite. These options are given at the top of the test file using a Borland-style comment that starts with a %. The first comment without that marker ends the option parsing (usually that's the mode switch).
The following is a non-exhaustive list of the supported options:
- NORUN: don't execute the resulting test program (useful if merely the compilation should be tested)
- SKIP: don't compile the test at all
- CONFIGFILE: specifies a config file in $fpcdir/tests/config that is to be copied for the test. This option can take one or two parameters: in the two-parameter form the first name is the source file name and the second the destination file name; in the one-parameter form source and destination file names are equal.
For a complete overview of all possible options, see the ReadMe under the heading Test directives.
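Putting this together, the top of a test file might look as follows. The file name is hypothetical; %OPT, which passes extra command-line options to the compiler, is one of the directives covered in the ReadMe:

{ %NORUN }
{ %OPT=-Sc }
{$mode objfpc}  { the first non-% comment -- here the mode switch -- ends the option parsing }
program topt1;
begin
end.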
Running the testsuite
To run the testsuite, a complete build (compiler, RTL, packages) should have been done beforehand. Then, after changing into the tests directory, execute the following command:
make full TEST_FPC=/path/to/ppc TEST_CPU=<cpu> TEST_OS=<os> TEST_OPT=<options> CHUNKSIZE=50 -j <count>
- TEST_FPC: absolute path to the compiler that should be tested (mandatory)
- TEST_CPU: name of the CPU for which the tests are run (default is the given compiler's default CPU)
- TEST_OS: name of the OS for which the tests should be run (default is the given compiler's default OS)
- TEST_OPT: options that should be passed to the compiler
- CHUNKSIZE and -j <count>: the testsuite can use multiple CPU cores in parallel. It does so by launching <count> instances of the dotest program, each of which will execute CHUNKSIZE tests before terminating (after which a new instance of dotest is started to check CHUNKSIZE more tests). The test collection is split into chunks because not all tests take the same amount of time to compile and run. To get optimal throughput (avoiding some cores sitting idle near the end of the testsuite run), choose a smaller CHUNKSIZE as your number of cores increases. The default, CHUNKSIZE=100, performs fairly well for up to 4 cores. Don't make CHUNKSIZE too small, though, or the overhead of starting more instances of dotest will erase any gains.
At the end of the testsuite run, an ordered list of failed tests is available in tests/output/<cpu>-<os>/faillist; it can be compared with the faillists of other runs.
Recommended Workflow
Before doing any changes:
- complete build
- testsuite run
- save faillist file (e.g. to tests/output/unmodified/faillist)
To test your changes:
- complete build (fix any failures that might occur here)
- testsuite run
- compare the new faillist with the stored one (since the tests are ordered, a diff or similar is sufficient)
- fix failures in compiler/RTL/packages
- rinse and repeat