Testing FPC

Development of a compiler and a corresponding runtime library requires an extensive set of tests to ensure that the addition of a new feature or the fixing of a bug doesn't break any existing code. In FPC this task is handled by the testsuite, located in the tests subdirectory of the top-level directory (SVN WebView).

Overview

The testsuite consists of the makefiles and utilities that execute the tests, as well as the tests themselves. For this article the makefiles and utilities are used as-is, so they won't be examined further.

Detailed information about the testsuite and its parameters can be found in the ReadMe.

Directory Structure

The tests themselves are separated into multiple directories:

  • test: systematic tests, usually written as part of test-driven development
  • tbs: tests derived from bug tracker bugs that require successful compilation and (optionally) a successful run
  • tbf: tests derived from bug tracker bugs that require compilation to fail
  • webtbs: tests derived from non-tracker reports or from ideas that came up while fixing something, requiring successful compilation and (optionally) a successful run
  • webtbf: tests derived from non-tracker reports or from ideas that came up while fixing something, requiring compilation to fail

Not all files in these directories are eligible for the testsuite, however, as they need to fulfill two formal criteria:

  • correct prefix (t for test, tb for tbs and tbf, tw for webtbs and webtbf)
  • correct file extension (only '.pp' is accepted)

The remaining part of the name is freely chosen; however, there are some general guidelines for this as well:

  • tests in test have a short name describing the feature under test (e.g. rhlp for record helper tests and genfunc for generic function tests)
  • tests in tbf and tbs have a unique, four-digit index (e.g. tb0001.pp)
  • tests in webtbs and webtbf have their corresponding bug ID from Mantis as name, possibly followed by a small letter as suffix if there are multiple tests for the same report (e.g. tw12345.pp and tw12345a.pp)
  • units that are used by some test have the prefix u, ub or uw (depending on the test directory) plus the name of the first test they are used for (e.g. if both thlp3.pp and thlp7.pp use a unit, the unit is named uhlp3.pp; see the sketch after this list)
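
For illustration, here is a minimal sketch of such a test/unit pair. The file names follow the convention above, while the contents are purely hypothetical:

  { uhlp3.pp: shared unit, named after the first test that uses it }
  unit uhlp3;

  interface

  const
    Answer = 42;

  implementation

  end.

  { thlp3.pp: one of the tests using the shared unit }
  uses uhlp3;
  begin
    if Answer <> 42 then
      Halt(1);
  end.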

Test Results

When a test is executed, its execution essentially consists of two stages: compilation and run. The run stage is optional (see Test Options below) or may not apply at all (e.g. if the test is expected to fail in the compilation stage already).

The compilation stage can have the following outcomes:

  • successful compilation
  • compilation failed with a compiler error
  • compilation failed with an internal error
  • compilation failed with another exception

If the test is expected to compile, the first outcome is treated as a success and the other three as failures. If the test is expected to fail compilation, the second outcome is treated as a success and the other three as failures.

The run stage can have the following outcomes:

  • successful run
  • failed run

The result of the run is determined by the exit code of the test's execution: 0 counts as a successful run, anything else as a failed one.
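
As an illustration, here is a minimal sketch of a run-stage test; the checked computation is hypothetical, the mechanism is simply the exit code:

  { the run stage fails if the program exits with a non-zero code }
  var
    S: string;
  begin
    Str(123, S);
    if S <> '123' then
      Halt(1); { signal a failed run }
    { reaching the end exits with code 0: a successful run }
  end.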

Test Files

A test file is either a program, a library or a unit file. Library and unit tests are never run, only compiled (see the sketch below), while a program is always run unless told otherwise with the {%NORUN} option.
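
For example, a test that is a unit is compiled but never executed. A minimal sketch with a hypothetical name and contents:

  { tfoo1.pp: a unit test; passing simply means it compiles }
  unit tfoo1;

  interface

  function TimesTwo(X: LongInt): LongInt;

  implementation

  function TimesTwo(X: LongInt): LongInt;
  begin
    TimesTwo := X * 2;
  end;

  end.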

Test Options

Various options can be used to influence the behavior of the testsuite. These options are given at the top of the test file using Borland-style comments that start with a %. The first comment without that marker ends the option parsing (usually that's the mode switch), as the sketch after the following list shows.

The following is a non-exhaustive list of options that are supported:

  • NORUN: don't execute the resulting test program (useful if merely the compilation should be tested)
  • SKIP: don't compile the test at all
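
A sketch of a typical test header; the code below the directives is hypothetical, the point is the ordering: the % comments come first, and the first regular comment (here the mode switch) ends the option parsing:

  {%NORUN}
  {$mode objfpc} { first comment without %: option parsing stops here }

  var
    I: LongInt;
  begin
    I := 0;
  end.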

Running the testsuite

To run the testsuite, a complete build (compiler, RTL, packages) should have been done beforehand. Then, after changing into the tests directory, execute the following command (an illustrative invocation follows the parameter list):

make clean full TEST_FPC=/path/to/ppc TEST_CPU=<cpu> TEST_OS=<os> TEST_OPT=<options> CHUNKSIZE=50 -j <count>
  • TEST_FPC: absolute path to the compiler that should be tested (mandatory)
  • TEST_CPU: name of the CPU for which the tests are run (default is the given compiler's default CPU)
  • TEST_OS: name of the OS for which the test should be run (default is the given compiler's default OS)
  • TEST_OPT: options that should be passed to the compiler
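
CHUNKSIZE and -j control parallelism: -j is make's usual parallel job switch, and CHUNKSIZE groups the tests into chunks of the given size so that the jobs can be distributed. As an illustrative example (the path and values are hypothetical), testing a freshly built x86_64 Linux compiler could look like:

make clean full TEST_FPC=/home/user/fpc/compiler/ppcx64 TEST_CPU=x86_64 TEST_OS=linux CHUNKSIZE=50 -j 4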

At the end of the testsuite run there will be an ordered list of failed tests in tests/output/<cpu>-<os>/faillist that can be compared with the results of other runs.

Recommended Workflow

Before doing any changes:

  • complete build
  • testsuite run
  • save faillist file (e.g. to tests/output/unmodified/faillist)

Whenever necessary:

  • complete build (fix any failures that might occur here)
  • testsuite run
  • compare the new faillist with the stored one (due to the tests being ordered, a diff or similar is sufficient; see the example after this list)
  • fix failures in compiler/RTL/packages
  • rinse and repeat
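
For example, assuming the unmodified faillist was saved as suggested above and the tests ran for a hypothetical x86_64-linux target:

diff tests/output/unmodified/faillist tests/output/x86_64-linux/faillist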