Low Level Virtual Machine (LLVM) support is currently ready for general testing.
FPC with an LLVM code generator backend is available in svn trunk. It currently supports the following targets:
You can use a version included in an LLVM release available from the official LLVM site, or use a version that comes with Xcode (macOS) or your Linux distribution.
FPC can generate LLVM code that can be compiled with LLVM versions 7.0 through at least 10.0.
Build FPC with LLVM support
First, build the compiler with LLVM support (using FPC 3.0.4 or 3.2.0 as the starting compiler):
- Build FPC as usual, but add LLVM=1 to the make command line.
- Specify the LLVM/Clang version you are using by adding the appropriate -Clv command line parameter to OPTNEW, e.g. OPTNEW="-Clv7.0" (Clang 7.0) or OPTNEW="-ClvXcode-9.3" (the Clang that ships with Xcode 9.3). The latest supported versions are currently Clang 8.0 and Xcode 10.1, but the generated code may well be compatible with later versions: if Clang accepts the generated code, it should work fine.
- FPC uses clang to "assemble" the generated LLVM IR. If your clang binary has a custom suffix (as is common on many Linux distributions), you can use the -XlS<x> parameter to specify this suffix. E.g. -XlS-7 in case the clang binary is called clang-7.
- If you use a custom installed LLVM version, specify the path to its Clang binary using FPC's -FD command line option (add it to the make OPTNEW options). The compiler will also use this path to find the LTO library if needed (see later).
- In this case, also add the path to the custom clang binary to your $HOME/.fpc.cfg file (so it will be found when you compile code with the compiler after the "make" command has finished), e.g. like this:
#include /etc/fpc.cfg
#ifdef CPULLVM
-FD/Users/Me/clang+llvm-8.0.0-x86_64-apple-darwin/bin
#endif
- On Linux, also add the path to libgcc_s to your $HOME/.fpc.cfg file in the same way, e.g. on Ubuntu 16.04: -Fl/usr/lib/gcc/x86_64-linux-gnu/5
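Putting the steps above together, a complete build invocation might look like the following. The Clang version, installation path, and starting-compiler name are illustrative assumptions; substitute the values for your own setup:

```shell
# Build FPC with the LLVM code generator enabled.
# -Clv selects the Clang/LLVM version; -FD points at a custom Clang
# installation (both values below are examples, not requirements).
make all LLVM=1 \
    OPTNEW="-Clv7.0 -FD/opt/clang+llvm-7.0.0/bin" \
    FPC=ppcx64
```

If your distribution's clang binary carries a version suffix (e.g. clang-7), add the corresponding -XlS-7 option to OPTNEW as described above.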
Using Link-Time Optimisation (LTO)
Link-time optimisation means that potentially the entire program and all units that it uses are all optimised together.
To compile units with LTO support, or to compile a program or library with LTO, add -Clflto on the compiler command line. If you add this to OPT when building FPC, all standard units and the compiler itself will also be built with LTO.
- If you compile a unit with LTO, it will also be compiled normally. This means you can use it both for LTO and for normal (static or smart) linking afterwards.
- The linker (ld) included with at least Xcode 9 through Xcode 10.1 contains various bugs that cause errors when the system unit is included in the LTO. You can work around this by specifying the -Clfltonosystem command line option in addition to -Clflto.
- On Linux, unless you are using your distribution's default LLVM version, you will also have to build the LLVMgold.so linker plugin and place it in the "lib" directory of your custom LLVM installation (it is not shipped as part of the official LLVM installers, because it needs to be built for the binutils you have on your system). See http://llvm.org/docs/GoldPlugin.html for more information.
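For example, compiling a program with LTO then comes down to something like the following (the program name is hypothetical):

```shell
# Compile the program and the units it uses with link-time optimisation.
fpc -Clflto myprogram.pas

# On macOS with an affected Xcode linker (9 through 10.1), additionally
# exclude the system unit from LTO to work around the linker bugs:
fpc -Clflto -Clfltonosystem myprogram.pas
```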
Future plans
- Only a few platforms are supported right now, but more can be added. Windows will be harder, because it requires support for generating SEH-style LLVM-based exception handling code. Other platforms should be "reasonably" easy (although the parameter handling needs to be generalised further).
- add support for the (experimental) llvm.experimental.constrained.* intrinsics, to properly support floating point rounding modes and exceptions
- add support for automatically outlining try-blocks into "noinline" nested functions, so that hardware exceptions can be safely caught
- right now, you have to manually move the body of a try/except or try/finally block that may catch hardware exceptions to a separate procedure/function declared with the "noinline" modifier
- add support for generating debug info
- add support for generating more meta-information for optimizations (e.g. range information about subrange types and enums)
- pass on more FPC-related code generation options to LLVM (currently, mainly -CfXXX and -Ox get passed on)
- add support for TLS-based threadvars
- directly generate bitcode (.bc) instead of bitcode assembly (.ll) files. The reason is that the LLVM project attempts to ensure backward compatibility for bitcode files, but not bitcode assembly. FPC currently generates bitcode assembly files anyway because they're much easier to create and debug (in the sense of debugging the compiler's LLVM code generator).
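To illustrate the manual try-block workaround mentioned above: the body of a try/except (or try/finally) block that may catch hardware exceptions is moved into a separate routine declared with the "noinline" modifier. The program below is an invented sketch; the routine and variable names are not from the FPC sources:

```pascal
{$mode objfpc}
program trydemo;

uses
  SysUtils;

{ The code that may trigger a hardware exception lives in a separate
  "noinline" routine, so LLVM cannot inline it into the try-block. }
procedure DivideAndPrint(a, b: longint); noinline;
begin
  writeln(a div b);  { may raise a hardware division-by-zero exception }
end;

begin
  try
    DivideAndPrint(10, 0);
  except
    on E: Exception do
      writeln('caught: ', E.ClassName);
  end;
end.
```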
Frequently Asked Questions
- Will the FPC team, at some point in the future, adopt LLVM as the backend for all platforms?
- No, for various reasons:
- LLVM will almost certainly never support all targets that FPC supports (Gameboy Advance, OS/2, WinCE, ...), or at some point drop support for targets that FPC still supports (as already happened with Mac OS X for PowerPC/PowerPC64).
- the native FPC code generators require very little maintenance once written, as they are quite well insulated via abstractions from the rest of the compiler, so there is no reason to drop them
- FPC is a volunteer/hobby project, and several developers' main interest is working on native FPC code generators/optimisers
- you still need some of the hardest parts of the FPC native code generators anyway for LLVM (entry/exit code handling, the parameter manager), both to deal with assembler routines and because LLVM does not fully abstract parameter passing
- a hardware architecture seldom changes in backward-compatibility breaking ways once released, while LLVM makes no such promises.
- LLVM changes a lot, all the time. That means there is a high chance of introducing regressions.
- FPC's native code generators are much faster than LLVM's (even ignoring the overhead of FPC generating bitcode and the LLVM tool chain reading it back in), so especially during development it may be preferable to use FPC's own code generators
- Is it at all likely that an LLVM backend would produce significantly better optimized/faster code than FPC as it currently stands?
- It depends on the kind of code. The more the code consists of pure maths (floating point or integer, especially in tight loops), the more likely it is to be faster.
- Artificial benchmarks will also generally be much faster.
- For a typical database program, don't expect much change.
- Example 1: the compiler itself on x86-64 is about 10% faster when compiled with LLVM on an Intel Haswell processor, or 18% if you also enable link-time optimization.
- Example 2: A Viprinet benchmark compiled for ARMv7, running on an APM Mustang X-Gene board: 18% faster.