Low Level Virtual Machine (LLVM) support in FPC is currently ready for general testing.
FPC with an LLVM code generator backend is available on the git main branch. It currently supports the following targets:
- Darwin/AArch64 (macOS, untested on iOS)
You can use the LLVM toolchain from an official release available on the LLVM site, the version that ships with Xcode (macOS), or the one packaged by your Linux distribution.
FPC can generate LLVM code that can be compiled with LLVM versions from 7.0 up to at least 14.0. Use FPC's -il command line parameter to list all supported LLVM and Xcode versions; a version can be selected using the -Clv command line parameter.
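For example, assuming a program file hello.pas (the file name and version number here are illustrative; the first command shows what your compiler actually supports):

```shell
# List all LLVM and Xcode versions supported by this FPC build
fpc -il

# Generate code targeting a specific LLVM version
fpc -Clv11.0 hello.pas
```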
Build FPC with LLVM support
Build FPC as usual, but add LLVM=1 to the make command line, and:
- Specify the LLVM/Clang version you are using by adding the appropriate -Clv command line parameter to OPTNEW and FPCMAKEOPT, e.g. OPTNEW="-Clv11.0" FPCMAKEOPT="-Clv11.0" (LLVM/Clang 11.0) or OPTNEW="-ClvXcode-11.1" FPCMAKEOPT="-ClvXcode-11.1" (the Clang that ships with Xcode 11.1). Even if your LLVM version is newer than the latest one supported by FPC, it is quite possible that the generated code will still be compatible with it: if Clang accepts the generated code, it should work fine.
- FPC uses clang to "assemble" the generated LLVM IR. If your clang binary has a custom suffix (as is common on many Linux distributions), you can use the -XlS<x> parameter to specify this suffix. E.g. -XlS-7 in case the clang binary is called clang-7.
- If you use a custom installed LLVM version, specify the path to its Clang binary using FPC's -FD command line option (add it to the make OPTNEW options). The compiler will also use this path to find the LTO library if needed (see the section on link-time optimisation below).
- In this case, also add the path to the custom clang binary to your $HOME/.fpc.cfg file, so that it is found when you compile code after the "make" command has finished, e.g. like this:
#include /etc/fpc.cfg
#ifdef CPULLVM
-FD/Users/Me/clang+llvm-8.0.0-x86_64-apple-darwin/bin
-Clv8.0
#endif
- On Linux, also add the path to libgcc_s to your $HOME/.fpc.cfg file, in the same way as above. E.g. on Ubuntu 16.04: -Fl/usr/lib/gcc/x86_64-linux-gnu/5.
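Putting the above together, a build invocation might look like this (a sketch: the LLVM version, clang binary suffix and installation path are examples that you need to adapt to your system):

```shell
# Build against the distribution's clang-11 binary
make clean all LLVM=1 OPTNEW="-Clv11.0 -XlS-11" FPCMAKEOPT="-Clv11.0 -XlS-11"

# Or build against a custom LLVM installation
make clean all LLVM=1 \
  OPTNEW="-Clv11.0 -FD/opt/llvm-11.0/bin" \
  FPCMAKEOPT="-Clv11.0 -FD/opt/llvm-11.0/bin"
```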
Installing FPC with LLVM support
FPC built with LLVM support does not include the built-in code generator. Additionally, installing it will currently overwrite any FPC with the same version number that is installed in the same prefix (target directory). As the units generated by FPC with the LLVM backend are not compatible with those used by FPC with the built-in code generator, it is better to install such a version in a different prefix (target directory) for now. Use the make parameter INSTALL_PREFIX=/xxx/yyy to specify this prefix. As above, you can use a custom block in your $HOME/.fpc.cfg to specify the alternative unit directories:
#ifdef CPULLVM
-Fu/yourLLVMinstallPREFIX/lib/fpc/$fpcversion/units/$fpctarget/*
-Fu/yourLLVMinstallPREFIX/lib/fpc/$fpcversion/units/$fpctarget/rtl
#endif
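For example, to install into a separate prefix (the path is an example; use the same LLVM=1 setting as when building):

```shell
# Install the LLVM-enabled FPC into its own prefix so it does not
# overwrite an existing installation with the same version number
make install LLVM=1 INSTALL_PREFIX=/opt/fpc-llvm
```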
Using Link-Time Optimisation (LTO)
Link-time optimisation means that the entire program and all units it uses can potentially be optimised together as a whole.
To compile units with LTO support, or to compile a program or library with LTO, add -Clflto on the compiler command line. If you add this to OPT/OPTNEW when building FPC, all standard units and the compiler itself will also be built with LTO.
- If you compile a unit with LTO, it will also be compiled normally. This means you can use it both for LTO and for normal (static or smart) linking afterwards.
- The linker (ld) included with at least Xcode 9 through 10.1 contains various bugs that cause errors when the system unit is included in the LTO. You can work around this by specifying the -Clfltonosystem command line option in addition to -Clflto.
- On Linux, unless you are using your distribution's default LLVM version, you will also have to build the LLVMgold.so linker plugin and place it in the "lib" directory of your custom LLVM installation (it is not shipped as part of the official LLVM installers, because it needs to be built for the binutils you have on your system). See http://llvm.org/docs/GoldPlugin.html for more information.
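As an illustration (myprogram.pas is a placeholder for your own program):

```shell
# Compile with link-time optimisation
fpc -Clflto myprogram.pas

# On Xcode 9 through 10.1, additionally exclude the system unit from LTO
fpc -Clflto -Clfltonosystem myprogram.pas
```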
Using Address Sanitizer (asan)
Address sanitizer is an LLVM code generation pass that instruments all memory accesses to provide (relatively) fast detection of reading uninitialised memory, buffer overruns, and accessing freed memory. It is similar to Valgrind, but faster, and also more accurate because LLVM knows exactly where each local and global variable starts and ends, so it can detect a pointer to one local/global variable crossing into the memory of another local/global variable.
To use it, compile all units and the main program/library with the -Clfsanitize=address option.
- Some versions of address sanitizer automatically enable memory leak detection and abort the program if memory leaks are detected. You can disable this using export ASAN_OPTIONS=detect_leaks=0
- Address sanitizer is not supported for ARM (32 bit)
- Only a few platforms are supported right now, but more can be added. Windows will be harder, because it requires support for generating SEH-style LLVM-based exception handling code. Other platforms should be "reasonably" easy (the parameter handling needs to be generalised more though).
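A typical session might look like this (myprogram.pas is a placeholder):

```shell
# Compile all units and the program with address sanitizer instrumentation
fpc -Clfsanitize=address myprogram.pas

# Run with leak detection disabled, in case your asan version aborts on leaks
ASAN_OPTIONS=detect_leaks=0 ./myprogram
```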
To do
- add support for the (experimental) llvm.experimental.constrained.* intrinsics, to properly support floating point rounding modes and exceptions
- This has been partly done, but due to their experimental nature, not all of them are supported on all target platforms
- add support for automatically outlining try-blocks into "noinline" nested functions, so that hardware exceptions can be safely caught
- right now, you have to manually move the body of a try/except or try/finally block that may catch hardware exceptions to a separate procedure/function declared with the "noinline" modifier
- extend support for generating debug info. Currently supported:
- line information
- global and local variables, parameters and fields (but not all types are supported yet)
- add support for generating more meta-information for optimizations (e.g. range information about subrange types and enums)
- pass on more FPC-related code generation options to LLVM (currently, mainly -CfXXX and -Ox get passed on)
- add support for TLS-based threadvars
- directly generate bitcode (.bc) instead of bitcode assembly (.ll) files. The reason is that the LLVM project attempts to ensure backward compatibility for bitcode files, but not bitcode assembly. FPC currently generates bitcode assembly files anyway because they're much easier to create and debug (in the sense of debugging the compiler's LLVM code generator).
Frequently Asked Questions
- Will the FPC team, at some point in the future, adopt LLVM as the backend on all platforms?
- No, for various reasons:
- LLVM will almost certainly never support all targets that FPC supports (Gameboy Advance, OS/2, WinCE, ...), or at some point drop support for targets that FPC still supports (as already happened with Mac OS X for PowerPC/PowerPC64).
- the native FPC code generators require very little maintenance once written, as they are quite well insulated via abstractions from the rest of the compiler, so there is no reason to drop them
- FPC is a volunteer/hobby project, and several developers' main interest is working on native FPC code generators/optimisers
- you still need some of the hardest parts of the FPC native code generators anyway for LLVM (entry/exit code handling, parameter manager) to deal with assembler routines, and because LLVM does not fully abstract parameter passing
- a hardware architecture seldom changes in backward-compatibility breaking ways once released, while LLVM makes no such promises.
- LLVM changes a lot, all the time. That means there is a high chance of introducing regressions.
- FPC's native code generators are much faster than LLVM's (even disregarding the overhead of FPC generating bitcode and the LLVM tool chain reading it back in), so especially while developing it may be preferable to use FPC's own code generators
- Is it at all likely that an LLVM backend would produce significantly better optimised/faster code than FPC as it currently stands?
- It depends on the kind of code. The more pure maths (floating point or integer, especially in tight loops), the more likely it will be faster.
- Artificial benchmarks will also be much faster.
- For a typical database program, don't expect much change.
- Example 1: the compiler itself on x86-64 is about 10% faster when compiled with LLVM on an Intel Haswell processor, or 18% if you also enable link-time optimization.
- Example 2: A Viprinet benchmark compiled for ARMv7, running on an APM Mustang X-Gene board: 18% faster.