Size Matters

Introduction

This article discusses program size. Over the years, misconceptions about the size of the binaries produced by Free Pascal and Lazarus have surfaced again and again. If you intend to comment on this subject in a discussion group, please read this FAQ first.

The main reason for writing this FAQ is that most discussions on the subject quickly get bogged down in details. And since people nowadays tend to label almost anything as "bloat", such arguments usually blur the overall picture rather than sharpen it.

Roughly speaking, what is a realistic expectation for the size of programs produced by Free Pascal or Lazarus?

  • Any program smaller than 1 MB should not be considered a problem.
    • Make sure you compile with smart linking enabled, that the binary is stripped, and that all libraries were also compiled with smart linking (see the example command after this list).
    • Do not use UPX routinely unless you have a good reason to (see below). UPX-compressed programs incur extra memory overhead at run time, and memory is still more expensive than disk space today.
  • The size of small applications is harder to estimate, because the size of the RTL depends on the operating system. In general, however, a standalone small program should not exceed 100 KB, and is often even below 50 KB.
    • On Windows, a 20 KB GUI program that uses the Windows API directly is achievable;
    • The SysUtils unit contains an initialization section, error messages, exception handling and some other code. When SysUtils is used, it adds roughly 40 KB to 100 KB.
  • A Lazarus program on Windows starts at about 500 KB and grows quickly to about 1.5 MB as more components are used.
    • That is indeed somewhat larger than Delphi, but it is the price of cross-platform support and project maintainability.
    • The growth slows down once additional code no longer pulls in more parts of the LCL.
    • The 1.5 MB mentioned above is a rough estimate. The actual size depends on how you build your GUI and on the number and complexity of the components you use.
    • For a Lazarus binary, the largest share of the size is not executable code but strings and tables.
  • A simple Lazarus program on Linux or FreeBSD is somewhat larger than one produced by GCC, because it does not depend on shared libraries (you can verify this with ldd).
  • 64-bit binaries are always larger than their x86 counterparts. Binaries for RISC platforms are generally a bit larger too.
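
As a minimal sketch (the program name myprogram.pas is illustrative), a size-conscious release build from the command line could combine smart linking of units (-CX), smart linking of the executable (-XX), stripping (-Xs) and optimization:

  fpc -O2 -CX -XX -Xs myprogram.pas

In the Lazarus IDE the same switches can be set in the project's compiler options.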


Why are the binaries so big?

Answer: they should not be considered big.

If you think they are big, then

  • you probably have not configured Free Pascal correctly, or
  • your expectations about program size are unrealistic, or
  • what you are doing does not match what Free Pascal was designed for.


Is a big binary a bad thing?

That of course depends on the order of magnitude. Still, it is safe to say that very few people really have a reason to worry about binaries of a few megabytes, or even more than 10 MB.

However, there are some cases in which you may want to keep binary size under control:

  1. Building embedded applications (which of course does not include programs running on embedded PCs with tens of megabytes of storage);
  2. Users who need to distribute programs over a modem;
  3. Contests and benchmarking (e.g. the infamous language shootout).

Note that there is a common misconception that bigger binaries run slower; this is not true.

Embedded applications

While Free Pascal is reasonably usable for embedded or system purposes, the final release engineering and tradeoffs are oriented more towards general application building. For really specialized purposes, people could set up a shadow project, much like the specialized versions of certain Linux distros. Burdening the already overloaded FPC team with such specialized needs is not an option, especially since half of the serious embedded users will roll their own anyway.

Distribution over a modem

The modem case is not just about "downloading from the Net" or "my shareware must be as small as possible". For example, at my last job we deployed a lot of software to our customers and to our own external sites via remote desktop over ISDN. But even with a 56k modem you can squeeze a megabyte through in under 5 minutes.

Be careful not to abuse this argument to provide a misplaced rational foundation for an emotional opinion about binary size. If you make this point, it is useless without a thorough statistical analysis of what percentage of your application's users actually use a modem (most modem users don't download software from the net, but use e.g. magazine shareware CDs).

Contests

Another reason to keep binaries small is language comparison contests (like the Language Shootout). However, this is more like solving a puzzle and is not really related to responsible software engineering.

Wrong compiler configuration

I'm not going to explain every aspect of the compiler configuration at great length, since this is a FAQ, not the manual. It is meant as an overview only. Read the manuals and the buildfaq thoroughly for more background information.

Generally, there are several reasons why a binary may be bigger than expected. This FAQ covers the most common ones, in descending order of likelihood:

  1. The binary still contains debug information.
  2. The binary was not (fully) smartlinked.
  3. The binary includes units that execute a lot of code in their initialization sections.
  4. Complete (external) libraries are linked in statically rather than shared.
  5. Optimization is not (entirely) turned on.
  6. The Lazarus project file (lpr) has package units in its uses section (this is done automagically by Lazarus).

In the future, shared linking against an FPC and/or Lazarus runtime library might significantly alter this picture. Of course you will then have to distribute a big DLL with lots of other stuff in it, and deal with the resulting versioning issues. This is all still some time in the future, so it is hard to quantify what the impact on binary sizes would be, especially because dynamic linking also has a size overhead (on top of the unused code in the shared library).

Debug information

Free Pascal uses GDB as debugger and LD as linker. These work with in-binary debug information, be it Stabs or DWARF. People often see e.g. Lazarus binaries of 40 MB. The correct size should be about 6 MB; the rest is debug information (and perhaps another 6 MB from not smartlinking properly).

Stabs debug information is quite bulky, but has the advantage that it is relatively independent of the binary format. In time it will be replaced by DWARF on all but the most legacy platforms.

There is often confusion with respect to the debug information, caused by the internal strip in many win32 versions of the binutils. Also, some versions of the win32 strip binary don't fully strip the debug information generated by FPC. So people toggle some flag (in the Lazarus IDE or on the FPC command line) like -Xs and assume it worked, while it didn't. FPC has been adapted to remedy this, but only in versions from 2006 or later.

So, when in doubt, always try to strip manually, and on Windows preferably with several different strip binaries. Don't drive this too far though, using shoddy binaries of doubtful origin to shave off another byte. Stick to generally released versions (cygwin/mingw and their better betas).
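
A minimal sketch of stripping by hand (the executable name is illustrative):

  strip --strip-all myprogram.exe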

In time, when 2.1.1 goes gold, this kind of problem may become rarer, especially on Windows, since the internal linker provides a more consistent treatment of these issues. However, it may still apply to people using non-core targets for quite some time to come.

Keep in mind that the whole strip system is based on shipping the same build to users (stripped) while retaining the debug version (unstripped) for e.g. interpreting traceback addresses. So if you do formal releases, always do the release build with debug information and retain a copy of the unstripped binary that you ship.

The design of GDB itself allows keeping and using debug information outside of the binary file (external debug information). That means the size of the resulting binary is not increased by debug information, and you can still successfully debug the binary. Of course, the debug information has to be stored somewhere else for GDB; that's why an additional .dbg file is created to store the stabs. You don't need this file to run the application; it is used by the debugger only. Since all debug information is moved out of the binary file, stripping the binary will not gain you much.

To compile your application this way, use the -Xg switch (this option is the default for Lazarus 0.9.27 and higher). A blank form application for Win32 compiled with external debug information takes about 1 MB, while the .dbg file takes about 10 MB.
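
As a sketch (the project name is illustrative), building with external debug information could look like this:

  fpc -g -Xg project1.lpr

which should leave the debug information in a separate project1.dbg file next to the executable.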

Smart linking

(Main article: File size and smartlinking)

The basic principle of smartlinking is simple and commonly known: don't link in what is not used. This of course has a good effect on binary size.

However, the compiler is merely a program and doesn't have a magic crystal ball to see what is used, so the actual implementation is more like this:

  • The compiler divides the code into fine-grained, so-called "sections".
  • Then the linker basically determines which sections are used, following the rule "if no label in the section is referenced, it can be removed".

There are some problems with this simplistic view:

  • Virtual methods may be called implicitly via their VMTs. The GNU linker can't trace call sequences through these VMTs, so they must all be linked in;
  • Tables for resource strings reference every string constant, and thus all string constants are linked in (one reason why sysutils is big);
  • Symbols that are reachable from outside the binary (this is possible for non-library ELF binaries too) must be kept. This limitation is necessary e.g. to avoid stripping exported functions from shared libraries;
  • Another pain point is published functions and properties. References to published functions/properties can be constructed on the fly using string operations, and the compiler can't trace them. This is one of the downsides of reflection;
  • Since published properties and methods can be resolved by building their symbol names with string manipulation, they must be linked in as soon as the class is referenced anywhere. Published code might in turn call private/protected/public code, leading to a fairly large inclusion (see the small sketch after this list).
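
A minimal sketch of why the linker cannot trace such references (class and method names are purely illustrative):

  program publisheddemo;

  {$mode objfpc}{$H+}

  uses
    Classes;

  type
    TMyForm = class(TComponent)
    published
      procedure ButtonClick(Sender: TObject);
    end;

  procedure TMyForm.ButtonClick(Sender: TObject);
  begin
    Writeln('clicked');
  end;

  begin
    { The reference goes through a string only; the linker cannot connect
      it to the ButtonClick symbol, so published members must always be kept. }
    Writeln(TMyForm.MethodAddress('ButtonClick') <> nil);
  end.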

Another important side effect, logical but often forgotten, is that this algorithm will link in everything referenced in the initialization and finalization sections of units, even if no functionality from those units is used. So be careful what you USE.

Anyway, most problems with smartlinking stem from the fact that, for the smallest result, FPC generally requires "compile with smartlinking" to be on WHEN COMPILING EACH AND EVERY UNIT, EVEN THE RTL.

The reason for this is simple. Until fairly recently, LD could only "smart" link units that were the size of an entire .o file. This means a separate .o file must be crafted for each symbol (and these tens of thousands of .o files are then archived in .a files). This is a time- (and linker-memory-) consuming task, so it is optional and is only turned on for release versions, not for snapshots. Often people having problems with smartlinking use a snapshot whose RTL/FCL etc. were not compiled with smartlinking on. The only solution is to recompile the source with smartlinking (-CX) on. See the buildfaq for more info.
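
A rough sketch under the assumption that you have the FPC source tree and a make tool available (the exact make targets and the program name are illustrative; see the buildfaq for the authoritative procedure): rebuild the units with smartlinking enabled, then build your program with smart linking and stripping:

  make clean all OPT="-CX"
  fpc -CX -XX -Xs myprogram.pas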

In the future this will improve when the compiler emits smartlinking code by default, at least for the main targets. This is made possible by two distinct developments: first, the GNU linker LD can now smartlink in a more fine-grained way (at least on Unix) using --gc-sections; second, the arrival of the FPC internal linker (in the 2.1.1 branch) for all working Windows platforms (wince/win32/win64). Smartlinking using LD --gc-sections still has a lot of problems, because the exact assembler layout and numerous details with respect to tables must be researched. We often run into the typical problem with GNU development software here: the tools are barely tested (or sometimes not even implemented, see the DWARF standard) outside what GCC uses/stresses.

The internal linker can now smartlink Lazarus (17 seconds for a full smartlink on my Athlon64 3700+, using about 250 MB of memory), which is quite good, but it is Windows-only and 2.1.1-only for now. The internal linker also opens the door to more advanced smartlinking that requires Pascal-specific knowledge, like leaving out unused virtual methods (roughly 20% code size on the Lazarus examples, 5% on the Lazarus IDE as a first rough estimate) and being smarter about unused resource strings. This is all still in alpha, and the above numbers are probably too optimistic, since Lazarus does not work with these optimizations yet.

Initialization and finalization sections

If you include a unit in the USES section, even indirectly via a different unit, then IF that unit contains initialization or finalization sections, that code and its dependencies are always linked in. A small sketch of such a unit is shown below.
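
A minimal sketch (names are illustrative); merely putting this unit in a uses clause pulls in everything its initialization code references, even if DoSomething is never called:

  unit heavyunit;

  {$mode objfpc}{$H+}

  interface

  procedure DoSomething;

  implementation

  uses
    SysUtils;   { linked in because the initialization code below needs it }

  procedure DoSomething;
  begin
    Writeln('hello');
  end;

  initialization
    { executed at program startup whenever this unit is used,
      so everything referenced here is linked in }
    Writeln(FormatDateTime('yyyy-mm-dd', Now));

  finalization
    { executed at program shutdown }
    Writeln('bye');

  end.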

A unit for which this is important is sysutils. For Delphi compatibility, sysutils converts runtime errors to exceptions with a textual message. All the strings in sysutils together are a bit bulky. Nothing can be done about this, short of removing a lot of initialization from sysutils, which would make it Delphi-incompatible. So this is more something for an embedded release, if such a team would ever volunteer.

"Static" binaries

(Main article: Lazarus/FPC Libraries)

One can also make fully static binaries on any OS, incorporating all libraries into the binary. This is usually done to ease deployment, but at the cost of huge binaries. Since this is wizard territory, it is mentioned here only for the sake of completeness. People who can do this hopefully know what they are doing.

Instead of making static binaries, many programmers use dynamic linking (also called shared linking; the two terms mean the same thing), which produces a much, much smaller executable.

Optimization

Optimization can also shave off a bit of code size, but only by tenths of a percent. Optimized code is usually tighter.

Make sure you use -O3.
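
For example (the file name is illustrative), combined with smart linking and stripping:

  fpc -O3 -XX -Xs myprogram.pas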

Lazarus lpr file

In Lazarus, if you add a package to your project/form, its registration unit is added to the lpr file. The lpr file is not normally opened; if you want to edit it, first open it (via Project -> View Source). Then remove all the unnecessary units (only Interfaces, Forms and your form units are required; anything else is useless there, but make sure you don't delete units that are only needed to register things, such as image readers (jpeg) or test cases). A trimmed project file might look like the sketch below.
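
A minimal sketch of a trimmed lpr (the unit and form names are illustrative):

  program project1;

  {$mode objfpc}{$H+}

  uses
    Interfaces, { LCL widgetset }
    Forms,
    unit1;      { the unit that defines TForm1 }

  begin
    Application.Initialize;
    Application.CreateForm(TForm1, Form1);
    Application.Run;
  end.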

You can save up to megabytes, and some linking dependencies too, if you use big packages (such as GLScene).

This kind of behaviour is typical for libraries that do a lot in the initialization sections of their units. Note that it doesn't matter where they are used (the .lpr or a normal unit). Of course, smartlinking tries to minimize this effect.

Issues with version 2.2.0

I routinely crack down on size discussions to keep some sanity in them. This FAQ was meant for that.

However, lately I've seen some size problems that were not of the routine kind. I suspect FPC changed behaviour due to the internal linker in 2.2.0+. Since I want to be fair, I'll list my suspicions here. Note that these remarks hold for the default setup with the internal linker enabled.

  • It seems that FPC 2.2.0 doesn't strip if any -g option is used to compile the main program. This is contrary to earlier versions, where -Xs had priority over -g (a possible workaround is sketched after this list).
  • It seems that FPC 2.2.0 doesn't always smartlink when cross-compiling. This can be problematic when compiling for Windows, not only because of size, but also because dependencies are created on functions that might not exist.
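
A possible workaround for the first point (file names illustrative): leave -g out of the release build, or strip the result manually afterwards if -g was used:

  fpc -O2 -Xs myprogram.pas
  strip --strip-all myprogram.exe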

UPX

The whole UPX cult is a funny thing that largely originates in a mindless pursuit of minimal binary sizes. In reality it is a tool with advantages and disadvantages.
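
For reference, typical compression and decompression calls look roughly like this (the executable name is illustrative):

  upx --best project1.exe
  upx -d project1.exe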

The advantages are:

  1. Decompression is easy for the user because it is self-contained.
  2. If, and only if, some size criterion applies to the binary size itself (and not e.g. to the binary inside a zip), as with demo contests, it can save some space. However, especially in the lowest size classes it may be worthwhile to write the compression yourself, because you can probably get the decompression code much tighter for binaries that don't stress all aspects of the binary format.
  3. For rarely used applications, or applications run off removable media, the disk space saving may outweigh the performance/memory penalties.
  4. Many users don't know about UPX and judge applications on size (and yes, this includes reviewers on shareware listing sites), so if other developers in your category use it, you will look bloated if you don't follow suit (an odd argument, since most shareware is a self-extracting archive anyway).

The disadvantages are:

  1. Worse compression (and the decompression engine must be factored into EACH binary).
  2. Decompression must happen on every start.
  3. Since Windows XP features a built-in decompressor for ZIP, the whole point of SFX loses some of its relevance.
  4. UPXed binaries are increasingly flagged by the malware heuristics of popular antivirus products and mail filters.
  5. A binary that is internally compressed can't be memory-mapped by Windows and must be loaded in its entirety. This means the entire binary size is loaded into VM space (memory + swap), including resources.


The last point deserves some explanation: with normal binaries under Windows, all unused code remains in the .EXE, which is why Windows binaries are locked while running. Code is paged in 4 KB at a time (8 KB on 64-bit) as needed, and under low-memory conditions simply discarded (because it can be reloaded from the binary at any time). The same goes for (graphical/string) resources.

A compressed binary usually must be decompressed in its entirety, or the compression ratio will suffer badly. So Windows must decompress the whole binary on startup and page the unused pages out to the system swap, where they sit unused.

Framework overhead

A framework greatly decreases the amount of work to develop an application.

This however comes at a cost, because a framework is not a mere library but more of a whole subsystem that deals with interfacing to the outside world. A framework is designed for a set of applications that can access a lot of functionality (even if a single application might not).

However, the more functionality a framework can access, the bigger a certain minimal subset becomes. Think of internationalization, resource support, translation environments (translation without recompilation), error messages for basic exceptions, etc. This is the so-called framework overhead.

The size of empty applications is not caused by compiler inefficiencies but by framework overhead. The compiler removes unused code automatically, but not all code can be removed automatically; the design of the framework determines what code the compiler will be able to remove at compile time.

Some frameworks cause very little overhead, some cause a lot. Expected binary sizes for empty applications with well-known frameworks:

  • No framework (RTL only): +/- 25 KB
  • No framework (RTL + sysutils only): +/- 100-125 KB
  • MSEGUI: +/- 600 KB
  • Lazarus LCL: +/- 1000 KB
  • Free Vision: +/- 100 KB
  • Key Objects Library: +/- 50 KB

In short, choose your framework well. A powerful framework can save you lots of time, but if space is tight a smaller framework might be a better choice. Be sure you really need that smaller size, though. A lot of amateurs routinely select the smallest framework, end up with unmaintainable applications and quit. It is also no fun having to maintain applications in multiple frameworks just to save a few KB.

Note that e.g. the Lazarus framework is relatively heavy due to its use of RTTI/introspection for its streaming mechanisms, not (only) due to source size. RTTI makes more code reachable, degrading smartlinking performance.

Unrealistic expectations

A lot of people simply look at the size of a binary and scream "bloat!". When you try to argue with them, they hide behind comparisons ("but TP only produces..."); they never really say why they need the binary to be smaller at all costs. Some of them don't even realize that 32-bit code is ALWAYS bigger than 16-bit code, or that OS independence comes at a price, or..., or..., or...

As said earlier, with current hard disk sizes there is not much reason to keep binaries extremely small. FPC binaries being 10, 50 or even 100% larger than those of compilers from the previous millennium shouldn't matter much. A good indicator that these views are pretty emotional and unfounded is the overuse of UPX (see above), which is a typical sign of binary-size madness, since technically it doesn't make much sense.

So where does this emotion come from? Is it just resistance to change, or control-freakery? I never saw much justified cause, except that sometimes some of these people were pushing their own inferior libraries and tried to gain ground against well-established libs using size arguments. But this doesn't explain all cases, so I think the binary-size thing is really the last "640k should be enough for anybody" artefact. Not real, just mental.

A dead giveaway is that the number of realistic patches in this field is near zero, if not zero. It's all mailing list discussion and trivial RTL mods that hardly gain anything while seriously hampering real application development and compatibility (and I'm not a compatibility freak to begin with). Nobody sits down for a few days, makes a thorough investigation and comes up with patches. There are no cut-down RTLs maintained externally, no patch sets etc., while that would be extremely easy. Somehow people are only after the last byte if it is easy to achieve, or if they have something "less bloated" to promote.

Anyway, the few embedded people I know who use FPC intensively all have their own customized, cut-back libraries. For one person internationalization matters even when embedded (because he speaks a language with accents) while exceptions don't; for somebody else the requirements are different again. Each has their own tradeoffs and choices, and if space is really tight, you don't compromise by using the general release distro.

And yes, FPC could use some improvements here and there. But those shouldn't hurt general programming, the multi-platform nature of FPC or the ease of use, and they must be realistic in manpower requirements. Complex things take time. Global optimizers don't fall ready-made from the sky.

Comparison with GCC

Somewhat less unrealistic are comparisons with GCC. Even the developers routinely measure themselves (and FPC) against gcc. Of course, gcc is a corporate-sponsored behemoth and the open source world's favourite. Not all comparisons are reasonable or fair; even compilers that base themselves on GCC don't support all of the heavily sponsored "c" gcc's functionality.

Nevertheless, considering the difference in project size, FPC does a surprisingly good job. Speed is OK, except perhaps for some cases of heavy scientific computation; binary sizes and memory use are sufficient or even better in general; and the number of platforms doesn't disappoint (though it is a pity that 'real' embedded targets are missing).

Another issue is that Free Pascal generally links its own RTL statically (because it is not ABI-stable, and even if it were, it would be unlikely to be on the target system already), whereas GCC links dynamically against the system libraries. This makes programs that are very small in terms of source size have significantly larger binaries with FPC than with gcc. It's worth mentioning that binary size has nothing to do with the memory footprint of the program; FPC is usually much better than gcc in this regard.

Still, I think that considering the resources, FPC is doing extraordinarily well.

Comparison with Delphi

In comparisons with Delphi, keep in mind that the design of 32-bit Delphi originates from a period when a lot of people DIDN'T even have Pentium-Is, and a developer with 32 MB of RAM was a lucky one. Moreover, Delphi was not designed to be portable.

Considering this, Delphi has scaled pretty well, though there is always room for improvement and for readjustments that correct historical problems and tradeoffs. (It is a fairly well-known fact that a lot of assembler routines in newer Delphis were slower than their Pascal equivalents because they were never updated for newer processors; only the recent D2006 is said to have corrected this.)

Still, on the compiler front FPC is slowly ceasing to be Delphi's poor cousin. The comparisons are head-on, and FPC 2.1.1 winning over Delphi is slowly becoming the rule rather than the exception.

Of course, that is only the base compiler. In other fields there is still enough work to do, though the internal linker helps a lot. The debugger won't be fun though :-) Also, in language interoperability (C++, Obj-C, JNI) and shared libraries there is lots of work to do, even within the base system.

Comparison with .NET or Java

Be very careful with comparisons to these JIT-compiled systems. JITed programs have different benchmark characteristics, and extrapolating results from benchmarks to full programs works differently as well.

While a JIT can do a great job sometimes (especially in small programs that mostly consist of a single tight loop), this good result often doesn't scale. Overall, my experience is that statically compiled code is usually faster in most code that is not dominated by some highly optimizable tight loop, despite the numerous claims on the net to the contrary.

A fairly interesting quantitative source for this is the Shootout FAQ entry on the subject. Another interesting one is the discussion of memory allocation in the JVM/.NET.

Note that for about a year now (as of 2007), Java 6 has caused a significant jump in the Java shootout ratings and is starting to touch the bottom of the normal native compilers. This shows that one must be very careful about echoing sentiments from the web (both positive and negative) and stick to one's own measurements, with the boundary conditions trimmed to the application domain you are in.