Talk:Size Matters

From Lazarus wiki

Ancient stuff

Most of this "Size Matters" (Size Doesn't Matter) page is a bunch of "in my opinion" rubbish. This article needs an author, because it is highly biased and very opinionated. It is definitely not the view of all FPC programmers, but by placing no author at the bottom of the page it makes it look as if it were. I can make a fair guess at who wrote the article, though, because it reflects the opinion of many FPC developers.

First of all, let's get a few things straight. What is FPC good for? Why would anyone use FPC over Delphi? For Linux. For BSD. What are Linux and BSD good for? Server applications. What does a server application entail? Bandwidth.

Size is important in CGI programs and huge-scale servers. Guess what: if I have a shared server with 500 CGI programs on it from all sorts of users, then eventually those 300MB FPC executables are going to start affecting bandwidth costs and hard drive costs.

Big hard drives are very nice, but guess what: a big hard drive takes a very long time to scan and repair. A big hard drive requires much more time to defragment. A big hard drive is extremely hard to back up, since backups take DAYS rather than hours.

So my question is: if size doesn't matter, why use FPC? Why not use .NET or Java?

Think about what FPC is useful for. FPC is not useful for Windows GUI programs (despite what the Lazarus team wants you to believe). FPC serves a very small niche, and it must support that niche: CGI and server programs, systems administration tools, and embedded devices. Delphi cannot create BSD or Linux CGI programs, or embedded software. Delphi can create GUI programs.

CGI programs, embedded software, and systems tools are small. Big GUI programs are big. FPC is not a GUI generator.

Find your niche market. Discover what people are using FPC for in the real world. I'm not talking about those hobbyists using FPC to make Kylix-like GUI applications for Linux; that market is dead. If Linux GUI programs were really what FPC was used for, then I could totally agree that a 300KB app on ONE PERSON's 200GB hard drive isn't a big deal.

But try to debug a server with 500 bloated 3MB programs and then we'll start talking. You want FPC to be taken seriously in the systems world? You want it to be taken seriously in the web world? Then start focusing on that world, because let me repeat: no one is using FPC to create GUI applications. By "no one", I mean ask yourself why you wouldn't just use Delphi. I know there is Pixel. One GUI application. But I'll bet you most people use FPC where Delphi does not shine, and Delphi does not shine in the server market. The bandwidth market.

I'm not saying that FPC must create 20KB systems and CGI programs. I'm saying that a 1MB CGI program that prints HELLO WORLD is not acceptable. I realize that FPC can still create fairly small systems programs, and this is good. The current state of FPC is not ridiculous, but it is heading that way, with the attitude I see, like "size never matters". It's funny that in the article someone mentions that "FPC 2.1.1 beats Delphi". Huh? Beats Delphi in what? So speed matters but size doesn't? All of a sudden speed can matter but size is really not important? That's like saying that size matters but speed doesn't.

So basically the article is rubbish, because it claims that FPC 2.1.1 is going to beat the pants off Delphi, while at the same time it says that size isn't really important because FPC is geared toward application development. Applications don't require speed; hint, hint: most software applications that run on the desktop are IDLE 90 percent of the time. Speed is important in file searching, web servers, and systems stuff. Is FPC for systems and server programming, or is it for desktop software programming? It seems that some folks think FPC is a great application development tool. I've never seen FPC used as such. I see it being used on the system, on the server, and in niche areas like the Gameboy and embedded devices. I don't see very many good GUI programs coming from FPC. Nor do I know of anyone that uses very many desktop programs any more, since everyone is dumb enough to use simple things called web browsers.

Yes, web browser GUIs suck, but they are good enough. If FPC keeps focusing on the desktop application market, then FPC will have no developers, because people already have good tools for making desktop applications, and over 90 percent of desktop applications run on MS Windows, so no one needs a portable compiler to compile their GUI programs on BSD and Linux. What people do need is a systems, database, and server compiler, which is mainly what FPC has been, and should be, used for in the real world.

Most of this "Talk:Size Matters" (Size Does Matter in some cases) page is a bunch of "in my opinion" rubbish. This article needs an author, because it is highly biased and very opinionated. It is definitely not the view of all FPC programmers, but by placing no author at the bottom of the page it makes it look as if it were. Vincent 09:27, 21 November 2006 (CET)
You simply miss several points:
  • CGI programs (20 kB and up) written in FPC are usually very small because they don't depend on the LCL.
  • If you really have a lot of different CGI applications that all use FPC, simply compile the RTL, FCL, and LCL into a shared library. I did this and it works. Why don't we do this by default? Because it would simply increase the memory footprint of individual applications: an FPC RTL compiled as a shared library is around 4 MB, so a simple hello world would cause a 4 MB (!) footprint, and Lazarus applications would be much worse. For the common FPC application this makes no sense. Typical users run only a handful of FPC applications at once, so shared linking (which also causes about a 10% slowdown on i386 machines) makes no sense for the default FPC installation.
  • Shared library hell. You would always need a matching set of shared libraries for your application. If you have ever tried to install third-party applications on Linux, you know what I'm talking about.
--FPK 19:49, 10 December 2006 (CET)
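FPK's shared-library tradeoff can be sanity-checked with back-of-the-envelope arithmetic. The figures below come from this discussion (a ~4 MB shared RTL, roughly 100 kB static binaries); the 10 kB per-program stub size is an assumption for illustration, not a measurement:

```python
# Rough footprint comparison: statically linked FPC binaries versus
# linking every program against one shared RTL/FCL/LCL library.
# Numbers are illustrative, taken or assumed from the discussion above.

STATIC_BINARY_KB = 100      # typical small static FPC program (20-100 kB range)
SHARED_LIB_KB = 4 * 1024    # RTL compiled as a shared library, ~4 MB
SHARED_STUB_KB = 10         # assumed per-program size once the RTL is shared

def static_footprint(n_programs):
    """Total size when every program carries its own RTL copy."""
    return n_programs * STATIC_BINARY_KB

def shared_footprint(n_programs):
    """Total size when all programs share one 4 MB RTL library."""
    return SHARED_LIB_KB + n_programs * SHARED_STUB_KB

# With a handful of programs static linking wins; shared linking only
# pays off once dozens of FPC programs sit on the same machine.
for n in (1, 5, 50, 500):
    print(n, static_footprint(n), shared_footprint(n))
```

With these numbers the crossover sits around 45 programs, which matches FPK's point: a user running a handful of FPC applications is better off with static linking, while a server packed with hundreds of FPC CGIs is the case where a shared RTL starts to pay.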

First, the page is mine (Marcov). Most developers and IRC regulars have expressed support for it, though.

I don't mind signing the article; not signing it was not deliberate. Still, I'd rather the article be discussed on its content than on its author.

  1. FPC minimal apps (and thus CGIs) are more like 20-100 kB depending on the OS, but I still wouldn't care too much if they were 1MB. You state exactly the clueless minimalism that the FAQ warns against. These "markets" you describe don't exist except in the minds of a few tinkerers who eventually grow out of it. We tried to limit binary size because of the TP comparisons for years, and no effort was ever enough (and it was never realistic in the first place, due to the starting cost of being 32-bit and written in a portable HLL). Worse, all the people who whined over this in those early years eventually went with Delphi when they grew up, and generated bigger binaries without thinking twice. Nobody stays in the niche, and embedded users ALSO go for productivity and usability first and size second. ((Flash) memory is awfully cheap nowadays; you could keep your entire CGI example in memory for less than the commercial hourly rate of a single programmer.)
  2. The article already treats the embedded case, and warns against catering the general-purpose FPC distribution to embedded use purely on size grounds. Embedded is more than just size. The absence of, e.g., systematically maintained embedded patches (something that would be extremely easy) seems to indicate that the *actual* embedded users see no real ground for this.
  3. "FPC over Delphi" is not a real tradeoff; one can also use both. One could equally argue "why choose Pascal over PHP for web development", which would be just as senseless. People use FPC and Lazarus for both (and for a lot more purposes than just these two).
  4. Bandwidth has little to do with binary size, since used bandwidth is also a function of compressibility (which is where the UPX case hurts: a packed binary barely compresses further in transfer).
  5. Code bloat and binary size are often linked, but not always.
  6. Any hard disk sold in the last 10 years (and that includes microdrives in PDAs) is larger than 300MB.
  7. As said in the article, the so-called "bloat" in FPC is not linear. There is a one-off size cost, which is the result of a compromise between usability and size carefully crafted by a dozen knowledgeable developers over more than a decade, and it has changed over time.
  8. So the only real impact would be startup time of the cgi. This is mitigated by several factors:
    1. Most importantly, modern OSes only map used code into physical memory.
    2. FPC doesn't use libc by default, no costly dynlinker step. In fact, FPC is the fastest starter in the language shootout, by far.
    3. Most webservers that are performance oriented implement some "fastcgi" option that doesn't respawn binaries at all.
  9. I don't defrag hard drives, except FAT32 under plain DOS and Win9x. NT and Linux have improved FAT drivers that don't fragment as much. Defragmenting mostly just exercises the drive, and only improves burst performance a bit right afterwards, not much in the long run. IMHO defragging belongs in the UPX category too :-)
  10. FPC has its niches, and it grows in those niches. Admittedly, a large part is scraps (specialised use) from the Delphi community, but still. The point is that FPC's non-educational use, though modest, still grows considerably every year (and the number of contributors likewise). I wish I could say the same about Delphi. Apparently, our tradeoffs interest Delphi users.

In conclusion, to the original poster on this talk page: the page is mainly directed against an opinionated last-byte mentality. I provide arguments, magnitude estimates, etc., and point out defects in the reasoning. I expect the same of the opposition. At least provide real-world projects _with_ all boundary conditions that prove your thesis. My claim that hard disks cost under EUR 100 per so-and-so many GB is easily checkable; what can you really put against it?

Marcov 20:46, 10 December 2006 (CET) Updated/corrected: Marcov 14:52, 11 April 2008 (CEST)

Personally, I consider programming an engineering triangle between size, maintainability, and speed. Sacrificing size or speed for maintainability is a valid trade-off to make, but so is the other way around. Excessively large executables are IMHO a matter of bad engineering, regardless of whether they cause actual trouble or not. There is always room for better engineering, so there is room to engineer for smaller executables. But is there currently a real problem? In my opinion, no: FPC does a fine job regarding size. Daniel-fpc 22:58, 10 December 2006 (CET)

I'm going to add my piece here. I ended up on this page as the result of a Google search. Reading it makes me embarrassed on behalf of Pascal. To me it reads like a rationalisation of poor quality, with a good dose of condescension thrown in for good measure. People have concerns, and the response is "You are wrong for being concerned". I don't particularly care that the executable is a quick download with high bandwidth speeds, or that it sits on a small area of a hard disk. It does bother me that programs are bigger than they should be to accomplish their tasks. Yes, it is fair to say "well, help us then!", but the actual answer is "You're wrong; besides, gcc gets more development, and you probably had things configured wrong, and a megabyte isn't big anyway."

A megabyte is big; it is huge. I ran a full multi-tasking windowed operating system in a quarter of that. I will freely admit that code does more now, but the fact remains that programs have grown disproportionately to their capabilities. I also agree that individual programmers should not have to spend all their time worrying about these issues. On the other hand, I do consider it one of the core responsibilities of people who develop compilers and frameworks: wastage at that level gets multiplied many times over. Pascal is one of the lighter platforms out there and it is still too big. I have a simple windowed app with some panels sitting at 600k for the exe and 1.4 MB (private dirty) of RAM usage. I would have liked to use it as a panel widget, but on the platform I'm running, 1.4 MB is far too much to spend on such a little thing. Quite frankly, executable size seems like misdirection: it is a symptom indicative of excessive RAM usage. That is not always the case, but usually so. RAM usage is a real concern.

The criticism "the number of realistic patches in this field is near zero, if not zero" is possibly one of the more ridiculous portions. It comes as no surprise that people have not provided patches to fix what has been so steadfastly claimed not to be a problem. Perhaps, instead of belittling people, you could offer suggestions on which parts they could look at to reduce the footprint.

People have concerns; it would be better to try to understand them rather than denigrate them. Lerc 23:30, 16 October 2009 (CEST)

Actually, that is the core of the entire argument. And I (we) don't mind discussing footprint; nothing is set in stone. But such a discussion must be about something concrete. The whole point of this FAQ, however, is that the discussions always follow the same pattern: either comparisons with TP and Delphi in bytes (= not concrete), or people overfocusing on the minimal (non-GUI) binary size (usually the things that sysutils always links in: exceptions, locale, etc.), something that won't save more than 10-40k (and that is if it is ALL removed, not merely reduced).

Progress in this category will come from many fine-grained little improvements over time, not from one minimalist who thinks he can solve "all" problems by cutting whatever HE doesn't immediately need.

There is never a real analysis of where the bytes go; nobody ever tracks these issues for more than one discussion (and when they do, they sooner or later get sidetracked by a real problem). It is all talk; nobody is truly interested.

All real improvement comes from people like Sergei, Jonas, and Pierre, who constantly review and change the deeper regions of the RTL and compiler to increase the granularity of smartlinking.

Not that I have been immune to this problem myself. Michael, Jonas, or Florian can probably show you messages from me from the early days where I tried to shave 200 bytes off a DOS binary. Marcov 09:04, 26 March 2012 (UTC)