Improving language shootout results

From Free Pascal wiki
Revision as of 19:26, 29 September 2006 by Vincent

About

The Computer Language Shootout (http://shootout.alioth.debian.org/benchmark.php) is a flawed benchmarking system which compares many languages on an illogical basis. See its homepage for more info.

Goals

Our goal is to reach the highest possible positions. The requirements to get there fall into two categories:

1: Optimizations on the assembler level. (core devel work)

2: Optimizations of the benchmarks. (junior devel work)

Way to go

It was decided that, to keep memory requirements low, it is best to use val() and str() and never include SysUtils unless really required. 32-bit integers should be used when possible to increase speed, as the benchmarking machine is x86 (Athlon 1 GHz, 256 MB RAM). Use of inline, and of records instead of classes, can also improve performance considerably, but additional testing and comparison with C and other languages should be the main factor. Special note: inlining of recursive functions is possible and sometimes increases speed.
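The val()/str() advice above can be sketched as follows (a hypothetical minimal example, not taken from an actual benchmark entry): both procedures are compiler built-ins, so number/string conversion works without pulling in SysUtils, which keeps binary size and startup time down.

```pascal
program valstrdemo;

var
  s: string;
  n, code: longint;   { 32-bit integers, as recommended above }
begin
  Str(42, s);          { integer -> string, no SysUtils needed }
  Val('123', n, code); { string -> integer; code = 0 on success }
  writeln(s, ' ', n, ' ', code);
end.
```

IntToStr/StrToInt from SysUtils would do the same job, but at the cost of linking in the whole unit and its initialization code.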

Another useful setting is {$implicitexceptions off}, which keeps the RTL's implicit exception frames from hindering speed. Dropping down to the pchar level in tight loops can be beneficial. Also use only native-sized integers and floats where possible.
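The two techniques above can be combined in one small sketch (a hypothetical example, assuming an ansistring input): the directive suppresses implicit try..finally frames, and the inner loop walks a pchar instead of indexing a string.

```pascal
{$implicitexceptions off}  { no implicit try..finally frames }
program pcharloop;

{ Count occurrences of 'G' by walking the string as a pchar. }
function CountG(const s: ansistring): longint;
var
  p: pchar;
  n: longint;
begin
  n := 0;
  p := pchar(s);
  while p^ <> #0 do
  begin
    if p^ = 'G' then
      inc(n);
    inc(p);
  end;
  CountG := n;
end;

begin
  writeln(CountG('GATTACAGG'));  { prints 3 }
end.
```

Note that pchar(s) relies on the string not containing embedded #0 characters, which holds for the shootout's text inputs.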

Benchmarks notes

Recursive

The recursive benchmark can be made twice as fast if the 2.1 compiler is used and the ack, fibfp and tak functions are inlined. Inlining with FPC 2.0.x does not give a speed increase.
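As a sketch of the technique (not the actual benchmark entry), the Ackermann function from the recursive benchmark with the inline directive applied looks like this; FPC 2.1+ can partially inline such a recursive function, typically unrolling the first level or two of recursion.

```pascal
program recinline;

{ Ackermann function, marked inline as described above. }
function Ack(m, n: longint): longint; inline;
begin
  if m = 0 then
    Ack := n + 1
  else if n = 0 then
    Ack := Ack(m - 1, 1)
  else
    Ack := Ack(m - 1, Ack(m, n - 1));
end;

begin
  writeln(Ack(2, 3));  { Ack(2,3) = 9 }
end.
```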

Regex-dna

The regex-dna benchmark is the only one still missing. Although FPC has a regexpr unit, it has many todos, such as adding support for | in the search expression, so the unit does not yet have enough functionality.

Shootout criticism

Any page about benchmarking should discuss how the benchmarks are used. FPC does relatively well overall, but is relatively weak within its own category.

Criticising the shootout is not difficult; a few points are mentioned below.

  • The main problem is that the applications are relatively short and highly computational. This has a lot of consequences:
    • In general they favour lower-level languages, which is not to Pascal's disadvantage.
    • These benchmarks also favour compilers that aggressively optimize tight, computation-heavy loops and that have profile-guided optimization. FPC is not in that league; like Delphi, it is geared more towards overall application performance than towards numeric or visual work with lots of tight loops. This is also why frameworks like Java and .NET don't score _that_ badly: the JIT can do good work on these tight loops and really take advantage of its profiling optimizations. Simply being able to use a system-specific move is already a serious advantage in such benchmarks.
    • Some benchmarks obviously target certain very specific optimizations like tail recursion optimization.
  • Language usability and the overall speed of the development process are not measured at all.
    • Systems with a heavy RTL are punished, both in size and in startup time. However, for actually getting work done, a full RTL is nice, as long as it is not extreme.
    • The weight put on the number of code lines is far too large. These are all fairly algorithmically complicated programs; the typing of lines is negligible.
    • Ease of debugging is not measured.
    • Languages that have a lower- and a higher-level mode (e.g. FPC with its Pascal and Delphi modes) typically choose the most beneficial mode for each benchmark (the lowest possible level for low-level stuff, the high-level mode if line count is important), even if that is not the mode typically used for such a job.
    • The few aspects of usability for application development are typically measured in code lines, and typically for some mathematical problem, not for a large full-blown application. Typically such benchmarks are only added to give functional languages a chance to excel at something.
    • (Not a problem of the shootout itself, but of its use.) People often put too much weight on the overall ranking. They should instead select a few relevant benchmarks and compare those, with suitably adapted weights.