Benchmarks of some Common LISP implementations
1. Preface
PRODATA recently performed benchmarking of a number of different
Common LISP implementations. The results are presented below.
Most benchmarks are based on the well-known Gabriel benchmarks. Some additional
benchmarks were taken from the CMU CL source tree (src/benchmarks): the
Richards benchmark (an operating system simulation) and a modified version of cascor1.lisp
(a cascade correlation learning algorithm). You can find the modified version here.
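For a flavour of the workload, the classic TAK function from the Gabriel suite
(which underlies the tak and rtak rows below) is a short, heavily recursive
piece of fixnum code:

    ;; TAK from the Gabriel benchmark suite; it stresses function
    ;; calls and small-integer arithmetic. The standard benchmark
    ;; invocation is (tak 18 12 6).
    (defun tak (x y z)
      (if (not (< y x))
          z
          (tak (tak (1- x) y z)
               (tak (1- y) z x)
               (tak (1- z) x y))))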
All benchmarks were performed on the following hardware, with all three
operating systems (Windows, Linux and FreeBSD) installed in a multiboot configuration.
- Model - HP Omnibook XE3 laptop
- CPU - Intel Celeron (Coppermine) 650 MHz
- RAM - 128 MB
- ACPI - disabled
- HDD - internal IBM, 10 GB
- Video - S3 Savage IX / TFT panel (not significant for these tests)
Notes and conclusions about the benchmarking process are placed at
the end of the page. The results follow:
2. Benchmarking results
Linux 2.6.3 (times in seconds, interpreted / compiled)

Benchmark                    | Allegro CL 6.2   | CMU CL 18e       | CLISP 2.30       | GCL 2.5.3
-----------------------------+------------------+------------------+------------------+-----------------
triangle                     | 487.240 / 3.160  | 310.582 / 3.301  | 105.324 / 11.393 | 92.370 / 0.920
boyer                        | 42.080 / 0.240   | 19.374 / 0.079   | 2.925 / 0.536    | 11.730 / 0.150
browse                       | 54.990 / 0.160   | 22.996 / 0.194   | 3.875 / 0.768    | 6.400 / 1.080
ctak                         | 37.550 / 0.140   | 18.455 / 0.057   | 5.522 / 0.501    | 5.540 / 0.150
dderiv                       | 5.340 / 0.038    | 2.230 / 0.036    | 0.808 / 0.229    | 0.640 / 0.080
deriv                        | 4.340 / 0.032    | 1.993 / 0.107    | 0.596 / 0.256    | 0.510 / 0.080
destructive                  | 3.330 / 0.020    | 3.262 / 0.011    | 1.791 / 0.074    | 0.680 / 0.030
iterative-div2               | 2.660 / 0.011    | 2.667 / 0.014    | 0.606 / 0.145    | 0.400 / 0.040
recursive-div2               | 7.770 / 0.017    | 2.354 / 0.018    | 0.600 / 0.156    | 0.610 / 0.050
fft                          | 24.500 / 0.471   | 6.668 / 0.013    | 2.269 / 0.426    | 1.600 / 0.300
frpoly-fixnum                | 35.650 / 0.042   | 4.759 / 0.017    | 0.910 / 0.135    | 3.430 / 0.080
frpoly-bignum                | 36.000 / 0.120   | 5.065 / 0.077    | 1.112 / 0.317    | 3.510 / 0.240
frpoly-float                 | 35.770 / 0.056   | 4.819 / 0.028    | 0.932 / 0.234    | 3.430 / 0.040
puzzle                       | 15.240 / 0.317   | 19.408 / 0.027   | 4.275 / 0.637    | 3.470 / 0.030
puzzle (C-code)              | 0.011 (same algorithm in C, for comparison)
tak                          | 34.940 / 0.033   | 13.156 / 0.028   | 4.345 / 0.332    | 5.110 / 0.040
rtak                         | 34.460 / 0.032   | 13.124 / 0.027   | 4.396 / 0.332    | 5.100 / 0.050
takl                         | 252.430 / 0.285  | 109.679 / 0.151  | 16.676 / 1.745   | 31.310 / 0.120
takr                         | 3.620 / 0.007    | 2.336 / 0.005    | 0.411 / 0.039    | 0.560 / 0.010
stak                         | 45.870 / 0.470   | 23.035 / 0.089   | 12.393 / 0.616   | 4.770 / 0.090
init-traverse                | 25.760 / 0.073   | 28.061 / 0.026   | 15.363 / 0.558   | 6.110 / 0.250
run-traverse                 | 201.170 / 0.706  | 110.582 / 0.508  | 43.565 / 3.413   | 32.140 / 0.610
richards.lisp                | 59.940 / 0.295   | 29.551 / 0.021   | 12.184 / 0.873   | 9.920 / 0.160
cascor1.lisp, sec./epoch (*) | 29.721 / 0.0478  | 1.401 / 0.0215   | 2.005 / 0.1843   | 3.236 / 0.0310
Total score (sum of places)  | 91 / 52          | 68 / 30          | 35 / 90          | 36 / 58
Final ranking                | 4 / 2            | 3 / 1            | 1 / 4            | 2 / 3
FreeBSD 5.2.1 (times in seconds, interpreted / compiled)

Benchmark                    | Allegro CL 6.2   | CMU CL 18e       | CLISP 2.30       | GCL 2.5.3
-----------------------------+------------------+------------------+------------------+-----------------
triangle                     | 513.523 / 3.039  | 297.250 / 3.286  | 99.696 / 10.450  | 207.690 / 1.050
boyer                        | 51.266 / 0.171   | 17.975 / 0.056   | 2.982 / 0.688    | 16.520 / 0.290
browse                       | 58.593 / 0.180   | 23.590 / 0.169   | 3.837 / 0.793    | 14.120 / 6.590
ctak                         | 40.133 / 0.141   | 17.346 / 0.054   | 5.300 / 0.382    | 13.100 / 0.430
dderiv                       | 5.797 / 0.023    | 2.177 / 0.027    | 0.743 / 0.233    | 1.870 / 0.200
deriv                        | 4.602 / 0.023    | 1.937 / 0.026    | 0.618 / 0.227    | 1.590 / 0.120
destructive                  | 3.320 / 0.007    | 3.212 / 0.010    | 1.762 / 0.055    | 1.830 / 0.160
iterative-div2               | 2.771 / 0.008    | 2.607 / 0.014    | 0.618 / 0.107    | 1.160 / 0.020
recursive-div2               | 6.172 / 0.016    | 2.315 / 0.018    | 0.618 / 0.116    | 1.640 / 0.100
fft                          | 24.109 / 0.476   | 6.628 / 0.012    | 2.161 / 0.420    | 2.790 / 0.590
frpoly-fixnum                | 37.352 / 0.023   | 4.348 / 0.003    | 0.933 / 0.211    | 6.830 / 0.190
frpoly-bignum                | 37.742 / 0.110   | 4.763 / 0.072    | 1.136 / 0.341    | 7.230 / 0.740
frpoly-float                 | 37.508 / 0.047   | 4.563 / 0.025    | 0.951 / 0.222    | 6.830 / 0.040
puzzle                       | 15.461 / 0.289   | 18.029 / 0.023   | 4.335 / 0.564    | 8.460 / 0.030
puzzle (C-code)              | 0.017 (same algorithm in C, for comparison)
tak                          | 37.218 / 0.023   | 12.872 / 0.027   | 4.314 / 0.245    | 11.520 / 0.050
rtak                         | 37.469 / 0.023   | 12.982 / 0.027   | 4.243 / 0.245    | 11.370 / 0.050
takl                         | 273.594 / 0.234  | 106.377 / 0.146  | 16.537 / 1.235   | 64.630 / 0.140
takr                         | 3.898 / 0.008    | 2.391 / 0.005    | 0.483 / 0.032    | 1.170 / 0.010
stak                         | 48.422 / 0.484   | 22.102 / 0.087   | 11.742 / 0.598   | 11.740 / 0.100
init-traverse                | 25.742 / 0.063   | 27.579 / 0.027   | 14.589 / 0.418   | 14.580 / 0.850
run-traverse                 | 214.172 / 0.680  | 105.640 / 0.483  | 40.482 / 2.642   | 69.290 / 0.770
richards.lisp                | 59.969 / 0.273   | 27.519 / 0.020   | 11.064 / 0.759   | 27.560 / 0.450
cascor1.lisp, sec./epoch (*) | 30.055 / 0.0504  | 1.184 / 0.0238   | 1.561 / 0.1744   | 10.933 / 0.0883
Total score (sum of places)  | 91 / 44          | 64 / 33          | 26 / 85          | 49 / 68
Final ranking                | 4 / 2            | 3 / 1            | 1 / 4            | 2 / 3
MS Windows 98 SE (times in seconds, interpreted / compiled)

Benchmark                    | Allegro CL 6.2   | LispWorks 4.3    | CLISP 2.30 (**)  | Corman Lisp 2.5 (***)
-----------------------------+------------------+------------------+------------------+----------------------
triangle                     | 504.820 / 3.080  | 111.720 / 1.650  | 948.020 / 81.620 | - / 6.643
boyer                        | 43.770 / 0.440   | 4.830 / 0.980    | 34.320 / 5.490   | - / 0.135
browse                       | 56.740 / 0.220   | 3.960 / 0.550    | 23.730 / 2.360   | - / 0.832
ctak                         | 38.620 / 0.160   | 3.510 / 0.060    | 43.500 / 3.630   | - / 0.851
dderiv                       | 5.600 / 0.060    | 0.660 / 0.170    | 3.730 / 1.260    | - / 0.080
deriv                        | 4.450 / 0.010    | 0.550 / 0.160    | 2.530 / 1.040    | - / 0.063
destructive                  | 3.290 / 0.050    | 1.100 / 0.060    | 10.710 / 0.280   | - / 0.059
iterative-div2               | 2.750 / 0.010    | 0.600 / 0.050    | 2.740 / 0.390    | - / 0.014
recursive-div2               | 7.300 / 0.060    | 0.490 / 0.110    | 5.330 / 1.700    | - / 0.040
fft                          | 24.610 / 0.490   | 2.530 / 0.550    | 8.900 / 2.300    | - / 1.040
frpoly-fixnum                | 36.090 / 0.060   | 0.990 / 0.060    | 5.600 / 0.880    | - / 0.041
frpoly-bignum                | 36.690 / 0.110   | 1.210 / 0.160    | 6.310 / 1.380    | - / 0.206
frpoly-float                 | 36.360 / 0.050   | 1.040 / 0.060    | 5.660 / 1.050    | - / 0.081
puzzle                       | 15.330 / 0.330   | 4.010 / 0.220    | 49.490 / 3.350   | - / 0.459
puzzle (C-code)              | not calculated
tak                          | 35.920 / 0.010   | 3.180 / 0.010    | 32.630 / 2.420   | - / 0.040
rtak                         | 35.970 / 0.060   | 3.190 / 0.060    | 33.610 / 2.970   | - / 0.041
takl                         | 258.980 / 0.220  | 18.730 / 0.210   | 116.060 / 6.640  | - / 0.188
takr                         | 3.730 / 0.010    | 0.440 / 0.060    | 3.290 / 0.550    | - / 0.008
stak                         | 46.520 / 0.440   | 5.600 / 0.110    | 131.880 / 4.830  | - / failed (***)
init-traverse                | 25.050 / 0.110   | 12.580 / 0.160   | 101.000 / 2.740  | - / 0.238
run-traverse                 | 206.250 / 0.660  | 32.460 / 1.210   | 511.910 / 15.370 | - / 1.097
richards.lisp                | 59.590 / 0.160   | 5.050 / 0.110    | 74.590 / 3.730   | - / 0.593
cascor1.lisp, sec./epoch (*) | 24.752 / 0.0596  | 2.026 / 0.0501   | 14.824 / 0.4821  | - / 0.1788
Total score (sum of places)  | 61 / 37          | 23 / 49          | 54 / 91          | - / 53
Final ranking                | 3 / 1            | 1 / 2            | 2 / 4            | - / 3
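The 'Total score' rows are the sum, over all benchmarks, of each implementation's
place (1 = fastest) in the interpreted and compiled columns; the lowest total wins.
A minimal sketch of how such a score can be computed, with a helper name of our own
choosing and no special handling of ties:

    ;; Given one benchmark's times for the competing implementations,
    ;; return each implementation's place (1 = fastest).
    (defun places (times)
      (let ((sorted (sort (copy-list times) #'<)))
        (mapcar (lambda (time) (1+ (position time sorted))) times)))

    ;; For the compiled 'boyer' row on Linux:
    (places '(0.240 0.079 0.536 0.150))  ; => (3 1 4 2)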
3. Notes on benchmarking process
All benchmarks were performed in a 'quiet' environment: almost all daemons
on Linux and FreeBSD were stopped, and no graphical environment was running.
The amount of physical memory was large enough to run the LISP processes
without page faults or swapping.
All open-source LISP distributions (that is, all except Allegro CL) were either found
preinstalled or installed from official sources (RPMs and 'ports') on both the Linux and
FreeBSD systems. The same version of each LISP was used on both platforms.
Because of GCL's strange results on FreeBSD (much slower than on Linux), it
was recompiled from source there, but nothing changed significantly.
(*) The cascor1.lisp benchmark finished with a different 'epoch' count on each LISP,
so its result is reported as the mean time per epoch. It should also be pointed out
that each successive epoch took slightly longer to complete, so this benchmark should
not be treated as very strict. In addition, the benchmark was slightly modified to run
on all LISPs: floating-point underflows were worked around, and all arithmetic was
constrained to a single float type (which slows the results somewhat); a sketch of this
workaround follows.
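A minimal sketch of the kind of workaround described above, assuming the simple
approach of coercing everything to SINGLE-FLOAT and trapping the standard
underflow condition (the helper name is ours, not from cascor1.lisp):

    ;; Keep the arithmetic in one float type and return zero on
    ;; underflow instead of letting the condition abort the run.
    (defun underflow-safe-* (a b)
      (handler-case (* (coerce a 'single-float) (coerce b 'single-float))
        (floating-point-underflow () 0.0f0)))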
(**) The Windows version of CLISP was extremely slow. We think this is because
CLISP uses a virtual machine concept, compiling to bytecode rather than native code.
Another possible reason is that it runs in an MS-DOS window, which is scheduled
rather slowly under MS Windows.
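One quick way to see the virtual machine at work, assuming a CLISP listener:
DISASSEMBLE on a compiled function prints CLISP's bytecode rather than native
machine code.

    (defun add1 (x) (+ x 1))
    (compile 'add1)       ; compile to CLISP bytecode
    (disassemble 'add1)   ; prints bytecode instructions, not machine code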
(***) The Corman Lisp 'stak' benchmark ran for about 5 minutes and then hung.
We cannot say much more, except that there was intensive garbage collection during
the test. We suppose this may have happened because an unregistered version was used.
Also, the Corman Lisp benchmarks were performed in compiled form only, because
Corman Lisp compiles all LISP source files on the fly, so there was no way to
test non-compiled interpretation.
Each benchmark was run in non-compiled (interpreted) and compiled variants,
because we also wanted to inspect each LISP's behaviour as a dynamic language (even
when the code is compiled, you can still use constructs such as EVAL and lambda
expressions, which are dynamically interpreted); see the timing sketch below. These
results show that not all LISP implementations handle this equally well.
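For illustration, a minimal sketch of how one benchmark can be timed in both
variants (the file name tak.lisp is hypothetical; TIME prints the run time):

    ;; Interpreted variant: load the source and time a call.
    (load "tak.lisp")
    (time (tak 18 12 6))

    ;; Compiled variant: compile the file, load the resulting
    ;; fasl file, and time the same call.
    (load (compile-file "tak.lisp"))
    (time (tak 18 12 6))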
A very important note: these benchmarks are NOT Linux vs. FreeBSD performance
tests; they are only cross-LISP tests. In this particular case Linux performed somewhat
better. Those who want to know where FreeBSD performs better should read more about
FreeBSD and the cases where it really outperforms Linux. Suffice it to say that one
such case is when memory is tight and/or load is high, and that was not the situation
in these LISP tests.
4. Conclusion
Despite the important and unique features of the other LISP implementations, we can
state that the clear winner among LISPs on UNIX platforms is CMU CL.
Even its interpretation speed is very close to the winners in that category.
But CMU CL has one serious disadvantage: it has no MS Windows implementation and no
good graphics support (there are GTK bindings for Linux, though). So if we chose to
write a cross-platform graphical application, Allegro CL with its Common Graphics
interface would be the better alternative. CLISP and GCL have their own benefits in
some cases, especially for writing applications designed to run on UNIX-like
platforms only.
Disclaimer: we are aware that there are lies, damned lies, statistics, and benchmarks.
The benchmarks published here are offered for your own use (and at your own risk) only.
Comments and corrections are welcome. Please visit the
PRODATA contacts page for contact information.