This package contains Lisp code for benchmarking the performance of different Common Lisp implementations. The tests it runs include the well-known Gabriel benchmarks as well as tests exercising CLOS and other areas of an implementation.
Except for the CLOS COMPILER tests, timings do not include compilation time. The garbage collector is run before each test to try to make the timings more repeatable. For certain targets, we assume that the times reported by GET-INTERNAL-RUN-TIME and GET-INTERNAL-REAL-TIME are accurate. Timings for a given Common Lisp environment may be quite sensitive to the optimization settings; these are set at the beginning of the Makefile.
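As a rough sketch of this methodology (not the actual cl-bench harness), a runner along the following lines could time a single pre-compiled test, with the collector invoked first so that garbage left over from earlier tests does not distort the result. RUN-GC and TIME-BENCHMARK are hypothetical names introduced here for illustration:

    (defun run-gc ()
      "Invoke the garbage collector; this is implementation-specific."
      #+sbcl (sb-ext:gc :full t)
      #-sbcl nil)

    (defun time-benchmark (fn)
      "Call the pre-compiled function FN once, returning the elapsed run
    time and real time in seconds as two values."
      (run-gc)   ; collect before timing, for more repeatable results
      (let ((run0  (get-internal-run-time))
            (real0 (get-internal-real-time)))
        (funcall fn)
        (values (/ (float (- (get-internal-run-time) run0))
                   internal-time-units-per-second)
                (/ (float (- (get-internal-real-time) real0))
                   internal-time-units-per-second))))

Note that the accuracy of such timings is bounded by the granularity of INTERNAL-TIME-UNITS-PER-SECOND, which varies between implementations.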
Common Lisp is a very large language, so it is difficult to evaluate the performance of all aspects of an implementation. Remember that the only real benchmark is your application: this code is representative of real-life programs only to a limited extent.
Further information on obtaining the source code and running the suite accompanies the distribution.
Thanks to Raymond Toy, Christophe Rhodes, Peter Van Eynde, Sven Van Caekenberghe, Kevin Layer, Duane Rettig, Juho Snellman, Bradley Lucier, and likely others whom I have forgotten to note.
"Life is short and it was not meant to be spent making people feel guilty about instruction pipelines being only partly full or caches being missed." -- Kent Pitman in <sfw7ksm3b7k.fsf@shell01.TheWorld.com>
@book{gabriel86performance,
  author    = "Richard P. Gabriel",
  title     = "Performance and Evaluation of Lisp Systems",
  publisher = "MIT Press",
  address   = "Cambridge, Massachusetts",
  year      = 1986,
  url       = "http://www.dreamsongs.com/NewFiles/Timrep.pdf"
}
@inproceedings{Rhodes:2004:grouping,
  author    = "Christophe Rhodes",
  title     = "Grouping Common Lisp Benchmarks",
  booktitle = "1st European Lisp and Scheme Workshop",
  address   = "Oslo, Norway",
  month     = jun,
  year      = 2004,
  url       = "http://www.cs.uni-bonn.de/~costanza/lisp-ecoop/submissions/Rhodes.pdf"
}
Christophe applies a clustering algorithm to the cl-bench results obtained from successive versions of SBCL, to find a grouping of benchmarks according to the similarity of the code paths they exercise.
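To illustrate the idea (this is not the method used in the paper): benchmarks whose timings rise and fall together across successive SBCL versions are probably exercising similar code paths, so a distance between normalized timing profiles can serve as input to a clustering algorithm. NORMALIZE and BENCHMARK-DISTANCE below are hypothetical helpers:

    (defun normalize (timings)
      "Scale a list of timings so that they sum to 1.0, leaving only
    the shape of the curve across versions."
      (let ((total (reduce #'+ timings)))
        (mapcar (lambda (x) (/ x total)) timings)))

    (defun benchmark-distance (timings-a timings-b)
      "Euclidean distance between two normalized timing profiles,
    each a list with one timing per SBCL version."
      (sqrt (reduce #'+ (mapcar (lambda (a b) (expt (- a b) 2))
                                (normalize timings-a)
                                (normalize timings-b)))))

    ;; Benchmarks at a small mutual distance would land in the same
    ;; group; a scaled-up profile has distance zero from the original:
    ;;   (benchmark-distance '(1.2 1.1 0.6) '(2.4 2.2 1.2))  => 0.0
    ;;   (benchmark-distance '(1.2 1.1 0.6) '(0.5 1.0 1.5))  => larger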
Eric Marsden, eric.marsden@free.fr