New Freecell Solver gcc-4.5.0 vs. LLVM+clang Benchmark

Elazar Leibovich elazarl at gmail.com
Tue Feb 1 00:30:41 IST 2011


On Mon, Jul 19, 2010 at 12:18 PM, Nadav Har'El <nyh at math.technion.ac.il>wrote:

> Imagine, for example, that you run a certain program 5 times and get the
> times: 20.0, 18.0, 18.1, 27.0, 18.1
> Evidently, the first run was slower because things were not in the cache,
> and the run that took 27.0 was delayed by some other process in the
> background
> taking up the CPU or disk. The minimum run time, 18.0, is the most
> interesting
> one - it is the time a run would take every time, if things were perfect.
> If you average the above numbers, or find the standard deviation, etc.,
> the numbers would not be very interesting...
>

I just heard my intuition against that claim echoed by a master, Joshua Bloch
of "Effective Java" fame.

Long story short: he claims that modern computers are highly
non-deterministic, and he demonstrated a 20% running-time variation for the
same JVM running the same code. He argues, as I felt, that you must apply
statistics to benchmark results to get a meaningful answer, and I think that
implies the minimum is not the way to go here. I recommend this 30-minute
talk regardless of this discussion.
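For what it's worth, the difference between "take the minimum" and summary statistics can be illustrated on the very numbers Nadav quoted above. This is a hypothetical sketch of my own, not anything from the talk:

```python
# Sketch (not from the talk): min vs. basic statistics over the
# run times quoted above, using only the standard library.
from statistics import mean, median, stdev

times = [20.0, 18.0, 18.1, 27.0, 18.1]  # seconds, from the quoted example

best = min(times)       # the "perfect conditions" estimate
avg = mean(times)       # dragged upward by the 27.0 outlier
med = median(times)     # robust to a single outlier
spread = stdev(times)   # a large spread signals a noisy measurement

print(f"min={best} mean={avg:.2f} median={med} stdev={spread:.2f}")
```

The median is arguably a compromise between the two views: it discounts the occasional background-process outlier without pretending, as the minimum does, that the machine ever runs deterministically.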

Video: http://parleys.com/#id=2103&sl=12&st=5
slides: http://wiki.jvmlangsummit.com/images/1/1d/PerformanceAnxiety2010.pdf

