New Freecell Solver gcc-4.5.0 vs. LLVM+clang Benchmark

Shlomi Fish shlomif at iglu.org.il
Tue Feb 1 14:07:49 IST 2011


On Tuesday 01 Feb 2011 12:09:17 Nadav Har'El wrote:
> On Tue, Feb 01, 2011, Elazar Leibovich wrote about "Re: New Freecell Solver
> gcc-4.5.0 vs. LLVM+clang Benchmark":
> > Long story short, he claims there that modern computers are now highly
> > non-deterministic; he demonstrated a 20% running-time variation with the
> > same JVM running the same code.
> 

I have noticed small variations in my testing times for Freecell Solver, but 
they were nowhere near as dramatic as a 20% difference in run-time. At most 
they were 1 or 2 seconds out of a total run-time of 72-73 seconds.

> I haven't listened to that talk, so I'm just talking now from my own
> experiences.
> 
> It depends on what you're benchmarking, and why. Shlomi was benchmarking a
> single-process, CPU-intensive, program, and garbage collection was not
> involved. Everything in his program was perfectly deterministic.
> 

Well, my program is not a single sequential process: it is multi-threaded 
(because I noticed that using two threads makes it somewhat faster), and I 
also have a multi-process version that used to perform a little better than 
the multi-threaded one, but no longer does. What I do is solve different 
deals in different tasks.
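
To make the division of work concrete, here is a minimal sketch in C with 
POSIX threads (it is not Freecell Solver's actual code, and solve_deal() is a 
made-up placeholder) of the idea of handing different deals to different 
worker tasks: each thread repeatedly takes the next deal number from a shared 
counter and solves it.

/* Minimal sketch: a fixed pool of worker threads, each pulling the next
 * deal number from a shared counter and solving it. Build with -pthread. */
#include <pthread.h>

#define NUM_THREADS 2
#define NUM_DEALS 32000L

static long next_deal = 1;
static pthread_mutex_t deal_lock = PTHREAD_MUTEX_INITIALIZER;

/* Placeholder for the real per-deal solving work. */
static void solve_deal(long deal_idx)
{
    (void)deal_idx;
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;)
    {
        pthread_mutex_lock(&deal_lock);
        const long deal_idx = next_deal++;
        pthread_mutex_unlock(&deal_lock);
        if (deal_idx > NUM_DEALS)
        {
            return NULL;
        }
        solve_deal(deal_idx);
    }
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
    {
        pthread_create(&threads[i], NULL, worker, NULL);
    }
    for (int i = 0; i < NUM_THREADS; i++)
    {
        pthread_join(threads[i], NULL);
    }
    return 0;
}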

> Indeed, the computer around him is *not* deterministic - other processes
> might randomly decide to do something - he might get mail in the middle,
> "updatedb" might start, he might be doing some interactive work at the
> same time, and some clock application might be updating the display every
> second, and something might suddenly decide to access the disk. Or
> whatever.

Well, normally, before benchmarking, I exit from X Windows, kill all stray 
processes from the X session, and run the benchmarked process at the highest 
possible priority (using a sudo_renice script). Currently a typical entry in 
my benchmarks reads:

[quote]

bash scripts/pgo.bash gcc total
r3509 ; trunk
ARGS="--worker-step 16 -l eo"
2.6.37 (vanilla from kernel.org and a custom .config file).
ssh (while not running X).
gcc-4.5.1 with "-flto and -fwhole-program".
google-perftools-1.6 compiled with gcc-4.5.1.
Using sudo_renice.
./Tatzer -l p4b

    Highlight: vanilla 2.6.37 kernel.

72.7685720920563s

========================================================================

[/quote]

So the benchmarking environment is somewhat more deterministic.
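
For illustration, here is a minimal sketch of that kind of setup in C (it is 
not the actual sudo_renice script; I am assuming the script boils down to 
raising the scheduling priority): it re-nices the child to the highest 
priority, runs the given command, and reports the wall-clock time from a 
monotonic clock. The negative nice value needs root or CAP_SYS_NICE.

/* Minimal sketch: run a command at the highest priority and time it. */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2)
    {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    const pid_t pid = fork();
    if (pid == 0)
    {
        /* Child: ask for the highest priority, then run the benchmark. */
        if (setpriority(PRIO_PROCESS, 0, -20) != 0)
        {
            perror("setpriority");
        }
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }
    waitpid(pid, NULL, 0);

    clock_gettime(CLOCK_MONOTONIC, &end);
    const double elapsed = (end.tv_sec - start.tv_sec)
        + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%fs\n", elapsed);
    return 0;
}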

> 
> But if he runs his deterministic application 5 times and gets 5 different
> durations, each duration is composed of the deterministic run-time of
> the application plus a random delay caused by other things on the system.
> The *minimum* of these 5 durations is the one that had the minimum random
> delay, and is thus closest to the "true" run time. Presumably, if you run
> the application on a machine that is as idle as humanly possible, you'd
> get something close to this minimum.
> 

Right.
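
As a toy illustration of taking the minimum of several runs (the workload 
below is just a made-up busy loop standing in for the solver):

/* Minimal sketch: time the same deterministic workload several times and
 * report the minimum, which carries the least random delay from the rest
 * of the system. */
#include <stdio.h>
#include <time.h>

/* Made-up stand-in for the deterministic workload being benchmarked. */
static void workload(void)
{
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
    {
        sum += i;
    }
}

int main(void)
{
    const int runs = 5;
    double min_elapsed = -1.0;

    for (int i = 0; i < runs; i++)
    {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        workload();
        clock_gettime(CLOCK_MONOTONIC, &end);

        const double elapsed = (end.tv_sec - start.tv_sec)
            + (end.tv_nsec - start.tv_nsec) / 1e9;
        if (min_elapsed < 0.0 || elapsed < min_elapsed)
        {
            min_elapsed = elapsed;
        }
    }

    printf("minimum of %d runs: %fs\n", runs, min_elapsed);
    return 0;
}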

Regards,

	Shlomi Fish

> It is true that when the application itself is non-deterministic, or when
> it closely interacts with other non-deterministic parts of the system
> (e.g., io-intensive, depends on network delays, etc.) averaging might make
> more sense.
> 
> Nadav.

-- 
-----------------------------------------------------------------
Shlomi Fish       http://www.shlomifish.org/
Understand what Open Source is - http://shlom.in/oss-fs

Chuck Norris can make the statement "This statement is false" a true one.

Please reply to list if it's a mailing list post - http://shlom.in/reply .


