Measuring time periods with rdtsc in Linux
Elazar Leibovich
elazarl at gmail.com
Wed Apr 20 08:10:07 IDT 2016
Hi,
On all recent Intel hardware, rdtsc returns the number of ticks since
boot, increments at a constant rate, and is synchronized across CPUs.
Vol III 17.14
"For Pentium 4 processors, (...): the time-stamp counter increments at
a constant rate. That rate may be set by the maximum core-clock to
bus-clock ratio of the processor or may be set by the maximum resolved
frequency at which the processor is booted."
Vol III 17.14.1
"On processors with invariant TSC support, the OS may use the TSC for
wall clock timer services
(instead of ACPI or HPET timers). TSC reads are much more efficient
and do not incur the overhead associated with
a ring transition or access to a platform resource."
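Reading the TSC from userland is indeed cheap; for reference, a minimal
sketch of such a read using the __rdtsc() intrinsic from x86intrin.h
(GCC/Clang), which I assume is roughly what a TSC-read helper boils down to:

    #include <stdint.h>
    #include <x86intrin.h>   /* __rdtsc() intrinsic on GCC/Clang */

    /* Read the time-stamp counter; on invariant-TSC hardware this ticks at a
       constant rate, so two reads bracket an interval without any syscall. */
    static inline uint64_t read_tsc(void) {
        return (uint64_t)__rdtsc();
    }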
This patch by Andi Kleen, who should know a thing or two about
processor architecture, seems to imply that the sysfs cpuinfo_max_freq
value is rdtsc's rate:
https://sourceware.org/ml/libc-alpha/2009-08/msg00001.html
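If that is right, converting a TSC delta to wall time should just be a
matter of reading that sysfs file (it reports kHz) and dividing; this is
essentially the assumption my conversion makes. A sketch of it (the helper
names here are mine, not the gist's):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* cpuinfo_max_freq reports kHz. If the TSC really ticked at this rate,
       then ns = ticks * 1,000,000 / freq_khz. */
    static uint64_t max_freq_khz(void) {
        uint64_t khz = 0;
        FILE *f = fopen(
            "/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq", "r");
        if (f) {
            if (fscanf(f, "%" SCNu64, &khz) != 1)
                khz = 0;
            fclose(f);
        }
        return khz;
    }

    static uint64_t ticks_to_ns(uint64_t ticks, uint64_t freq_khz) {
        return ticks * 1000000ULL / freq_khz;
    }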
However, when running the following, I get a constant drift from the
expected result:
https://gist.github.com/elazarl/140ccc8ebe8c98fc5050a4fdb7545df8
The gist of the code:
uint64_t nanosleep_jitter(struct timespec req) {
    /* __real_tsc() and nanosleep_must() are defined in the gist:
       a raw RDTSC read and a checked nanosleep(), respectively. */
    uint64_t now = __real_tsc();
    nanosleep_must(req);
    return __real_tsc() - now;
}
When running it, I get:
$ ./jitter -s 10ms -n 1
-3.5615ms jitter actual 6.4385ms expected 10.0000ms
What am I missing?
How can I get rdtsc's frequency in Linux, preferably from userland?
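The only userland workaround I can think of is calibrating the TSC against
clock_gettime(CLOCK_MONOTONIC_RAW) over a short interval; a rough sketch of
that idea (not what the gist currently does):

    #include <stdint.h>
    #include <time.h>
    #include <x86intrin.h>

    /* Estimate the TSC rate from userland: bracket a ~100ms sleep with both
       CLOCK_MONOTONIC_RAW and TSC reads, then divide ticks by elapsed
       seconds. Assumes an invariant TSC. */
    static double tsc_hz_estimate(void) {
        struct timespec t0, t1, req = { 0, 100 * 1000 * 1000 };
        clock_gettime(CLOCK_MONOTONIC_RAW, &t0);
        uint64_t c0 = __rdtsc();
        nanosleep(&req, NULL);
        uint64_t c1 = __rdtsc();
        clock_gettime(CLOCK_MONOTONIC_RAW, &t1);
        double sec = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        return (double)(c1 - c0) / sec;
    }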
PS
Yes, this is not correct for all processors, but it is correct enough to
work on virtually all recent server hardware, and the constant_tsc flag
can be verified in /proc/cpuinfo (a quick check is sketched below). So
while not perfect for all use cases, this is good enough.
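Checking the flag from code is just a substring search over /proc/cpuinfo;
a quick sketch, with a helper name of my own:

    #include <stdio.h>
    #include <string.h>

    /* Returns 1 if /proc/cpuinfo advertises the constant_tsc flag. */
    static int has_constant_tsc(void) {
        char line[4096];
        int found = 0;
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f)
            return 0;
        while (fgets(line, sizeof line, f))
            if (strstr(line, " constant_tsc"))
                found = 1;
        fclose(f);
        return found;
    }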