<div dir="ltr"><font face="georgia,serif"><br></font><br><div class="gmail_quote">On Wed, Jul 25, 2012 at 11:07 AM, Nadav Har'El <span dir="ltr"><<a href="mailto:nyh@math.technion.ac.il" target="_blank">nyh@math.technion.ac.il</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Wed, Jul 25, 2012, Nadav Har'El wrote about "High-resolution user/system times?":<br>
<div>> I'm now trying to measure a process running around 3 milliseconds, less<br>
> than one jiffy, and I still want to understand how much of it is spent in<br>
> user space, and how much of it is spent in kernel space (e.g., handling on<br>
<br>
</div>Thanks for all the ideas.<br>
I think I found an even simpler solution:<br>
<br>
It appears that while times(2) has a 4-ms resolution, </blockquote><div><br>Sanity check: I assume you measured it, right? Out of curiosity I did<br><br>#include <unistd.h><br>#include <sys/times.h><br>#include <stdio.h><br>
<br>int main(void) {<br>    /* sysconf(_SC_CLK_TCK): clock ticks per second, the unit times() reports in */<br>    return printf("%ld\n", sysconf(_SC_CLK_TCK));<br>}<br><br>on a couple of systems, and got 100 both times, which corresponds to a 10 ms resolution. This is what I'd expect if HZ is 100 in the kernel.<br>
<br>I recall a discussion here at work where, I believe, one of the conclusions was that it was possible to have hi-res timers with clock_gettime() and clock_getres() (which I mentioned already), but functions used to get system vs. user time, such as getrusage(), were limited by the software timer accuracy (jiffies).<br>
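<br>If the hi-res timers are indeed available, something along these lines (a rough, untested sketch; error handling omitted) should show the contrast: clock_getres() reporting nanosecond resolution for the POSIX clocks, while getrusage() only fills in ru_utime/ru_stime at whatever granularity the kernel's CPU-time accounting uses:<br>
<br>#include <stdio.h><br>#include <time.h><br>#include <sys/time.h><br>#include <sys/resource.h><br>
<br>int main(void) {<br>    struct timespec res;<br>    struct rusage ru;<br>
<br>    /* Resolution of the hi-res clocks; often 1 ns when hrtimers are enabled. */<br>    clock_getres(CLOCK_MONOTONIC, &res);<br>    printf("CLOCK_MONOTONIC resolution: %ld.%09ld s\n", (long)res.tv_sec, res.tv_nsec);<br>    clock_getres(CLOCK_PROCESS_CPUTIME_ID, &res);<br>    printf("CLOCK_PROCESS_CPUTIME_ID resolution: %ld.%09ld s\n", (long)res.tv_sec, res.tv_nsec);<br>
<br>    /* getrusage() does report user/system CPU time separately, but only at<br>       the granularity of the kernel's CPU-time accounting (jiffies). */<br>    getrusage(RUSAGE_SELF, &ru);<br>    printf("user %ld.%06ld s, sys %ld.%06ld s\n",<br>           (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,<br>           (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);<br>    return 0;<br>}<br>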
<br></div><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
it doesn't mean<br>
that every time a 3-ms process is measured it returns 0. Rather, it<br>
seems probabilistic - the user-space count seems (but I didn't verify in<br>
the code...) to return 0 if the timer interrupt happened while the code<br>
was in the kernel or 1 if the code was in user space.<br>
So what I can do is to run the whole process 1,000 times, and add up the<br>
"1" and "0"s which I get for system time, and if I get for example 30,<br>
I know that the process was in the kernel for 30/1000 of the time.<br>
<br>
I have no idea how accurate this will be, but it might just work, and is<br>
very simple...<br></blockquote><div><br>So I gather you think this will be different from what you wanted to avoid in your original post because you think the startup code will be repeated on each iteration? It's difficult to assess without knowing what you are trying to measure and what the difference is in that context.<br>
<br></div>I cannot really point to why this would not be valid, but intuitively I have my doubts. One possible source of confusion may be the difference between accuracy and precision: averaging over a set of possibly imprecise and possibly inaccurate measurements may improve precision, but not accuracy.<br clear="all">
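<br>Just to make sure we are talking about the same thing, here is a rough, untested sketch of the sampling loop I understand you to be proposing (do_the_work() is of course just a placeholder for your ~3 ms workload):<br>
<br>#include <stdio.h><br>#include <unistd.h><br>#include <sys/times.h><br>
<br>/* Placeholder for the ~3 ms piece of work being measured. */<br>static void do_the_work(void) {<br>    /* ... code under test ... */<br>}<br>
<br>int main(void) {<br>    const int N = 1000;<br>    long user_ticks = 0, sys_ticks = 0;<br>    struct tms before, after;<br>
<br>    for (int i = 0; i < N; i++) {<br>        times(&before);<br>        do_the_work();<br>        times(&after);<br>        user_ticks += after.tms_utime - before.tms_utime; /* usually 0 or 1 per run */<br>        sys_ticks += after.tms_stime - before.tms_stime;<br>    }<br>
<br>    long hz = sysconf(_SC_CLK_TCK);<br>    printf("avg user %g ms, avg sys %g ms\n",<br>           1000.0 * user_ticks / hz / N, 1000.0 * sys_ticks / hz / N);<br>    return 0;<br>}<br>
<br>Note that dividing by N here only averages out the sampling noise of the tick counter; it does nothing about any systematic bias, which is the accuracy vs. precision distinction I mean above.<br>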
</div><br>-- <br>Oleg Goldshmidt | <a href="mailto:oleg@goldshmidt.org" target="_blank">oleg@goldshmidt.org</a><br>
</div>