Measuring Performance in Minix

This page documents some features in Minix for measuring where the system is spending its time. If you are interested in optimising the system, measuring is the first thing to do.

top

The first question is: is the time i/o time or cpu time? A simple way to see whether the system is working as hard as it can is 'top'. Then look at the 'idle' time. Example:

CPU states:  19.14% user,  22.76% system,  32.03% kernel,  26.07% idle

The cpu has been idle for 26% of the time, so optimising code for cpu efficiency might be less useful than finding out why the system is blocking.

Statistical Profiling

To measure where the CPU is spending its time without having to change the source code, you can turn on statistical profiling in the kernel. For more information, see StatisticalProfiling.

Code instrumentation

If you want to find out how many cycles a particular block of code takes, use the TIME_BLOCK macro in <timers.h>. Simply place it around a piece of code like so:

TIME_BLOCK(assert(ptr);)

and the macro will periodically print a summary of the measurements using printf() (i.e. to stdout for user programs, and to the console for system processes and the kernel). It looks like this:

/usr/src/lib/libsys/asynsend.c:136,2477,427,1058031,0.0002%

The fields are: file:line, executions, mean cycles per execution, total cycles, percentage of real time spent there.
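As a quick sanity check, the comma-separated fields can be labelled with awk; the sample line below is the one shown above:

```shell
# Label the fields of one TIME_BLOCK summary line.
# The sample line is copied from the output shown above.
echo '/usr/src/lib/libsys/asynsend.c:136,2477,427,1058031,0.0002%' | \
awk -F, '{
        printf "location:     %s\n", $1;
        printf "executions:   %s\n", $2;
        printf "mean cycles:  %s\n", $3;
        printf "total cycles: %s\n", $4;
        printf "time share:   %s\n", $5;
}'
```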

Note that this measures wall-clock time, so you can also measure blocking operations such as waiting for an interrupt in an i/o driver.

If you're measuring many different bits of code (such as all asserts, to see if any are too 'hot'), aggregate the total number of cycles per call site to get better statistics:

awk -F, '{ totals[$1] += $4; total += $4;} END { for(k in totals) { print k, totals[k], totals[k]*100/total; } }'

Then list the biggest spenders, with a running cumulative percentage:

sort -rn -k2 | awk '{ cum += $3; print $1, $2, $3, cum; }'

e.g.:

slaballoc.c:415 35661436700 5.69854 5.69854
slaballoc.c:295 35442880232 5.66361 11.3621
/usr/src/servers/vm/arch/i386/pagetable.c:700 18634118063 2.97765 14.3398
region.c:372 17549949879 2.8044 17.1442
slaballoc.c:484 17381610920 2.7775 19.9217
slaballoc.c:337 17007185712 2.71767 22.6394
region.c:374 16000833123 2.55686 25.1962
region.c:373 15592228182 2.49157 27.6878
slaballoc.c:327 15497846925 2.47649 30.1643
/usr/src/servers/vm/arch/i386/pagetable.c:673 15425901084 2.46499 32.6293

i.e. the top 10 asserts (this measurement did not include kernel asserts) account for 33% of all time spent in asserts.
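Putting the aggregation and ranking steps together, here is an end-to-end run on a small logfile; the filename timeblock.log and the numbers in it are made up for illustration (real input is the TIME_BLOCK output captured from the console):

```shell
# End-to-end example: aggregate total cycles per call site, then rank
# with a cumulative percentage column. The logfile contents are made up.
cat > timeblock.log <<'EOF'
region.c:372,10,100,400,0.1%
slaballoc.c:415,20,100,600,0.2%
region.c:372,10,100,200,0.1%
slaballoc.c:415,5,100,800,0.3%
EOF

awk -F, '{ totals[$1] += $4; total += $4; }
     END { for (k in totals) print k, totals[k], totals[k]*100/total; }' timeblock.log | \
sort -rn -k2 | \
awk '{ cum += $3; print $1, $2, $3, cum; }'
```

With this input, slaballoc.c:415 accounts for 1400 of the 2000 total cycles (70%), and the cumulative column reaches 100 on the last line.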

Benchmarking

Meaningful benchmarking is tricky. Things to consider are cache effects, deferred i/o, timer resolution, statistical significance, and so on. In Minix we are mostly concerned with doing measurements before and after an optimisation, so we can see its effect on a given benchmark.

Minix ships with a 'worldstone' benchmark script that automates much of this process for you - see its manpage for details. By default it runs 'make clean' and 'make all' 5 times and measures how long this takes (as compiling is much of what we do on Minix, this is a popular benchmark), using the cpu cycle counter (for optimal resolution and easy parsing), and prints the times to a logfile. These logfiles can be fed directly to ministat to compute the min, max, median, mean and stddev of the elapsed times, and to compare two datasets for a statistically significant difference.
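The core loop of such a script might look like the minimal sketch below. The workload, run count, and logfile name are placeholders, and unlike the real worldstone it uses one-second date(1) resolution instead of the cpu cycle counter:

```shell
#!/bin/sh
# Minimal sketch of a worldstone-style benchmark loop. The benchmark
# command, run count, and logfile name are stand-ins; the real worldstone
# uses the cpu cycle counter and also writes a config file.
RUNS=5
LOG=time.bench.log
BENCH="make clean && make all"   # placeholder workload

: > "$LOG"
i=1
while [ "$i" -le "$RUNS" ]; do
        start=$(date +%s)
        sh -c "$BENCH" > /dev/null 2>&1
        end=$(date +%s)
        echo $((end - start)) >> "$LOG"
        printf '%d. ' "$i"
        i=$((i + 1))
done
echo Done.
# time.bench.log now holds one elapsed time per line, ready for ministat.
```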

Example of two worldstone runs fed to ministat, in src/servers:

# worldstone
[..]
1. 2. 3. 4. 5. Done.
Time measurements logfile is time.master.worldstone.log.
Config file is .worldstone.21501.
# worldstone
[..]
1. 2. 3. 4. 5. Done.
Time measurements logfile is time.master.worldstone.log.next.
Config file is .worldstone.21501.
# ministat time.master.worldstone.log time.master.worldstone.log.next
x time.master.worldstone.log
+ time.master.worldstone.log.next
+--------------------------------------------------------------------------------+
|                                                           +                    |
|+         x  x x                  +         x              +         +         x|
|   |___________M|_______________A___________A______________M__|_________|       |
+--------------------------------------------------------------------------------+
    N            Min            Max         Median            Avg         Stddev
x   5    27161806509    27258328426    27169649200  27193155863.8  41020150.7714
+   5    27148582405    27244754550    27230223357  27210093689.2  38710394.3698
No difference proven at 95.0% confidence

Ministat reports that the datasets are not statistically significantly different - which is just as well, as they are two runs of the same experiment. But you get the idea.
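If ministat is not at hand, similar summary statistics can be computed with plain awk. The input values below are illustrative elapsed times in the same range as the runs above:

```shell
# Compute min, max, mean, and (population) stddev of a column of elapsed
# times, as a rough stand-in for ministat's summary line.
# The input values are made up for illustration.
printf '%s\n' 27161806509 27258328426 27169649200 27148582405 27244754550 | \
awk '{
        if (NR == 1 || $1 < min) min = $1;
        if (NR == 1 || $1 > max) max = $1;
        sum += $1; sumsq += $1 * $1;
}
END {
        mean = sum / NR;
        printf "N=%d min=%.0f max=%.0f mean=%.1f stddev=%.1f\n",
               NR, min, max, mean, sqrt(sumsq / NR - mean * mean);
}'
```

Note that ministat uses the sample standard deviation (dividing by N-1), so its stddev will be slightly larger than the population value printed here.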


developersguide/performancemeasurement.txt · Last modified: 2014/11/11 08:52 (external edit)