===== GCOV (code coverage) support =====

==== Introduction ====

Code coverage testing is a powerful development aid, predominantly used to verify the completeness of test sets. A program that has been compiled with code coverage support keeps track of how many times each of its basic blocks is invoked. This information can be extracted at run time and mapped back to the program's source code, in order to give the programmer an idea of how many times each line of source code has been triggered at run time.

MINIX3 offers support for code coverage testing of its system services. The original code coverage infrastructure aimed at supporting GCOV for GCC, but MINIX3 has since largely moved to using LLVM, and the GCC GCOV facilities likely no longer work. However, as of MINIX 3.4.0 (git commit 3ac5849), it is possible to obtain coverage data for MINIX3 system services compiled with LLVM. This new facility extends the previous GCOV infrastructure.

The code coverage infrastructure currently supports obtaining coverage data for the source code modules of MINIX3 system services, including all servers and drivers. System service libraries are not yet included, for practical reasons. Code coverage is not yet supported for the kernel either. Userland programs are not supported by this infrastructure at all.
==== How to use ====

Coverage support has to be compiled in. This can be done by setting the MKCOVERAGE build variable. MKCOVERAGE is supported for native compilation of MINIX3 on MINIX3:

> MKCOVERAGE=yes make build
-impossible from minix system processes and the kernel, of course. Anton +
-changed libgcc to call gcov_ wrapper functions instead, with are implemented +
-separately in libc (for normal programs) and libsys (for system processes.)+
-====== How to use ====== +It is also supported ​for crosscompilation:
-Simply set MKCOVERAGE=yes as an environment or make variable. This will cause (included by to set the compiler to gcc, and the CFLAGS to extra necessary gcov options. Of course you have to compile everything like this (both for gcov to work and to not mix ack and gcc object files), so do 'make clean' first. e.g.:+
-<code> +BUILDVARS="​-V ​MKCOVERAGE=yes" ./releasetools/​
-# cd /​usr/​src/​servers/​vfs +
-# make MKCOVERAGE=yes ​clean all +
The compilation process will generate .gcno files that contain static information on how to map coverage data back to the original source code.

After booting a system built with MKCOVERAGE=yes, one can then obtain the dynamic coverage information using the gcov-pull(8) command:
> gcov-pull <label>

The <label> parameter is the label of a running system service. For example:

> gcov-pull vfs
This will generate a set of .gcda files in the current directory, one for each source code module. For native systems, it is typically most convenient to invoke gcov-pull(8) from the corresponding system service's source code directory (e.g., /usr/src/minix/servers/vfs), so that the source, .gcno, and .gcda files for each module are in one place.
With crosscompilation, one will typically want to copy the .gcda files back to the crosscompilation environment and analyze the results from there. Note that in this case, the .gcno files will not be located in the same directory as the source code, but rather in its object directory.
The LLVM llvm-cov(1) utility can be used to produce meaningful output from the combination of the source, .gcno, and .gcda files. The llvm-cov(1) utility itself has been modified heavily in recent times, which means that the required syntax differs between LLVM versions.

On systems with LLVM 3.4 (as of writing, the LLVM version on native MINIX3), one can get a view of the source code of, say, module "foo.c" with per-line coverage information, dumped to stdout, using the following command:
> llvm-cov -gcno=foo.gcno -gcda=foo.gcda

With later LLVM versions, a different syntax is needed to achieve the same results:
> llvm-cov gcov [-o path/to/gcno/files] foo.c

This will then generate a file "foo.c.gcov" with the same per-line coverage output.
The output prefixes each source code line with a counter that shows the number of times the corresponding basic block has been invoked, or "#####" if the basic block has not been invoked yet. The newer llvm-cov has other features that may be interesting as well. For even fancier things, such as generating webpages from the results, see the various guides on how to use LLVM GCOV.
==== Implementation ====
The system service part of the GCOV implementation is in libsys and hooked into the System Event Framework (SEF), which practically means that system services need not include explicit support for code coverage.

VFS is used as a gateway to obtain the coverage information: gcov-pull(8) calls into VFS, requesting that coverage data be obtained from a particular system service into a buffer provided by gcov-pull. Unless VFS itself is the target, VFS relays the request to the system service identified by the label given to gcov-pull. The system service's SEF routines intercept the request and force the compiler-provided GCOV support routines to flush the coverage data. These compiler-provided routines call particular hook functions that would normally write the resulting coverage data to files directly. These hook functions are implemented in libsys as well, and instead copy the data back to the buffer provided by gcov-pull. Once done, gcov-pull produces the actual coverage data files.
The hook functions differ between GCC and LLVM. In our current implementation, libsys/llvm_gcov.c implements the LLVM hook functions by translating them to the GCC hook functions in libsys/gcov.c.
There are inherent dangers associated with VFS's relay function in this story: the relay calls are blocking (ipc_sendrec calls), which means that if the target system service does not respond to the request, VFS, and with it the entire system, may deadlock. This should be improved in the future, but for now the GCOV infrastructure should be considered a somewhat dangerous, debugging-only facility. The corresponding system call is not part of the MINIX3 ABI and should never be used outside gcov-pull(8).
developersguide/gcov.txt · Last modified: 2016/09/25 08:48 by dcvmoole