Student: Bingzheng Wu
Mentor: Jorrit N. Herder
SVN branch name: src.20090525.r4372.wu
Extend MINIX 3's memory grant model in order to benefit from paging.
At this point, we have two subprojects in mind that we want to complete by the mid-term evaluation.
Once the above infrastructure is in place, you should test it by using the memory mappings in the INET server–ethernet driver and/or FS server–sata driver protocol, rather than using safe copies all the time. Hopefully, you'll be able to find performance (higher throughput or, more likely, better CPU utilization) improvements.
We want to complete all this, including code cleanup and testing, etc. by the mid-term evaluation point. I have some ideas for the second half of GSOC (also dealing with VM extensions for MINIX), but I think it's good to focus on this part of the work first.
5.22 - 5.29
I received a homework assignment during the application phase: implement a copy-on-write (COW) page-mapping optimization for safe copies, using VM. I only implemented part of it, because I ran into a problem:
For the mapping, I want to make two phys_regions, one in the source process and one in the destination process, point to the same phys_block so they share it. However, the field that records the phys_block's offset relative to the @vaddr of its vir_region lives in phys_block (as @offset), not in phys_region. As a result, if two phys_regions point to the same phys_block, both regions must place the block at the same offset relative to their @vaddr, which is almost never the case in this task!
I think it's better to move @offset from phys_block to phys_region. But this is a big change that would touch a lot of existing code, so I have to wait for Ben's decision before going on.
This week I only read the VM code, to get more familiar with it.
5.30 - 6.7
Merged Ben's fix for the problem I mentioned in last week's report.
Adjusted the code from the homework phase, but did not finish.
6.8 - 6.14
Ported the homework code, but it still needs debugging.
I wrote a test program to test the safe copy.
6.15 - 6.21
The safe copy works! This was the first task before the mid-term evaluation.
6.22 - 6.28
Refined the test program and ran a simple test on QEMU. The results may not be precise, but they show something.
The result is bad: copying via MAP is more than 10 times slower than a regular data copy.
 size      time
    8    104496
   10    103272
   20    103280
   40    105432
   80    104144
  100    103992
  200    106040
  400    126296
  800    112936
 1000*  1887376
 2000*  1394456
 4000*  1376136
According to the results, the data size is not the key factor: the time changes very little from size 8 to size 800.
I think the reason is that, in most situations, the virtual range that the data will be copied into is already backed by physical memory. So the regular copy is just a data copy, whereas the mapping approach has to communicate with VM, which is a big overhead.
Another problem: if the virtual range in the source process is already backed by physical memory (as mentioned before), the mapping approach first unmaps that physical memory and then maps the destination process's memory in its place. If either of the two processes (source or destination) later writes to that memory, the pages have to be mapped again and the data copied (copy on write). I think this will happen in most cases; at least it is more likely than in fork(), which is usually followed immediately by exec(). So in the end we do the data copy anyway, plus some needless map and unmap work.
So my conclusion is that using mapping to do the data copy is not a good idea, especially in MINIX, where VM is an independent server and the kernel has to communicate with it through messages.
But shared memory via mapping, based on the grant table, may be a good idea.
6.29 - 7.5
Implemented SAFEMAP.
It's very similar to using mapping to do a data copy. The only difference is that:
I am going to implement the revoke-map and un-map next week.
7.6 - 7.10
Implemented SAFEREVMAP and SAFEUNMAP.
Wrapped sys_safemap, sys_saferevmap and sys_safeunmap in library calls.
SAFEMAP accepts these arguments: the grantor endpoint, the grant_id and offset into the grant table, the requester's virtual address (segment + offset), and the map type (RO or RW). After checking the permissions and validity, the kernel sends the request to VM. The request is the same as the one used in the copy-on-write (COW) mapping from a few weeks ago. In addition, to support revocation, the mapping information is kept in a global table.
SAFEREVMAP and SAFEUNMAP both un-map the mapped memory. The difference is that the former is invoked by the grantor, by grant id, while the latter is invoked by the requestor, by virtual address. When it receives one of these requests, the kernel searches the mapping-information table and sends an un-map request to VM if a matching entry is found.
For MAP, VM handles the request almost the same way as the earlier COW mapping.
For UNMAP, VM explicitly copies the shared memory, so that both the grantor and the requester end up with a private, non-shared copy of the physical pages. There is another way: mark the shared pages as COW-shared, so that the explicit copy is only done when the pages are written. But this does not work well, for the following reasons.
Until now, VM has assumed that all pages in user space are writable, so if a page is read-only, it must be COW.
But the situation has changed: a program may call safemap() read-only, so some pages are now read-only without being COW. When a page fault is caused by a program writing to a read-only page, VM has to check whether the page is COW or a read-only MAP before handling the fault.
Unfortunately, the mapping information (the global table mentioned above) is kept in the kernel, because it has to contain the grant_id, so we can't use this table to distinguish COW from read-only MAP. A better way is to add a member share_flag to struct phys_block, recording whether the phys_block is shared as COW or as a read-only MAP.
However, if we add share_flag, what should we do when a page (or phys_block) is shared by two processes as MAP and at the same time shared by two other processes as COW? In fact, even without share_flag we still face this problem.
Let PA, PB and PC denote three processes. The problem: first, PA and PB share a page as MAP, read-write; then PB forks PC, so PB and PC share the same page as COW. The page is read-only in PB's page table, because of COW, but still read-write in PA's page table. So PA can write to the page, while PC does not expect that and cannot detect it.
There is a similar problem if we MAP a page that is already shared as COW.
My solution is simply to avoid this case: never let MAP-sharing and COW-sharing mix on the same page.
I tested all cases:
7.13 - 7.19
Did nothing.
7.20 - 7.26
This week I began part III of GSoC: extending the data store (DS) server.
DS currently just stores memory ranges. What I am going to add is storing a memory range by mapping. As a result, I rewrote most of DS, added some new APIs for the mapping store, and changed some old ones. Fortunately, the old APIs that I changed are not currently used anywhere in MINIX 3, so no other code needs to change.
I have not tested all of the new DS yet; that is next week's task.
7.27 - 7.31
DS now supports 5 types: U32 (unsigned int), STRING (a string shorter than 16 characters), MEM (a memory range), MAP (a mapped memory range), and LABEL (acting as a name server).
Added new APIs for these 5 types.
TODO: test!
8.3 - 8.16
Holiday :)