VM design notes


In order to encapsulate VM functionality, PM has been split into process management and memory management. The memory management task, called VM, implements the memory part of the fork, exec, exit, etc. calls: PM calls into VM when those calls are made, and VM mostly talks to SYSTEM rather than PM. This split has made PM architecture independent.

VM server

VM manages memory (keeping track of used and unused memory, assigning memory to processes, and freeing it) and can operate in paged and non-paged mode. In non-paged mode, only segments are used to separate processes. In paged mode, every process (currently except boot image processes) has its own virtual address space, managed by VM. There is a clear split between architecture dependent and independent code in VM. For i386, the Intel page tables reside in VM's address space: VM maps them into its own address space, by editing its own page table, so it can edit them directly.


The kernel has almost no knowledge of page tables. It doesn't create page tables either, and initially all processes (i.e. boot time processes) run without page tables, in segments only. kernel/table.c defines which boot time processes get their own full page table with a sparse address space (the FULLVM flag); for these, the memory preallocated by the boot monitor for stack+heap, the executable's chmem size, gets freed by VM. The other processes keep their preallocated memory and small address space. FULLVM processes are initially blocked from running so that VM gets the opportunity to set up their address space and start the stack off where it's supposed to be (it can't be moved afterwards, of course).

The kernel is mapped into all page tables and no longer has its own page table. Therefore it can no longer access all of physical memory. It maps in pieces of other processes' address spaces as needed and does copying optimistically, trapping the page fault if memory turns out to be absent or readonly. (VM handles this case; the kernel then retries the copy later.)
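The optimistic copy described above can be sketched as follows. This is a toy model with invented names (try_copy, vm_handle_fault), not the kernel's real code: the "page fault" is simulated by a flag, and "VM" is a function that flips it.

```c
/* Sketch of the optimistic-copy pattern: attempt the copy assuming
 * the memory is there; if the (simulated) page is absent, report
 * EFAULT so the caller can let VM map it in and then retry. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define OK      0
#define EFAULT  (-14)

static int page_present = 0;    /* toggled by the "VM" stand-in below */

/* Stand-in for VM resolving the fault by mapping the page in. */
static void vm_handle_fault(void) { page_present = 1; }

/* Optimistic copy: assume the page is mapped; fail if it is not. */
static int try_copy(char *dst, const char *src, size_t n)
{
    if (!page_present)
        return EFAULT;          /* "page fault": memory absent */
    memcpy(dst, src, n);
    return OK;
}

/* The kernel pattern: on EFAULT, let VM fix things up, then retry. */
static int copy_with_retry(char *dst, const char *src, size_t n)
{
    int r = try_copy(dst, src, n);
    if (r == EFAULT) {
        vm_handle_fault();
        r = try_copy(dst, src, n);
    }
    return r;
}
```

In the real kernel the retry is of course asynchronous (the request is parked until VM sends a message back), but the control flow is the same: try, fault, resolve, retry.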

Old segments

On Intel, segments must always be set, so VM loads large segments spanning the whole virtual address space. This has the effect of introducing a third address type, especially in the kernel.

Previously, there were only 'virtual' (sometimes called 'logical') and 'physical' addresses; a virtual address was translated by the segment mapping to a physical one. Now there is an extra layer of address mapping: a virtual address is translated by the segment to a linear address, and the linear address is translated by the page table to a physical address. Although this has little impact on the kernel code, all memory arithmetic must know whether it's dealing with a virtual, linear, or physical address. Broadly, the old 'physical' addresses are now linear addresses. umap_* translates to a linear address, which needs further interpretation by the relevant page table. The memory copying code, for instance, has been taught to parse the page tables to do this mapping from linear to physical.
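The two translation steps can be sketched like this. The page table here is a toy single-level array, and the names (virt_to_linear, linear_to_phys) are illustrative, not the kernel's real walker:

```c
/* virtual -> linear -> physical, as described above.
 * Toy model: one flat page table indexed by linear page number. */
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define PAGE_MASK (PAGE_SIZE - 1)

static uint32_t page_table[16];     /* frame address per linear page */

/* virtual -> linear: the segment mapping just adds the segment base. */
static uint32_t virt_to_linear(uint32_t seg_base, uint32_t vaddr)
{
    return seg_base + vaddr;
}

/* linear -> physical: look up the frame, keep the in-page offset. */
static uint32_t linear_to_phys(uint32_t laddr)
{
    uint32_t frame = page_table[laddr / PAGE_SIZE];
    return (frame & ~PAGE_MASK) | (laddr & PAGE_MASK);
}
```

A virtual address 0x100 in a segment based at 0x1000 becomes linear 0x1100; if linear page 1 is backed by frame 0x9000, the physical address is 0x9100. This is exactly why umap_* results (linear addresses) need a further page-table lookup before they can be used as physical addresses.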

Handling absent memory

There are two major cases in which memory is needed that can't be used directly:

  • memory in a range that is mapped logically, but not physically (currently that is on-demand anonymous memory)
  • memory that is mapped physically, but readonly as it's mapped in more than once (shared between processes that have forked), and so can't be written to directly.

In the first case there is no page table entry at all; in the second case, VM makes sure the page is mapped readonly.
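Distinguishing the two cases amounts to looking at the page table entry's present and writable bits (0x1 and 0x2 on i386). The classifier below is a sketch with invented names; the enum values correspond to the two bullets above:

```c
/* Classify a write access by its (i386-style) page table entry:
 * no entry at all  -> on-demand anonymous memory must be mapped in;
 * present but r/o  -> page is shared (after fork), needs a copy. */
#include <assert.h>
#include <stdint.h>

#define PTE_PRESENT 0x1     /* i386 PTE bit 0: present */
#define PTE_WRITE   0x2     /* i386 PTE bit 1: writable */

enum fault_kind { MAP_ANON, COW_COPY, NO_FAULT };

static enum fault_kind classify_write_fault(uint32_t pte)
{
    if (!(pte & PTE_PRESENT))
        return MAP_ANON;    /* first case: not mapped physically */
    if (!(pte & PTE_WRITE))
        return COW_COPY;    /* second case: mapped readonly, shared */
    return NO_FAULT;        /* writable and present: no VM help needed */
}
```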

There are two major situations in which either of these cases can arise:

  • a process uses the memory itself (page fault)
  • the kernel wants to use that memory

The kernel must check for these cases whenever it wants to touch memory, e.g. in IPC, but also when copying memory to or from processes in kernel context. If the kernel detects such a case, it stores the event, notifies VM, does not reply to the requester yet, and continues its event loop. VM then handles the situation (specifically, mapping in a copy of the page, or an entirely new page, as the case may be) and sends a message to the kernel.
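The store-and-defer step might look like the following, reduced to its bare bones. Everything here (the pending struct, suspend_on_fault, vm_memory_ready) is invented for illustration; the real kernel tracks per-process state and real IPC messages:

```c
/* Sketch of deferring a reply across a VM round-trip: the kernel
 * parks the faulting request instead of replying, and completes it
 * only when VM reports that the memory is mapped. */
#include <assert.h>

struct pending {
    int caller;     /* endpoint waiting for its reply */
    int active;     /* is a request parked? */
};

static struct pending pend;

/* A memory fault happened on behalf of `caller`: remember it
 * (and, conceptually, send a notification to VM). */
static void suspend_on_fault(int caller)
{
    pend.caller = caller;
    pend.active = 1;
}

/* VM's "memory is there now" message arrives: unpark the request
 * and return whom the delayed reply should go to. */
static int vm_memory_ready(void)
{
    pend.active = 0;
    return pend.caller;
}
```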

Physical / contiguous memory

Many areas in the system, inside and outside the kernel, assume that memory which is contiguous in the virtual address space is also contiguous in physical memory, but this assumption is no longer true. Therefore all instances of umap calls in the kernel had to be checked to see

  • whether an extra lookup had to be done to get the real physical address
  • whether that code assumes the memory is physically contiguous, or even present at all

Processes that specifically need physically contiguous memory have to ask for it. The kernel prints a warning if an old umap function is called. A new umap segment (VM_D, as opposed to D) was added that does a physically-contiguous check but doesn't print a warning: VM_D indicates that the caller is aware that memory isn't automatically contiguous physically, and that if it wants it to be, it has made arrangements for that itself, e.g. by using alloc_contig().
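The physically-contiguous check behind VM_D boils down to walking the region page by page and verifying that each frame directly follows the previous one. The sketch below uses a toy frame array instead of a real page-table walk:

```c
/* Sketch of a physical-contiguity check: a linear range is usable
 * as one physical range only if consecutive pages map to
 * consecutive frames. `frames` is a toy stand-in for the page table. */
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

static uint32_t frames[8];      /* physical frame of each linear page */

static int phys_contiguous(uint32_t lin, uint32_t bytes)
{
    uint32_t first = lin / PAGE_SIZE;
    uint32_t last  = (lin + bytes - 1) / PAGE_SIZE;

    for (uint32_t p = first; p < last; p++)
        if (frames[p + 1] != frames[p] + PAGE_SIZE)
            return 0;           /* gap: not physically contiguous */
    return 1;
}
```

A range that spans two adjacent frames passes the check; as soon as one frame does not follow its predecessor, the range must be rejected (or the caller must have used alloc_contig() in the first place).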


Drivers have been updated to

  • Request contiguous memory if necessary (DMA)
  • Request it below 16MB physical memory (DMA; lance and floppy) or below 1MB physical memory (BIOS driver)
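The kind of bounded allocation these drivers need can be sketched as follows. This is a deliberately naive bump allocator with invented names, only meant to show the constraint (contiguous, below a physical limit), not MINIX's actual allocator:

```c
/* Toy bounded allocator illustrating the DMA constraints above:
 * hand out physically contiguous ranges that lie entirely below a
 * given physical limit (16MB for ISA DMA, 1MB for the BIOS driver). */
#include <assert.h>
#include <stdint.h>

#define LOW1M   (1u * 1024 * 1024)
#define LOW16M  (16u * 1024 * 1024)

static uint32_t next_free = 0x100000;   /* toy free-memory cursor */

/* Return the physical base of `bytes` contiguous bytes ending below
 * `limit`, or 0 on failure. */
static uint32_t alloc_contig_below(uint32_t bytes, uint32_t limit)
{
    if (next_free + bytes > limit)
        return 0;               /* would cross the DMA boundary */
    uint32_t base = next_free;
    next_free += bytes;
    return base;
}
```

A real allocator would keep a free list and take alignment flags, but the essential point is the same: the driver, not the kernel, states the physical constraint explicitly when it asks for memory.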
releases/3.2.0/developersguide/vminternals.txt · Last modified: 2014/11/14 16:28 by lionelsambuc