We have implemented a new network stack for Minix 3. It is split into multiple servers and runs on multiple cores, which improves the performance of the original INET server by several orders of magnitude and improves the dependability of the stack: individual components can be updated and can recover from crashes. A crucial part of the design is passing data through the stack without copying.

We have a basic proof-of-concept implementation, but we believe it is sub-optimal. The goal of this project is to identify inefficiencies in memory allocation and to implement infrastructure with better performance and reliability. For instance, the current implementation must allocate large memory pools that are contiguous in physical address space, since the network devices use physical addresses and contiguity makes translating virtual addresses to physical ones trivial. However, once the system has been running for a while, for example after a process crash, physical memory becomes fragmented and allocating such pools is difficult or impossible. On the other hand, the drivers cannot translate virtual addresses to physical ones on demand, since querying the memory manager is too costly. Clearly, we must be able to allocate from fragmented memory and still translate addresses quickly; it is not clear how.

In addition, network drivers (mostly due to bugs in hardware) impose various constraints on alignment, sizes, and so on. Our current stack also has to trust the drivers that the devices will not use DMA to access arbitrary memory. We envision using an IOMMU to tackle both problems at once. An essential part of the project is evaluating the final implementation and discussing the trade-offs between performance and reliability.