
Slow reader preference.

In SRP, all new packets are dropped for as long as the buffer is full. Both \ensuremath{R} and \ensuremath{W} are mapped read-only into an application's address space and are updated by the kernel or the network card. The packet grabber in the kernel/card writes data into a group's \ensuremath{PBuf} and updates \ensuremath{W} until the buffer is full, i.e., until \ensuremath{W} catches up with the slowest reader in the group. Thus, the slowest reader in a group may block all other readers in that group. The \ensuremath{R} value of the slowest reader is denoted R$^*$. An application explicitly updates its own \ensuremath{R} by way of a system call after it has processed a number of packets and, if needed, the kernel then also updates R$^*$. One of the keys to speed is that \ensuremath{R} need not be incremented by one for every packet that is processed. Instead, an application may process a thousand packets and then increment \ensuremath{R} by a thousand in one go, saving many kernel boundary crossings. A similar mechanism is used for DAG cards [10].
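The drop-when-full rule and the batched update of \ensuremath{R} can be sketched as follows. This is a minimal illustration under assumptions, not FFPF code: the names (srp_group, srp_write, NSLOTS), the fixed-size reader array, and the use of ever-increasing counters (so that W - R$^*$ measures buffer occupancy) are all choices made here for clarity.

```c
#include <stddef.h>

/* Illustrative sketch of slow reader preference (SRP) accounting.
 * All names are hypothetical; counters increase monotonically so
 * that W - R* gives the number of occupied slots in PBuf. */
#define NSLOTS 1024

struct srp_group {
    unsigned long W;     /* write counter, updated by kernel/NIC only */
    unsigned long R[8];  /* per-reader read counters */
    int nreaders;
};

/* R*: the R value of the slowest reader, i.e., the minimum. */
static unsigned long srp_rstar(const struct srp_group *g)
{
    unsigned long rstar = g->R[0];
    for (int i = 1; i < g->nreaders; i++)
        if (g->R[i] < rstar)
            rstar = g->R[i];
    return rstar;
}

/* The buffer is full when W has advanced NSLOTS slots past R*;
 * in SRP a new packet is then simply dropped.
 * Returns 1 if the packet was stored, 0 if dropped. */
static int srp_write(struct srp_group *g)
{
    if (g->W - srp_rstar(g) >= NSLOTS)
        return 0;            /* full: slowest reader blocks the writer */
    g->W++;                  /* store packet in PBuf[W % NSLOTS] */
    return 1;
}

/* A reader advances its R in one batch after processing n packets,
 * saving one kernel boundary crossing per packet. The kernel would
 * recompute R* here if this reader was the slowest. */
static void srp_advance(struct srp_group *g, int reader, unsigned long n)
{
    g->R[reader] += n;
}
```

Note how the slowest reader governs progress: advancing only the faster reader's \ensuremath{R} does not unblock the writer, while advancing the slowest reader's does.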

As an example, consider the implementation on IXP1200 network processors, where packet processing and buffer management are handled entirely by the IXP. The IXP receives a packet, places it in \ensuremath{PBuf}, updates \ensuremath{W}, receives the next packet, and so on. Meanwhile, the filters are executed on independent processing engines on the network processor and determine whether a reference to the packet should be placed in the filters' index buffers. Assuming that the administrator chose `zero-copy' packet handling (more about the various options in Section 4.3), applications access packets immediately, as the buffers are memory mapped through to userspace. While applications process the packets, the kernel is not involved at all. Only after an application has processed $n$ packets of a flow and decides to advance its \ensuremath{R} explicitly is the kernel activated. On receiving a request to advance an application's \ensuremath{R}, the kernel also calculates the new value of R$^*$ and passes it to the packet receiving code on the IXP. In the extreme case, where a single application is active, the IXP code and the application work fully independently and the number of interrupts and context switches is minimal. The way FFPF's SRP captures packets in a circular buffer and memory maps them to user space is similar to Luca Deri's PF_RING [13], although PF_RING copies packets to each application individually.
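The userspace side of this zero-copy path can be sketched similarly. This is a simulation under assumptions, not the FFPF API: ffpf_advance() stands in for the real system call (which, as described above, would also recompute R$^*$ and push it to the IXP), the packet buffer itself is elided, and the batch size is arbitrary. The point is only that the kernel is entered once per batch, not once per packet.

```c
/* Illustrative sketch of the zero-copy reader loop. The application
 * reads packets straight from the memory-mapped PBuf and crosses into
 * the kernel only once per BATCH packets to advance R. */
#define NSLOTS 1024
#define BATCH  256

static unsigned long W;   /* write counter: kernel/IXP is sole writer */
static unsigned long R;   /* this reader's counter (read-only mapping) */
static int crossings;     /* counts simulated kernel crossings */

/* Stand-in for the real system call that advances R; the kernel
 * would also recompute R* here and pass it to the IXP. */
static void ffpf_advance(unsigned long n)
{
    R += n;
    crossings++;
}

/* Process all currently available packets, advancing R in batches. */
static void reader_loop(void)
{
    unsigned long r = R, processed = 0;

    while (r != W) {
        /* zero-copy: inspect PBuf[r % NSLOTS] directly here */
        r++;
        if (++processed == BATCH) {  /* one crossing per BATCH packets */
            ffpf_advance(processed);
            processed = 0;
        }
    }
    if (processed)                   /* flush the partial batch */
        ffpf_advance(processed);
}
```

With 1000 packets available and a batch size of 256, the loop makes only four kernel crossings instead of one thousand.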


Herbert Bos 2004-10-06