next up previous
Next: Conclusions and future work Up: FFPF: Fairly Fast Packet Previous: Analysis of operational costs


Related work

Much of the related work was discussed in the text. In this section, we discuss projects that, although related, could not easily be linked with any specific aspect of FFPF.

MPF enhances the BPF virtual machine with new instructions for demultiplexing to multiple applications and merges filters that share a common prefix [34]. This approach is generalised by PathFinder, which represents different filters as predicates from which common prefixes are removed [3]. PathFinder is interesting in that it is amenable to implementation in hardware. DPF extends the PathFinder model by introducing dynamic code generation [17]. BPF+ [5] shows how an intermediate static single assignment representation of BPF can be optimised, and how just-in-time compilation can be used to produce efficient native filtering code. All of these approaches target filter optimisation, especially in the presence of many filters, and are therefore not supported directly in FFPF (although it is simple to add them as external functions). With FPL-2, FFPF instead relies on gcc's optimisation techniques and on external functions for expensive operations.
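The prefix-merging idea behind MPF and PathFinder can be illustrated with a small sketch (hypothetical code, not taken from any of these systems): filters are modelled as lists of predicates, and filters that share a prefix are merged into a trie so that the common tests run only once per packet.

```python
# Hypothetical sketch of common-prefix merging (not MPF/PathFinder code):
# each filter is a list of predicates; shared prefixes collapse into one
# trie path, so shared tests are evaluated once per packet.

def build_trie(filters):
    """filters: {name: [predicate, ...]}; predicates compared by identity."""
    root = {}
    for name, preds in filters.items():
        node = root
        for p in preds:
            node = node.setdefault(p, {})
        node.setdefault("__match__", []).append(name)
    return root

def evaluate(node, packet, tests=None):
    """Return (matching filter names, number of predicate evaluations)."""
    if tests is None:
        tests = [0]
    matches = list(node.get("__match__", []))
    for pred, child in node.items():
        if pred == "__match__":
            continue
        tests[0] += 1
        if pred(packet):
            matches += evaluate(child, packet, tests)[0]
    return matches, tests[0]

# Two filters sharing the prefix [is_ip, is_tcp]:
is_ip  = lambda p: p["proto"] in ("tcp", "udp")
is_tcp = lambda p: p["proto"] == "tcp"
to_80  = lambda p: p["dport"] == 80
to_22  = lambda p: p["dport"] == 22

trie = build_trie({
    "http": [is_ip, is_tcp, to_80],
    "ssh":  [is_ip, is_tcp, to_22],
})

pkt = {"proto": "tcp", "dport": 80}
matches, n_tests = evaluate(trie, pkt)
# Shared prefix tested once: 4 predicate evaluations rather than the
# 6 needed to run two independent 3-predicate filters.
```

DPF and BPF+ then go one step further by compiling such a merged predicate structure to native code instead of interpreting it.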

Like FPL-2 and DPF, the Windmill protocol filters also target high performance by compiling filters to native code [24]. And like MPF, Windmill explicitly supports multiple applications with overlapping filters. Compared to FPL-2, however, Windmill filters are fairly simple conjunctions of header-field predicates. MPF extends the BPF instruction set to exploit the fact that most filters concern the same protocol, so that common filter tests can be collapsed. This support appears to operate at the level of assembly instructions, which makes it fairly hard to use. Moreover, in each of these approaches packets are still copied to individual processes, and a context switch is required for any processing other than filtering. As FFPF is extensible and language neutral, each of these approaches can be added to FFPF if needed.

Operating systems like Exokernel and Nemesis [16,22] allow users to add code to the operating system and implement single address spaces to minimise copying. While FFPF could no doubt be implemented efficiently on these systems, one of its strengths is that it minimises copying on a very popular OS that does not have a single address space.

Support for high-speed traffic capture is provided by OCxMon [2]. Like the work conducted at Sprint [18], OCxMon supports DAG cards to cater to multi-gigabit speeds [10]. Unlike FFPF, both approaches have made an a priori decision not to capture the entire packet at high speeds.

Nprobe is aimed at monitoring multiple protocols [27] and is therefore, like Windmill, geared towards applying protocol stacks. Nprobe also focuses on disk-bandwidth limitations and for this reason captures as few bytes of each packet as possible. FFPF, in contrast, has no a priori notion of protocol stacks and supports full packet processing.

Gigascope is a stream database for network analysis that resembles FFPF in that it supports an SQL-like stream query language that is compiled and distributed over a processing hierarchy which may include the NIC itself [11]. Its focus is on data management, however, and there is no support for backward compatibility, persistent storage, or handling dynamic ports.

Most related to FFPF is the SCAMPI architecture, which also pushes processing to the lowest levels [30]. SCAMPI borrows heavily from the way packets are handled by DAG cards [10]. It assumes that the hardware can write packets directly into the applications' address spaces and implements access to the packet buffers through a userspace daemon. Common NICs are supported through standard pcap, whereby packets are first pushed to userspace. Moreover, SCAMPI does not support user-provided external functions, supports only a single BMS, and relies on traditional filtering languages (BPF). Finally, SCAMPI allows only a non-branching (linear) list of functions to be applied to a stream.
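The restriction to a non-branching function list can be made concrete with a small sketch (illustrative Python, not SCAMPI's actual API): each stream is processed by a chain of functions applied left to right, so one stage cannot feed several independent continuations the way a branching processing graph can.

```python
# Hypothetical sketch of a linear (non-branching) stream pipeline;
# the names and packet layout are illustrative only.

def apply_chain(packets, functions):
    """Apply a linear list of stream functions, left to right."""
    stream = packets
    for fn in functions:
        stream = fn(stream)
    return list(stream)

# Example chain: keep TCP packets, then sample every other one.
only_tcp = lambda s: (p for p in s if p["proto"] == "tcp")
sample_2 = lambda s: (p for i, p in enumerate(s) if i % 2 == 0)

packets = [
    {"proto": "tcp", "seq": 1},
    {"proto": "udp", "seq": 2},
    {"proto": "tcp", "seq": 3},
    {"proto": "tcp", "seq": 4},
]
out = apply_chain(packets, [only_tcp, sample_2])
# out contains the TCP packets with seq 1 and 4: seq 2 is dropped by
# only_tcp, and sample_2 then keeps every other surviving packet.
```

A graph in which `only_tcp` simultaneously fed, say, a counter and a sampler cannot be expressed as a single such chain without duplicating work.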


Herbert Bos 2004-10-06