

Most network monitoring tools in use today were designed for low-speed networks under the assumption that computing speed compares favourably to network speed. In such environments, the costs of copying packets to user space prior to processing them are acceptable. In today's networks, this assumption is no longer true. The number of cycles available to process a packet before the next one arrives (the cycle budget) is minimal. The situation is even worse if multiple monitoring applications are active simultaneously, which is increasingly common as monitors are used for traffic engineering, SLA monitoring, intrusion detection, steering schedulers in GRID computing, etc. Moreover, the processing requirements are increasing. Consider the following monitoring applications:

  1. An intrusion detection system (IDS) checks the payload of every packet for worm signatures [31].
  2. An application based on the `Coralreef' suite keeps statistics for the ten most active flows [21].
  3. A tool is interested in monitoring flows for which the port numbers are not known a priori. Such flows occur, for example, in peer-to-peer and H.323 multimedia traffic, where the control channel uses a well-known port number while the data transfer takes place on dynamically assigned ports [32].
  4. Multiple monitoring applications (e.g. snort, tcpdump, etc.) access identical or overlapping sets of packets.

In high-speed networks, none of these applications is supported satisfactorily in the kernel by existing solutions such as BPF, the BSD Packet Filter [25], or its Linux cousin, the Linux Socket Filter (LSF). In our view, these applications require a rethinking of the way packets are handled in the operating system.

In this paper, we discuss the implementation of the fairly fast packet filter (FFPF). FFPF introduces a novel packet processing architecture that provides a solution for filtering and classification at high speeds. FFPF has three ambitious goals: speed (high rates), scalability (in number of applications) and flexibility. Speed and scalability are achieved by performing complex processing either in the kernel or on a network processor, and by minimising copying and context switches. Flexibility is considered equally important, and for this reason, FFPF is explicitly extensible with native code and allows complex behaviour to be constructed from simple components in various ways.

On the one hand, FFPF is designed as an alternative to kernel packet filters such as CSPF [26], BPF [25], mmdump [32], and xPF [19]. All of these approaches rely on copying many packets to userspace for complex processing (such as scanning the packets for intrusion attempts). In contrast, FFPF permits processing at lower levels and may require as few as zero copies (depending on the configuration) while minimising context switches. On the other hand, the FFPF framework allows one to add support for any of the above approaches.

FFPF is not meant to compete with monitoring suites like Coralreef that operate at a higher level and provide libraries, applications and drivers to analyse data [21]. Also, unlike MPF [34], Pathfinder [3], DPF [17] and BPF+ [5], the goal of this research is not to optimise filter expressions. Indeed, the FFPF framework itself is language neutral and currently supports five different filter languages. One of these languages is BPF, and an implementation of libpcap exists, which ensures not only that FFPF is backward compatible with many popular tools (e.g., tcpdump, ntop, snort, etc. [31]), but also that these tools get a significant performance boost (see Section 5). Better still, FFPF allows users to mix and match packet functions written in different languages.

To take full advantage of all features offered by FFPF, we implemented two languages from scratch: FPL-1 (FFPF Packet Language 1) and its successor, FPL-2. The main difference between the two is that FPL-1 runs in an interpreter, while FPL-2 code is compiled to fully optimised native code.

The aim of FFPF is to provide a complete, fast, and safe packet handling architecture that caters to all monitoring applications in existence today and provides extensibility for future applications. Since its first release in May 2003, we have constantly improved the code and gained a fair amount of experience in monitoring. We now feel that the architecture has stabilised and that the ideas are applicable to systems other than FFPF as well. FFPF is publicly available. The main contributions of this paper are summarised below.

  1. We generalise the concept of a `flow' to a stream of packets that matches arbitrary user criteria.

  2. Context switching and packet copying are reduced (down to `zero copy' in some configurations).

  3. We introduce the concept of a `flow group': a group of applications with the same access rights to a common packet buffer. Within a group, packets are shared; between groups, they are copied.

  4. Complex processing is possible in the kernel or on the NIC (reducing the number of packets that must be sent up to userspace), while Unix-style filter `pipes' allow complex flow graphs to be built.

  5. Persistent storage for flow-specific state (e.g., counters) is added, allowing filters to generate statistics and to handle flows with dynamic ports elegantly.

  6. Different processing languages may be mixed.

  7. We support different ways of writing packets into buffers, including a novel one that favours fast readers.

  8. The language APIs are extensible with native functions.

  9. Authorisation control prevents unauthorised access to groups and functions, and guards against `silly filters'.

To our knowledge, few existing solutions support even some of these features, and none provides all of them in a single, intuitive architecture. In this paper, we present the FFPF architecture and its implementation in the Linux kernel. The remainder of this paper is organised as follows. In Section 2, a high-level overview of the FFPF architecture is presented. In Section 3, implementation details are discussed. A separate section, Section 4, is devoted to the implementation of FFPF on the IXP1200 network processor. FFPF is evaluated in Section 5. Related work is discussed throughout the text and summarised in Section 6. Conclusions and future work are presented in Section 7.

Herbert Bos 2004-10-06