:: introduction

Design

Streamline is an I/O architecture for operating systems. It handles I/O transport between applications, the kernel, and peripheral devices. For this purpose, it consists of an application-layer library, a kernel module, and optional device drivers. We have also ported Streamline to a few "smart" devices (Intel IXP network processors) to integrate them into the host architecture.

Figure: a diagram of the Streamline I/O architecture.

Streamline processes data through a pipeline, much like Unix pipes. Unlike pipes, however, it does not construct a pipeline from multiple processes, but from small pieces of code (filters) that run inside the same process. This reduces the number of hardware tasks and therefore the task-switching overhead of pipelined execution. We call each location where Streamline can build a pipeline (e.g., a userspace process or a kernel task) an execution space, and the links between filters streams, because they transport data between filters.
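To illustrate the idea, here is a hedged sketch (not Streamline's actual API: the filter_fn type, run_pipeline() and the toy filters are invented names) of a pipeline of filters inside a single execution space, implemented as nothing more than an array of functions applied to each block in turn, with no process or task switch between stages:

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical filter signature: a filter inspects or transforms a block
     * in place and returns 0 to pass it downstream or -1 to drop it. */
    typedef int (*filter_fn)(char *block, size_t len);

    static int only_http(char *block, size_t len)
    {
        /* Toy filter: pass blocks that look like HTTP GET requests. */
        return (len >= 4 && memcmp(block, "GET ", 4) == 0) ? 0 : -1;
    }

    static int count(char *block, size_t len)
    {
        static unsigned long n;
        (void)block;
        printf("block %lu (%zu bytes)\n", ++n, len);
        return 0;
    }

    /* Run one block through the whole pipeline: all filters live in the
     * same execution space, so no context switch is needed per stage. */
    static void run_pipeline(filter_fn *filters, size_t nfilters,
                             char *block, size_t len)
    {
        for (size_t i = 0; i < nfilters; i++)
            if (filters[i](block, len) < 0)
                return;               /* block dropped */
    }

    int main(void)
    {
        filter_fn pipeline[] = { only_http, count };
        char req[] = "GET /index.html HTTP/1.1\r\n";
        run_pipeline(pipeline, 2, req, sizeof req - 1);
        return 0;
    }

A Unix shell pipeline, by contrast, runs each stage in its own process and pays a context switch for every hand-off between stages.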

Pipelines may also extend across multiple spaces. Because cross-space data transport is generally expensive, Streamline optimizes this scenario. Instead of copying each block, it creates large ring buffers that are contiguous in (virtual) memory and maps these into the address spaces of all participating execution spaces. This reduces copying, and task switching drops further because blocks are batched before each switch.
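The sketch below illustrates the principle for two userspace processes on Linux; the names (the "/sl_demo_ring" object, struct ring) are hypothetical and Streamline's real buffers are more elaborate. One contiguous region is created once, mapped by producer and consumer alike, and the consumer is only notified after a batch of blocks has accumulated:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define RING_BYTES (1 << 20)          /* one contiguous 1 MiB data area   */
    #define SLOT       64                 /* fixed-size slots for simplicity  */
    #define BATCH      32                 /* blocks to batch before notifying */

    /* Shared ring header followed by a contiguous data area. Producer and
     * consumer mmap() the same object, so a block written here is never
     * copied again when it crosses the process boundary. */
    struct ring {
        volatile uint64_t head;           /* write offset (producer) */
        volatile uint64_t tail;           /* read offset  (consumer) */
        char data[RING_BYTES];
    };

    int main(void)
    {
        /* Create one shared, virtually contiguous buffer ("/sl_demo_ring" is
         * a made-up name) and map it into this process. */
        int fd = shm_open("/sl_demo_ring", O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, sizeof(struct ring)) != 0)
            return 1;
        struct ring *r = mmap(NULL, sizeof(struct ring),
                              PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (r == MAP_FAILED)
            return 1;

        for (int i = 0; i < 128; i++) {
            char block[SLOT];
            int len = snprintf(block, sizeof block, "block %d", i);

            /* Write the block directly into the shared area: no later copy. */
            memcpy(&r->data[r->head % RING_BYTES], block, (size_t)len);
            r->head += SLOT;

            /* Batch: only wake the consumer every BATCH blocks, so a single
             * task switch is amortized over many blocks. */
            if (i % BATCH == BATCH - 1) {
                /* e.g., signal an eventfd or pipe shared with the consumer */
            }
        }

        munmap(r, sizeof(struct ring));
        close(fd);
        shm_unlink("/sl_demo_ring");
        return 0;
    }

A consumer would shm_open() and mmap() the same object and drain the blocks between tail and head; because both sides see the same memory, crossing the process boundary costs no copy.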

Research goal

Streamline was born out of research into high-performance network processing. Our goal is to facilitate network processing at multi-gigabit rates on cheap PC hardware. The approach we take is to (1) construct pipelines on demand for each application, at runtime, from an extensible set of filters, and (2) automatically optimize these pipelines to minimize transport overhead (copying, context switching, and cache pollution) while exploiting all available hardware (such as peripheral cards or asymmetric cores). Streamline tasks are similar to Unix pipelines; both are examples of stream computing.

Example

Applications offload I/O processing to Streamline by asking it to apply a graph of processing filters to the raw input streams, for example:

select the stream of TCP segments destined for port 80,
reassemble the segments into separate streams,
filter out those streams that carry known attacks, and
save the remainder to tracefiles,

which in Streamline's native request language (SRL) is written as

(tcp) -80-> (untcp) > (ac) > (pcapout)

At runtime, Streamline searches for implementations of these filters. These can exist in a library, but equally in the OS kernel or on dedicated hardware such as programmable network cards or FPGAs. If all filters can be accommodated, streams are set up to connect them. Streams may need to cross the PCI bus, the userspace/kernel boundary, or even a LAN. Optimization of these paths is one of the key factors contributing to Streamline's high performance.
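To make the individual stages concrete, the sketch below shows the kind of per-packet check that the (tcp) -80-> stage above performs. The function name and signature are invented for illustration and do not reflect Streamline's filter interface; the packet-header structures are the standard Linux ones:

    #include <arpa/inet.h>
    #include <netinet/ip.h>
    #include <netinet/tcp.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical stand-in for the "(tcp) -80->" stage: given a raw IPv4
     * packet, return 1 if it is a TCP segment destined for port 80,
     * 0 otherwise. */
    int is_tcp_port80(const uint8_t *pkt, size_t len)
    {
        struct iphdr ip;
        if (len < sizeof ip)
            return 0;
        memcpy(&ip, pkt, sizeof ip);          /* avoid unaligned access */

        if (ip.version != 4 || ip.protocol != IPPROTO_TCP)
            return 0;

        size_t ihl = (size_t)ip.ihl * 4;      /* IP header length in bytes */
        if (ihl < sizeof ip || len < ihl + sizeof(struct tcphdr))
            return 0;

        struct tcphdr tcp;
        memcpy(&tcp, pkt + ihl, sizeof tcp);
        return ntohs(tcp.dest) == 80;         /* destination port 80? */
    }

In Streamline, such a check might run in the kernel, in a userspace library, or be offloaded to a programmable network card, depending on which implementations are available at runtime.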

The base system comes bundled with filters for pattern matching (Aho-Corasick, regular expressions), accounting, filtering (among others, BPF), stream reassembly, packet rewriting, protocol inspection, and more. Obvious uses are intrusion detection, network address translation, media streaming, and real-time processing of scientific data.

Further reading

Check out the publications and presentations pages for many more details.

Are you a student interested in systems research? Streamline is a Vrije Universiteit Amsterdam research project. We're always looking for exceptional candidates for our Master's program.