:: pipesfs

Introduction

PipesFS is a Linux virtual filesystem (VFS) for pipelines. With it, you can connect kernel tasks in a style similar to UNIX pipelines. Each directory represents an active task, or filter. Data generated by a filter automatically streams to all its children. Thus (with PipesFS mounted at /dev/pipes),

/dev/pipes/netfilter_in/iinspect

receives network frames from a Linux netfilter hook and forwards them to a 'tcpdump-light' filter that generates ASCII and hex output.

All directories contain a pipe, 'all', through which userspace processes can directly access the kernel stream. In our example, the tcpdump stream can be read with

cat /dev/pipes/netfilter_in/iinspect/all
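
Because 'all' is an ordinary pipe, the stream can equally be redirected or piped into other tools. A minimal sketch (the output filename is illustrative):

# keep a copy of the live tcpdump-style output on disk while still printing it
cat /dev/pipes/netfilter_in/iinspect/all | tee frames.txt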

Streaming I/O in kernelspace

Because PipesFS is based on the filesystem interface, you can construct kernel pipelines using command-line tools, or from most scripting and programming languages. For instance, to sniff all outgoing HTTP GET requests, create the directory

mkdir -p /dev/pipes/netfilter_out/ip/httpplus

and start reading from the pipe

cat /dev/pipes/netfilter_out/ip/httpplus/get/all

As soon as the 'httpplus' filter observes GET requests, it creates the subdirectory ./get and stores the packets in ./get/all. For performance reasons, no data is actually copied into a separate pipe. PipesFS pipes internally use pointers into a shared data stream, so that all filters can export their streams with low overhead.
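
Because the whole interface consists of directories and pipes, the same pipeline can also be built from a script in any language that can create directories and open files. A minimal shell sketch, reusing the filter names from the example above (the hexdump post-processing step is only illustrative):

# build the GET-sniffing pipeline out of kernel filters
mkdir -p /dev/pipes/netfilter_out/ip/httpplus
# the ./get subdirectory only appears once 'httpplus' has seen a GET request
while [ ! -e /dev/pipes/netfilter_out/ip/httpplus/get/all ]; do sleep 1; done
# stream the matched packets through a userspace tool, here a hex dump
cat /dev/pipes/netfilter_out/ip/httpplus/get/all | hexdump -C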

Application interaction with kernel streams

PipesFS not only allows reading of live kernel data from a directory's pipe, it also forwards all data written to this pipe into all subdirectories. This makes it easy to build pipelines that incorporate both kernel filters and userspace processes. For example, the following pipeline parses a libpcap tracefile and sends all its traffic out onto the network

cat trace.pcap > /dev/pipes/accept/all

after the proper directories are set up with

mkdir -p /dev/pipes/accept/unpcap/transmit

NB: This example is currently not fully functional. Drop an email on the mailing list to force me to implement the last part.
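
More generally, the same write path lets a userspace process sit in the middle of a kernel pipeline: read live data from one directory's pipe, transform it, and write the result into another. A minimal sketch, where my_userspace_filter is a hypothetical stand-in for any program that reads packets on stdin and writes packets on stdout (directory names follow the examples above):

# read live frames from a kernel hook, transform them in userspace,
# and inject the result into a kernel-side pipeline
cat /dev/pipes/netfilter_in/all | my_userspace_filter > /dev/pipes/accept/all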

Further reading

Read the documentation in doc/pipesfs of the Streamline software package, or check out the PipesFS paper on our publications page for more in-depth information.

Are you a student interested in systems research? Streamline is a Vrije Universiteit Amsterdam research project. We're always looking for exceptional candidates for our Master's program.