Ibis users

Below we list some users of Ibis. We are always interested in our users' experiences with Ibis, and we are happy to add any project that uses Ibis to this page.

MaRVIN: a distributed platform for massive RDF inference

The Knowledge Representation and Reasoning Group of the Vrije Universiteit uses the Ibis Portability Layer to implement their MaRVIN system for distributed RDF inference. Their submission to the Billion Triple Challenge using this software won third place.

From the MaRVIN page: We have built MaRVIN (Massive RDF Versatile Inference Network), a parallel and distributed platform for performing RDF(S) inference on a network of loosely coupled machines using a peer-to-peer model. MaRVIN can be scaled to arbitrary size by adding computing nodes to the network, is robust against failure of individual components, and displays anytime behaviour (producing results incrementally over time). MaRVIN is not built as a single reasoner, but rather as a platform for experimenting with different reasoning strategies. MaRVIN contains instrumentation to log and visualise its run-time behaviour, and its modular design allows different aspects of the reasoning strategy to be varied.
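
To make the anytime, peer-to-peer idea concrete, here is a toy, self-contained Java sketch (hypothetical code, not taken from MaRVIN): each peer alternates a local inference step with copying a triple to a randomly chosen peer, so derived triples accumulate round by round and partial results are available at any time.

    import java.util.*;

    // Toy MaRVIN-style anytime reasoning over a single RDFS rule (Java 16+).
    public class AnytimePeerSketch {
        record Triple(String s, String p, String o) {}

        // One application of the rule:
        // (a subClassOf b) and (b subClassOf c) => (a subClassOf c).
        static Set<Triple> infer(Set<Triple> triples) {
            Set<Triple> derived = new HashSet<>(triples);
            for (Triple t1 : triples)
                for (Triple t2 : triples)
                    if (t1.p().equals("subClassOf") && t2.p().equals("subClassOf")
                            && t1.o().equals(t2.s()))
                        derived.add(new Triple(t1.s(), "subClassOf", t2.o()));
            return derived;
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            // Two "peers", each starting with part of the data.
            List<Set<Triple>> peers = new ArrayList<>();
            peers.add(new HashSet<>(List.of(new Triple("A", "subClassOf", "B"))));
            peers.add(new HashSet<>(List.of(new Triple("B", "subClassOf", "C"))));

            for (int round = 0; round < 5; round++) {
                for (Set<Triple> p : peers)
                    p.addAll(infer(p));  // local reasoning: partial results now
                // "swap" phase: copy a random triple to a random peer, so
                // triples that match the same rule eventually meet.
                Set<Triple> from = peers.get(rnd.nextInt(peers.size()));
                if (!from.isEmpty())
                    peers.get(rnd.nextInt(peers.size())).add(from.iterator().next());
                System.out.println("after round " + round + ": " + peers);
            }
        }
    }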

Parallel simulation of soil-structure interaction

Michael Mueller of the Institute of Numerical Methods and Informatics in Civil Engineering is implementing an intra-grid system for the parallel simulation of soil-structure interaction in geotechnical problems.

The constitutive equations are described by the Theory of Porous Media, in which the interaction between the involved phases is considered. The averaging of the phases over a representative elementary volume is carried out using the concept of volumetric content. The complexity and size of three-dimensional engineering problems in the field of fluid-soil interaction make the simulation numerically very costly, especially when the results are to be improved by fully adaptive methods. In our project we apply fully adaptive p-refinement of the FEM.
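
For readers unfamiliar with the volumetric-content concept, the standard definitions from the Theory of Porous Media read as follows (textbook material, not taken from this project):

    % Volume fraction of phase \alpha (solid skeleton, pore fluid, ...),
    % averaged over a representative elementary volume dv:
    n^\alpha = \frac{\mathrm{d}v^\alpha}{\mathrm{d}v},
    \qquad \sum_\alpha n^\alpha = 1 \quad \text{(saturation condition)},
    \qquad \rho^\alpha = n^\alpha \, \rho^{\alpha R}
    % where \rho^{\alpha R} denotes the effective (real) density of phase \alpha.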

The parallelization is based on mobile Java agents, taking advantage of agent features such as portability, mobility, and reactivity. In the time-critical parts of the parallel simulation, mainly the solution of the linear system of equations, we use Ibis for more efficient communication, solving the problem of very slow agent-based messaging. Ibis offers all the features needed for an efficient and trouble-free implementation, such as one-to-many communication and the possibility to send arbitrary objects.
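
As an illustration of these two features, here is a minimal sketch in the style of the Ibis Portability Layer (IPL) programming examples; the port configuration follows the IPL API, but the class and the data sent are made up:

    import ibis.ipl.*;

    // Sketch: a master broadcasts an arbitrary serializable object (say, a
    // block of the linear system) to all workers over a one-to-many port.
    public class BroadcastSketch {
        static final PortType ONE_TO_MANY = new PortType(
                PortType.COMMUNICATION_RELIABLE,
                PortType.SERIALIZATION_OBJECT,     // arbitrary Java objects
                PortType.RECEIVE_EXPLICIT,
                PortType.CONNECTION_ONE_TO_MANY);  // one sender, many receivers

        public static void main(String[] args) throws Exception {
            Ibis ibis = IbisFactory.createIbis(
                    new IbisCapabilities(IbisCapabilities.ELECTIONS_STRICT),
                    null, ONE_TO_MANY);
            IbisIdentifier master = ibis.registry().elect("master");

            if (master.equals(ibis.identifier())) {
                SendPort sender = ibis.createSendPort(ONE_TO_MANY);
                // connect to each worker's "solver" port, with identifiers
                // learned from the registry: sender.connect(worker, "solver");
                WriteMessage w = sender.newMessage();
                w.writeObject(new double[][] { { 4.0, 1.0 }, { 1.0, 3.0 } });
                w.finish();
                sender.close();
            } else {
                ReceivePort receiver = ibis.createReceivePort(ONE_TO_MANY, "solver");
                receiver.enableConnections();
                ReadMessage r = receiver.receive();
                double[][] block = (double[][]) r.readObject();
                r.finish();
                System.out.println("received " + block.length + " rows");
                receiver.close();
            }
            ibis.end();
        }
    }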

The numerical part of the simulation works reasonably well; at present we are mainly dealing with load balancing and job scheduling.

ProActive

ProActive is a Java grid library for parallel, distributed, and concurrent computing, also featuring mobility and security in a uniform framework. With a reduced set of simple primitives, ProActive provides a comprehensive API that simplifies the programming of applications distributed on a local area network (LAN), on clusters of workstations, or on Internet grids. See the ProActive website for more information.
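
A minimal sketch of ProActive's core primitive, creating an "active object" whose method calls are served asynchronously. The class Solver is hypothetical, and the exact newActive signature has varied between ProActive releases, so treat this as illustrative rather than definitive:

    import org.objectweb.proactive.ProActive;

    // Hypothetical user class; active objects need a public no-arg constructor.
    class Solver {
        public Solver() {}
        public void iterate() { /* e.g. one relaxation step of a solver */ }
    }

    public class ActiveObjectSketch {
        public static void main(String[] args) throws Exception {
            // Instantiate Solver as an active object (here on the local node;
            // a Node argument would place it on a remote JVM). Method calls
            // on the returned stub become asynchronous requests.
            Solver solver = (Solver) ProActive.newActive(
                    Solver.class.getName(), new Object[0]);
            solver.iterate();
        }
    }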

ProActive has been ported to Ibis by Fabrice Huet. Laurent Baduel used this version for a Java version of EM3D: a parallel solver for electromagnetic wave propagation. Performance results are presented in Section 6.4.3 of his thesis.

MEG scanner data processing

The Vrije Universiteit Medical Centre (VUmc) possesses a MEG scanner that produces large amounts of data, all of which must be processed. Magnetoencephalography (MEG) is a tool for studying the function of the human brain. MEG measures the magnetic field intensity at hundreds of points over the surface of the skull, up to several thousand times per second. Measurements made while a subject is recognizing a picture, performing mathematical calculations, watching an alternating checkerboard pattern, sitting quietly with their eyes closed, or performing a host of other tasks provide insight into the functioning of the brain, whether healthy or diseased.

The size of a data set from one session with one subject is often hundreds of megabytes. When this is multiplied by dozens of subjects in a study, and perhaps multiple sessions per subject, the computational demands become arduous, and clustered computing resources are often necessary. As our processing code is written in Java, Ibis's ability to move hundreds of megabytes to many different nodes quickly, simply, and efficiently is a great advantage. Once a copy of a data set is on the local hard disk of a node, processing the data becomes much quicker and easier.
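
As a rough illustration of how such a transfer could look with the Ibis Portability Layer (a sketch under our own assumptions, not the VUmc code), the receiving side can spool incoming chunks straight to the node's local disk:

    import ibis.ipl.*;
    import java.io.*;

    // Sketch: receive a large dataset in fixed-size chunks and write it to a
    // local scratch file, so later processing reads from the local disk.
    public class DatasetSpoolSketch {
        static void receiveTo(ReceivePort port, File scratch) throws IOException {
            byte[] chunk = new byte[1 << 20];          // 1 MiB per chunk
            try (OutputStream out = new BufferedOutputStream(
                    new FileOutputStream(scratch))) {
                ReadMessage m = port.receive();
                long remaining = m.readLong();         // sender sends size first
                while (remaining > 0) {
                    int n = (int) Math.min(chunk.length, remaining);
                    m.readArray(chunk, 0, n);          // bulk primitive-array read
                    out.write(chunk, 0, n);
                    remaining -= n;
                }
                m.finish();
            }
        }
    }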

Jylab

Jylab is a portable and flexible scientific computing environment. At minimum, it provides a user with a scripting language and a core set of libraries implementing numerical linear algebra (NLA) routines and communication models. The communication models are provided by Ibis. Jylab thus enables the development of scientific applications on distributed computing platforms.

Jylab owes its portability to Jython, an implementation of the Python programming language written in Java; Jylab therefore runs on any platform that provides a recent JVM. It is also flexible, since one normally programs in Python, a very high-level, object-oriented, dynamically typed language.
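
To illustrate the embedding that makes this work, here is a minimal Java example using Jython's PythonInterpreter (illustrative; Jylab's actual setup is more elaborate). Python code executed this way runs on the JVM and can use Java classes directly:

    import org.python.util.PythonInterpreter;

    public class JythonSketch {
        public static void main(String[] args) {
            PythonInterpreter py = new PythonInterpreter();
            // Python on the JVM imports Java classes directly; this is how a
            // Jython-based environment reuses Java libraries such as Ibis.
            py.exec("from java.util import ArrayList");
            py.exec("v = ArrayList()");
            py.exec("v.add(3.14)");
            py.exec("print v");  // Jython 2.x, hence Python 2 syntax
        }
    }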

A grid-wide file system

Sasha Ruppert of the Universität Erlangen-Nürnberg in Erlangen, Germany, built a grid-wide file system using Ibis as the communication layer.

The KOALA grid scheduler

KOALA is a grid scheduler that has been designed, implemented, and deployed by the PDS group in Delft on the DAS-2 multicluster system, in the context of the Virtual Lab for e-Science (VL-e) research project. The main feature of KOALA is its support for co-allocation, that is, the simultaneous allocation of resources in multiple clusters of the DAS to a single application consisting of multiple components. Ibis has been used for many of the KOALA scheduling experiments.

JavaGAT users

Below we list projects and organizations that use JavaGAT, and what they use it for:

Vrije Universiteit Amsterdam (www.vu.nl): JavaGAT is developed here, and several of our grid projects use it. We also use it for the Grid Computing course here; students write grid programs with JavaGAT.
The Open Grid Forum SAGA standard, the Simple API for Grid Applications (saga.cct.lsu.edu): The Java reference implementation of the SAGA standard (JavaSAGA) uses JavaGAT to access grid middleware.
European Space Agency, ESA (www.esa.int): JavaSAGA and JavaGAT are currently being evaluated by ESA for use in a grid infrastructure.
The Dutch Virtual Labs for e-Science project, VL-e (www.vl-e.nl): JavaGAT is developed as part of the VL-e middleware.
VUmc, the Vrije Universiteit Medical Center, Amsterdam (www.vumc.nl): Submitting complex medical applications to the grid; data management.
AMC, the Amsterdam Medical Center (www.amc.nl): Medical data management.
Max Planck Institute for Astrophysics, Garching (www.mpa-garching.mpg.de): Job submission and file transfer.
Louisiana State University and the University of Texas: ChemGrid, a computational chemistry middleware project that needs many grid features.
AMOLF, the Institute for Atomic and Molecular Physics, The Netherlands (www.amolf.nl): A Fourier Transform Mass Spectrometry (FTMS) analysis application; the FTMS dataset is streamed to compute resources with JavaGAT, over ssh, sftp, and gridftp.
The Triana workflow system, University of Cardiff (www.trianacode.org): Can use the GAT to start jobs on the grid.
DataFinder, from the AeroGrid project, German Aerospace Center, Germany (www.aero-grid.de): Job submission and file access.
PartnerGrid: Job submission.
Georgia State University: unknown.
The Multimedian project (www.multimedian.nl): Starting parallel jobs on the grid.
Zuse Institute Berlin, Germany: unknown.
INRIA, France: unknown.
The TextGrid project (www.textgrid.de): Files and jobs.
Jylab: unknown.
The Integrade project: The Integrade adaptor set (an object-oriented grid middleware layer) and an Eclipse-based IDE.
CoreGRID: unknown.
The Large Knowledge Collider project (LarKC): Job submission and files.
Universitat Politècnica de Catalunya (UPC) / Barcelona Supercomputing Center (BSC): Job submission and file transfer in the COMP Superscalar grid framework.
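
For reference, the typical JavaGAT job-submission pattern looks roughly as follows (a sketch modelled on the JavaGAT 2.x examples; the broker URI and file names are placeholders). The "any" scheme leaves adaptor selection (ssh, Globus, ...) to the GAT engine, which is what lets the same code run on different middleware:

    import org.gridlab.gat.GAT;
    import org.gridlab.gat.GATContext;
    import org.gridlab.gat.URI;
    import org.gridlab.gat.resources.Job;
    import org.gridlab.gat.resources.JobDescription;
    import org.gridlab.gat.resources.ResourceBroker;
    import org.gridlab.gat.resources.SoftwareDescription;

    public class SubmitSketch {
        public static void main(String[] args) throws Exception {
            GATContext context = new GATContext();

            // Describe what to run and where stdout should go.
            SoftwareDescription sd = new SoftwareDescription();
            sd.setExecutable("/bin/hostname");
            sd.setStdout(GAT.createFile(context, "hostname.out"));

            // Submit through a broker; the URI is a placeholder.
            JobDescription jd = new JobDescription(sd);
            ResourceBroker broker = GAT.createResourceBroker(
                    context, new URI("any://example.host.org"));
            Job job = broker.submitJob(jd);

            while (job.getState() != Job.JobState.STOPPED) {
                Thread.sleep(1000);            // poll until the job finishes
            }
            GAT.end();
        }
    }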