Announcements

6 Jul 2015

DAS-5 is now fully operational! To make room for DAS-5, DAS-4/UvA and DAS-4/ASTRON have been decommissioned; only their head nodes remain available.

28 Oct 2014

The Hadoop setup on DAS-4/VU has been updated to version 2.5.0.

30 Jan 2014

The Intel OpenCL package for Intel CPUs and Xeon Phi has been updated to version 3.2.1.

3 Sep 2013

The Nvidia CUDA development kit has been updated to version 5.5.

25 Apr 2013

Slides of the DAS-4 workshop presentations are now available.

14 Jan 2013

DAS-4/VU now has 8 new nodes equipped with the latest Nvidia K20 GPUs.


Network setup

Internal network

Each DAS-4 cluster has both an internal 1 Gbit/s Ethernet and a QDR InfiniBand network. The Ethernet network (with IP addresses 10.141.<site-id>.<node-id>) is mainly used for management of the cluster, but can also be used by applications.
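As an illustration, the sketch below binds a listening socket explicitly to a node's internal Ethernet address, so that incoming connections use the 1 Gbit/s Ethernet network. The address 10.141.0.1 and port 5000 are placeholders for a real <site-id>/<node-id> combination and an application-chosen port.

/* Hypothetical sketch: a server that binds explicitly to a node's internal
 * Ethernet address (10.141.<site-id>.<node-id>), so that incoming traffic
 * uses the 1 Gbit/s Ethernet network rather than InfiniBand.
 * The address and port below are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);                       /* placeholder port */
    inet_pton(AF_INET, "10.141.0.1", &addr.sin_addr);  /* placeholder address */

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }
    listen(fd, 1);
    printf("listening on the internal Ethernet network\n");

    int client = accept(fd, NULL, NULL);
    /* ... exchange data over the Ethernet network ... */
    close(client);
    close(fd);
    return 0;
}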

The best performance is obtained using the InfiniBand network, which offers low latency and throughputs exceeding 20 Gbit/s (depending on the networking API used). The InfiniBand network can be accessed either via a fast native interface, typically through MPI, or via the regular IP layer (IP over InfiniBand), using IP addresses 10.149.<site-id>.<node-id>.
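For the native interface, a plain MPI program is usually all that is needed. The minimal ping-pong sketch below is a generic example (not DAS-4-specific); when compiled and started with the cluster's MPI installation, communication between the two ranks would normally go over the native InfiniBand interface.

/* Minimal MPI ping-pong sketch (generic example, not DAS-4-specific). */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, size;
    char buf[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "run with at least 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        strcpy(buf, "ping");
        MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 0 received: %s\n", buf);
    } else if (rank == 1) {
        MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        strcpy(buf, "pong");
        MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Such a program would typically be built and launched with the locally installed MPI compiler and launcher (e.g. mpicc and mpirun); the exact modules to load depend on the site setup.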

External connectivity

For external connectivity, each cluster has a 1 or 10 Gbit/s Ethernet connection to the local university backbone; the head node acts as a router for this purpose.

In addition, DAS-4 compute nodes can communicate over a very efficient wide-area network called StarPlane, which is based on dedicated 10 Gbit/s light paths provided by SURFnet. To use this network, simply specify InfiniBand-based IP addresses (in the 10.149 range). Routes to external InfiniBand IP addresses are set up to go through a local WAN router, which routes between InfiniBand and the SURFnet 10 Gbit/s light paths. For details on the wide-area topology, see the file /cm/shared/package/StarPlane/overview on DAS-4.
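For example, the hypothetical client below connects to a node at another site via its InfiniBand IP address; because the destination is in the 10.149 range, the connection is routed through the local WAN router and over the StarPlane light paths. The address 10.149.0.1 and port 5000 are placeholders for a real remote <site-id>/<node-id> combination and an application-chosen port.

/* Hypothetical sketch: connecting to a node at another DAS-4 site via its
 * InfiniBand IP address (10.149.<site-id>.<node-id>), so the route goes
 * through the local WAN router and over the StarPlane light paths.
 * The address and port below are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in peer;

    memset(&peer, 0, sizeof peer);
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);                       /* placeholder port */
    inet_pton(AF_INET, "10.149.0.1", &peer.sin_addr);  /* placeholder address */

    if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
        perror("connect");
        return 1;
    }
    printf("connected over the wide-area InfiniBand/StarPlane path\n");
    /* ... exchange data ... */
    close(fd);
    return 0;
}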