Announcements

25 April 2018

Hadoop on DAS-4/VU has been updated to version 2.7.6.

5 April 2018

OpenNebula on DAS-4/VU and DAS-4/Delft has been updated to version 5.4.6.

4 April 2018

The DAS-4 clusters have been updated to CentOS 7, and the Nvidia CUDA development kit has been updated to version 8.0 (version 9.1 is also available for recent GPUs). This update makes DAS-4 software-compatible with DAS-5.

6 July 2015

DAS-5 is now fully operational! To make room for DAS-5, DAS-4/UvA and DAS-4/ASTRON have been decommissioned; only their head nodes remain available.

25 April 2013

Slides of the DAS-4 workshop presentations are now available.

Network setup

Internal network

Each DAS-4 cluster has both an internal 1 Gbit/s Ethernet and a QDR InfiniBand network. The Ethernet network (with IP addresses 10.141.<site-id>.<node-id>) is mainly used for management of the cluster, but can also be used by applications.
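To illustrate the addressing scheme, the short C sketch below constructs the internal Ethernet address of a given node. The site and node ids used here are hypothetical examples, not actual DAS-4 assignments:

    /* Sketch of the DAS-4 internal Ethernet addressing scheme
     * (10.141.<site-id>.<node-id>). The ids below are made up. */
    #include <stdio.h>

    int main(void)
    {
        int site_id = 2;   /* hypothetical site id */
        int node_id = 5;   /* hypothetical node id */
        char addr[16];

        snprintf(addr, sizeof addr, "10.141.%d.%d", site_id, node_id);
        printf("Ethernet address of node %d at site %d: %s\n",
               node_id, site_id, addr);
        return 0;
    }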

The best performance is obtained using the InfiniBand network, which offers low latency and throughputs exceeding 20 Gbit/s (depending on the networking API used). The InfiniBand network can be accessed either through a fast native interface, typically via MPI, or through the regular IP layer (IP over InfiniBand), using IP addresses 10.149.<site-id>.<node-id>.
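As a minimal sketch of using the native interface through MPI, the C program below measures average round-trip latency with a ping-pong between two processes; the MPI library on DAS-4 normally selects the native InfiniBand transport itself, so no addresses need to be specified. It assumes it is started with exactly two processes, placed on different nodes:

    /* Minimal MPI ping-pong latency sketch; assumes exactly two
     * processes (e.g. one per node). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, peer, i, iters = 1000;
        char buf = 0;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        peer = 1 - rank;   /* the other of the two processes */

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&buf, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
                MPI_Recv(&buf, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(&buf, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&buf, 1, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("average round-trip latency: %.2f us\n",
                   (t1 - t0) / iters * 1e6);

        MPI_Finalize();
        return 0;
    }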

External connectivity

For external connectivity, each cluster has a 1 or 10 Gbit/s Ethernet connection to the local university backbone; the head node acts as a router for this purpose.

In addition, DAS-4 compute nodes can communicate over a very efficient wide-area network called StarPlane, which is based on dedicated 10 Gbit/s lightpaths provided by SURFnet. To use this network, simply specify the InfiniBand-based IP addresses (10.149.<site-id>.<node-id>). Routes to external InfiniBand IP addresses are set up to go through a local WAN router, which routes between InfiniBand and the SURFnet 10 Gbit/s lightpaths. For details on the wide-area topology, see the file /cm/shared/package/StarPlane/overview on DAS-4.
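As a sketch of how IP-level traffic ends up on StarPlane: any connection made to a 10.149 address at a remote site is routed via the local WAN router onto the lightpaths, with no application changes beyond the choice of address. The peer address and port in the following C example are hypothetical, chosen only to illustrate the idea:

    /* Hypothetical sketch: opening a TCP connection to a remote DAS-4
     * node via its InfiniBand IP address (10.149.<site-id>.<node-id>),
     * so the traffic is routed over the StarPlane lightpaths. The
     * address and port below are made up for the example. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        const char *peer_ip = "10.149.3.7";  /* hypothetical remote node */
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5000);         /* hypothetical port */
        inet_pton(AF_INET, peer_ip, &addr.sin_addr);

        /* Because the destination is a 10.149 address, the route goes
         * through the local WAN router and onto StarPlane. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }
        printf("connected to %s over the InfiniBand/StarPlane path\n",
               peer_ip);
        close(fd);
        return 0;
    }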