Announcements

2 Jan 2017

DAS-5/VU has been extended with 4 TitanX-Pascal GPUs. See the DAS-5 special nodes page for an overview of all special nodes.

9 Nov 2016

CUDA 8 is now available on all DAS-5 sites with GPU nodes. Check the DAS-5 GPU page for usage info.

May 2016

IEEE Computer publishes a paper about 20 years of the Distributed ASCI Supercomputer. See the DAS Achievements page.

28 Sep 2015

DAS-5/VU has been extended with 16 GTX TitanX GPUs.

6 Aug 2015

DAS-5/UvA has been extended with 4 GTX TitanX GPUs: two each in node205 and node206.

6 Jul 2015

DAS-5 is fully operational!


Network setup

Internal network

Each DAS-5 cluster has both an internal 1 Gbit/s Ethernet network and an FDR InfiniBand network. The Ethernet network (with IP addresses 10.141.0.<node-id>) is mainly used for management of the cluster.

The best performance is obtained using the InfiniBand network, which offers low latency and throughputs up to 48 Gbit/s (depending on the networking API used). The InfiniBand network can be accessed either via its fast native interface, typically through MPI, or via the regular IP layer, using IP addresses 10.149.<site-id>.<node-id>.
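As an illustration, the sketch below shows a minimal MPI round trip between two ranks. The program itself is generic: the choice of interconnect is made by the MPI library, which on a cluster with InfiniBand normally uses the native interface rather than IP. Compilation and launch details (module names, mpirun/prun flags) are site-specific and not shown here.

/* Minimal MPI round-trip sketch. The interconnect used for the transfer
 * is selected by the MPI library; with InfiniBand available, the messages
 * normally travel over the native InfiniBand interface. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    char buf[8] = "ping";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            /* Rank 0 sends a small message and waits for the reply. */
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 0 received: %s\n", buf);
        } else if (rank == 1) {
            /* Rank 1 echoes a reply back to rank 0. */
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send("pong", 5, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}

Applications that use the IP layer instead simply address the nodes by their 10.149.<site-id>.<node-id> addresses, like any other IP network.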

External connectivity

For external connectivity, each cluster has a 1 or 10 Gbit/s Ethernet connection to the local university backbone; the head node acts as a router for this purpose.

In addition, DAS-5 compute nodes will be able to communicate over a wide-area network based on dedicated Ethernet light paths provided by SURFnet. To use this network, specify IP addresses on the internal DAS-5 OpenFlow network (10.150). This networking option is still under construction.
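Since this option is still under construction, the following is only a hedged sketch of the general idea: an application directs its traffic over a particular network by binding its local socket to an address on that network before connecting. The 10.150.x.y addresses and the port number below are placeholders, not actual DAS-5 assignments.

/* Sketch: send over a specific internal network by binding the local
 * socket to that network's address before connecting.
 * The 10.150.x.y addresses and port 5000 are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Bind to this node's address on the OpenFlow network (placeholder). */
    struct sockaddr_in local = {0};
    local.sin_family = AF_INET;
    local.sin_port = 0;                       /* any local port */
    inet_pton(AF_INET, "10.150.0.1", &local.sin_addr);
    if (bind(fd, (struct sockaddr *)&local, sizeof local) < 0) {
        perror("bind"); close(fd); return 1;
    }

    /* Connect to a remote node's address on the same network (placeholder). */
    struct sockaddr_in remote = {0};
    remote.sin_family = AF_INET;
    remote.sin_port = htons(5000);            /* example port */
    inet_pton(AF_INET, "10.150.1.1", &remote.sin_addr);
    if (connect(fd, (struct sockaddr *)&remote, sizeof remote) < 0) {
        perror("connect"); close(fd); return 1;
    }

    const char msg[] = "hello over the light path";
    write(fd, msg, sizeof msg);
    close(fd);
    return 0;
}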