9 October 2018

The next-generation DAS, DAS-6, will get funding! For details, see the DAS Achievements page. DAS-6 is expected to become operational in the second half of 2019.

2 Jan 2017

DAS-5/VU has been extended with 4 TitanX-Pascal GPUs.

May 2016

IEEE Computer publishes a paper about 20 years of the Distributed ASCI Supercomputer. See the DAS Achievements page.

28 Sep 2015

DAS-5/VU has been extended with 16 GTX TitanX GPUs.

Network setup

Internal network

Each DAS-5 cluster has both an internal 1 Gbit/s Ethernet network and an FDR InfiniBand network. The Ethernet network (with IP addresses 10.141.0.<node-id>) is mainly used for management of the cluster.

The best performance is obtained using the InfiniBand network, which offers low latency and throughputs of up to 48 Gbit/s (depending on the networking API used). The InfiniBand network can be accessed either via its fast native interface, typically through MPI, or via the regular IP layer (IP over InfiniBand), using IP addresses 10.149.<site-id>.<node-id>.
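The two internal address patterns above are mechanical, so a node's addresses can be derived from its site and node IDs. A minimal sketch (the helper names and the example IDs are illustrative, not part of any DAS-5 tooling):

```python
def ethernet_addr(node_id: int) -> str:
    """Management (1 Gbit/s Ethernet) address of a node: 10.141.0.<node-id>."""
    return f"10.141.0.{node_id}"

def ipoib_addr(site_id: int, node_id: int) -> str:
    """IP-over-InfiniBand address of a node: 10.149.<site-id>.<node-id>."""
    return f"10.149.{site_id}.{node_id}"

# Hypothetical example: node 5 at site 0
print(ethernet_addr(5))    # 10.141.0.5
print(ipoib_addr(0, 5))    # 10.149.0.5
```

Traffic sent to a 10.149 address travels over InfiniBand via the IP layer, while the same node reached via its 10.141 address uses the slower management Ethernet.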

External connectivity

For external connectivity, each cluster has a 1 or 10 Gbit/s Ethernet connection to the local university backbone; the head node acts as a router for this purpose.

In addition, DAS-5 compute nodes will be able to communicate over a wide-area network based on dedicated Ethernet light paths provided by SURFnet. To use this network, specify IP addresses on the internal DAS-5 OpenFlow network (10.150). This networking option is still under construction.
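Taken together, the section describes three internal address ranges (10.141 management Ethernet, 10.149 IP over InfiniBand, 10.150 OpenFlow light paths). A hedged sketch of telling them apart by prefix, using Python's standard ipaddress module (the /16 prefix lengths are an assumption based on the 10.x.y.z patterns in the text; this is not part of any DAS-5 software):

```python
import ipaddress

# Internal DAS-5 address ranges mentioned above; the /16 prefix
# lengths are assumed, not taken from DAS-5 documentation.
NETWORKS = {
    "ethernet (management)":  ipaddress.ip_network("10.141.0.0/16"),
    "ipoib (InfiniBand)":     ipaddress.ip_network("10.149.0.0/16"),
    "openflow (light paths)": ipaddress.ip_network("10.150.0.0/16"),
}

def classify(addr: str) -> str:
    """Return which internal DAS-5 network an address belongs to."""
    ip = ipaddress.ip_address(addr)
    for name, net in NETWORKS.items():
        if ip in net:
            return name
    return "external"

print(classify("10.149.0.5"))    # ipoib (InfiniBand)
print(classify("10.150.2.10"))   # openflow (light paths)
print(classify("130.37.1.1"))    # external
```

Choosing the right source/destination range is how a program selects which physical network carries its traffic.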