2 Jan 2017

DAS-5/VU has been extended with 4 TitanX-Pascal GPUs. See the DAS-5 special nodes page for an overview of all DAS-5 special nodes.

9 Nov 2016

CUDA 8 is now available on all DAS-5 sites with GPU nodes. Check the DAS-5 GPU page for usage info.

May 2016

IEEE Computer publishes a paper about 20 years of the Distributed ASCI Supercomputer. See the DAS Achievements page.

28 Sep 2015

DAS-5/VU has been extended with 16 GTX TitanX GPUs.

6 Aug 2015

DAS-5/UvA has been extended with 4 GTX TitanX GPUs: two each in node205 and node206.

6 Jul 2015

DAS-5 is fully operational!

DAS History

The first DAS system, which we now refer to as DAS-1, consisted of four cluster computers connected by a wide-area ATM network. The clusters were located at four different ASCI universities, but were used and managed as a single integrated distributed system. DAS-1 was operational from mid 1997 to the end of 2001.

The successor system, DAS-2, was fully operational from January 2002 to 2007. It consisted of five clusters, located at five ASCI universities (VU University, University of Amsterdam, Delft University of Technology, Leiden University, and University of Utrecht), connected by the Dutch national research network SURFnet using 1 Gb/s Ethernet. In total, DAS-2 comprised 200 dual-CPU nodes (1 GHz Pentium IIIs).

DAS-1 cluster at VU University (1997)
DAS-2 cluster at VU University (2002)

Both DAS-1 and DAS-2 were designed as homogeneous systems, to allow meaningful performance experiments, to stimulate research collaborations, and to ease systems administration. All clusters used the same operating system (Linux), CPU (Intel), and local interconnect (Myrinet), and differed only in configuration parameters such as the number of CPUs and the size of the local memory and disk.

Both DAS-1 and DAS-2 were used by many researchers from ASCI, often in collaborative projects. The volume of research, publications, and Ph.D. theses produced with these systems increased dramatically over the years. There was also a clear shift from local cluster computing (on a single cluster) to large-scale distributed computing (using the entire wide-area system) and subsequently to Grid computing. DAS-2 was also clearly positioned in the national computer Grid infrastructure through the NCF Grid project: NCF (Dutch National Computer Facilities) invested 650,000 Euro to extend the DAS-2 system into a Grid that application scientists outside ASCI could use to experiment with Grid applications.

Several developments then drastically increased the need for a new, up-to-date wide-area infrastructure. The desire for a new system stemmed largely from the success of DAS-1 and DAS-2 and from the demands of major new research projects that needed an infrastructure for large-scale distributed computing and Grid computing. In addition, many technological advances made a new system essential, such as 64-bit processors, faster buses (e.g., PCI Express), 10 Gbit/s and DWDM-capable networks, and larger memories and disks. These became the focus of DAS-3, which was constructed in late 2006.

The next step in this technological development was the continued increase in the number of cores in general-purpose CPUs, the arrival of many-core "HPC accelerators" such as GPUs, and an important additional focus on "green computing" aspects. This led to the construction of our DAS-4.

Compared to DAS-4, our most recent DAS generation, DAS-5, again doubles the number of cores per node, more than doubles the memory per node, and uses an FDR InfiniBand network that in practice also doubles the performance of the previous QDR InfiniBand network. We expect to upgrade DAS-5 with additional modern GPU accelerators shortly.