2 Jan 2017

DAS-5/VU has been extended with 4 TitanX-Pascal GPUs. Check DAS-5 special nodes for an overview of all DAS-5 special nodes.

9 Nov 2016

CUDA 8 is now available on all DAS-5 sites with GPU nodes. Check the DAS-5 GPU page for usage info.

May 2016

IEEE Computer publishes a paper about 20 years of the Distributed ASCI Supercomputer. See the DAS Achievements page.

28 Sep 2015

DAS-5/VU has been extended with 16 GTX TitanX GPUs.

6 Aug 2015

DAS-5/UvA has been extended with 4 GTX TitanX GPUs: two each in node205 and node206.

6 Jul 2015

DAS-5 is fully operational!

DAS-5 User Accounts

Login accounts

If you are a faculty member or student at one of the Computer Science groups in the ASCI research school, or if you are an ASTRON or NLeSC employee, you may request a login account on the DAS-5 system by sending mail to. Please include your affiliation and the planned purpose of your account. If you are a student, also specify the course or project for which the account is needed, and the staff member acting as your advisor.


If you have used DAS for your research, please refer to this paper:

Henri Bal, Dick Epema, Cees de Laat, Rob van Nieuwpoort, John Romein, Frank Seinstra, Cees Snoek, and Harry Wijshoff:
"A Medium-Scale Distributed System for Computer Science Research: Infrastructure for the Long Term",
IEEE Computer, Vol. 49, No. 5, pp. 54-63, May 2016.
A draft version of this paper is here.

Access credentials

The DAS-5 clusters do not form one seamless whole. In particular, password administration is done per site: the master copy of the password files is maintained at a single master site, and users should only change their password there. The password files at the other sites are regularly synchronized with the version at the master site.

File systems

The file systems at the different sites are completely separate, so files must be transferred between the home directories manually (e.g., by means of scp), or by using Grid software that does this transparently for you.

Besides the home directories, the directory /var/scratch of the file server is also mounted on each DAS-5 node. You can use the subdirectory /var/scratch/<userid> for temporary bulk storage during or between jobs. In case you need space for temporary files on each node separately, you can use /local/<userid>. Please empty /var/scratch/<userid> and /local/<userid> when you no longer need the space.
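The transfer and scratch conventions above can be sketched as a short shell session. This is illustrative only: the job directory name and the remote host are hypothetical, and /tmp stands in for /var/scratch so the commands can be tried anywhere.

```shell
# Sketch of DAS-5 scratch usage. On a DAS-5 node you would set
# SCRATCH=/var/scratch; /tmp is used here only so the sketch runs anywhere.
SCRATCH=/tmp
USERID=$(whoami)

# Create a bulk-storage directory for a job and write data into it:
mkdir -p "$SCRATCH/$USERID/myjob"            # 'myjob' is a placeholder name
echo "intermediate results" > "$SCRATCH/$USERID/myjob/out.dat"
test -f "$SCRATCH/$USERID/myjob/out.dat" && echo "scratch ready"

# Home directories are separate per site, so copying results to another
# DAS-5 site is done manually, e.g. (placeholder hostname):
#   scp "$SCRATCH/$USERID/myjob/out.dat" other-site-headnode:~/

# Empty the scratch directory when you no longer need the space:
rm -rf "$SCRATCH/$USERID/myjob"
echo "cleaned up"
```

For per-node temporary files the same pattern applies with /local/&lt;userid&gt; instead of /var/scratch/&lt;userid&gt;.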

The default file quota for users' home directories is 4 GB. The /home quota is limited to this amount because we maintain daily backups of it, and our backup capacity is restricted. The default quota for users' scratch directories is 40 GB (i.e., a factor of 10 higher than on home); scratch directories are not backed up, however, so only use them for data sets or applications that can easily be reconstructed if needed.

System administrators

Problem reports (e.g., missing software or documentation, or apparent hardware failures) should be sent to

For account-related support (e.g., if you ran out of disk quota or forgot your password), please send mail to


Access to DAS-5 is possible by logging in to the DAS-5 file server at one of the participating sites using ssh (secure shell). To log in to DAS-5 when you are not on-site, first log in to a compute server, workstation, or access server at your site using your site's account (which may be named differently from your DAS-5 account).
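The two-hop route described above can be automated with an ssh client configuration entry. The sketch below assumes OpenSSH's ProxyJump option (available in OpenSSH 7.3 and newer); all hostnames and account names are placeholders, since the actual server names are site-specific.

```
# Sketch of ~/.ssh/config for off-site DAS-5 access.
# All hostnames and user names below are placeholders -- substitute your
# site's access server and your DAS-5 head node.
Host site-gateway
    HostName access.example-site.nl     # your site's ssh access server
    User your_site_account              # may differ from your DAS-5 account

Host das5
    HostName fs.das5.example.nl         # DAS-5 head node (placeholder)
    User your_das5_account
    ProxyJump site-gateway              # hop through the gateway first
```

With such an entry, a single `ssh das5` performs both hops; on older clients, a ProxyCommand with `ssh -W` achieves the same effect.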

For VU University, external access is provided using your VUnetid. On-site at the VU, there is direct access from a registered network, or you can reach DAS-5 via one of the compute servers.

The DAS-5 head node (fileserver) and compute nodes are named as follows:

Cluster Head node Compute nodes
VU node001-node068
LU node101-node124
TUD node301-node348
ASTRON node501-node509

Users will typically log in to one of the DAS-5 file servers, develop software there, and start their parallel applications on a subset of the compute nodes. To avoid people getting in each other's way on the compute nodes, every DAS-5 user is required to use the cluster startup and reservation commands described in this section.
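As a concrete illustration: the DAS-5 usage pages describe a SLURM-based reservation system, and assuming that setup, a minimal job script might look as follows. The time limit, node count, and application name are placeholders.

```shell
#!/bin/bash
# Minimal sketch of a SLURM job script for reserving compute nodes
# (assumes the SLURM-based setup described in the DAS-5 usage pages).
#SBATCH --time=00:15:00       # reserve the nodes for at most 15 minutes
#SBATCH -N 2                  # reserve 2 compute nodes
#SBATCH --ntasks-per-node=1   # one task per node

# Everything below runs on the reserved nodes once the job starts:
echo "job running on: $(hostname)"
# srun ./my_parallel_app      # placeholder for your application binary
```

The script would be submitted from the head node with `sbatch`, so the compute nodes themselves are only occupied for the duration of the reservation.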