Announcements

4 April 2018

The DAS-4 clusters have been updated to CentOS 7, and the Nvidia CUDA development kit has been updated to version 8.0 (version 9.1 is also available for recent GPUs). This update makes DAS-4 software-compatible with DAS-5.

6 July 2015

DAS-5 is now fully operational! To make room for DAS-5, DAS-4/UvA and DAS-4/ASTRON have been decommissioned; only their headnodes remain available.

25 April 2013

Slides of the DAS-4 workshop presentations are now available.

Accelerators and special compute nodes

The standard compute node type of DAS-4 sites has a dual-quad-core 2.4 GHz CPU configuration and 24 GB memory (48 GB in Leiden). In addition, several DAS-4 sites include non-standard node types for specific research purposes. Compute nodes are by default allocated from the queue/partition "defq", which contains the standard node types. To allocate a special resource in SLURM or prun, a so-called "constraint" specifying a required node property should be given as follows:

  • -C GTX480
    nodes with an Nvidia GTX480 (with 1.5 GB onboard memory)
  • -C GTX680
    node with an Nvidia "Kepler" GTX680 (with 2 GB onboard memory)
  • -C C2050
    nodes with an Nvidia Tesla C2050 (with 2.625 GB onboard memory)
  • -C K20
    nodes with an Nvidia "Kepler" K20 (with 6 GB onboard memory)
  • -C Titan
    nodes with an Nvidia "Kepler" GTX-Titan (with 6 GB onboard memory)
  • -C TitanX
    nodes with an Nvidia GTX-TitanX
  • -C TitanBlack
    nodes with an Nvidia GTX-TitanBlack
  • -p fatq
    nodes with a non-standard configuration, e.g. DAS-4/VU nodes with the Intel "Sandy Bridge" architecture

This resource selector should be added as

#SBATCH -C resource

in a SLURM job script, or passed as

-native '-C resource'

option to prun/preserve.
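
For example, a minimal SLURM job script that asks for one node with the GTX480 constraint could look like the following sketch (the time limit and the program name ./my_prog are placeholders used here for illustration only):

#!/bin/bash
#SBATCH --time=00:15:00       # requested run time (placeholder value)
#SBATCH -N 1                  # one node
#SBATCH -C GTX480             # constraint: a node with a GTX480 GPU
srun ./my_prog                # ./my_prog is a hypothetical program

The corresponding prun invocation would pass the same constraint via -native, along the lines of:

prun -np 1 -native '-C GTX480' ./my_prog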

To allocate a GPU on a node, the option "--gres=gpu:1" should be added in addition to the GPU-type constraint. Examples can be found on the DAS-4 GPU page.
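
As a minimal sketch (assuming a TitanX node is wanted; the program name ./gpu_prog is a placeholder), the two options can be combined in a job script as follows:

#!/bin/bash
#SBATCH -N 1                  # one node
#SBATCH -C TitanX             # constraint: a node with a GTX-TitanX GPU
#SBATCH --gres=gpu:1          # additionally claim one GPU on that node
srun ./gpu_prog               # ./gpu_prog is a hypothetical program

With prun, both options would go into the single -native argument, e.g. -native '-C TitanX --gres=gpu:1'.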

Nodes that have a different CPU or node architecture than the default dual-quad-core 2.4 GHz configuration are typically placed in a different queue (SLURM calls these "partitions") to avoid unpredictable performance. To run a job on a node in partition "part", add

#SBATCH -p part

to a SLURM job script, or pass

-native '-p part'

to prun/preserve. When specifying multiple constraints or partitions, group them all together as the argument of a single -native option in prun, as follows:

-native '-p part -C resource1,resource2'
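
As an illustrative sketch (assuming, for example, that the K20 nodes are reached through the fatq partition; ./my_prog is again a placeholder program), a complete prun invocation could look like:

prun -np 2 -native '-p fatq -C K20' ./my_prog

The equivalent job script would simply contain both directives, #SBATCH -p fatq and #SBATCH -C K20.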

VU University

At fs0.das4.cs.vu.nl the following special-purpose equipment is available for various experiments:

  • 16 (out of 72) of the regular nodes additionally have an NVidia GTX480 GPU.
  • 7 additional regular nodes also have an NVidia GTX480 GPU (but no SSD).
  • 2 of the regular nodes (node061 and node062) have an Nvidia C2050 Tesla GPU.
  • regular node node068 has an NVidia GTX680 GPU.
  • node079 to node085: Intel "Sandy Bridge" E5-2620 (2.0 GHz) nodes, with 64 GB memory and a K20m "Kepler" GPU. In addition, node079 is equipped with an Intel Xeon Phi accelerator.
  • node078, node086-node091, node093: Intel "Sandy Bridge" E5-2620 (2.0 GHz) nodes, with 64 GB memory, reserved for running Hadoop and Cloud VMs.

Leiden University

At fs1.das4.liacs.nl all 16 compute nodes are "fat" in that they have more memory and local storage than the default on the other sites:

  • the nodes have 48 GB instead of 24 GB memory;
  • the nodes have 10 TB of local storage (5*2 TB RAID);
  • each compute node also has a fast 512 GB SSD (OCZ Z-Drive p88).

Delft University of Technology

At fs3.das4.tudelft.nl the following special-purpose equipment is available:
  • 8 (out of 28) of the regular nodes have an NVidia GTX480 GPU;
  • 1 of the regular nodes has an Nvidia GTX1080Ti GPU;
  • 1 of the regular nodes has an Nvidia C2050 Tesla GPU;
  • 4 "fat" nodes are available that have 48GB memory and 2*2TB RAID0 local storage.

University of Amsterdam - MultiMediaN

At fs4.das4.science.uva.nl the following special-purpose equipment is available:

  • 12 (out of 34) of the regular nodes additionally have an NVidia GTX TitanX GPU;
  • 4 of the regular nodes have an NVidia TitanBlack GPU;
  • 3 of the nodes have an Nvidia GTX-Titan GPU.