25 April 2018

Hadoop on DAS-4/VU has been updated to version 2.7.6.

5 April 2018

OpenNebula on DAS-4/VU and DAS-4/Delft has been updated to version 5.4.6.

4 April 2018

The DAS-4 clusters have been updated to CentOS 7, and the Nvidia CUDA development kit has been updated to version 8.0 (version 9.1 is also available for recent GPUs). This update makes DAS-4 software-compatible with DAS-5.

6 Jul 2015

DAS-5 is now fully operational! To make room for DAS-5, DAS-4/UvA and DAS-4/ASTRON have been decommissioned; only their headnodes remain available.

25 April 2013

Slides of the DAS-4 workshop presentations are now available.

preserve - reserve compute nodes from the cluster to run a job


preserve [options] -# ncpus time


preserve is part of a reservation system, which users generally invoke via prun(1). There are several situations where users directly call preserve:
  • to reserve nodes for multiple sequential runs with prun(1), where an identical set of nodes is needed (using "prun -reserve id", where id was obtained from preserve);
  • to reserve nodes before starting an application with a different startup method than offered by prun;
  • to display the current set of node reservations;
  • to cancel a reservation.
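The first of these workflows, reserving once and reusing the identical node set across several prun runs, might be sketched as follows. Note that the "Reservation number <id>" output format parsed here, and the fallback id used when preserve is unavailable, are assumptions for illustration; check the actual output of preserve on your system:

```shell
# Sketch of the reserve-then-run workflow described above.
# The "Reservation number <id>" output format and the fallback id
# are illustrative assumptions, not guaranteed by preserve.
out=$(preserve -# 4 -t 600 2>/dev/null) \
  || out="Reservation number 12345: example fallback"
id=$(printf '%s\n' "$out" | sed -n 's/^Reservation number \([0-9]*\).*/\1/p')
echo "$id"
# Run twice on the identical set of nodes, then release them:
#   prun -reserve "$id" ./myapp args
#   prun -reserve "$id" ./myapp other-args
#   preserve -c "$id"
```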

preserve and prun build on a reservation system that is partly based on goodwill: compute node reservation is implemented, but generally not strictly enforced. However, users sidestepping the reservation mechanism and accessing compute nodes directly will incur the wrath of both fellow users and the system administrators.
preserve reserves a number of cpus or nodes for the requested time (in seconds); the default is 900 seconds (15 minutes), which is also the maximum reservation allowed during daytime. If no start time is specified explicitly, the reservation is allocated as soon as the requested number of processors is available. Users are themselves responsible for honoring the required execution limits, or for requesting an exception by email to the system administrators.
By design, preserve allocates all CPUs of a compute node, irrespective of the number of processes started there by prun. This ensures that there are no other jobs on the reserved nodes (from the same or another user) that might interfere.


-c id
cancel reservation id id. preserve attempts to kill all processes on the nodes that correspond to the reservation.
-list
list current reservations, terse output.
-llist, -long-list
list current reservations, more verbose output.
-q queue
Enter the reservation into the SGE queue named queue. The default is the system-default queue; typically this is the queue containing all available nodes.
For preserve running on SGE (as on DAS-4), this option can also be used to enforce scheduling on a specific subset of nodes, e.g.:
-q "all.q@node001,all.q@node002".
-s time
start the reservation at time, in the format [[mm-]dd-]hh:mm; month and day are optional, hour and minute are mandatory. Default: now.
-t time
reserve for time, in the format [[hh:]mm:]ss (hours, minutes, seconds).
-# cpus
reserve cpus cpus. Note: there is no default; this option must be present.
By default, preserve reserves the requested number of cpus; on DAS-2 and DAS-3, for example, this is based on 2 cpus per node. If -[124] is specified, however, preserve allocates the specified number of nodes, in anticipation of a prun invocation running the number of processes per node specified by this option (irrespective of the actual number of cpus per node).
print usage
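A few of the option formats above can be illustrated in a short shell sketch; the helper name to_hms and the example values are ours, not part of preserve:

```shell
# Illustrations of the option formats above (helper name is ours).
# -t accepts [[hh:]mm:]ss; convert a duration in seconds to that form:
to_hms() {
  local s=$1
  printf '%02d:%02d:%02d\n' $((s / 3600)) $((s % 3600 / 60)) $((s % 60))
}
to_hms 900                      # the 900-second default/daytime maximum

# -# counts cpus; on a 2-cpu-per-node system, 8 cpus span 4 nodes:
cpus=8
cpus_per_node=2
echo $(( (cpus + cpus_per_node - 1) / cpus_per_node ))
# Whole-node variant for one process per node:
#   preserve -1 -# 4 -t 00:15:00
```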