Announcements

2 Jan 2017

DAS-5/VU has been extended with 4 TitanX-Pascal GPUs. Check DAS-5 special nodes for an overview of all DAS-5 special nodes.

9 Nov 2016

CUDA 8 is now available on all DAS-5 sites with GPU nodes. Check the DAS-5 GPU page for usage info.

May 2016

IEEE Computer publishes paper about 20 years of Distributed ASCI Supercomputer. See the DAS Achievements page.

28 Sep 2015

DAS-5/VU has been extended with 16 GTX TitanX GPUs.

6 Aug 2015

DAS-5/UvA has been extended with 4 GTX TitanX GPUs: two on both node205 and node206.

6 Jul 2015

DAS-5 is fully operational!

preserve - Reserve compute nodes from cluster to run job


SYNOPSIS

preserve [option] -# ncpus time


DESCRIPTION

preserve is part of a reservation system, which users generally invoke via prun(1). There are several situations in which users call preserve directly:
  • to reserve nodes for multiple sequential runs with prun(1), where an identical set of nodes is needed (using "prun -reserve id", where id was obtained from preserve);
  • to reserve nodes before starting an application with a different startup method than offered by prun;
  • to display the current set of node reservations;
  • to cancel a reservation.
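The first scenario, reusing one reservation for several sequential runs, might look as follows. This is an illustrative sketch: the reservation id 12345 stands in for whatever id preserve actually prints, and the application names are placeholders.

```shell
# Reserve 8 cpus for 15 minutes; preserve prints the reservation id.
preserve -t 15:00 -# 8

# Run several programs on the identical set of nodes by reusing the
# reservation (replace 12345 with the id printed by preserve above):
prun -reserve 12345 ./app1
prun -reserve 12345 ./app2

# Release the nodes when done:
preserve -c 12345
```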

preserve and prun build on a reservation system that is partly based on goodwill: compute node reservation is implemented, but generally not strictly enforced. However, users sidestepping the reservation mechanism and accessing compute nodes directly will incur the wrath of both fellow users and the system administrators.
preserve reserves a number of cpus or nodes for the requested time (in seconds); the default is 900 seconds (15 minutes), which is also the maximum reservation allowed during daytime. If no start time is specified explicitly, the reservation is allocated as soon as the requested number of processors is available. Users are themselves responsible for honoring the required execution limits, or may request an exception by email to the system administrators.
By design, preserve allocates all CPUs of a compute node, irrespective of the number of processes started there by prun. This ensures that there are no other jobs on the reserved nodes (from the same or another user) that might interfere.

OPTIONS

-c id
cancel reservation id id. preserve attempts to kill all processes on the nodes that correspond to the reservation.
-list
list current reservations, terse output.
-llist, -long-list
list current reservations, more verbose output.
-q queue
Enter the reservation into the queue named queue. The default is the system-default queue; typically this is the queue containing all regular nodes, but sometimes a system has special nodes that are in separate queues, or "partitions" in SLURM terminology.
-s time
start reservation at time [[mm-]dd-]hh:mm. Month and day are optional; hour and minute are mandatory. Default: now.
-t time
reserve for time = [[hh:]mm:]ss (hours, minutes, seconds).
-# cpus
reserve cpus cpus. Note: there is no default; this option must be present.
-[124]
By default, preserve reserves the requested number of cpus, i.e., on DAS-2 and DAS-3 this is based on 2 cpus per node. If -[124] is specified, however, preserve allocates the specified number of nodes, in anticipation of a prun invocation running the number of processes per node specified by this option (irrespective of the actual number of cpus per node).
-?
print usage
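A few invocations combining the options above, as a sketch; the reservation id 12345 is hypothetical, and the exact node count produced by -1 together with -# depends on the cluster configuration as described above:

```shell
# List current reservations, terse and verbose:
preserve -list
preserve -llist

# Reserve 4 nodes (one process per node) for 10 minutes,
# starting today at 22:00:
preserve -1 -s 22:00 -t 10:00 -# 4

# Cancel reservation 12345, killing any processes it left
# on the corresponding nodes:
preserve -c 12345
```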

SEE ALSO

prun(1)