6 Jul 2015

DAS-5 is now fully operational! To make room for DAS-5, DAS-4/UvA and DAS-4/ASTRON have been decommissioned; only their headnodes remain available.

28 Oct 2014

The Hadoop setup on DAS-4/VU has been updated to version 2.5.0.

30 Jan 2014

The Intel OpenCL package for Intel CPUs and Xeon Phi has been updated to version 3.2.1.

3 Sep 2013

The Nvidia CUDA development kit has been updated to version 5.5.

25 April 2013

Slides of the DAS-4 workshop presentations are now available.

14 Jan 2013

DAS-4/VU now has 8 new nodes with the latest Nvidia K20 GPUs.

Prun manual page

prun [options] -np ncpus application [application args]


Prun provides a convenient way to run applications on a cluster. It reserves the requested number of cpus (or nodes) and executes a parallel application on them. Host scheduling is exclusive, i.e., Prun does not allocate multiple jobs on one host. Prun builds on a reservation system that is partly based on goodwill: compute node reservation is implemented, but not strictly enforced. However, users sidestepping the reservation mechanism and accessing compute nodes directly will incur the wrath of both fellow users and the system administrators.


Prun runs an application in parallel on the requested number of cpus. The default maximum execution time is 15 minutes, which is also the maximum allowed reservation during daytime. If no start time is specified explicitly, the parallel run is scheduled as soon as the requested number of cpus is available. If not enough cpus are available immediately, Prun waits until the reserved time, or until canceled reservations allow earlier execution; in the latter case, the reservation schedule is compressed. If a start time is specified explicitly, and the requested resources are available, the reservation is scheduled and Prun sleeps until the specified time. Users are themselves responsible for honoring the execution limits, or for requesting an exception by email to the system administrators.
By design, Prun allocates all CPUs of a compute node, irrespective of the number of processes started there by prun. This ensures that there are no other jobs on the reserved nodes (from the same or another user) that might interfere.
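For example, a minimal run that reserves two nodes for the default 15 minutes (the application here is just a stand-in):

```shell
# Reserve 2 nodes and run one process on each; the reservation is
# released automatically when the run finishes.
prun -np 2 /bin/hostname
```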

SSH usage requirements

Prun by default uses ssh(1) for application invocation. SSH only works conveniently in this setting if the user's ssh configuration allows password-less remote command execution from the head node to the compute nodes. By default this is the case, but when changing the ssh configuration the user needs to keep this requirement in mind.
When dealing with standard input, Prun imposes two limitations. First, only cpu 0 of the started parallel application is allowed to read from standard input. Second, standard input is always opened, whether or not the application reads it. Therefore, if Prun must run in the background and no input is necessary, standard input must be redirected to /dev/null.
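A sketch of a background run with no input (the application name is a placeholder):

```shell
# Redirect stdin so the backgrounded prun does not hold the
# terminal's standard input open for cpu 0.
prun -np 2 ./myapp < /dev/null &
```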
Since Prun uses ssh to start remote processes, the process limits (like memory usage, execution time limit) are derived from the user's default values. When the process limits are to be changed, users must change them in their .bashrc (bash users) or .cshrc (csh, tcsh users).
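For example, bash users could add lines like the following to ~/.bashrc (the values are illustrative only):

```shell
# Limits inherited by prun-started processes via ssh:
ulimit -c unlimited      # allow core dumps
ulimit -v 8388608        # cap virtual memory at 8 GB (value in KB)
```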

Single-shot property

Prun generates a run-unique key for each parallel run. This key can be used for synchronization by other software layers. Prun should not be used to invoke scripts that run multiple parallel programs in sequence, since the run-unique key would then be shared between consecutive runs, leading to start-up problems. Therefore, invoke Prun separately for each parallel program run.
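A sketch of the two styles (program and script names are placeholders):

```shell
# Wrong: one prun run whose script starts two parallel programs in
# sequence; both would share a single run-unique key.
#   prun -np 4 ./run-both.sh
# Right: one prun invocation per parallel program.
prun -np 4 ./program1
prun -np 4 ./program2
```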


-c dir
Symbolic name of the directory where the parallel application is to be executed (default: current directory).
Prun writes temporary files into the current directory, with instructions and environment information for the worker processes. For this reason, worker processes must run from the current directory. SSH starts its remote execution from the user's home directory, so prun must remotely change to the desired directory. However, the current directory name on the local host may differ from the (symbolic) name in remote hosts (since the file system may have been differently mounted). To overcome this, an option -c dir is supplied, in which the (symbolic) name of the current directory is specified. To determine the current directory, Prun first inspects the environment variable PWD, which is set by tcsh(1) and bash(1). For other shells, it may be necessary to specify the (symbolic) name of the current directory with -c.
Since Prun creates temporary files in this directory, the user must have write permission in it.
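For example (the directory and application names are placeholders):

```shell
# Run from a directory name that is also valid on the compute nodes;
# useful when the shell does not set PWD to the symbolic name.
prun -c /home/jdoe/project -np 2 ./myapp
```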
allow application core dumps (default).
suppress application core dumps.
-d time
poll every time seconds (default: 1).
-delay time
add a delay of time seconds (default: depends on file size) between spawns of remote processes. time is a floating-point number.
export Prun's process environment to forked application processes (default).
do not export Prun's process environment to forked application processes.
do not return reservation after execution. Generally used in conjunction with -reserve.
return reservation after execution (default).
echo SSH and reserve commands, but do not execute.
-np ncpus
The -np option expresses, in a more natural way, the common case of parallel runs that do not expect Panda-style cpu rank arguments.
prun -np ncpu app args
is an alias for
prun -no-panda app ncpu args.
-o outputfile
output from each of the parallel processes is diverted to a separate file, named outputfile.0, outputfile.1, ....
This option does not work in combination with -sge-script.
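For example (the application name is a placeholder):

```shell
# Divert the output of each of the two processes to a separate file:
prun -np 2 -o out ./myapp     # writes out.0 and out.1
```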
feed the application its process rank and the total number of processes as the first two command line arguments.
do not add any process ranking arguments to the application command line (default with -np option).
-sge-script script
Runs script on cpu 0. The script should start up the processes on the other cpus, as is customary for SGE scripts. To ease the development cycle, prun -sge-script sets a number of environment variables: PRUN_CPUS contains the total number of cpus; PRUN_CPUS_PER_NODE contains the number of cpus per node; PRUN_PROG contains the name of the executable specified to Prun; PRUN_PROGARGS contains the list of application arguments specified to Prun.
Note: native resource scheduler directives in the script (for SGE, lines starting with "#$") are ignored, as prun interfaces directly with the resource manager to start the job, just based on the default or modified prun parameters (like -t for maximum walltime).
An example script is found in /cm/shared/package/reserve.sge/etc/prun-openmpi; it allows the user to run an OpenMPI application without having to bother about SGE or MPI configuration. As is usual with Prun, but in contrast to regular SGE batch jobs, stdin, stdout and stderr are redirected to the terminal, and the program is run from the current directory or the directory indicated with the -c option.
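A minimal, hypothetical -sge-script (a sketch, not the real prun-openmpi wrapper) illustrating the variables listed above:

```shell
#!/bin/sh
# Sketch of an -sge-script: prun sets the PRUN_* variables below;
# the fallback values only make the sketch readable outside prun.
echo "starting ${PRUN_PROG:-<none>} on ${PRUN_CPUS:-1} cpus," \
     "${PRUN_CPUS_PER_NODE:-1} per node"
# A real script would now launch the processes on the other cpus,
# e.g. via an MPI launcher, as the prun-openmpi example does.
```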

-pg dir_prefix
each process changes directory to dir_prefixXX, where XX is the instance number. This can be used, e.g., to generate separate profile dumps or core dumps.
ping all hosts on which the program is to run before forking the processes. If the ping fails, an indication that the host is down is printed (default).
do not ping the hosts before forking.
-q queue
Enter the reservation into the cluster queue named queue. By default no specific queue is used; this means that any node generally available to users can be allocated.
This option can be used to enforce scheduling on a specific subset of nodes, e.g.:
-q "all.q@node001,all.q@node002".
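For example (the node names are illustrative):

```shell
# Force scheduling on node001 and node002 only:
prun -q "all.q@node001,all.q@node002" -np 2 ./myapp
```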
-reserve id
use previously obtained reservation id id. By default this also sets -keep. This option can be used to reserve nodes for a time spanning a number of runs. A reservation id can be obtained by calling preserve(1) with the required time and nodes.
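A sketch of reusing one reservation for several runs (the reservation id 42 and the program names are placeholders):

```shell
# Obtain id 42 from preserve(1) first; -reserve implies -keep, so
# the reservation survives each run and must be canceled via
# preserve(1) afterwards.
prun -reserve 42 -np 4 ./step1
prun -reserve 42 -np 4 ./step2
```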
-rsh remote-shell
use remote-shell (as an absolute path name) to spawn remote processes, instead of ssh.
-s time
start at time [[mm-]dd-]hh:mm (default: now).
-t time
the maximum application walltime is set to time = [[hh:]mm:]ss (default: 15 minutes).
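For example (the application name is a placeholder):

```shell
# Reserve 2 nodes for 1 hour starting at 22:00 today, using the
# time formats documented above.
prun -s 22:00 -t 1:00:00 -np 2 ./myapp
```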
report host allocation.
By default, Prun reserves the requested number of cpus and starts just one process per node. If -[procspernode] is specified, however, Prun allocates and schedules the specified number of nodes, and then runs the given number of processes per node (ignoring the actual number of cpus per node).
add var=value to application environment.
print usage.


ssh(1), preserve(1).


Prun copies all its own environment variables to the environment of the spawned processes. It adds some extra variables: PRUN_CPU_RANK contains the rank of the current spawned process; PRUN_HOSTNAMES contains a list of host names, one per spawned process. The -sge-script option adds some more environment variables.
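A small sketch of reading these variables from a spawned process (the fallback values are only there so the snippet also runs outside prun):

```shell
# rank_report prints the per-process variables that prun adds to the
# environment of each spawned process.
rank_report() {
    echo "rank:  ${PRUN_CPU_RANK:-0}"
    echo "hosts: ${PRUN_HOSTNAMES:-localhost}"
}
rank_report
```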


``Illegal option: 0 16''
The user has not specified the -np or -no-panda flags, in which case Prun reads the number of hosts to allocate as a parameter (an old calling convention kept for backward compatibility), and adds host numbers to the command line consistent with the Panda calling convention.
$ prun -v /bin/echo 2 hello
Reserved 2 hosts for 900 seconds from Tue Mar 27 14:43:02 CEST 2007
: node001 node002
All hosts are alive

1 2 hello
0 2 hello

Another example:
$ prun -np 2 -v /bin/echo hello
Reserved 2 hosts for 900 seconds from Tue Mar 27 14:43:25 CEST 2007
: node001 node002
All hosts are alive

``Fatal error: cannot stat application a.out''
The user has specified an incomplete path for his executable.
Prolonged silence
There is no room in the current schedule for the requested number of cpus and compute time, so prun waits. Prun -v or preserve -llist shows information on host allocation and presumed start time.
``Out of memory''
The user has not changed the process memory limit in their .cshrc or .bashrc file. Perhaps it was set in .profile or .login, but ssh(1) does not look there.
No core dumps
The user has not changed the process coredump limit in their .cshrc or .bashrc file. Perhaps it was set in .profile or .login, but ssh(1) does not look there.
``watchit fatal error: Cannot open environment file''
One of two possibilities: either the user has no write permission in the current directory, or an invalid -c directory option was specified. Either way, Prun is asked to run from a nonexistent directory, a directory without write permission, or a directory that has not been remote-mounted: /tmp and /var/tmp are prime examples of the latter error.
I cancelled my reservation, but the compute jobs live on!
Reservation and execution are two different things. True, prun obtains a reservation, executes your jobs and cancels the reservation. But these are three separate actions. To kill the jobs, interrupt your Prun process by sending it a ^C or a SIGINT (kill(1)). Your Prun process will propagate a SIGINT to your jobs so they will die, and then cancel your reservation.
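For example, to interrupt a backgrounded prun by process id (the pid is a placeholder):

```shell
# SIGINT makes prun kill the remote jobs and cancel the reservation.
kill -INT 12345     # 12345: pid of your prun process
```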