The HPC cluster is used by scientists for computationally intensive tasks. It consists of 253 multi-core nodes, all connected through a high-speed (20 Gb/s) network.
The HPC cluster is divided into 3 parts:
- 236 nodes with 12 Opteron 2.6 GHz cores and 24 GB of memory
- 16 nodes with 24 Opteron 2.6 GHz cores and 128 GB of memory
- 1 node with 64 cores and 512 GB of memory
All nodes are connected by an InfiniBand switch and have access to 110 TB of storage.
To benefit optimally from the cluster architecture, programs often need to be adapted, for example by adding MPI calls. On request, HPC/V organizes a course on 'Distributed Computing using MPI' to help scientists get the most out of the HPC cluster.
More information can be found on the documentation page.
To request an account for the HPC cluster, please fill in the request form.
Maintenance and Problems
For a list of planned maintenance and current problems, have a look at the status page.
Last modified: October 01, 2013 10:35