
High Performance Computing cluster


To perform large, complex calculations, the university has its own high performance computing (HPC) cluster, called Peregrine. The cluster, with 5740 CPU cores and 220000 CUDA cores, is available for general use by University of Groningen scientists and is eminently suited to computing problems for which a single computer is not powerful enough.

Specifications Peregrine

The cluster nodes come in three variants.

  1. There are 210 'default' nodes with 24 or 28 Intel Xeon 2.5 GHz cores or 64 AMD EPYC 7601 cores, and 28 GB or 512 GB of internal memory.
  2. In addition, there are 42 nodes equipped with special accelerator cards: 6 with 2 Nvidia K40 cards each, and 36 virtual nodes with 1 Nvidia V100 GPU card each.
  3. For jobs that require extra memory, there are 7 nodes with 48 Intel Xeon 2.6 GHz cores and 1024 or 2048 GB of internal memory.

Each node of the cluster can access 463 TB of hard drive storage, provided through the Lustre parallel file system. Each node also has 1 TB of local disk space. In addition, the Data Handling facilities, 3 PB in size, are also available to Peregrine.


The nodes in the cluster are interconnected via a 56 Gb/s InfiniBand network. This network offers both high bandwidth (the amount of data transferred per second) and low latency (the minimum time required for communication), which makes it extremely suitable for computation tasks that span multiple computers.

In addition, there is also a 10 Gb/s Ethernet network for accessing external data.

Parallel computing

The cluster nodes, which run Linux as their operating system, can of course be used to perform a large number of smaller jobs. However, by making use of the fast network connection between the nodes, a job can also run in parallel across multiple nodes at the same time, and therefore extra fast. For this, the programs used must be adapted by adding MPI calls. The CIT can assist with parallelizing programs if necessary.


More information about Peregrine is available on the documentation page.


In 2023, the Peregrine cluster will be replaced by a new cluster named Hábrók. In Norse mythology, Hábrók is described as the best of hawks. The name emphasizes that the system is an improved version of the existing Peregrine cluster.

Specifications Hábrók

  • 119 computers with 128 cores and 512 GB memory. These systems are specially tailored for data-intensive tasks;
  • 24 computers with 128 cores, 512 GB memory and a super-fast, low-latency interconnect. These systems are suitable for large-scale computations spanning multiple computers;
  • 4 nodes with 80 cores and 4 TB of memory for memory-intensive tasks;
  • 6 nodes with 64 cores, 512 GB memory and 4 Nvidia A100 cards with 40 GB memory each. These can, for example, accelerate machine learning;
  • 2PB shared storage for data to be processed on or generated by the cluster.

HPC courses and training

Beginner courses and courses for advanced users are regularly organized. In the course overview of the Corporate Academy you will find more information about all training courses offered by the CIT.

Maintenance and problems

For a list of planned maintenance and current problems, have a look at the status page.

Last modified: 14 February 2023, 2:09 p.m.