High Performance Computing cluster - Hábrók

To perform large complex calculations, the university has its own high-performance computing (HPC) cluster called Hábrók. In Norse mythology, Hábrók is described as the best hawk. This name was chosen to emphasize that the system is an improved version of the Peregrine cluster, which it replaced at the beginning of 2023.

How can Hábrók help?

A computer cluster is a collection of large computers that can be used for calculations that exceed the capacity of a desktop or laptop. It is useful when you:

  • need to run many computations;
  • need to run long computations;
  • are struggling with a large volume of data that needs to be analyzed.

By using the power of the computer cluster, you may be able to get results that you couldn't get on a laptop or desktop.

The cluster nodes are coupled by a fast network, so applications that support it can run in parallel across multiple machines. All participating computers can access central shared storage, giving all of them access to the same user data, and the storage system is large enough to hold substantially sized data sets. By moving your calculations or data analysis to the computer cluster, you also free up your regular computer for other work.

The cluster is available for general use by University of Groningen scientists.

Specifications of Hábrók

  • 119 nodes with 128 cores and 512 GB of memory, specially tailored for data-intensive tasks;
  • 24 nodes with 128 cores, 512 GB of memory and a super-fast, low-latency interconnect, suitable for large-scale parallel processing across multiple nodes;
  • 4 nodes with 80 cores and 4 TB of memory for memory-intensive tasks;
  • 6 nodes with 64 cores, 512 GB of memory and 4 Nvidia A100 GPUs with 40 GB of memory each, which can, for example, accelerate machine learning;
  • 2 PB of shared storage for data to be processed on or generated by the cluster.
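
As a rough guide to which node type suits a job, it helps to compare the memory available per core. A minimal sketch in Python, using only the figures from the list above (the labels are informal, not official queue names):

    # Cores and memory (GB) per node type, taken from the specification list.
    node_types = {
        "regular (data-intensive)": (128, 512),
        "Omni-Path (multi-node)": (128, 512),
        "big memory": (80, 4096),
        "GPU (4x A100)": (64, 512),
    }

    for name, (cores, mem_gb) in node_types.items():
        print(f"{name}: {mem_gb / cores:.1f} GB per core")

This shows, for example, that the big-memory nodes offer about 51 GB per core versus 4 GB on the regular nodes, which is the deciding factor for memory-intensive tasks.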

Storage

Each node of the cluster can access the 2 PB of shared storage for data to be processed. This storage is accessed through the Lustre parallel file system. In addition, each node has at least 1 TB of local disk space. Finally, permanent storage is available as well, provided by the Data Handling project and accessible only from the login nodes.
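
Because each node also has local disk space, I/O-intensive jobs can often benefit from staging their input from the shared Lustre storage to the local disk first, and only writing final results back. A minimal sketch in Python; the directory names are hypothetical, since this page does not specify how the local disk is mounted:

    import os
    import shutil

    # Hypothetical paths: a data set on the shared Lustre storage, and a
    # node-local scratch directory (dirs_exist_ok requires Python 3.8+).
    shared_input = "/home/s1234567/dataset"
    local_scratch = os.path.join("/local", os.environ.get("USER", "tmp"))

    # Copy the input onto the node's local disk before processing it, so
    # that many small reads do not all hit the shared parallel file system.
    shutil.copytree(shared_input, os.path.join(local_scratch, "dataset"),
                    dirs_exist_ok=True)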

Network

All nodes in the cluster are interconnected via a 25 Gb/s Ethernet network.

24 nodes are additionally interconnected with a fast 100 Gb/s Omni-Path network. This network offers both high bandwidth (the amount of data transferred per second) and low latency (the minimum delay for communication), which makes it extremely suitable for calculations spanning multiple computers.
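
The difference can be made concrete with a simple model: the time to deliver a message is roughly the latency plus the message size divided by the bandwidth. A back-of-the-envelope sketch in Python; the latency values are illustrative assumptions, not measured figures for Hábrók:

    def transfer_time(size_bytes, bandwidth_gbps, latency_s):
        """Rough model: time = latency + size / bandwidth."""
        bytes_per_second = bandwidth_gbps * 1e9 / 8  # Gb/s -> bytes/s
        return latency_s + size_bytes / bytes_per_second

    msg = 8 * 1024  # an 8 KB message, typical for tightly coupled codes
    # Assumed latencies: ~20 microseconds for Ethernet, ~1 for Omni-Path.
    print(f"25 Gb/s Ethernet:   {transfer_time(msg, 25, 20e-6) * 1e6:.1f} us")
    print(f"100 Gb/s Omni-Path: {transfer_time(msg, 100, 1e-6) * 1e6:.1f} us")

For small messages the latency dominates the total time, which is why the Omni-Path nodes are the ones intended for tightly coupled multi-node jobs.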

Parallel calculation

You can, of course, use the cluster machines (which run Linux as their operating system) to perform a large number of smaller jobs. However, by making use of the fast network connection between the nodes, a single job can also run in parallel on multiple nodes at the same time, and therefore finish faster. The programs used must support this; please check the program documentation for the relevant options, and contact the CIT if you need more help.
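
Multi-node parallelism is typically expressed with MPI. A minimal sketch using the mpi4py package, assuming it is available in the cluster's software environment (this page does not confirm that); each process sums part of a range and the partial results are combined over the network:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()  # this process's number
    size = comm.Get_size()  # total number of processes, possibly on many nodes

    # Each process sums its own stride of the range 0..999...
    partial = sum(range(rank, 1000, size))

    # ...and the partial sums are combined on process 0 via the interconnect.
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"sum computed by {size} processes: {total}")

Such a program would typically be launched with something like mpirun -np 8 python sum.py, spreading the eight processes over the allocated nodes.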

HPC courses and training

Beginner courses and courses for advanced users are regularly organized. The course overview of the Corporate Academy provides more information about all training courses offered by the CIT.

Apply for an account

If you would like an account to use the cluster, please fill in the application form.

Maintenance and problems

For a list of planned maintenance and current problems, have a look at the status page.
