
Millipede cluster user guide: Introduction

Millipede is end-of-life. New accounts will be created on the Peregrine cluster.

This user guide has been written to help new users of the Millipede HPC cluster at the CIT get started with using the cluster.

Common notation

Commands that can be typed on the Linux command line are denoted as follows:

$ command

The $ sign is the prompt that Linux presents after you have logged in. After this prompt you can type a command, denoted by "command" above, using the keyboard. You have to confirm the command with the <Enter> key.
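For example, to list the files in your current directory you would type:

$ ls

followed by <Enter>.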

Cluster setup

Hardware setup

The Millipede cluster is a heterogeneous cluster consisting of 4 parts.

  • A front-end node, which users log in to, with 12 2.6 GHz AMD Opteron cores and 24 GB of memory;
  • 235 nodes with 12 2.6 GHz AMD Opteron cores, 24 GB of memory, and 320 GB of local disk space;
  • 16 nodes with 24 2.6 GHz AMD Opteron cores, 128 GB of memory, and 320 GB of local disk space;
  • 1 node with 64 cores and 576 GB of memory.

All the nodes are connected with a 20 Gbps InfiniBand network. Attached to the cluster is 350 TB of storage space, which is accessible from all the nodes. (To get some idea of the power of these machines: a normal desktop PC currently has 2-4 cores running at around 2.6 GHz, 4 GB of memory and 1 TB of disk space.)

Login node

One node of the cluster is used as a login node. This is the node you log in to with the username and password given to you by the system administrator. The other nodes in the cluster are so-called 'batch' nodes. They are used to perform calculations on behalf of the users. These nodes can only be reached through the job scheduler. In order to use them, a description of what you want the node(s) to do has to be written first. This description is called a job. How to submit jobs will be explained later on.

File systems

The cluster has a number of file systems that can be used. On Unix systems these file systems are not referred to by a drive letter, as on Windows, but appear as a directory path. The file systems available on the system are:

/home

  • This file system is where you arrive after logging in to the system. Every user has a private directory on this file system. Your directory on /home and its subdirectories are available on all the nodes of the system. You can use this directory to store your programs and data.
  • In order to prevent the system from running out of space, quotas are in place on /home, which means that the amount of data you can store there is limited. For /home the amount of space is limited to 10 GB. When you need more space, you should contact the system administrators to discuss this; depending on your requirements and the availability your quota may be changed.
  • The data stored on /home is backed up every night to prevent data loss in case the file system breaks down, or because of user or administrative errors. If you need data to be restored you can ask the site administrators to do this, but of course it is better to be careful when removing data.
  • Note, however, that using the home directory for reading or writing large amounts of data may be slow. In some cases it may be useful to copy input data from your home directory to a temporary directory on /local on the batch node at the beginning of your job. This directory can be reached using $TMPDIR. Relevant output has to be copied back at the end of the job, otherwise it will be lost, because the temporary directory on /local is automatically cleaned up after your job finishes. A short job-script sketch of this pattern is shown at the end of this section.
  • In order to see how much space you have used on /home the command quota can be used. The output looks like this:

Disk quotas for user p123456 (uid 65534):
     Filesystem  blocks    quota    limit   grace   files   quota   limit   grace
   master:/home 3424388 10000000 12500000          117027       0       0
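For jobs that read or write a lot of data, the staging pattern described above can be used in a job script. Below is a minimal sketch; the file and program names are only placeholders for your own input, output and program:

# copy the input data from the home directory to the fast local disk
cp $HOME/mydata/input.dat $TMPDIR/
cd $TMPDIR
# run the program (placeholder name) on the local copy of the data
$HOME/bin/myprogram input.dat > output.dat
# copy the results back before the job ends;
# $TMPDIR on /local is cleaned up automatically after the job finishes
cp output.dat $HOME/mydata/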

/data

  • For storing large data sets a file system /data has been created. This file system is 350 TB in size. Part of it is meant for temporary use (/data/scratch); the rest is for permanent storage. In order to prevent the file system from running out of space there is a limit on how much you can store on it. The current limit is 200 GB per user. There is no active quota system, but when you use more space you will be sent a reminder to clean up. The /data file system is a fast clustered file system that is well suited for storing large data sets. Because of the amount of disk space involved, however, no backup is made of these files.
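Since there is no active quota system on /data, you can check how much space your own directory uses with the standard du command. Assuming your directory on /data is named after your username (the actual path may differ):

$ du -sh /data/p123456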

/data/scratch

  • The file system mounted at /data/scratch is temporary space that can be used by your jobs while they are running. Note that relevant output therefore has to be copied back at the end of the job, otherwise it will be lost. Files that you store elsewhere on /data/scratch may be removed after a couple of days.
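A common pattern is to create a job-specific directory on the scratch space, work there, and copy the results back afterwards. A minimal job-script sketch, assuming one directory per user on /data/scratch (the layout and all names below are only placeholders):

# create a job-specific directory on the scratch file system
mkdir -p /data/scratch/$USER/myjob
cd /data/scratch/$USER/myjob
# ... run the calculation here ...
# copy relevant output back to /home or /data and clean up
cp output.dat $HOME/mydata/
rm -rf /data/scratch/$USER/myjob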

/local

  • Each node of the cluster also has a 320 GB local disk. This disk can also be used as temporary storage. Note that you should clean up the data here in your job script after your work has finished. To make this easier, a temporary directory is automatically created for each job on /local on the master node of your job. This directory can be reached using $TMPDIR.
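Within a running job you can, for example, check how much space is left on the local disk of the node with:

$ df -h $TMPDIR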

Prerequisites for cluster jobs

Programs that are to be run on the cluster have to fulfil some requirements. These are:

  • The program should be able to run under Linux. If in doubt, the author of the program should be able to help you with this. Some hints:
    • It is helpful if there is source code available so that you can compile the program yourself;
    • Programs written in Fortran, C or C++ can in principle be compiled on the cluster;
    • Java programs can also be run because Java is platform independent;
    • Scripting languages such as Python or Perl can also be used.
  • Programs running on the batch nodes cannot easily be run interactively. This means that it is in principle not possible to run programs that expect input from you while they are running. This makes it hard to run programs that use a graphical user interface (GUI) for controlling them. Note also that jobs may run in the middle of the night or during the weekend, so it is also much easier for you if you don't have to interfere with the jobs while they are running. It is possible, however, to start up interactive jobs. These are still scheduled, but you will be presented with a command line prompt when they are started.
  • Matlab and R are also available on the cluster and can be run in batch mode (where the graphical user interface is not displayed); see the examples after this list.
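For example, a small C program can be compiled on the login node with the GNU compiler, and an R script can be run in batch mode with Rscript. The file names are only placeholders, and the exact compilers and versions available on the cluster may differ:

$ gcc -O2 -o myprogram myprogram.c
$ Rscript myanalysis.R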

If you have any questions on how to run your programs on the cluster, please contact the CIT central service desk.

Last modified: 02 October 2015, 10.23 p.m.