
Upgrade RUG Linux Cluster

Working proposal Linux cluster upgrade

Document history:

  • Version 1, 30 August 2003

The Linux Cluster

The purpose of the Linux cluster is to give users within the university access to large-scale computational resources in a cost-effective manner. The resource is intended to go beyond what individual groups or institutes can readily realise on their own. To keep costs down, the cluster exploits commodity components and open source operating systems. Part of the system is connected via a high-speed network, making it possible to run large computations in parallel.

The following information is available about the current cluster:

  • General description

Linux cluster upgrade

The current upgrade was planned as part of the initial proposal for the current cluster. The new cluster will be based on an architecture similar to the current one, but will incorporate new components (modern processors and networking options) to give a large increase in capability. In addition, the new Linux cluster is intended to drive elements of the high-performance visualization facilities housed within the RC.

Compute nodes

X x single-CPU compute nodes, approximately 120 GB scratch space, 1 GB memory each

32 x dual-CPU compute nodes, approximately 120 GB scratch space, 2 GB memory each

For the CPU type, benchmarks are planned with the Pentium 4 and Itanium (Intel) and the Opteron (AMD). X will depend on hardware prices and the available budget (approximately 100-400 nodes).

Compile front ends

At least 3 front ends (dual CPU, 2 GB memory each). User groups will be divided among the front ends.

Visualization front ends

Five visualization front ends will make it possible to use the cluster in combination with the VR Theater or the Reality Cube. The visualization front ends will have a direct connection to the internal network of the cluster. The graphics cards will be genlocked and connected to the visualization facilities via a video switch.

File servers

One NFS server (dual CPU, 2 GB memory) for every 64 compute nodes.
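
As a rough sketch of how compute nodes could mount home directories from such an NFS server (the host name, path and subnet below are hypothetical and not part of this proposal):

    # /etc/exports on the file server (hypothetical path and subnet)
    /export/home  10.1.0.0/255.255.0.0(rw,async,no_subtree_check)

    # corresponding /etc/fstab entry on each compute node
    fs01:/export/home  /home  nfs  rw,hard,intr  0  0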

Network

All compute nodes are connected by a 1 Gb/s network; only wire-speed switches will be used. The dual-CPU compute nodes will have an additional connection to a fast special-purpose network (Myrinet, SCI or Infiniband).

Storage area network

All file servers will be connected to the RUG SAN with a Fibre Channel connection (1 Gb/s). Initially 1 TB will be available. The quota policy will be based on a standard disk quota per user. Projects can request additional space temporarily. Backups are made on the RUG Exabyte tape robot (total capacity 32 TB). Archive facilities are not available by default but can be set up on request.
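
As an illustration of how such a per-user quota could be set on a Linux file server (the user name, limits and file system below are hypothetical):

    # give user jdoe a soft limit of roughly 2 GB and a hard limit of 2.5 GB,
    # with no inode limit (illustrative values only)
    setquota -u jdoe 2000000 2500000 0 0 /export/home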

Software

The Debian version of the Linux operating system will be used. The following software will be available on the front ends:

  • MPI
  • PVM
  • Portland compilers (C/C++/Fortran)
  • PBS Pro queueing system

Plus additional software for the different research groups.
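
As a minimal sketch of how a user might exercise this software stack (the program and file names are illustrative; the exact compiler and MPI launcher commands depend on the installation):

    /* hello_mpi.c - print one line per MPI process */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* id of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

The program would be compiled on a front end (for example with the Portland C compiler and the MPI libraries) and started on the compute nodes through the queueing system rather than interactively. A hypothetical PBS batch script could look as follows; queue names and resource syntax depend on the local PBS Pro configuration:

    # job.pbs - illustrative batch script, submitted with: qsub job.pbs
    #PBS -N hello_mpi
    #PBS -l nodes=4:ppn=2
    #PBS -l walltime=00:10:00

    cd $PBS_O_WORKDIR
    mpirun -np 8 ./hello_mpi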

Time Schedule

The next meeting of the scientific board will be on 16 September; this working proposal will be on the agenda.
