
Storage facilities

The data storage infrastructure, based on IBM's General Parallel File System (GPFS), has a capacity of 10 Petabytes that can easily be scaled up if necessary. GPFS-based storage offers several crucial capabilities to the Target facilities, among them:

  • Proven, effective scalability beyond the petabyte scale
  • Separation of hardware and software, allowing extensions and upgrades with minimal disruption to normal operations
  • An integrated life cycle management system combining different data storage types, including tape storage (a sketch of the life-cycle idea follows this list)
  • Fast I/O for large data files and datasets, ensured by the parallel file system
  • Support for dynamic access patterns, so that applications with diverse requirements run smoothly, ranging from a high number of I/O operations per second (many small files) to high streaming bandwidth (a few very large files)
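
GPFS handles life cycle management internally with policy rules. As a purely illustrative sketch of the underlying idea, and not part of the Target configuration, moving files that have not been accessed for 90 days from a disk tier to an archive tier could look roughly like this (all paths and thresholds below are hypothetical):

    import os
    import shutil
    import time

    # Hypothetical illustration of a life-cycle rule: files not accessed for
    # 90 days move from a fast disk directory to a slower archive directory.
    # GPFS does this internally with policy rules, not with a script like this.
    DISK_TIER = "/gpfs2/projects"        # hypothetical source directory
    ARCHIVE_TIER = "/archive/projects"   # hypothetical destination directory
    AGE_THRESHOLD = 90 * 24 * 3600       # 90 days, in seconds

    def migrate_cold_files(src_root, dst_root, max_age):
        """Move files whose last access time is older than max_age seconds."""
        now = time.time()
        for dirpath, _dirnames, filenames in os.walk(src_root):
            for name in filenames:
                src = os.path.join(dirpath, name)
                if now - os.stat(src).st_atime > max_age:
                    rel = os.path.relpath(src, src_root)
                    dst = os.path.join(dst_root, rel)
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.move(src, dst)

    if __name__ == "__main__":
        migrate_cold_files(DISK_TIER, ARCHIVE_TIER, AGE_THRESHOLD)
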
The storage facilities of Target have recently reached an impressive 10 Petabytes. Based on IBM's GPFS file system, Target storage is easily scalable and cost-effective.

The design and architecture of the computing and storage facilities of Target focus on performance, reliability, robustness and flexibility in meeting the diverse requirements of Target users. The recent design configuration reflects the transition of the Target facilities from testbed to production operation. Furthermore, the storage facilities provide several single-site filesystems, and the choice of filesystem is determined by the I/O characteristics of each application of every Target customer (a simple sketch of this mapping follows the cluster list below).

Currently, the storage architecture is split into three major clusters:

  • a relatively small target01 cluster that contains only WISE/Oracle databases (filesystems /gpfs1a and /gpfs1b)
  • a general purpose HPC cluster (target02) that hosts most of the Target users’ data (filesystems /gpfs2 for general use, /gpfs3 for small files and /scratch)
  • a test and development cluster (filesystem /test) used by Target developers to test new GPFS releases, updates and configuration settings. This ensures smooth and uninterrupted operations in case of production changes.
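
As a purely illustrative sketch of how I/O characteristics map onto these filesystems, the rule of thumb could be expressed as follows; the function and thresholds are hypothetical, and actual placement is decided per customer application:

    def pick_filesystem(avg_file_size_mb, temporary=False):
        """Hypothetical rule of thumb for placing a workload on target02."""
        if temporary:
            return "/scratch"   # temporary data or input-stream buffering
        if avg_file_size_mb < 1.0:
            return "/gpfs3"     # many small files, high I/O operation rates
        return "/gpfs2"         # general use, streaming bandwidth

    # Example: a pipeline producing millions of ~100 KB files
    print(pick_filesystem(avg_file_size_mb=0.1))  # -> /gpfs3
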

The clusters are built on six storage pools (tiers), plus a dedicated system (metadata) pool for each filesystem, with the following characteristics (a quick tally of the tier capacities follows this list):

  • Tier A pool – has a capacity of 104 TB and is used by /gpfs1a and /gpfs1b
  • Tier B pool – has a capacity of 436 TB and is used by /scratch
  • Tier C pool – has a capacity of 104 TB and is formatted for use with small files. The tier is used by /gpfs3 and partly by the /test cluster.
  • Tier D pool – contains all the tapes in the TS3500 library that are used for Hierarchical Storage Management (HSM) and backup by Tivoli Storage Manager (TSM). Tier D has a capacity of around 8000 TB.
  • Tier E pool – has a capacity of 1535 TB and is used by /gpfs2
  • Tier F pool – contains two DCS3700 systems (524 TB) and is used by /gpfs2
  • System pool for /gpfs1a and /gpfs1b – hosted on the DS3200 storage; it has 8 RAID1 LUNs of 420 GB each and an additional 5 RAID1 LUNs of 420 GB each
  • System pool for /gpfs2 – hosted on two fusion I/O cards of 1.2 PB each
  • System pool for /gpfs3 – hosted on two fusion I/O cards of 1.2 PB each
  • System pool for /test – hosted on the first LUN of two fusion I/O cards of 600 GB each
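
As a quick sanity check on the 10 Petabyte figure quoted above, the six data tiers add up to roughly that capacity. This assumes the 524 TB of Tier F is the total for both DCS3700 systems, which the wording above leaves open:

    # Capacities of the six data tiers (A-F) in TB, as listed above.
    # Tier F is taken as 524 TB in total, which is an assumption.
    tiers_tb = {"A": 104, "B": 436, "C": 104, "D": 8000, "E": 1535, "F": 524}
    total_tb = sum(tiers_tb.values())
    print(f"total: {total_tb} TB = {total_tb / 1000:.1f} PB")  # total: 10703 TB = 10.7 PB
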

Target 01 Cluster

The target01 cluster provides storage to the WISE nodes. These nodes use GPFS to host Oracle databases and store data directly on the filesystems /gpfs1a and /gpfs1b. Within the cluster, RAID1 storage in Tier A is used to store metadata and RAID5 storage is used to store data.

Target 01 Cluster Storage

Target 02 Cluster

The target02 cluster is a general purpose HPC cluster with three filesystems, /gpfs2, /gpfs3 and /scratch, which provide storage for, and access to, the data of the majority of Target clients. Most users are hosted on the /gpfs2 filesystem, while /gpfs3 specifically caters to the needs of Monk.

/gpfs2 is a general purpose filesystem that has its metadata on fusion I/O nodes and two data tiers located on LSI and IBM storage. /gpfs3 is optimized for small files with high I/O requirements; this filesystem has its metadata on fusion I/O cards and its data on the Tier C storage servers. /scratch is used for temporary data, or for buffering input streams while /gpfs2 is not available; it is hosted on Tier B with no separation of data and metadata.
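
The difference between these two workload profiles can be made concrete with a small, hypothetical timing sketch; the mount points, file counts and sizes below are illustrative only and not a Target benchmark:

    import os
    import time

    def time_small_files(root, count=1000, size=4096):
        """Write many small files - the kind of load /gpfs3 is tuned for."""
        os.makedirs(root, exist_ok=True)
        start = time.time()
        for i in range(count):
            with open(os.path.join(root, f"f{i:05d}.dat"), "wb") as fh:
                fh.write(os.urandom(size))
        return time.time() - start

    def time_large_file(root, size_mb=512):
        """Stream one large file - the kind of load /gpfs2 is tuned for."""
        os.makedirs(root, exist_ok=True)
        start = time.time()
        with open(os.path.join(root, "big.dat"), "wb") as fh:
            for _ in range(size_mb):
                fh.write(os.urandom(1024 * 1024))
        return time.time() - start

    # Hypothetical mount points on the target02 cluster.
    print("small files:", time_small_files("/gpfs3/iotest"), "s")
    print("large file: ", time_large_file("/gpfs2/iotest"), "s")
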

Target 02 Cluster

Test and Development Cluster

The test cluster is used to test GPFS releases, updates, and configuration settings. It consists of a single filesystem, /test, and is used by Target system management and development to test actions on the GPFS cluster. As such, it needs to be representative of the features used in the production filesystems and therefore has two tiers and tape access. The /test filesystem has two fusion I/O cards with two 600 GB LUNs each, of which one LUN is used for metadata and one LUN for a data tier. Regular disk data is provided by a small fraction of the two DS5000 and DCS9900 storage servers, each of which provides the test cluster with one storage array. Each storage array is split into two LUNs to allow distribution over the two storage controllers.

Test and Development Cluster
Last modified: 2 October 2015, 22:56