Supercomputing and Computation
High Performance Storage and Archival Systems
To meet the needs of ORNL’s diverse computational platforms, a shared parallel file system capable of meeting the performance and scalability requirements of these platforms has been successfully deployed. This shared file system, based on Lustre, Data Direct Networks (DDN), and InfiniBand technologies, is known as Spider and provides centralized access to petascale datasets from all major on-site computational platforms. Delivering more than 240 GB/s of aggregate performance, scalability to more than 26,000 file system clients, and more than 10 petabytes (PB) of storage capacity, Spider is the world’s largest-scale Lustre file system. Spider consists of 48 DDN 9900 storage arrays managing 13,440 1-TB SATA drives; 192 Dell dual-socket, quad-core I/O servers providing more than 14 TF of performance; and more than 3 TB of system memory. Metadata are stored on two LSI Engenio 7900s (XBB2) and are served by three Dell quad-socket, quad-core systems. ORNL systems are interconnected to Spider via an InfiniBand system area network, which consists of four 288-port Cisco 7024D IB switches and more than 3 miles of optical cables.

Archival data are stored on the center’s High Performance Storage System (HPSS), developed and operated by ORNL. HPSS is capable of archiving hundreds of petabytes of data and can be accessed by all major leadership computing platforms. Incoming data are written to disk and later migrated to tape for long-term archiving. This hierarchical infrastructure provides high-performance data transfers while leveraging cost-effective tape technologies. Tape storage is provided by robotic libraries: the center has five SL8500 tape libraries holding up to 10,000 cartridges each and is deploying a sixth SL8500 in 2013. The libraries house a total of 24 T10K-A tape drives (500-GB cartridges, uncompressed) and 32 T10K-B tape drives (1-TB cartridges, uncompressed). Each drive has a bandwidth of 120 MB/s.
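The drive counts and per-drive figures above imply the tape subsystem's aggregate numbers. A back-of-the-envelope calculation (the all-1-TB-cartridge capacity is an upper-bound assumption, since the actual cartridge mix is not stated in the text):

```python
# Back-of-the-envelope figures for the tape subsystem described above.
# Drive counts, per-drive bandwidth, and library slot counts come from
# the text; treating every slot as a 1-TB T10K-B cartridge is an
# illustrative upper-bound assumption.

T10KA_DRIVES, T10KB_DRIVES = 24, 32
DRIVE_BW_MBS = 120                      # per-drive bandwidth, MB/s
LIBRARIES, SLOTS_PER_LIBRARY = 5, 10_000

total_drives = T10KA_DRIVES + T10KB_DRIVES
aggregate_bw_gbs = total_drives * DRIVE_BW_MBS / 1000
print(f"{total_drives} drives -> {aggregate_bw_gbs:.2f} GB/s aggregate tape bandwidth")

# Upper bound if every slot held a 1-TB cartridge, uncompressed:
max_capacity_pb = LIBRARIES * SLOTS_PER_LIBRARY * 1 / 1000
print(f"up to {max_capacity_pb:.0f} PB uncompressed across {LIBRARIES} libraries")
```

Even this upper-bound aggregate tape bandwidth (under 7 GB/s) is well below the disk tier's throughput, which is why incoming data lands on disk first and drains to tape asynchronously.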
ORNL’s HPSS disk storage is provided by DDN storage arrays with nearly a petabyte of capacity and over 12 GB/s of bandwidth. This infrastructure has allowed the archival system to scale to meet increasingly demanding capacity and bandwidth requirements, with more than 28 PB of data stored as of November 2012.
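The disk-to-tape flow described above, where incoming data are written to disk and later migrated to tape, can be modeled as a simple two-tier hierarchy with water-mark-driven migration. The sketch below is purely illustrative: the class, thresholds, and policy names are hypothetical and do not reflect HPSS's actual configuration or API.

```python
from collections import OrderedDict

class HierarchicalStore:
    """Hypothetical two-tier (disk + tape) store: new data lands on
    disk; when disk usage crosses a high-water mark, the least
    recently used files migrate to tape until usage falls below a
    low-water mark. Thresholds are illustrative, not HPSS's."""

    def __init__(self, disk_capacity_gb, high_water=0.9, low_water=0.7):
        self.disk_capacity = disk_capacity_gb
        self.high_water = high_water
        self.low_water = low_water
        self.disk = OrderedDict()   # name -> size_gb, coldest first
        self.tape = {}              # name -> size_gb

    def disk_used(self):
        return sum(self.disk.values())

    def write(self, name, size_gb):
        # Incoming data are always written to disk first.
        self.disk[name] = size_gb
        self.disk.move_to_end(name)
        if self.disk_used() > self.high_water * self.disk_capacity:
            self._migrate()

    def _migrate(self):
        # Drain coldest files to tape until below the low-water mark.
        while self.disk_used() > self.low_water * self.disk_capacity:
            name, size = self.disk.popitem(last=False)
            self.tape[name] = size

    def read(self, name):
        if name in self.disk:
            self.disk.move_to_end(name)   # keep hot data on disk
            return self.disk[name]
        return self.tape[name]            # tape recall (slow path)
```

For example, writing ten 10-GB files into a 100-GB disk cache trips the 90% high-water mark on the last write, and the three coldest files drain to tape, leaving disk usage at the 70% low-water mark.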