Physical and Cyber Security
ORNL has a comprehensive physical security strategy that includes fenced perimeters, patrolled facilities, and authorization checks for physical access. An integrated, risk-based cyber security plan encompasses all aspects of computing. Separating systems with differing security requirements provides the appropriate level of protection for each system without hindering the science needs of the projects.
The ORNL campus is connected to every major research network at rates between 10 Gb/s and 100 Gb/s. Connectivity to these networks is provided by optical networking equipment owned and operated by UT-Battelle that runs over leased fiber-optic cable. This equipment can simultaneously carry either 192 10-Gb/s circuits or 96 40-Gb/s circuits and connects the OLCF to major networking hubs in Atlanta and Chicago. Currently, only 16 of the 10-Gb/s circuits are committed, leaving substantial headroom for expanding network capability. The connections into ORNL provide access to research and education networks including ESnet, XSEDE, and Internet2. To meet the increasingly demanding data-transfer needs between major facilities, ORNL participated in the Advanced Networking Initiative, which provides a native 100-Gb/s optical network fabric connecting ORNL, Argonne National Laboratory, Lawrence Berkeley National Laboratory, and other facilities in the northeast. This 100-Gb/s fabric will become the production network in December 2013.
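The headroom in the optical fabric can be sanity-checked with simple arithmetic (a sketch using only the circuit counts and rates quoted above):

```python
# Aggregate capacity of the leased optical fabric: the equipment can
# carry either 192 x 10 Gb/s circuits or 96 x 40 Gb/s circuits.
circuits_10g = 192
rate_10g = 10                          # Gb/s per circuit
total_capacity = circuits_10g * rate_10g   # 1920 Gb/s aggregate

committed = 16                         # 10-Gb/s circuits currently committed
headroom = circuits_10g - committed    # 176 circuits still available

print(total_capacity)   # 1920
print(headroom)         # 176
```

With fewer than 10% of the 10-Gb/s circuits committed, the fabric leaves most of its roughly 1.9 Tb/s aggregate capacity available for future growth.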
The local-area network is a common physical infrastructure that supports separate logical networks, each with its own level of security and performance. Each of these networks is protected from the outside world and from the other networks with access control lists and network intrusion detection. Line-rate connectivity between the networks and to the outside world is provided via redundant paths and switching fabrics. A tiered security structure designed into the network mitigates many attacks and contains others.
Visualization and Collaboration
ORNL has state-of-the-art visualization facilities that can be used on site or accessed remotely.
ORNL’s Exploratory Visualization Environment for REsearch in Science and Technology (EVEREST) is a large-scale venue for data exploration and visualization. The EVEREST room is undergoing a significant renovation, to be completed by January 2013, that will provide state-of-the-art visualization and rendering facilities to the user community.

The EVEREST room contains two large-format displays. The primary display is a 30.5 ft × 8.5 ft tiled wall composed of 18 individual displays with an aggregate count of 37 million pixels; it is capable of displaying interactive stereo 3D imagery for an immersive user experience. The secondary display is a tiled wall of 16 individual panels with an aggregate count of 33 million pixels. The two displays may be operated independently, allowing two or more sources of information to be viewed simultaneously.

The EVEREST displays are driven both by a dedicated Linux cluster and by “fat nodes” that allow the display of information from commodity hardware and software. This diversity of display and control systems supports a wide array of uses, from deep, interactive exploration of scientific datasets to engaging scientific communication with the public. A dedicated Lustre file system provides high-bandwidth data delivery to the EVEREST power wall.

ORNL also provides Lens, a 77-node “fat node” cluster dedicated to data analysis and visualization. The Lens cluster has been demonstrated with a variety of commercial off-the-shelf and open-source visualization tools, including VisIt, ParaView, CEI EnSight, and AVS/Express. The EVEREST cluster rendering environment uses Chromium and Distributed Multi-Head X (DMX) for tiled, parallel rendering. The Lens cluster cross-mounts the center-wide Lustre file system, allowing “zero copy” access to simulation data produced on other OLCF computational resources.
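The quoted aggregate pixel counts are consistent with standard 1920 × 1080 panels, which is an assumption here (the panel resolution is not stated above); a quick check:

```python
# Aggregate pixel counts for the two tiled walls, assuming each panel
# is a standard 1920 x 1080 display (an assumption; the panel
# resolution is not given in the text, but it matches the totals).
panel_pixels = 1920 * 1080            # 2,073,600 pixels per panel

primary = 18 * panel_pixels           # 37,324,800 (~37 million pixels)
secondary = 16 * panel_pixels         # 33,177,600 (~33 million pixels)

print(primary)    # 37324800
print(secondary)  # 33177600
```

Under that assumption, 18 panels yield roughly 37 million pixels and 16 panels roughly 33 million, matching the figures quoted for the primary and secondary walls.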