Frontier, an HPE Cray EX supercomputer capable of 10¹⁸ calculations per second — a 1 followed by 18 zeros — was installed in late 2021 and is undergoing integration and testing. Frontier is on track to be the nation’s first exascale supercomputer this year. Early science users are accessing Frontier through Crusher, and the Frontier system will enter full user operations on January 1, 2023, with the INCITE program users. Crusher is a 1.5-cabinet iteration of the massive system featuring 192 nodes connected by the HPE Slingshot interconnect. Each node contains one optimized 3rd Gen AMD EPYC™ CPU and four AMD Instinct™ MI250X accelerators.
Four well-established projects — the CANcer Distributed Learning Environment, or CANDLE, project; the Computational Hydrodynamics On Parallel Architectures, or Cholla, project; the Locally Self-Consistent Multiple Scattering, or LSMS, project; and the Nuclear Coupled-Cluster Oak Ridge, or NuCCOR, project — have codes successfully optimized on the Frontier architecture via Crusher. Some of these codes have been used on platforms since the OLCF’s first hybrid-architecture system, the decommissioned 27-petaflop Cray XK7 Titan supercomputer, which debuted 10 years ago this year. Taking up only 44 square feet of floor space, Crusher is 1/100th the size of the previous Titan supercomputer but faster than the entire 4,352-square-foot system was, packing a massive computing punch for its small size.
The OLCF, a U.S. Department of Energy Office of Science user facility at DOE’s Oak Ridge National Laboratory, has built a reputation around developing and deploying some of the most powerful high-performance computing resources for open science, and Frontier will follow the success of the previous systems as the nation’s first exascale supercomputer. Frontier provides an 8-fold increase in computational power over the center’s current 200-petaflop IBM AC922 Summit supercomputer.
“Crusher is the latest in a long line of test and development systems we have deployed for early users of OLCF platforms and is easily the most powerful of these we have ever provided,” said ORNL’s Bronson Messer, OLCF director of science. “The results these code teams are realizing on the machine are very encouraging as we look toward the dawn of the exascale era with Frontier.”
The OLCF is hosting hackathons geared toward getting users up and running on Crusher — and soon Frontier. Hosted by the User Assistance Group at the OLCF, the 3-day events target Frontier’s architecture and are extremely valuable to the facility, the vendors, and the user community.
“As more people run on this hardware — when we have more codes and styles of programming on the system — it provides us opportunities to discover and overcome challenges and prepares us to run science on Frontier with no hiccups,” said ORNL’s Bálint Joó, group leader of the OLCF’s Advanced Computing for Nuclear, Particle and Astrophysics Group. More hackathons will be held in the coming months.
Below are descriptions of these four pivotal projects currently running codes on the Crusher cabinet and the breakthrough science that will be enabled with Frontier in the areas of cancer research, astrophysics, materials, and nuclear physics.
CANDLE: “Transformer” Deep Learning Model
Formed out of a partnership between DOE and the National Cancer Institute, or NCI, CANDLE is part of the Cancer Moonshot effort and exists within DOE’s Exascale Computing Project. Its objective is to develop applications from the pilot projects previously in the Cancer Moonshot effort, scale them up to next-generation supercomputers, and support their deep learning components to accelerate cancer research on machines like Frontier. The CANDLE project is currently developing next-generation natural language processing models for precision medicine using “Transformers,” deep learning models that identify unseen connections between words in clinical text.
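CANDLE’s models themselves are not shown here, but the core mechanism Transformers use to relate words — scaled dot-product self-attention — can be sketched in a few lines of NumPy. All names and sizes below are illustrative, not taken from the CANDLE codebase:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: each token's output is a weighted mix of
    all tokens' values, with weights derived from query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over tokens
    return weights @ V, weights

rng = np.random.default_rng(0)
n_tokens, d_model = 6, 8                              # e.g. 6 words of clinical text
x = rng.standard_normal((n_tokens, d_model))
out, attn = scaled_dot_product_attention(x, x, x)     # self-attention: Q = K = V
print(out.shape)                                      # (6, 8)
```

Because every token attends to every other token, the attention matrix makes the “unseen connections between words” explicit — each row of `attn` is a probability distribution over which other tokens a given token draws information from.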
Led by Gina Tourassi, director of the National Center for Computational Sciences at ORNL, the CANDLE team has successfully run one of their Transformer models on Crusher, achieving an 80% speedup on a Crusher node compared with previous systems. The effort to optimize and run the code on Crusher was undertaken by John Gounley, a computational scientist and technical lead of ORNL’s CANDLE team. Frontier will enable them to use a much larger natural language processing model with many more parameters. Ultimately, the team aims to provide NCI with better, more accurate models for cancer surveillance.
“We hope that our next generation of models trained on systems like Frontier is going to be based on this Transformer architecture and is going to be significantly more accurate than the models we have in practice today,” Gounley said.
Cholla: Galaxy Simulations

The Cholla code is an astrophysical hydrodynamics code used to simulate the dynamics of galaxies, providing insights into how they form and evolve. One of the codes in the Center for Accelerated Application Readiness, or CAAR, program, Cholla was one of the first codes to be rewritten for Frontier. Now, the team’s code is running on Crusher, and the team is seeing major results that are propelling them toward an understanding of the physics driving star formation and of why galaxies stop forming stars.
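The class of computation Cholla performs — evolving a fluid on a grid, one small timestep at a time — can be illustrated with the simplest relative of those methods: a first-order upwind solver for 1D advection. This sketch is purely illustrative and is not Cholla’s actual scheme:

```python
import numpy as np

def advect_upwind(u, velocity, dx, dt, steps):
    """First-order upwind update for du/dt + v du/dx = 0, the simplest
    relative of the finite-volume schemes hydrodynamics codes use."""
    c = velocity * dt / dx                  # CFL number; stability needs |c| <= 1
    for _ in range(steps):
        u = u - c * (u - np.roll(u, 1))     # periodic boundary via roll
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.exp(-200 * (x - 0.3) ** 2)          # Gaussian pulse centered at x = 0.3
u = advect_upwind(u0, velocity=1.0, dx=x[1] - x[0], dt=0.005, steps=100)
# After 100 steps the pulse has traveled 0.5 units (broadened by numerical diffusion).
print(x[np.argmax(u)])
```

Production codes like Cholla solve the full compressible hydrodynamics equations in 3D with higher-order methods, but the structure — a stencil update applied to every cell every timestep — is the same pattern that maps so well onto GPUs.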
“We are seeing a roughly 15-fold speedup on Crusher compared with our baseline tests from fall 2019 on Summit,” said Evan Schneider, assistant professor at the University of Pittsburgh and principal investigator of Cholla. “About 3-fold of the improvement is hardware based, and about 5-fold is from software development improvements made through the CAAR project.”
The promising performance on Crusher points toward success on the full Frontier system, which will be operational in the second half of 2022.
LSMS: Materials Modeling

LSMS is a first-principles code used to calculate the properties of materials, including magnetic materials, metallic systems, and alloys. LSMS, another one of the OLCF’s CAAR codes, can perform calculations of the physics of extremely large material systems — more than 100,000 atoms — as determined by the motions of electrons in a solid. The code is currently deployed on Crusher and will soon be capable of scaling to the full Frontier system.
“With Frontier, we will be able to perform LSMS calculations of larger systems and also study new physics,” said ORNL’s Markus Eisenbach, senior computational scientist at the OLCF and principal investigator of LSMS. “Because we will have significantly more computational power available with Frontier, we can actually use physics models that include more correlation effects that we can’t capture as easily on current systems.”
Eisenbach and team are also looking forward to combining classical statistical mechanics — which provides the team with the behavior of materials at different temperatures — with machine learning workflows on Frontier to calculate material behaviors more rapidly.
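The classical statistical mechanics Eisenbach mentions — extracting temperature-dependent material behavior from an energy model — is typically done with Monte Carlo sampling. A minimal sketch, using a 1D Ising chain as a stand-in energy model (not LSMS’s first-principles energies), shows the idea:

```python
import math
import random

def metropolis_ising_1d(n_spins=50, temperature=2.0, steps=20000, seed=1):
    """Metropolis sampling of a 1D Ising chain: always accept flips that
    lower the energy; accept uphill flips with probability exp(-dE/T)."""
    rng = random.Random(seed)
    spins = [1] * n_spins
    for _ in range(steps):
        i = rng.randrange(n_spins)
        left, right = spins[i - 1], spins[(i + 1) % n_spins]
        dE = 2 * spins[i] * (left + right)   # energy change if spin i flips
        if dE <= 0 or rng.random() < math.exp(-dE / temperature):
            spins[i] = -spins[i]
    return sum(spins) / n_spins              # magnetization per spin

print(metropolis_ising_1d(temperature=0.5))  # low T: spins stay ordered
print(metropolis_ising_1d(temperature=5.0))  # high T: thermal disorder
```

In the workflow the team describes, a machine-learning surrogate trained on LSMS energies would replace the cheap toy energy here, so that each Monte Carlo step carries first-principles accuracy at a fraction of the cost.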
NuCCOR: Nuclear Physics from First Principles

NuCCOR is a nuclear physics code in the OLCF’s CAAR program that is used to calculate the properties of atomic nuclei and their reactions. The code is an ab initio quantum many-body application, meaning it calculates atomic nuclear properties “from the beginning” rather than making assumptions about their behavior. NuCCOR is capable of computing properties of large nuclei, breaking new ground to approach nuclear sizes that define the limits of the existence of matter. NuCCOR is currently running on Crusher and is on track to scale to the full Frontier system.
“With Frontier, we arrive at a paradigm shift in nuclear physics,” said ORNL’s Gustav Jansen, computational scientist at the National Center for Computational Sciences and CAAR liaison for NuCCOR. “The use of ab-initio methods, methods that only use the forces between protons and neutrons as input, will no longer be limited by the size of a nucleus, and the whole nuclear chart will be within reach. This will lead the way to more accurate and precise calculations, a better understanding of the fundamental interactions between protons and neutrons, and the exploration of nuclear isotopes that have yet to be discovered.”
During the NuCCOR team’s initial testing on Crusher, the team found that its computational kernels ran up to 8 times faster on one of the AMD Instinct™ MI250X GPUs that power Frontier than on one of Summit’s NVIDIA V100 GPUs.
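Kernel speedups like the one NuCCOR reports are measured by timing the same kernel on each device and taking the ratio of the times. A generic CPU-side sketch of that pattern (the matrix-multiply "kernel" and problem sizes here are stand-ins, not NuCCOR's code):

```python
import time
import numpy as np

def time_kernel(kernel, *args, repeats=5):
    """Return the best wall-clock time over several runs to reduce noise."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        kernel(*args)
        best = min(best, time.perf_counter() - t0)
    return best

# Two problem sizes stand in for the same kernel on two different devices.
a_small = np.ones((200, 200))
a_big = np.ones((400, 400))

t_slow = time_kernel(np.matmul, a_big, a_big)
t_fast = time_kernel(np.matmul, a_small, a_small)
speedup = t_slow / t_fast
print(f"speedup: {speedup:.1f}x")
```

Taking the best of several repeats, rather than the mean, is a common choice for benchmarks because it filters out interference from the operating system and other processes.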
UT-Battelle manages Oak Ridge National Laboratory for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.