
Early Frontier users seize exascale advantage, grapple with grand scientific challenges

The Frontier supercomputer at ORNL remains in the number one spot on the May 2023 TOP500 rankings, with an updated high-performance Linpack score of 1.194 exaflops. Engineers at the Oak Ridge Leadership Computing Facility, which houses Frontier and its predecessor Summit, expect that Frontier’s speeds could ultimately top 1.4 exaflops, or 1.4 quintillion calculations per second. Credit: Carlos Jones/ORNL, U.S. Dept. of Energy

With the world’s first exascale supercomputing system now open to full user operations, research teams are harnessing Frontier’s power and speed to tackle some of the most challenging problems in modern science.

The HPE Cray EX system at the Department of Energy’s Oak Ridge National Laboratory debuted in May 2022 as the fastest computer on the planet and first machine to break the exascale barrier at 1.1 exaflops, or 1.1 quintillion calculations per second. That’s more calculations per second than every human on Earth could perform in four years, assuming they completed one calculation each second.
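That comparison can be checked with a quick back-of-the-envelope calculation. The world-population figure used here (roughly 8 billion) is an assumption not stated in the article:

```python
# Sanity check: how many years would it take every human on Earth,
# each doing one calculation per second, to match ONE second of Frontier?
FRONTIER_CALCS_PER_SEC = 1.1e18   # 1.1 exaflops = 1.1 quintillion calculations/second
WORLD_POPULATION = 8e9            # assumed ~8 billion people (not from the article)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years = FRONTIER_CALCS_PER_SEC / (WORLD_POPULATION * SECONDS_PER_YEAR)
print(f"{years:.1f} years")       # on the order of 4 years, consistent with the claim
```

The result comes out to a little over four years, which is consistent with the article's rounded figure.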

Frontier remains in the number one spot on the May 2023 TOP500 rankings, with an updated HPL, or High-Performance Linpack, score of 1.194 exaflops. The increase of 0.092 exaflops, or 92 petaflops, is by itself equivalent to the eighth most powerful supercomputer on the TOP500 list. Engineers at the Oak Ridge Leadership Computing Facility, which houses Frontier and its predecessor Summit, expect that Frontier’s speeds could ultimately top 1.4 exaflops, or 1.4 quintillion calculations per second.

In addition to the updated HPL number, the Frontier team has improved the system’s score on the High-Performance Linpack–Mixed Precision benchmark, or HPL-MxP, to nearly 10 exaflops. Frontier’s HPL-MxP performance is now 9.950 exaflops, up from 7.9 exaflops in November 2022.

“Frontier represents the culmination of more than a decade of hard work by dedicated professionals from across academia, private business and the national laboratory complex through the Exascale Computing Project to realize a goal that once seemed barely possible,” said Doug Kothe, ORNL’s associate laboratory director for computing and computational sciences. “This machine will shrink the timeline for discoveries that will change the world for the better and touch everyone on Earth.”

Exascale computing’s promise rests on the ability to synthesize massive amounts of data into detailed simulations so complex that previous generations of computers couldn’t process the calculations. The faster the computer, the more possibilities and probabilities can be plugged into the simulation and tested against what’s already known. The process helps researchers target their experiments and fine-tune designs, saving the time and expense of real-world testing while producing results ready for validation.

“I don’t think we can overstate the impact Frontier promises to make for some of these studies,” said Justin Whitt, the OLCF’s director. “The science that will be done on this computer will be fundamentally different from what we have done before with computation. Our early research teams have already begun exploring fundamental questions about everything from nuclear fusion to forecasting earthquakes to building a better combustion engine.”

Some of the studies underway on Frontier include:

  • ExaSMR: Led by ORNL’s Steven Hamilton, this study seeks to cut out the long timelines and high front-end costs of advanced nuclear reactor design and use exascale computing power to simulate modular reactors that would not only be smaller but also safer, more versatile and customizable to sizes beyond the traditional huge reactors that power cities.
  • Exascale Atomistic Capability for Accuracy, Length and Time (EXAALT): This molecular dynamics study, led by Danny Perez of Los Alamos National Laboratory, seeks to transform fundamental materials science for energy by using exascale computing speeds to enable vastly larger, faster and more accurate simulations for such applications as nuclear fission and fusion.
  • Combustion PELE: This study, named for the Hawaiian goddess of fire and led by Jacqueline Chen of Sandia National Laboratories, is designed to simulate the physics inside an internal combustion engine in pursuit of developing cleaner, more efficient engines that would reduce carbon emissions and conserve fossil fuels.
  • Whole Device Model Application (WDMApp): This study, led by Amitava Bhattacharjee of Princeton Plasma Physics Laboratory, is designed to simulate the magnetically confined fusion plasma – a boiling stew of charged nuclear particles hotter than the sun – necessary for the contained reactions to power nuclear fusion technologies for energy production.
  • WarpX: Led by Jean-Luc Vay of Lawrence Berkeley National Laboratory, this study seeks to simulate smaller, more versatile plasma-based particle accelerators, which would enable scientists to design accelerators for applications from radiation therapy to semiconductor chip manufacturing and beyond. The team’s work won the Association for Computing Machinery’s 2022 Gordon Bell Prize, which recognizes outstanding achievement in high-performance computing.
  • ExaSky: This study, led by Salman Habib of Argonne National Laboratory, seeks to expand the size, scope and accuracy of simulations for complex cosmological phenomena, such as dark energy and dark matter, to uncover new insights into the dynamics of the universe.
  • EQSIM: Led by LBNL’s David McCallen, this study is designed to simulate the physics and tectonic conditions that cause earthquakes to enable assessment of areas at risk.
  • Energy Exascale Earth System Model (E3SM): This study, led by Sandia’s Mark Taylor, seeks to enable more accurate and detailed predictions of climate change and its effect on the national and global water cycle by simulating the complex interactions between the large-scale, mostly 2D motions of the atmosphere and the smaller, mostly 3D motions that occur in clouds and storms.
  • Cancer Distributed Learning Environment (CANDLE): Led by Argonne’s Rick Stevens, this study seeks to develop predictive simulations that could help identify and streamline trials for promising cancer treatments, reducing years of lengthy, expensive clinical studies.

“We’ve been carefully fine-tuning Frontier for the past year, and these teams have been our test pilots, helping us see what heights we can reach,” said Bronson Messer, OLCF’s director of science at ORNL. “We’ve just begun to discover where exascale can take us.”

Frontier is an HPE Cray EX system with more than 9,400 nodes, each equipped with a third-generation AMD EPYC CPU and four AMD Instinct MI250X graphics processing units, or GPUs. The OLCF is a DOE Office of Science user facility.

UT-Battelle manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science. – Matt Lakin