The U.S. Department of Energy’s Oak Ridge National Laboratory celebrated the debut of Frontier, the world’s fastest supercomputer and the dawn of the exascale computing era.
Deputy Secretary of Energy David Turk, DOE Office of Science Director Asmeret Asefaw Berhe and U.S. Rep. Chuck Fleischmann joined ORNL Director Thomas Zacharia, ORNL Site Office Director Johnny Moore and computing vendor partners Lisa Su, chair and chief executive officer of AMD, and Antonio Neri, president and CEO of HPE, to congratulate the public-private team that made Frontier’s record-setting performance possible.
“Research that might once have taken weeks to complete, Frontier will tear through in hours, even seconds,” Turk said. “Oak Ridge has positioned the United States to lead the world in solving massive scientific challenges across the board.”
“Exascale computing is a powerful tool that will allow us to advance the core missions of the Office of Science — to deliver scientific discoveries and major scientific tools that will transform our understanding of nature and advance the energy, economic, and national security of the U.S.,” Berhe said. “Frontier makes exascale computing a reality and opens many doors for the future of scientific research to solve big problems.”
Frontier leverages ORNL’s extensive expertise in accelerated computing for open science and will enable researchers to tackle problems of national and global importance deemed impossible to solve as recently as five years ago.
“We are incredibly proud of the team that has made ORNL home to the world’s first exascale computer. This accomplishment was possible due to the strong public-private partnerships between DOE, ORNL, HPE, and AMD,” Zacharia said. “Working with our sister labs and academic partners, Frontier is already delivering science on day one.”
Frontier earned the No. 1 spot on the 59th TOP500 list in May 2022 with 1.1 exaflops of performance – more than a quintillion, or 10¹⁸, calculations per second – making it the fastest computer in the world and the first to achieve exascale.
“As the world’s most powerful AI machine, Frontier’s novel architecture is also ideally suited for delivering unprecedented machine learning and data science insights and automations that could vastly improve our understanding of critical processes, from drug delivery to nuclear fusion to the global climate,” said Doug Kothe, associate laboratory director of ORNL’s Computing and Computational Sciences Directorate and director of the Exascale Computing Project.
“Frontier marks the start of the exascale era for scientific computing,” said Bronson Messer, director of science for ORNL’s Oak Ridge Leadership Computing Facility, which houses Frontier. “The science that’s going to be done on Frontier is going to ignite an explosion of innovation – and of new questions we haven’t even thought of before.”
The new machine also claimed the top spot on the Green500 list, which rates a supercomputer’s energy efficiency in terms of performance per watt. Frontier clocked in at 62.68 gigaflops, or nearly 63 billion calculations, per watt. Frontier also holds the top ranking in the new mixed-precision computing benchmark that rates performance in arithmetic precisions commonly used for artificial intelligence problems.
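Taking the two figures above at face value, aggregate performance and energy efficiency together imply a system power draw, since performance per watt is just flops divided by watts. A back-of-the-envelope sketch (the ~17.5 MW result is an inference from these two published numbers, not a figure stated by ORNL, and assumes both describe the same benchmark run):

```python
# Back-of-the-envelope power estimate from the figures quoted above.
# Assumption (hypothetical): the TOP500 and Green500 numbers describe
# the same measured run, so flops / (flops per watt) = watts.
PEAK_FLOPS = 1.1e18    # 1.1 exaflops (TOP500 result)
EFFICIENCY = 62.68e9   # 62.68 gigaflops per watt (Green500 result)

power_watts = PEAK_FLOPS / EFFICIENCY
print(f"Implied power draw: {power_watts / 1e6:.1f} MW")  # roughly 17.5 MW
```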
“This is a very important milestone for the nation and the world,” said Gina Tourassi, director of ORNL’s National Center for Computational Sciences, which oversees the OLCF. “The computational models we can build with this computer will help us fill in missing pieces of the puzzle for a range of scientific inquiries, from matter and energy to life itself, and will give the next generation of scientists the tools and the springboard they need to make even greater leaps of understanding.”
ORNL’s scientific partners, such as General Electric Aviation and GE Power, plan to leverage the power of Frontier to revolutionize the future of flight with sustainable hydrogen propulsion and hybrid electric technologies and to maximize the potential of clean-energy technologies such as wind power.
“GE Aerospace and Research will be using exascale computing, including time on the Frontier supercomputer, to revolutionize the future of flight with sustainable hydrogen propulsion and hybrid electric technologies,” said David Kepczynski, chief information officer at GE Research. “In pursuit of a net-zero carbon future, exascale supercomputing systems will be indispensable tools for GE researchers and engineers working at the cutting edge to ‘Build a World that Works.’”
The work to deliver, install and test Frontier began in the midst of the COVID-19 pandemic, as shutdowns around the world strained international supply chains. More than 100 team members worked around the clock to source millions of components, ensure timely deliveries of system parts, and carefully install and test 74 HPE Cray EX cabinets that include more than 9,400 AMD-powered nodes and 90 miles of interconnect cables.
“Frontier is a landmark in computing that will usher in a new era of insights and innovation,” said Antonio Neri, president and CEO of HPE. “We are proud of this massive achievement that will help make significant contributions to science, push the envelope for artificial intelligence, and strengthen U.S. industrial competitiveness. Frontier was made possible through powerful engineering and design, and most importantly, through a strong partnership between Oak Ridge National Laboratory, HPE and AMD.”
Each of Frontier’s more than 9,400 nodes is equipped with a third-generation AMD EPYC CPU and four AMD Instinct MI250X graphics processing units, or GPUs. Combining traditional CPUs with GPUs to accelerate the performance of leadership-class scientific supercomputers exemplifies the hybrid computing paradigm pioneered by ORNL and its partners.
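The per-node configuration described here (one CPU, four GPUs) can be checked for consistency against the system-wide totals given later in this article; the exact node count of 9,472 is implied by those totals rather than stated directly. A quick sketch:

```python
# Consistency check on the node counts quoted in this article.
# Assumption: the configuration is uniform across the whole system.
CPUS_PER_NODE = 1   # third-generation AMD EPYC
GPUS_PER_NODE = 4   # AMD Instinct MI250X

nodes = 9472  # implied by the system-wide totals of 9,472 CPUs / 37,888 GPUs

total_cpus = nodes * CPUS_PER_NODE
total_gpus = nodes * GPUS_PER_NODE
print(total_cpus, total_gpus)  # 9472 37888
```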
“At its heart, Frontier highlights the importance of long-term public-private partnerships and the important role high performance computing plays in advancing scientific research and national security,” said Lisa Su, chair and CEO of AMD. “I am excited to see Frontier enable large-scale science research that was previously not possible, leading to new discoveries in physics, medicine, climate research and energy that will transform our daily lives.”
“This project marks the culmination of more than three years of effort by hundreds of dedicated ORNL professionals and their counterparts at HPE and AMD and across the DOE community,” said Justin Whitt, director of the OLCF. “Their hard work will enable scientists around the world to begin their explorations on Frontier. At the OLCF, we’re proud of our legacy of world-leading computing excellence.”
ORNL and its partners are on schedule as they continue the stand-up of Frontier. Next steps include additional testing and validation of the system, which remains on track for final acceptance and early science access later in 2022. Full access for science applications is expected at the beginning of 2023.
Facts about Frontier
The Frontier supercomputer includes some of the world’s most advanced technologies from AMD and HPE.
- Each node contains one optimized third-generation AMD EPYC processor and four AMD Instinct MI250X accelerators, for a system-wide total of 9,472 CPUs and 37,888 GPUs. Because the EPYC processors and Instinct accelerators share coherent memory, these nodes are easier to program.
- HPE’s Slingshot interconnect is the world’s only high-performance Ethernet fabric designed for HPC and AI solutions. By linking core components such as CPUs, GPUs, and high-performance storage, Slingshot supports larger data-intensive workloads that would otherwise be bandwidth limited, and its higher speeds and congestion control keep applications running smoothly. Because of this unique configuration and expanded performance, teams took a deliberate approach to scaling the interconnect across a supercomputer as large as Frontier, with its 74 HPE Cray EX cabinets, to ensure reliable performance across applications.
- An I/O subsystem from HPE is being brought online this year to support Frontier and the OLCF. It features an in-system storage layer and Orion, a Lustre-based, center-wide file system. The in-system storage layer will use compute-node local storage devices connected via PCIe Gen4 links to provide peak read speeds of more than 75 terabytes per second, peak write speeds of more than 35 terabytes per second, and more than 15 billion random-read input/output operations per second. The Orion center-wide file system will provide around 700 petabytes of storage capacity and peak write speeds of 5 terabytes per second.
- As a next-generation supercomputing system and the world’s fastest for open science, Frontier is liquid cooled. This cooling approach makes for a quieter data center by eliminating the need for noisier air cooling.
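The storage figures in the list above are easier to grasp as a time scale. For example, at Orion’s quoted peak write rate, writing its full capacity would take on the order of a day and a half; this is a rough illustration only, since real workloads do not sustain peak rates:

```python
# Rough time-to-fill for the Orion file system, using the peak figures above.
# Assumption (illustrative only): writes are sustained at the quoted peak
# rate, which real workloads will not reach.
CAPACITY_BYTES = 700e15   # ~700 petabytes
PEAK_WRITE_BPS = 5e12     # 5 terabytes per second

seconds = CAPACITY_BYTES / PEAK_WRITE_BPS
print(f"{seconds:,.0f} s ≈ {seconds / 3600:.0f} hours")  # 140,000 s ≈ 39 hours
```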
UT-Battelle manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.