Abstract
In May 2022, the newest supercomputer to top the TOP500 list was Frontier at Oak Ridge National Laboratory, demonstrating the capability of computing more than 1.1 quintillion (10^18) floating-point calculations every second. Driving this ground-breaking rate of computation are Frontier’s more than 37,000 graphics processing units (GPUs) and 9,408 central processing units (CPUs). In total, Frontier contains more than 60 million parts. At this scale, even the smallest margin of error may generate hundreds of hardware errors across the system, and if left undetected, these errors can directly hinder the world-class science performed on Frontier. In this work, we describe and evaluate two strategies for finding hardware-level faults in Frontier’s 9,408 compute nodes. The first strategy uses the Slurm scheduler to scavenge available compute time to run the node screen; the second builds upon the lessons learned from the first and enforces a weekly screen of each node. Using June 2023 as a case study, we find that the first scheduling strategy consumed more than ten times the resources of the second, but successfully detected five hardware defects in Frontier. We summarize the lessons learned while developing and running a node screen on the world’s first exascale supercomputer.