Ryan Culler stands beside a poster board displaying his research.

Ryan Culler is the program manager at Oak Ridge National Laboratory, where he oversees the production of actinium-225, a promising isotope for cancer treatment. Driven by a personal connection to cancer through his late brother, Culler is dedicated to advancing medical isotopes to help improve cancer care.

Summit Supercomputer

Scientists conducted a groundbreaking study of genetic data from more than half a million U.S. veterans, using tools from Oak Ridge National Laboratory to analyze 2,068 traits from the Million Veteran Program.

Magnetic domains in uranium, rendered as blue and orange organic shapes resembling lava flowing through water.

Nuclear nonproliferation is a central U.S. security mission, and ORNL plays a key role in it. The lab conducts advanced research in uranium science, materials analysis and nuclear forensics to detect illicit nuclear activities. Using cutting-edge tools and operational systems, ORNL supports global efforts to reduce nuclear threats by uncovering the history of nuclear materials and providing solutions for uranium removal.

ORNL computing staff members Hector Suarez (middle) and William Castillo (right) talk HPC at the Tapia Conference career fair in San Diego, California. Credit: ORNL, U.S. Dept of Energy

The National Center for Computational Sciences, located at the Department of Energy’s Oak Ridge National Laboratory, made a strong showing at computing conferences this fall. Staff from across the center participated in numerous workshops and invited speaking engagements.

Wide shot of the expo floor, crowded with attendees walking beneath a green, white and blue 3D circular sign suspended at the center.

The Department of Energy’s Oak Ridge National Laboratory had a major presence at this year’s International Conference for High Performance Computing, Networking, Storage, and Analysis (SC24). 

A small sample from the Frontier simulations reveals the evolution of the expanding universe in a region containing a massive cluster of galaxies from billions of years ago to present day (left).

In early November, researchers at the Department of Energy’s Argonne National Laboratory used the fastest supercomputer on the planet to run the largest astrophysical simulation of the universe ever conducted. The achievement was made using the Frontier supercomputer at Oak Ridge National Laboratory. 

Black computing cabinets in a row on a white floor in the data center that houses the Frontier supercomputer at Oak Ridge National Laboratory

Two-and-a-half years after breaking the exascale barrier, the Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory continues to set new standards for its computing speed and performance.

Graphic representation of an AI model that identifies proteins.

Researchers used the world’s fastest supercomputer, Frontier, to train an AI model that designs proteins, with applications in fields like vaccines, cancer treatments, and environmental bioremediation. The study earned a finalist nomination for the Gordon Bell Prize, recognizing innovation in high-performance computing for science.

Nine scientists stand in a line in front of the Frontier supercomputer logo.

Researchers at Oak Ridge National Laboratory used the Frontier supercomputer to train the world’s largest AI model for weather prediction, paving the way for hyperlocal, ultra-accurate forecasts. This achievement earned them a finalist nomination for the prestigious Gordon Bell Prize for Climate Modeling.

Nine men pose for a group photo in front of a window, five standing and four seated.

A research team led by the University of Maryland has been nominated for the Association for Computing Machinery’s Gordon Bell Prize. The team is being recognized for developing a scalable, distributed training framework called AxoNN, which leverages GPUs to rapidly train large language models.