
Scientists designing ITER, intended to be the world’s first controlled nuclear fusion power plant, needed to solve the problem of runaway electrons: negatively charged particles that form in the plasma inside the tokamak, the magnetic bottle designed to contain the enormous energy produced. Simulations performed on Summit, the 200-petaflop supercomputer at ORNL, could offer the first step toward a solution.

The US focuses on nuclear nonproliferation, and ORNL plays a key role in this mission. The lab conducts advanced research in uranium science, materials analysis and nuclear forensics to detect illicit nuclear activities. Using cutting-edge tools and operational systems, ORNL supports global efforts to reduce nuclear threats by uncovering the history of nuclear materials and providing solutions for uranium removal.

The National Center for Computational Sciences, located at the Department of Energy’s Oak Ridge National Laboratory, made a strong showing at computing conferences this fall. Staff from across the center participated in numerous workshops and invited speaking engagements.

FREDA, a new tool under development at ORNL, will accelerate the design and testing of next-generation fusion devices. It is the first tool of its kind to combine plasma and engineering modeling capabilities and to harness high-performance computing resources.

The Department of Energy’s Oak Ridge National Laboratory had a major presence at this year’s International Conference for High Performance Computing, Networking, Storage, and Analysis (SC24).

In early November, researchers at the Department of Energy’s Argonne National Laboratory ran the largest astrophysical simulation of the universe ever conducted. The achievement was made using Frontier, the fastest supercomputer on the planet, at Oak Ridge National Laboratory.

Two-and-a-half years after breaking the exascale barrier, the Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory continues to set new standards for its computing speed and performance.

Researchers used the world’s fastest supercomputer, Frontier, to train an AI model that designs proteins, with applications in fields like vaccines, cancer treatments, and environmental bioremediation. The study earned a finalist nomination for the Gordon Bell Prize, recognizing innovation in high-performance computing for science.

Researchers at Oak Ridge National Laboratory used the Frontier supercomputer to train the world’s largest AI model for weather prediction, paving the way for hyperlocal, ultra-accurate forecasts. This achievement earned them a finalist nomination for the prestigious Gordon Bell Prize for Climate Modeling.

A research team led by the University of Maryland has been nominated for the Association for Computing Machinery’s Gordon Bell Prize. The team is being recognized for developing a scalable, distributed training framework called AxoNN, which leverages GPUs to rapidly train large language models.