Researchers from ORNL have developed a new application to increase efficiency in memory systems for high-performance computing. Rather than allowing data to bog down traditional memory systems in supercomputers and degrade performance, the team from ORNL, along with researchers from the University of Tennessee, Knoxville, created a framework to manage data more efficiently on memory systems that employ more complex structures.

To bridge the gap between experimental facilities and supercomputers, experts from SLAC National Accelerator Laboratory are teaming up with other DOE national laboratories to build a new data streaming pipeline. The pipeline will allow researchers to send their data to the nation’s leading computing centers for analysis in real time even as their experiments are taking place.
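
The announcement does not spell out how the pipeline is implemented, but the core pattern of data streaming is to push data toward the computing center as it is acquired, rather than writing files and transferring them after the run. The sketch below illustrates only that pattern on the experiment side; ZeroMQ, the endpoint address, and the acquire_frame helper are assumptions made for illustration, not details of the SLAC-led pipeline.

```python
# Hypothetical producer side of a real-time streaming pipeline: push
# detector frames to a remote analysis endpoint as they are acquired.
# ZeroMQ, the endpoint address, and acquire_frame() are illustrative
# assumptions, not details taken from the SLAC/DOE effort.
import numpy as np
import zmq

ANALYSIS_ENDPOINT = "tcp://hpc-ingest.example.gov:5555"  # hypothetical address


def acquire_frame() -> np.ndarray:
    """Stand-in for reading one frame from the instrument."""
    return np.random.rand(512, 512).astype(np.float32)


def stream_frames(num_frames: int = 100) -> None:
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUSH)        # frames flow one way, toward analysis
    sock.connect(ANALYSIS_ENDPOINT)
    for i in range(num_frames):
        frame = acquire_frame()
        # Send a small JSON header describing the frame, then the raw bytes,
        # so the receiver can reconstruct the array without guessing.
        sock.send_json({"frame": i,
                        "dtype": str(frame.dtype),
                        "shape": list(frame.shape)}, flags=zmq.SNDMORE)
        sock.send(frame.tobytes())
    sock.close(linger=0)               # don't block on undelivered messages
    ctx.term()


if __name__ == "__main__":
    stream_frames()
```

A matching receiver at the computing center would use a PULL socket, rebuild each frame from the header and raw bytes (for example with numpy.frombuffer), and hand it directly to the analysis code while the experiment is still running.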

John Lagergren, a staff scientist in Oak Ridge National Laboratory’s Plant Systems Biology group, is using his expertise in applied math and machine learning to develop neural networks to quickly analyze the vast amounts of data on plant traits amassed at ORNL’s Advanced Plant Phenotyping Laboratory.

Integral to the functionality of ORNL's Frontier supercomputer is its ability to store the vast amounts of data it produces on its file system, Orion. But even more important to the computational scientists running simulations on Frontier is the ability to quickly write to and read from Orion, and to analyze all that data effectively. And that’s where ADIOS comes in.
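
ADIOS, the Adaptable I/O System developed at ORNL, gives applications a step-based interface for writing and reading such data. As a rough illustration only, here is a minimal single-process write using the ADIOS2 Python bindings; the variable name, array size, and output file are invented for the example, the exact Python API varies between ADIOS2 releases, and production codes on Frontier drive this through MPI at vastly larger scale.

```python
# A minimal sketch of writing one simulation array with the ADIOS2 Python
# bindings. The variable name, array size, and output file are illustrative;
# real Frontier applications do this under MPI across many ranks.
import numpy as np
import adios2

data = np.linspace(0.0, 1.0, 1000)            # stand-in for simulation output

adios = adios2.ADIOS()
io = adios.DeclareIO("SimulationOutput")
# Global shape, local start, and local count coincide for a single writer.
var = io.DefineVariable("temperature", data, [data.size], [0], [data.size])

engine = io.Open("output.bp", adios2.Mode.Write)
engine.BeginStep()
engine.Put(var, data)                          # stage the array for output
engine.EndStep()                               # data is written at step end
engine.Close()
```

Reading back follows the same begin-step/end-step pattern with adios2.Mode.Read and engine.Get, which is how analysis tools pull data off a file system such as Orion.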


To support the development of a revolutionary new open fan engine architecture for the future of flight, GE Aerospace has run simulations using the world’s fastest supercomputer, which is capable of crunching data in excess of exascale speed, or more than a quintillion calculations per second.

The old photos show her casually writing data in a logbook with stacks of lead bricks nearby, or sealing a vacuum chamber with a wrench. ORNL researcher Frances Pleasonton was instrumental in some of the earliest explorations of the properties of the neutron as the X-10 Site was finding its postwar footing as a research lab.

For nearly six years, the Majorana Demonstrator quietly listened to the universe. Nearly a mile underground at the Sanford Underground Research Facility, or SURF, in Lead, South Dakota, the experiment collected data that could answer one of the most perplexing questions in physics: Why is the universe filled with something instead of nothing?

Scientists at the Department of Energy’s Oak Ridge National Laboratory are leading a new project to ensure that the fastest supercomputers can keep up with big data from high energy physics research.

The Earth System Grid Federation, a multi-agency initiative that gathers and distributes data for top-tier projections of the Earth’s climate, is preparing a series of upgrades.