Researchers at ORNL have developed a new application to increase the efficiency of memory systems for high-performance computing. Rather than let data bog down traditional memory systems in supercomputers and degrade performance, the ORNL team, together with researchers from the University of Tennessee, Knoxville, created a framework that manages data more efficiently on memory systems that employ more complex structures.
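
The teaser does not describe the framework itself, but the general idea of managing data across a layered memory system can be sketched. The toy Python store below is entirely hypothetical (not the ORNL/UTK framework): every array lives in a large capacity tier, and frequently accessed arrays are promoted into a small fast tier.

```python
import numpy as np

# Hypothetical two-tier memory store: a small fast tier (think HBM) in front of
# a larger capacity tier (think DDR or nonvolatile memory). Sizes are invented.
FAST_CAPACITY = 4  # number of arrays the fast tier can hold

class TieredStore:
    def __init__(self):
        self.fast = {}       # hot data, fast tier
        self.capacity = {}   # everything, capacity tier
        self.hits = {}       # access counts used as a simple "hotness" signal

    def put(self, key, array):
        self.capacity[key] = array
        self.hits[key] = 0

    def get(self, key):
        self.hits[key] += 1
        if key in self.fast:
            return self.fast[key]
        # Promote frequently accessed arrays; evict the coldest resident if full.
        if self.hits[key] >= 2:
            if len(self.fast) >= FAST_CAPACITY:
                coldest = min(self.fast, key=self.hits.get)
                del self.fast[coldest]
            self.fast[key] = self.capacity[key]
        return self.capacity[key]

store = TieredStore()
store.put("pressure_field", np.zeros((1024, 1024)))
for _ in range(3):
    store.get("pressure_field")   # repeated access promotes the array
print("promoted to fast tier:", "pressure_field" in store.fast)
```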

To bridge the gap between experimental facilities and supercomputers, experts from SLAC National Accelerator Laboratory are teaming up with other DOE national laboratories to build a new data streaming pipeline. The pipeline will allow researchers to send their data to the nation’s leading computing centers for analysis in real time even as their experiments are taking place.
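
As a rough sketch of the streaming pattern involved (not the SLAC/DOE pipeline itself), the Python example below uses a bounded queue as a stand-in for the link between an instrument and a computing center: a producer thread emits detector frames during the run, and a consumer analyzes each frame as soon as it arrives.

```python
import queue
import threading
import time

import numpy as np

stream = queue.Queue(maxsize=8)  # bounded buffer standing in for the network link

def detector(n_frames=5):
    """Stand-in for an instrument emitting frames while the experiment runs."""
    for _ in range(n_frames):
        stream.put(np.random.rand(512, 512))  # one detector frame
        time.sleep(0.1)                       # pacing of the experiment
    stream.put(None)                          # end-of-run marker

def analyzer():
    """Stand-in for the computing center analyzing frames as they arrive."""
    while (frame := stream.get()) is not None:
        print(f"frame received, mean intensity = {frame.mean():.4f}")

producer = threading.Thread(target=detector)
producer.start()
analyzer()
producer.join()
```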

John Lagergren, a staff scientist in Oak Ridge National Laboratory’s Plant Systems Biology group, is using his expertise in applied math and machine learning to develop neural networks that quickly analyze the vast amounts of plant trait data amassed at ORNL’s Advanced Plant Phenotyping Laboratory.
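
As a hedged illustration of the kind of model involved (the networks actually used at the phenotyping lab are not described here), the PyTorch snippet below defines a tiny convolutional network that maps a plant image to a few continuous trait values; the layer sizes, trait count, and input resolution are placeholders.

```python
import torch
from torch import nn

class TraitRegressor(nn.Module):
    """Toy CNN mapping an RGB plant image to continuous trait estimates."""
    def __init__(self, n_traits=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_traits)  # e.g., height, leaf area, biomass, greenness

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TraitRegressor()
images = torch.randn(8, 3, 224, 224)   # synthetic stand-in for phenotyping images
predicted_traits = model(images)       # shape: (8, 4)
```
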
Integral to the functionality of ORNL’s Frontier supercomputer is its ability to store the vast amounts of data it produces on its file system, Orion. But even more important to the computational scientists running simulations on Frontier is the ability to write to and read from Orion quickly and to analyze all that data effectively. And that’s where ADIOS comes in.
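
A minimal sketch of the step-based write pattern that ADIOS is built around, using the ADIOS2 Python bindings roughly as they appear in the 2.x releases (the module layout and call signatures have shifted between versions, so treat the exact names as approximate); the variable name, array size, and step count are placeholders.

```python
import numpy as np
import adios2

field = np.zeros((64, 64), dtype=np.float64)   # stand-in for one rank's output

adios = adios2.ADIOS()
io = adios.DeclareIO("simulation-output")
var = io.DefineVariable("field", field,
                        [64, 64],              # global shape
                        [0, 0],                # this rank's offset
                        [64, 64],              # this rank's block size
                        adios2.ConstantDims)

engine = io.Open("field.bp", adios2.Mode.Write)
for step in range(10):
    field[:] = step                            # update the field each timestep
    engine.BeginStep()
    engine.Put(var, field)                     # buffered, flushed at EndStep
    engine.EndStep()
engine.Close()
```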

To support the development of a revolutionary new open fan engine architecture for the future of flight, GE Aerospace has run simulations on the world’s fastest supercomputer, capable of crunching data in excess of exascale speed, or more than a quintillion calculations per second.

For decades, scientists sought a way to apply the outstanding analytical capabilities of neutrons to materials under pressures approaching those surrounding the Earth’s core.

The Earth System Grid Federation, a multi-agency initiative that gathers and distributes data for top-tier projections of the Earth’s climate, is preparing a series of upgrades.

In the race to identify solutions to the COVID-19 pandemic, researchers at the Department of Energy’s Oak Ridge National Laboratory are joining the fight by applying expertise in computational science, advanced manufacturing, data science and neutron science.

Researchers across the scientific spectrum crave data, as it is essential to understanding the natural world and, by extension, to accelerating scientific progress.

For nearly three decades, scientists and engineers across the globe have worked on the Square Kilometre Array (SKA), a project focused on designing and building the world’s largest radio telescope. Although the SKA will collect enormous amounts of precise astronomical data in record time, scientific breakthroughs will only be possible with systems able to efficiently process that data.