![ORNL’s Steven Young (left) and Travis Johnston used Titan to prove the design and training of deep learning networks could be greatly accelerated with a capable computing system.](/sites/default/files/styles/list_page_thumbnail/public/news/images/RAvENNA%20release%20pic.png?itok=2bDpK5Mo)
A team of researchers from the Department of Energy’s Oak Ridge National Laboratory has combined artificial intelligence and high-performance computing to achieve a peak speed of 20 petaflops in the generation and training of deep learning networks on the