
A team from ORNL, Stanford University and Purdue University developed and demonstrated a novel, fully functional quantum local area network, or QLAN, to enable real-time adjustments to information shared with geographically isolated systems at ORNL.

Oak Ridge National Laboratory’s MENNDL AI software system can design thousands of neural networks in a matter of hours. One example uses a driving simulator to evaluate a network’s ability to perceive objects under various lighting conditions. Credit: ORNL, U.S. Dept. of Energy

The Department of Energy’s Oak Ridge National Laboratory has licensed its award-winning artificial intelligence software system, the Multinode Evolutionary Neural Networks for Deep Learning, to General Motors for use in vehicle technology and design.
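
MENNDL itself is not public, but the evolutionary search its name describes can be illustrated in miniature: keep a population of candidate network designs, score each one, retain the fittest, and mutate them to produce the next generation. The Python sketch below is a hypothetical toy; the search space, the stand-in fitness function, and every name in it are invented for illustration, and in the real system each candidate would be trained and evaluated on its own compute node.

```python
import random

# Hypothetical search space; the real space covers layer types, kernel
# sizes, and many other hyperparameters. These names are illustrative only.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8, 16],
    "filters": [16, 32, 64, 128],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_genome():
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

def mutate(genome):
    # Copy a parent and re-roll one randomly chosen hyperparameter.
    child = dict(genome)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def fitness(genome):
    # Stand-in for training the candidate network and returning its
    # validation accuracy; this is where the expensive training run would go.
    return -abs(genome["num_layers"] - 8) - abs(genome["filters"] - 64) / 64

def evolve(pop_size=20, generations=10):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]       # keep the fittest quarter
        children = [mutate(random.choice(parents))  # refill by mutation
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())
```

The "multinode" part of the name is the key engineering point: because every fitness evaluation is an independent training run, the population can be farmed out across thousands of nodes at once.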

ORNL has modeled the spike protein that binds the novel coronavirus to a human cell for better understanding of the dynamics of COVID-19. Credit: Stephan Irle/ORNL, U.S. Dept. of Energy

To better understand the spread of SARS-CoV-2, the virus that causes COVID-19, Oak Ridge National Laboratory researchers have harnessed the power of supercomputers to accurately model the spike protein that binds the novel coronavirus to a human cell receptor.

Graphical representation of a deuteron, the bound state of a proton (red) and a neutron (blue). Credit: Andy Sproles/Oak Ridge National Laboratory, U.S. Dept. of Energy.

Scientists at the Department of Energy’s Oak Ridge National Laboratory are the first to successfully simulate an atomic nucleus using a quantum computer. The results, published in Physical Review Letters, demonstrate the ability of quantum systems to compute nuclear physics.
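
The published experiment used a variational approach: prepare a parameterized trial state on the quantum processor, measure the expectation value of the nuclear Hamiltonian, and minimize over the parameter. The NumPy sketch below emulates that loop classically on a two-qubit Hamiltonian; the coefficients are illustrative stand-ins, not the paper's, and the variational minimum is checked against exact diagonalization.

```python
import numpy as np

# Single-qubit Pauli matrices.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])

def kron(a, b):
    return np.kron(a, b)

# Illustrative two-qubit Hamiltonian written in the Pauli basis; these
# coefficients are invented for the example, not taken from the paper.
H = (5.9 * kron(I, I) + 0.22 * kron(Z, I) - 6.1 * kron(I, Z)
     - 2.14 * (kron(X, X) + kron(Y, Y)))

def ansatz(theta):
    # One-parameter trial state spanning |01> and |10>, the subspace
    # where this Hamiltonian's ground state lives.
    state = np.zeros(4, dtype=complex)
    state[1] = np.cos(theta)   # amplitude on |01>
    state[2] = np.sin(theta)   # amplitude on |10>
    return state

def energy(theta):
    psi = ansatz(theta)
    return np.real(psi.conj() @ H @ psi)

# On hardware, each energy(theta) would be estimated from repeated
# measurements; here we simply scan the parameter classically.
thetas = np.linspace(0, np.pi, 400)
best = min(thetas, key=energy)
print("variational minimum:", energy(best))
print("exact ground state: ", np.linalg.eigvalsh(H)[0])
```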

ORNL’s Steven Young (left) and Travis Johnston used Titan to prove the design and training of deep learning networks could be greatly accelerated with a capable computing system.

A team of researchers from the Department of Energy’s Oak Ridge National Laboratory has married artificial intelligence and high-performance computing to achieve a peak speed of 20 petaflops in the generation and training of deep learning networks on the laboratory’s Titan supercomputer.
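
The workhorse pattern behind multi-node training is synchronous data parallelism: each node computes a gradient on its shard of the batch, and the gradients are averaged (an MPI all-reduce on a real machine) before the shared weights are updated. Below is a minimal single-process simulation of that pattern, on a toy linear model standing in for a deep network; all names in it are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression problem standing in for a deep network.
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(512, 2))
y = X @ true_w + 0.1 * rng.normal(size=512)

def gradient(w, Xb, yb):
    # Mean-squared-error gradient for one worker's shard of the batch.
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

def parallel_sgd_step(w, X, y, num_workers=8, lr=0.1):
    # Synchronous data parallelism: each worker gets a shard, computes a
    # local gradient, and the gradients are averaged -- the step that an
    # all-reduce performs across nodes on a real machine.
    shards = zip(np.array_split(X, num_workers), np.array_split(y, num_workers))
    grads = [gradient(w, Xb, yb) for Xb, yb in shards]
    return w - lr * np.mean(grads, axis=0)

w = np.zeros(2)
for step in range(100):
    w = parallel_sgd_step(w, X, y)
print("recovered weights:", w)   # should approach [2, -3]
```

Because the shards are equal sizes, the averaged gradient equals the full-batch gradient, so adding workers changes only where the arithmetic happens, not the trajectory of the optimization.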