
ORNL researchers developed a novel nonlinear level set learning method to reduce dimensionality in high-dimensional function approximation.
The team conducted numerical studies to demonstrate the connection between the parameters of neural networks and the stochastic stability of DMMs.
A research team from ORNL and Pacific Northwest National Laboratory has developed a deep variational framework to learn an approximate posterior for uncertainty quantification.
Estimating complex, nonlinear model states and parameters from uncertain systems of equations and noisy observation data is a key challenge in mathematical modeling, one that current filtering methods struggle to address.
ORNL researchers developed a stochastic approximate gradient ascent method to reduce posterior uncertainty in Bayesian experimental design involving implicit models.
Deep neural networks—a form of artificial intelligence—have demonstrated mastery of tasks once thought uniquely human.
A team of researchers from Oak Ridge National Laboratory has been awarded nearly $2 million over three years from the Department of Energy to explore the potential of machine learning in revolutionizing scientific data analysis.
Supercomputers like Oak Ridge National Laboratory’s Titan are advancing science at a frenetic pace and helping researchers make sense of data that could have easily been missed, says Ramakrishnan “Ramki” Kannan.
From machine learning to neuromorphic architectures that enable greater computing flexibility and utility, Oak Ridge National Laboratory researchers are pushing boundaries with Titan.