Verification, Validation and Uncertainty Quantification for Machine Learning Systems


Data analytics and machine learning applications are growing in popularity and use, yet there are few means and metrics for determining the reliability of these codes. Effective verification and validation (V&V) follows a software engineering process: requirements are defined, goals and performance objectives are set, and test activities are identified at the outset of the project and continued throughout the software lifecycle. Some standard V&V activities can be applied, without change, to data analytics and machine learning (DAML) software; however, the non-deterministic nature of DAML results requires additional approaches to verify, validate, and evaluate these algorithms. This project explored and identified processes and techniques for V&V of DAML software.
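To illustrate why non-determinism calls for approaches beyond single-run testing, the sketch below evaluates a toy stochastic classifier over many random seeds and summarizes the resulting accuracy distribution, rather than accepting or rejecting the software on one run. This is a minimal, hypothetical example (the classifier, flip rate, and tolerance band are all invented for illustration), not the project's actual V&V method.

```python
import random
import statistics

def noisy_classifier(x, rng):
    """Toy non-deterministic classifier: thresholds x at 0.5,
    but flips the label 10% of the time to simulate stochasticity."""
    label = 1 if x >= 0.5 else 0
    if rng.random() < 0.10:
        label = 1 - label
    return label

def accuracy_for_seed(seed, data):
    """Accuracy of one run of the classifier under a fixed seed."""
    rng = random.Random(seed)
    correct = sum(1 for x, y in data if noisy_classifier(x, rng) == y)
    return correct / len(data)

def repeated_trial_accuracy(data, n_trials=30):
    """Run the classifier under many seeds and summarize the accuracy
    distribution, rather than trusting any single run."""
    accs = [accuracy_for_seed(seed, data) for seed in range(n_trials)]
    return statistics.mean(accs), statistics.stdev(accs)

if __name__ == "__main__":
    # Synthetic labeled data: true label is 1 iff x >= 0.5.
    data = [(i / 100.0, 1 if i >= 50 else 0) for i in range(100)]
    mean_acc, std_acc = repeated_trial_accuracy(data)
    # With a 10% flip rate the expected accuracy is about 0.90; a V&V
    # acceptance check tests the *distribution* against a tolerance band.
    print(f"mean accuracy {mean_acc:.3f} +/- {std_acc:.3f}")
```

A statistical acceptance criterion would then check, for example, that the mean accuracy over trials falls inside a pre-specified tolerance band (here, roughly 0.85 to 0.95), which is the kind of repeated-trial evaluation that single-run pass/fail testing cannot provide.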

Related Publications

Pullum, Laura L. Phase II Report: Verification, Validation, and Uncertainty Quantification for Predictive Systems. ORNL/TR-2016. Oak Ridge, TN: Oak Ridge National Laboratory. 2016.

Ramanathan, A., L. L. Pullum, S. Jha, et al. "Statistical Methods for Testing Intelligent Systems: Applications to Machine Learning and Computer Vision." IEEE Design, Automation & Test in Europe (DATE), 2016.

Pullum, Laura L. and Arvind Ramanathan. Quantitative Approaches to Verify and Validate Anomaly Detection Algorithms. ORNL/LTR-2015/589. Oak Ridge, TN: Oak Ridge National Laboratory. 2015.