Bayesian Monte Carlo Evaluation of Imperfect (n, 233U) Data and Model

by Jesse M Brown, Dorothea A Wiarda, Klaus H Guber, Andrew Holcomb, Vladimir Sobes
Publication Type
Conference Paper
Book Title
15th International Conference on Nuclear Data for Science and Technology
Page Numbers
1 to 3
Conference Name
15th International Conference on Nuclear Data for Science and Technology (ND2022)
Conference Location
Sacramento (virtual), California, United States of America

Conventional nuclear data evaluation methods using generalized linear least squares make the following assumptions: prior and posterior probability distribution functions (PDFs) of all model parameters and data are normal (Gaussian); the linear approximation is sufficiently accurate to minimize the cost function (even for nonlinear models); and the model (e.g., of a neutron cross section) and experimental data (including covariance data) are without defect, with prior PDFs of parameters and measured data known perfectly. Neglecting covariance between model parameters and measured data in conventional evaluations contributes further imperfections. These assumptions are inherent to the generalized linear least squares minimization commonly used for resolved resonance region neutron cross section evaluations, but they are often unjustified because of non-normal PDFs, nonlinear models (e.g., the R-matrix formalism), and inherent imperfections in data and models (e.g., imperfect covariance data). Here, these assumptions are removed in a mathematical framework based on Bayes’ theorem, implemented with the Metropolis-Hastings Monte Carlo method. Most importantly, new parameters are introduced to parameterize discrepancies between the theoretical model and measured data, quantifying judgement about discrepancies or imperfections in a reproducible manner. An evaluation of 233U in the eV region using the ENDF/B-VIII.0 library and transmission data (Guber et al.) is presented, and posterior parameters are compared with those obtained by conventional evaluation methods. This example illustrates the effect of removing the most harmful assumption: that of model-data perfection.
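The sampling approach described above can be illustrated with a minimal sketch: a Metropolis-Hastings sampler for a toy one-parameter model augmented with a multiplicative model-defect parameter. Everything here is hypothetical (the 1/sqrt(E) "cross section" shape, the synthetic data, the prior widths); it is not the authors' code or the Guber et al. data, and it only shows the general structure of sampling a posterior that includes a defect parameter with its own prior.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical cross-section-like model: sigma(E) ~ a / sqrt(E).
# The defect parameter d rescales the model to absorb model-data discrepancy.
def model(a, d, energies):
    return (1.0 + d) * a / np.sqrt(energies)

# Synthetic "measured" data (illustrative only, not real transmission data).
energies = np.linspace(1.0, 10.0, 20)
data = model(5.0, 0.05, energies) + rng.normal(0.0, 0.1, energies.size)
unc = np.full_like(data, 0.1)

def log_posterior(a, d):
    # Gaussian likelihood plus priors: broad normal on a, narrow normal on d.
    # The defect prior encodes judgement about allowed model imperfection.
    resid = (data - model(a, d, energies)) / unc
    log_like = -0.5 * np.sum(resid**2)
    log_prior = -0.5 * ((a - 5.0) / 2.0) ** 2 - 0.5 * (d / 0.1) ** 2
    return log_like + log_prior

# Metropolis-Hastings with a symmetric Gaussian proposal.
samples = []
theta = np.array([4.0, 0.0])  # initial (a, d)
log_p = log_posterior(*theta)
for _ in range(20000):
    proposal = theta + rng.normal(0.0, [0.05, 0.01])
    log_p_new = log_posterior(*proposal)
    # Accept with probability min(1, p_new / p_old).
    if np.log(rng.uniform()) < log_p_new - log_p:
        theta, log_p = proposal, log_p_new
    samples.append(theta.copy())

samples = np.array(samples[5000:])  # discard burn-in
a_mean, d_mean = samples.mean(axis=0)
print(f"posterior mean a = {a_mean:.3f}, defect d = {d_mean:+.3f}")
```

Because no normality or linearity is imposed on the posterior, the same loop applies unchanged to nonlinear models such as R-matrix cross sections; only `model` and `log_posterior` would change.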