- Jiaxin Zhang, Department of Civil Engineering, Johns Hopkins University, Baltimore, Maryland
This work addresses the challenge of uncertainty quantification (UQ) and propagation when the data available for characterizing probability models are limited. We propose a Bayesian multimodel UQ methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying information-theoretic/Bayesian multimodel inference methods to identify plausible candidate probability densities and their associated model probabilities. The model parameter densities for each plausible model are then estimated using Bayes’ rule with an affine-invariant ensemble Markov chain Monte Carlo (MCMC) algorithm. We then propagate this full set of probability models by identifying, through an analytical optimization procedure, an optimal importance sampling density that is representative of all plausible models, and reweighting the samples according to each candidate model. The result is a complete probabilistic description of both aleatory and epistemic uncertainty, achieved with a reduction in computational cost of several orders of magnitude. As additional data are collected, the probability measure inferred by the Bayesian method may change significantly; in such cases, repeating the Monte Carlo analysis with the updated density would incur a large added computational cost. We therefore develop a mixed augmenting-filtering resampling algorithm that efficiently accommodates a measure change in Monte Carlo simulation, minimizing the impact on the existing sample set and avoiding substantial additional computation. Finally, we investigate the effect of prior probabilities on the resulting uncertainties and show that priors can have a significant impact on multimodel UQ for small datasets, and that inappropriate (but seemingly reasonable) priors may have lingering effects that bias probabilities even for large datasets.
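The core workflow described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the synthetic dataset, the three candidate families, the use of Akaike weights as stand-ins for multimodel probabilities, and the choice of the best-AIC model as the importance density. The paper's analytical optimization of the sampling density and its Bayesian parameter estimation via ensemble MCMC are not reproduced; the sketch only shows the pattern of drawing one sample set and reweighting it under every candidate model.

```python
# Hypothetical sketch: multimodel selection plus importance-sampling reweighting.
# Dataset, candidate families, and importance density are illustrative choices,
# not the method from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.5, size=20)   # small synthetic dataset

candidates = {
    "lognorm": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

# Fit each candidate by maximum likelihood and compute Akaike weights
# as a simple surrogate for multimodel probabilities.
aic, fits = {}, {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)                  # fix location for positive data
    loglik = np.sum(dist.logpdf(data, *params))
    aic[name] = 2 * len(params) - 2 * loglik
    fits[name] = params
amin = min(aic.values())
raw = {n: np.exp(-0.5 * (a - amin)) for n, a in aic.items()}
model_prob = {n: w / sum(raw.values()) for n, w in raw.items()}

# Draw ONE sample set from a single importance density q (here: the
# best-AIC model) and reweight the same samples under every candidate,
# so each model's statistics come at no extra sampling cost.
best = min(aic, key=aic.get)
samples = candidates[best].rvs(*fits[best], size=10_000, random_state=rng)
q_pdf = candidates[best].pdf(samples, *fits[best])

means = {}
for name, dist in candidates.items():
    w = dist.pdf(samples, *fits[name]) / q_pdf       # importance weights
    means[name] = np.sum(w * samples) / np.sum(w)    # self-normalized estimate

# Multimodel estimate: model-probability-weighted combination of per-model means.
mm_mean = sum(model_prob[n] * means[n] for n in candidates)
```

A key design point the sketch mirrors: the expensive step (sampling, or in practice running the forward model at each sample) happens once, while the per-model cost is only a density-ratio reweighting.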