Accelerating Scientific Simulations with Bi-Fidelity Weighted Transfer Learning

Publication Type
Conference Paper
Book Title
2023 International Conference on Machine Learning and Applications (ICMLA)
Page Numbers
994 to 999
Publisher Location
New Jersey, United States of America
Conference Name
IEEE ICMLA'22: International Conference on Machine Learning and Applications
Conference Location
Nassau, Bahamas

High-fidelity modeling is an essential design tool for many engineering applications. For complex systems, however, computational cost can be a limiting factor: analyzing parameter sensitivity, quantifying uncertainty, and optimizing designs all require many model evaluations. Surrogate models are often used to capture the relationship between model parameters and quantities of interest, but for complex systems they require many degrees of freedom and, thus, a large number of data points to determine the correct dependencies. For many applications, this may be prohibitively expensive. Computational requirements can be reduced by leveraging low-fidelity models, which represent the system at a coarser resolution with the advantage of computational efficiency. Therefore, a bi-fidelity modeling paradigm, which augments the accuracy of a low-fidelity model in a computationally efficient manner by invoking a limited number of high-fidelity runs, can be leveraged to balance accuracy and computational requirements. In this work, a bi-fidelity weighted transfer learning method using neural networks was applied to a computational fluid dynamics heat transfer modeling problem, and the transfer learning advantage was investigated as a function of hyperparameters. Our main finding is that the bi-fidelity modeling paradigm achieves accuracy close to that of a high-fidelity Gaussian process model while significantly reducing computational cost: the bi-fidelity model achieves comparable performance with 90 high-fidelity samples, that is, 60% fewer than the samples needed to achieve similar accuracy without bi-fidelity modeling.
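The abstract does not specify the paper's network architecture or its exact weighting scheme, but the general bi-fidelity transfer learning idea it describes (pre-train a network on abundant cheap low-fidelity samples, then fine-tune it on a small number of expensive high-fidelity samples, with a per-sample weighting of the loss) can be sketched in a few lines of NumPy. Everything below is an illustrative stand-in: the toy low- and high-fidelity functions, the tiny one-hidden-layer network, the sample counts, and the weight values are all assumptions, not the authors' setup.

```python
# Illustrative bi-fidelity weighted transfer learning sketch (NOT the paper's
# actual architecture or weighting scheme; all models here are toy stand-ins).
import numpy as np

rng = np.random.default_rng(0)

def lofi(x):   # cheap low-fidelity model: coarse approximation of the physics
    return np.sin(x)

def hifi(x):   # expensive high-fidelity model: adds a fine-scale correction
    return np.sin(x) + 0.3 * np.sin(3 * x)

class MLP:
    """Tiny one-hidden-layer network trained by full-batch gradient descent."""
    def __init__(self, hidden=32):
        self.W1 = rng.normal(0.0, 0.5, (1, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.5, (hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def train(self, x, y, w, lr=0.05, epochs=2000):
        # w: per-sample loss weights (the "weighted" part of the transfer).
        w = w.reshape(-1, 1)
        for _ in range(epochs):
            pred = self.forward(x)
            # Gradient of the weighted squared error (factor 2 folded into lr).
            err = (pred - y) * w / len(x)
            gW2 = self.h.T @ err
            gb2 = err.sum(0)
            dh = (err @ self.W2.T) * (1.0 - self.h ** 2)   # backprop through tanh
            gW1 = x.T @ dh
            gb1 = dh.sum(0)
            self.W1 -= lr * gW1; self.b1 -= lr * gb1
            self.W2 -= lr * gW2; self.b2 -= lr * gb2

def mse(model, x, y):
    return float(np.mean((model.forward(x) - y) ** 2))

# Step 1: pre-train on abundant low-fidelity samples (uniform weights).
x_lo = np.linspace(-3, 3, 200).reshape(-1, 1)
net = MLP()
net.train(x_lo, lofi(x_lo), np.ones(len(x_lo)))

# Step 2: transfer — fine-tune on a handful of up-weighted high-fidelity samples.
x_hi = np.linspace(-3, 3, 15).reshape(-1, 1)
before = mse(net, x_hi, hifi(x_hi))
net.train(x_hi, hifi(x_hi), 2.0 * np.ones(len(x_hi)), lr=0.02, epochs=1000)
after = mse(net, x_hi, hifi(x_hi))
print(before, after)   # fine-tuning drives down the high-fidelity error
```

The point of the sketch is the economics the abstract describes: the low-fidelity model supplies the bulk of the training signal cheaply, so only a small high-fidelity budget is needed to correct the surrogate toward the true response.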