Troubleshooting deep-learner training data problems using an evolutionary algorithm on Summit...

by Mark A Coletti, Alex J Fafard, David L Page
Journal: IBM Journal of Research and Development

Architectural and hyper-parameter design choices influence deep-learner (DL) model fidelity, but fidelity can also be degraded by malformed training and validation data. Practitioners may spend significant time refining layers and hyper-parameters before discovering that distorted training data was impeding training progress. We found that an evolutionary algorithm (EA) can be used to diagnose this kind of DL problem. An EA evaluated thousands of DL configurations on Summit that yielded no overall improvement in DL performance, which suggested problems with the training and validation data. We were training DLs to find errors in previously generated digital surface models (DSMs), and we suspected that the Contrast Limited Adaptive Histogram Equalization (CLAHE) enhancement applied to those DSMs had damaged the training data. Subsequent runs with an alternative global normalization yielded significantly improved DL performance. However, the DL Intersection Over Union (IOU) still exhibited consistently sub-par performance, which suggested further problems with the training data and DL approach. Nonetheless, we were able to diagnose this problem within a 12-hour span via Summit runs, which prevented several weeks of unproductive trial-and-error DL configuration refinement and allowed a more timely convergence on an ultimately viable solution.
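The abstract does not give the exact normalization formula or IOU definition used in the study; the following is a minimal NumPy sketch, under assumed conventions, of (a) a global min-max normalization of a DSM raster, in contrast to CLAHE's per-tile contrast equalization, and (b) the intersection-over-union metric for binary masks. The example arrays are hypothetical and for illustration only.

```python
import numpy as np

def global_normalize(dsm: np.ndarray) -> np.ndarray:
    """Scale an entire DSM to [0, 1] using a single global min/max.

    Unlike CLAHE, which equalizes contrast within local tiles (and can
    distort elevation relationships across tile boundaries), this applies
    one affine transform to the whole raster.
    """
    lo, hi = float(dsm.min()), float(dsm.max())
    if hi <= lo:  # flat raster: avoid division by zero
        return np.zeros_like(dsm, dtype=float)
    return (dsm - lo) / (hi - lo)

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:  # both masks empty: define IOU as perfect agreement
        return 1.0
    return float(np.logical_and(pred, truth).sum() / union)

# Hypothetical toy inputs (not from the paper).
dsm = np.array([[10.0, 20.0], [30.0, 40.0]])
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(global_normalize(dsm))  # values rescaled to span [0, 1]
print(iou(pred, truth))       # 1 overlapping pixel / 2 in union = 0.5
```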