Abstract
Anderson acceleration (AA) is an extrapolation technique that has recently gained interest in the deep learning (DL) community to speed up the sequential training of DL models. However, when performed at large scale, DL training is exposed to a higher risk of getting trapped in steep local minima of the training loss function, and standard AA does not provide sufficient acceleration to escape from them. This results in poor generalizability and makes AA ineffective. To restore AA’s advantage in speeding up the training of DL models on large-scale computing platforms, we combine AA with an adaptive moving average procedure that helps the training escape from steep local minima. By monitoring the relative standard deviation between consecutive iterations, we also introduce a criterion that automatically assesses whether the moving average is needed. We apply the method to the following DL problems for image classification: (i) ResNet50 trained on the open-source CIFAR100 dataset and (ii) ResNet50 trained on the open-source ImageNet1k dataset. Numerical results obtained using up to 1,536 NVIDIA V100 GPUs on the OLCF supercomputer Summit show the stabilizing effect of the moving average on AA for both problems.
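The abstract only summarizes the adaptive criterion; as a rough illustrative sketch (not the paper's implementation), the following Python snippet shows how one might monitor the relative standard deviation over a window of consecutive AA iterates and blend in a moving average when fluctuations exceed a threshold. All names and parameters here (window, threshold, beta) are hypothetical.

```python
import numpy as np

def relative_std(values):
    """Relative standard deviation (std / |mean|) of a sequence of scalars."""
    values = np.asarray(values, dtype=float)
    return values.std() / (abs(values.mean()) + 1e-12)

def smooth_if_needed(aa_iterates, window=5, threshold=0.1, beta=0.5):
    """Return the latest AA iterate, optionally blended with a moving average.

    aa_iterates : list of 1-D numpy arrays (flattened parameters), most recent last.
    window      : number of consecutive iterates used to monitor fluctuations.
    threshold   : relative-std level above which the moving average is applied.
    beta        : mixing weight between the raw iterate and the moving average.
    """
    latest = aa_iterates[-1]
    if len(aa_iterates) < window:
        return latest  # not enough history to assess fluctuations

    recent = aa_iterates[-window:]
    # Monitor fluctuations of the iterate norms across the window.
    norms = [np.linalg.norm(x) for x in recent]
    if relative_std(norms) > threshold:
        moving_avg = np.mean(recent, axis=0)
        return beta * latest + (1.0 - beta) * moving_avg
    return latest
```

In this sketch the moving average is only activated when consecutive iterates fluctuate strongly relative to their mean, which mirrors the stated goal of stabilizing AA without altering it when training is already smooth.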