Publication

Fine-Grained Exploitation of Mixed Precision for Faster CNN Training...

Publication Type
Conference Paper
Journal Name
IEEE/ACM Machine Learning in HPC Environments (MLHPC)
Book Title
2019 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC)
Publication Date
2019
Page Numbers
9–18
Issue
None
Conference Name
IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC)
Conference Location
Denver, Colorado, United States of America
Conference Sponsor
IEEE
Conference Date
-

As deep convolutional neural networks (CNNs) have become increasingly popular and successful at an ever-widening range of machine learning tasks, specialized hardware has become increasingly available for training and deploying them. NVIDIA's recent Volta architecture includes tensor cores, which perform a fused multiply-accumulate operation in reduced and mixed precision (16-bit multiply, 32-bit accumulate). Recent research indicates that, typically, very little training accuracy is lost when half precision is used in place of single precision, and that performance gains can be made by doing arithmetic in reduced precision. In this work we demonstrate that making layer-by-layer choices of arithmetic and data precision can lead to further performance improvement. In our study of 25,200 CNNs we demonstrate an average speedup (over purely half precision) of 1.27x, and speedups as high as 3.64x, by appropriately combining single and half precision arithmetic and data types on a layer-by-layer basis.
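The sketch below (a hypothetical PyTorch example, not the framework or configuration used in the paper) illustrates the kind of layer-by-layer precision assignment the abstract describes: the convolution layers are stored and executed in half precision, where Volta tensor cores can apply the 16-bit-multiply/32-bit-accumulate path, while the final classifier is kept in single precision, with activations cast at the boundary.

    # Minimal sketch (assumed PyTorch; the per-layer choices are illustrative only).
    import torch
    import torch.nn as nn

    class MixedPrecisionCNN(nn.Module):
        def __init__(self):
            super().__init__()
            # Hypothetical per-layer precision plan: convolutions in half
            # precision (tensor-core friendly), classifier in single precision.
            self.conv1 = nn.Conv2d(3, 32, 3, padding=1).half()
            self.conv2 = nn.Conv2d(32, 64, 3, padding=1).half()
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Linear(64, 10).float()

        def forward(self, x):
            x = x.half()                        # enter the half-precision region
            x = torch.relu(self.conv1(x))
            x = torch.relu(self.conv2(x))
            x = self.pool(x).flatten(1)
            x = x.float()                       # cast up before the fp32 classifier
            return self.fc(x)

    # On a Volta-class GPU the half-precision convolutions can be dispatched
    # to tensor cores (16-bit multiply, 32-bit accumulate) by cuDNN.
    model = MixedPrecisionCNN().cuda()
    out = model(torch.randn(8, 3, 32, 32, device="cuda"))

In practice the precision of each layer would be chosen and measured per network, as in the paper's 25,200-network study; the split shown here is only one of many possible assignments.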