
An analysis of image storage systems for scalable training of deep neural networks

by Seung-hwan Lim, Steven R. Young, and Robert M. Patton
Publication Type: Conference Paper
Publication Date:
Conference Name: The Seventh Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware (in conjunction with ASPLOS'16)
Conference Location: Atlanta, Georgia, United States of America
Conference Date:
This study presents a principled empirical evaluation of image storage systems for training deep neural networks.
We employ the Caffe deep learning framework to train neural network models on three data sets: MNIST, CIFAR-10, and ImageNet.
While training the models, we evaluate five different options for retrieving training image data: (1) PNG-formatted image files on the local file system; (2) pushing the pixel arrays from the image files into a single HDF5 file on the local file system; (3) in-memory arrays that hold the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a key-value store based on a log-structured merge tree; and (5) loading the training data into LMDB, a key-value store based on a B+tree.
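To make option (5) concrete, the sketch below shows one way to pack decoded PNG pixels into an LMDB database using the Python lmdb package. The directory names, the key format, and the choice to store raw pixel bytes (rather than Caffe's serialized Datum records) are assumptions made for illustration; this is not the paper's implementation.

    # Illustrative sketch, not the paper's code: pack PNG training images
    # into an LMDB key-value store (option 5 above).
    # image_dir, db_path, and the key format are hypothetical.
    import os
    import lmdb
    import numpy as np
    from PIL import Image

    image_dir = "train_images"   # hypothetical directory of PNG files
    db_path = "train_lmdb"       # hypothetical LMDB output directory

    # map_size reserves the maximum database size (here roughly 1 GiB).
    env = lmdb.open(db_path, map_size=1 << 30)

    with env.begin(write=True) as txn:
        for i, fname in enumerate(sorted(os.listdir(image_dir))):
            if not fname.endswith(".png"):
                continue
            # Decode each PNG once and store the raw pixel array, so training
            # reads become key-value lookups instead of per-image file opens.
            pixels = np.asarray(Image.open(os.path.join(image_dir, fname)))
            key = "{:08d}".format(i).encode("ascii")
            txn.put(key, pixels.tobytes())

    env.close()

Reading a record back during training then reduces to a single lookup in LMDB's memory-mapped B+tree, which is the access pattern the key-value back-ends exploit.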
The experimental results quantitatively highlight the disadvantage of using ordinary image files on local file systems to train deep neural networks and demonstrate the reliable performance of key-value store back-ends.
When training a model on the ImageNet dataset, the image-file option was more than 17 times slower than the key-value store option.
Along with measurements of training time, this study provides an in-depth analysis of the causes of each back-end's performance advantages and disadvantages for training deep neural networks.
We envision that the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.