Steven R Young
Steven Young is a postdoctoral research associate at Oak Ridge National Laboratory working in the Computational Data Analytics Group. He has a PhD in Computer Engineering from The University of Tennessee, where he studied machine learning in the Machine Intelligence Lab. He also has a BS in Electrical Engineering from The University of Tennessee. His current research involves applying machine learning to large-scale datasets, with a focus on deep learning methods.
T. Potok, C. Schuman, S. Young, R. Patton, F. Spedalieri, J. Liu, K. Yao, G. Rose, G. Chakma, “A study of complex deep learning networks on high performance, neuromorphic, and quantum computers,” Proceedings of the Workshop on Machine Learning in High Performance Computing Environments, Supercomputing, 2016.
S. Lim, S. Young, R. Patton, “An analysis of image storage systems for scalable training of deep neural networks,” The Seventh Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware (in conjunction with ASPLOS’16), 2016.
S. Young, D. Rose, T. Karnowski, S. Lim, R. Patton, “Optimizing deep learning hyper-parameters through an evolutionary algorithm,” Proceedings of the Workshop on Machine Learning in High Performance Computing Environments, Supercomputing, 2015.
J. Holleman, I. Arel, S. Young, J. Lu, “Analog inference circuits for deep learning,” in 2015 IEEE Biomedical Circuits and Systems Conference (BioCAS), pp. 1–4, Oct. 2015.
J. Lu, S. Young, I. Arel, and J. Holleman, “A 1 TOPS/W analog deep machine-learning engine with floating-gate storage in 0.13 μm CMOS,” IEEE Journal of Solid-State Circuits, vol. 50, no. 1, pp. 270–281, 2015.
S. Young, “Scalable Hardware Efficient Deep Spatio-Temporal Inference Networks,” PhD thesis, The University of Tennessee, 2014.
J. Lu, S. Young, I. Arel, and J. Holleman, “A 1 TOPS/W analog deep machine-learning engine with floating-gate storage in 0.13 μm CMOS,” in IEEE Int. Solid-State Circuits Conf. (ISSCC) Dig. Tech. Papers, 2014.
S. Young, J. Lu, J. Holleman, and I. Arel, “On the impact of approximate computation in an analog DeSTIN architecture,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 5, pp. 934–946, 2014.
S. Young, A. Davis, A. Mishtal, and I. Arel, “Hierarchical spatiotemporal feature extraction using recurrent online clustering,” Pattern Recognition Letters, vol. 37, pp. 115–123, 2014.
J. Lu, S. Young, I. Arel, and J. Holleman, “An analog online clustering circuit in 130 nm CMOS,” in 2013 IEEE Asian Solid-State Circuits Conference (A-SSCC), pp. 177–180, IEEE, 2013.
S. Young and I. Arel, “Recurrent clustering for unsupervised feature extraction with application to sequence detection,” in 2012 11th International Conference on Machine Learning and Applications (ICMLA), vol. 2, pp. 54–55, IEEE, 2012.
T. Karnowski, I. Arel, and S. Young, “Modeling temporal dynamics with function approximation in deep spatio-temporal inference network,” in International Conference on Biologically Inspired Cognitive Architectures, 2011.
S. Young, I. Arel, T. Karnowski, and D. Rose, “A fast and stable incremental clustering algorithm,” in 2010 Seventh International Conference on Information Technology: New Generations (ITNG), pp. 204–209, IEEE, 2010.
S. Young, I. Arel, and O. Arazi, “Pi-PIFO: A scalable pipelined PIFO memory management architecture,” in 10th International Conference on Telecommunications, pp. 265–270, IEEE, 2009.