Abstract
HPC systems are designed for large-scale simulations: monolithic codes of tightly coupled processes, highly optimized to reduce time to solution. Medical image processing is not a traditional HPC field. Like AI applications, medical image processing parses large datasets, typically multiple times, to support a variety of studies for classification, diagnosis, or monitoring purposes. The convergence of AI, HPC, and Big Data has encouraged more fields that rely on image processing to transition to HPC. However, not all applications benefit from the same optimizations. In this paper, we focus on high-throughput medical image processing applications that analyze a large dataset of small MRI scans and that require HPC systems to reduce the time to parse the entire dataset rather than the time per individual MRI. We evaluate the performance of SLANT, an image processing application for whole-brain segmentation, on large-scale systems and highlight its performance limitations. We present throughput-oriented optimizations that achieve a 3.5x speed-up on the Summit supercomputer and can serve as a baseline for building a high-throughput execution framework for other HPC systems.