
High-Throughput Computing: Case Study of Medical Image Processing Applications

by Maria Predescu, Cosmin Samoila, Emil Slusanschi, Ana Gainaru
Publication Type: Conference Paper
Book Title: FlexScience'24: Proceedings of the 14th Workshop on AI and Scientific Computing at Scale using Flexible Computing Infrastructures
Publication Date:
Page Numbers: 17-25
Publisher Location: New York, New York, United States of America
Conference Name: 14th Workshop on AI and Scientific Computing at Scale using Flexible Computing Infrastructures (FlexScience)
Conference Location: Pisa, Italy
Conference Sponsor: ACM
Conference Date: -

HPC systems are designed for large-scale simulations: monolithic codes of tightly coupled processes, highly optimized to reduce time to solution. Medical image processing is not a traditional HPC field. Like AI applications, it parses large datasets, typically multiple times, to support a variety of studies for classification, diagnosis, or monitoring purposes. The convergence of AI, HPC, and Big Data has encouraged more fields that rely on image processing to transition to HPC. However, not all applications benefit from the same optimizations. In this paper we focus on high-throughput medical image processing applications that analyze a huge dataset of small MRI images and that need HPC systems to reduce the time to process the entire dataset rather than individual MRIs. We present the performance of running SLANT, an image processing application for whole-brain segmentation, on large-scale systems and highlight its performance limitations. We describe optimizations that prioritize throughput, achieve a 3.5x speedup on the Summit supercomputer, and can serve as a baseline for building a high-throughput execution framework for other HPC systems.
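The workload pattern the abstract describes, many small and mutually independent MRI segmentations whose aggregate completion time matters more than any single image, is the classic task-farming case. The sketch below is only an illustration of that pattern under assumptions not taken from the paper: a hypothetical segment() placeholder stands in for one per-image run (such as a SLANT invocation) and a dataset/ directory of NIfTI files stands in for the input; it is not the execution framework evaluated on Summit.

# Minimal sketch: static round-robin distribution of independent MRI
# segmentation tasks across MPI ranks; throughput over the full dataset
# is the optimization target, not per-image latency.
from mpi4py import MPI
from pathlib import Path

def segment(mri_path: Path) -> None:
    """Hypothetical placeholder for one whole-brain segmentation run."""
    # Invoke the per-image pipeline here; each call is independent of the others.
    pass

def main() -> None:
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Every rank builds the same sorted file list and takes every size-th entry,
    # so the whole dataset is covered exactly once with no runtime coordination.
    mri_files = sorted(Path("dataset").glob("*.nii.gz"))
    for path in mri_files[rank::size]:
        segment(path)

    comm.Barrier()  # completion is measured over the entire dataset

if __name__ == "__main__":
    main()

A static split like this assumes per-image runtimes are similar; when they vary widely, a dynamic manager/worker scheme distributes the same tasks with better load balance at the cost of extra coordination.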