
Evaluating performance and portability of high-level programming models: Julia, Python/Numba, and Kokkos on exascale nodes

Publication Type
Conference Paper
Book Title
2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
Publication Date
2023
Page Numbers
373 to 382
Publisher Location
New Jersey, United States of America
Conference Name
37th IEEE International Parallel & Distributed Processing Symposium (IPDPS 2023)
Conference Location
St. Petersburg, Florida, United States of America
Conference Sponsor
IEEE
Conference Date
-

We explore the performance and portability of three high-level programming models: the LLVM-based Julia and Python/Numba, and Kokkos, on high-performance computing (HPC) nodes: AMD EPYC CPUs and MI250X graphics processing units (GPUs) on Crusher, Frontier's test bed system, and Ampere's Arm-based CPUs and NVIDIA A100 GPUs on the Wombat system at the Oak Ridge Leadership Computing Facility. We compare the default performance of a hand-rolled dense matrix multiplication algorithm against vendor-compiled C/OpenMP implementations on CPUs, and against CUDA and HIP on each GPU. Rather than focusing on kernel optimization per se, we select this naive approach to resemble exploratory work in science and to serve as a lower bound on performance, isolating the effect of each programming model. Julia and Kokkos perform comparably to C/OpenMP on CPUs, while Julia implementations are competitive with CUDA and HIP on GPUs. Performance gaps are identified on NVIDIA A100 GPUs for Julia in single precision and for Kokkos, and for Python/Numba in all scenarios. We also comment on half-precision support, productivity, performance portability metrics, and platform readiness. We expect this work to contribute to the understanding of, and direction for, high-level, high-productivity languages in HPC as the first generation of exascale systems is deployed.
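
A minimal sketch of the kind of naive, hand-rolled dense matrix multiplication kernel the study uses as a CPU baseline, written here with Python/Numba, one of the evaluated models. The function name, matrix size, and explicit triple-loop structure are illustrative assumptions, not the authors' exact kernel.

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def matmul_naive(A, B, C):
    # Illustrative sketch: C = A @ B via an explicit triple loop; Numba
    # JIT-compiles the function and parallelizes the outer loop across
    # CPU threads through prange.
    n, k = A.shape
    m = B.shape[1]
    for i in prange(n):
        for j in range(m):
            acc = 0.0
            for p in range(k):
                acc += A[i, p] * B[p, j]
            C[i, j] = acc

rng = np.random.default_rng(0)
A = rng.random((512, 512))
B = rng.random((512, 512))
C = np.zeros((512, 512))
matmul_naive(A, B, C)          # first call triggers JIT compilation
assert np.allclose(C, A @ B)   # check against NumPy's BLAS-backed product

Timing the second and later calls (after JIT compilation) against a vendor-tuned library illustrates the "default performance, lower-bound" comparison the abstract describes.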