Massively Parallel OpenSHMEM

  • Max Grossman, Rice University, Houston, Texas
July 26, 2017 - 2:00pm to 3:00pm


While OpenSHMEM is used to scalably execute some of the most massive and irregular distributed workloads in the world, the OpenSHMEM community has been relatively slow to adapt to the drastic changes in shared-memory parallelism over the last decade. Indeed, as of the latest version of the standard (v1.3), OpenSHMEM says nothing about the thread safety of any of its APIs (i.e., there is no equivalent to MPI_Init_thread). While the simplicity of programming OpenSHMEM in a flat model is attractive, the increased dimensionality of intra-node parallelism has led the OpenSHMEM community to start considering several unique proposals and research directions for supporting well-performing OpenSHMEM programs in a hybrid, massively multi-threaded environment.

This talk will provide an overview of community proposals and research in multi-threaded programming with OpenSHMEM, and discuss how some of this work is setting up OpenSHMEM to be more future-proof against hardware and software changes than other threading models used in libraries like MPI or UPC++. This talk will focus on the AsyncSHMEM project at Rice University, which investigates the combination of asynchronous tasking runtimes and OpenSHMEM for improved scalability, programmability, and tooling. We will also discuss the OpenSHMEM thread registration proposal, contexts proposal, and the nvshmem project for supporting OpenSHMEM on NVIDIA GPUs.

Additional Information 

Point of Contact:

If you are interested in meeting with the speaker or would like more information, please contact Jeff Vetter at

Sponsoring Organization 

Computer Science and Mathematics Division Seminar


  • Research Office Building
  • Building: 5700
  • Room: L-204
