
Juggler: a dependence-aware task-based execution framework for GPUs

by Mehmet E Belviranli, Seyong Lee, Jeffrey S Vetter, Laxmi Bhuyan
Publication Type
Conference Paper
Journal Name
Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
Book Title
Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '18)
Page Numbers
54 to 67
Volume
1
Issue
1
Conference Name
Principles and Practice of Parallel Programming (PPoPP) 2018
Conference Location
Vienna, Austria
Conference Sponsor
ACM

Scientific applications with single instruction, multiple data (SIMD) computations show considerable performance improvements when run on today's graphics processing units (GPUs). However, data dependences across thread blocks may significantly limit the speedup by requiring global synchronization across the streaming multiprocessors (SMs) inside the GPU. To efficiently run applications with inter-block data dependences, we need fine-grained task-based execution models that treat the SMs inside a GPU as stand-alone parallel processing units. Such a scheme enables faster execution by utilizing all computational elements inside the GPU and eliminating unnecessary waits at device-wide global barriers.
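
To make the problem concrete, the sketch below (not taken from the paper; the kernel, function, and variable names such as step_kernel and run_sweeps are illustrative) shows the conventional global-barrier pattern the abstract refers to: each sweep of a computation whose elements depend on values produced by other thread blocks is issued as a separate kernel launch, so the implicit barrier at the end of every launch synchronizes the whole device even when only a few neighboring blocks actually need to wait for each other.

    #include <cuda_runtime.h>

    // One sweep of a 1-D computation: each element reads neighbors written in
    // the previous sweep, possibly by a different thread block.
    __global__ void step_kernel(float *cur, const float *prev, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i > 0 && i < n - 1)
            cur[i] = 0.5f * (prev[i - 1] + prev[i + 1]);
    }

    // Dependent sweeps are serialized with one kernel launch per sweep; the
    // implicit end-of-kernel barrier acts as the device-wide synchronization.
    void run_sweeps(float *a, float *b, int n, int n_steps) {
        dim3 block(256), grid((n + 255) / 256);
        for (int s = 0; s < n_steps; ++s) {
            step_kernel<<<grid, block>>>(b, a, n);  // every SM must drain here
            float *tmp = a; a = b; b = tmp;         // swap input/output buffers
        }
        cudaDeviceSynchronize();
    }

A task-based scheme of the kind the abstract argues for would instead keep workers resident on the SMs and release each piece of work as soon as its own dependences are satisfied, rather than waiting for the entire device.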

In this paper, we propose Juggler, a task-based execution scheme for GPU workloads with data dependences. The Juggler framework takes applications embedding OpenMP 4.5 tasks as input and executes them on the GPU via an efficient in-device runtime, eliminating the need for kernel-wide global synchronization. Juggler requires little or no modification to the source code, and once launched, the runtime runs entirely on the GPU without relying on the host for the remainder of the execution. We have evaluated Juggler on an NVIDIA Tesla P100 GPU and obtained up to 31% performance improvement over a global-barrier-based implementation, with minimal runtime overhead.
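
For illustration only, the fragment below sketches the kind of OpenMP 4.5 task-annotated input the abstract describes; the blocked wavefront, the tile size BS, and the names grid and wavefront are assumptions for the example, not taken from the paper. Each tile-level task declares its data dependences with depend clauses, which an in-device runtime can resolve per task instead of synchronizing the whole kernel.

    #include <omp.h>

    #define N  1024
    #define BS 128                /* tile size (illustrative) */

    static float grid[N][N];

    /* Blocked wavefront: each tile depends on the tiles to its north and west.
     * The depend clauses name one representative element per tile. */
    void wavefront(void) {
        #pragma omp parallel
        {
            #pragma omp single
            {
                for (int bi = BS; bi < N; bi += BS) {
                    for (int bj = BS; bj < N; bj += BS) {
                        #pragma omp task depend(in: grid[bi - BS][bj], grid[bi][bj - BS]) \
                                         depend(inout: grid[bi][bj])
                        {
                            for (int i = bi; i < bi + BS; ++i)
                                for (int j = bj; j < bj + BS; ++j)
                                    grid[i][j] = 0.25f * (grid[i - 1][j] + grid[i][j - 1]);
                        }
                    }
                }
            }
        }   /* all tasks have completed at the implicit barrier */
    }

On a GPU, an in-device runtime of the kind the abstract describes can map such tasks onto workers resident on the SMs, so each tile starts as soon as its north and west neighbors finish rather than after a device-wide barrier.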