We use OpenMP to target hardware accelerators (GPUs) on Summit, a newly deployed supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), demonstrating simplified access to GPU devices for users of our astrophysics code GenASiS and useful speedup on a sample fluid dynamics problem. We modify our workhorse class for data storage to include members and methods that significantly streamline the persistent allocation of, and association to, GPU memory. Users offload computational kernels with OpenMP target directives that closely resemble constructs already familiar from multi-core parallelization. For this initial example we ask, “With a given number of Summit nodes, how fast can we compute with and without GPUs?”, and find total wall-time speedups of ∼12X. We also find reasonable weak scaling up to 8000 GPUs (1334 Summit nodes). We make the source code from this work available at https://github.com/GenASiS/GenASiS_Basics.