Abstract
Large-scale simulations can produce tens of terabytes of data per
analysis cycle, complicating workflows and limiting their efficiency.
Traditionally, outputs are stored on the file system and analyzed
in post-processing. With the rapidly increasing size and
complexity of simulations, this approach faces an uncertain future.
Emerging techniques perform the analysis in situ, utilizing
the same resources as the simulation, and/or off-load subsets
of the data to a compute-intensive analysis system. We introduce
an analysis framework developed for HACC, a cosmological
N-body code, that uses both in situ and co-scheduling approaches
for handling petabyte-size outputs. An initial in situ step is used to
reduce the amount of data to be analyzed, and to separate out the
data-intensive tasks handled off-line. The analysis routines are implemented
using the PISTON/VTK-m framework, allowing a single
implementation of an algorithm that simultaneously targets a
variety of GPU, multi-core, and many-core architectures.
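
To make the "single implementation, many architectures" idea concrete, the sketch below uses backend-agnostic data-parallel primitives in the style of Thrust, the library PISTON is layered on. The density-threshold filter, the functor name, and the array names are illustrative assumptions, not the paper's actual analysis kernels; the point is only that one source compiles unchanged against CUDA, OpenMP, or TBB backends.

```cpp
// Minimal sketch (assumed example, not from the paper): filter particles
// whose local density exceeds a threshold, using Thrust-style primitives
// so the same code targets GPU and multi-core backends.
#include <thrust/device_vector.h>
#include <thrust/copy.h>

struct above_threshold
{
  float threshold;

  __host__ __device__
  bool operator()(float density) const { return density > threshold; }
};

int main()
{
  // Hypothetical per-particle densities produced by an in situ reduction step.
  thrust::device_vector<float> density(1000, 0.5f);
  thrust::device_vector<float> selected(density.size());

  // copy_if runs on whichever backend the code is compiled for
  // (CUDA, OpenMP, or TBB) from this single implementation.
  auto end = thrust::copy_if(density.begin(), density.end(),
                             selected.begin(), above_threshold{0.8f});
  selected.resize(end - selected.begin());
  return 0;
}
```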