
Pseudonymization at Scale: OLCF’s Summit Usage Data Case Study...

Publication Type
Conference Paper
Book Title
2022 IEEE International Conference on Big Data (Big Data)
Page Numbers
3432 to 3440
Publisher Location
New Jersey, United States of America
Conference Name
2022 IEEE International Conference on Big Data Workshop BTSD
Conference Location
Osaka, Japan

The analysis of vast amounts of data and the processing of complex computational jobs have traditionally relied upon high performance computing (HPC) systems, which offer reliable and efficient management of large-scale computational and data resources. Understanding the needs of these analyses is paramount for designing solutions that lead to better science, and likewise, understanding the characteristics of user behavior on those systems is important for improving the user experience on HPC systems. A common approach to gathering data about user behavior is to extract workload characteristics from system log data available only to system administrators. Recently at the Oak Ridge Leadership Computing Facility (OLCF), however, we captured user behavior on the Summit supercomputer by collecting data from a user's point of view with ordinary Unix commands.

In this paper, we discuss the process, challenges, and lessons learned while preparing this dataset for publication and submission to an open data challenge. The original dataset contains personally identifiable information (PII) about the users of OLCF, which needed to be masked prior to publication, and we determined that anonymization, which scrubs PII completely, destroyed too much of the structure of the data to be interesting for the data challenge. We instead chose to pseudonymize the dataset, which reduces the linkability of the dataset to the users' identities. Pseudonymization is significantly more computationally expensive than anonymization, and the size of our dataset, approximately 175 million lines of raw text, necessitated the development of a parallelized workflow that could be reused on different HPC machines. We demonstrate the scaling behavior of the workflow on two leadership-class HPC systems at OLCF, and we show that we were able to bring the overall makespan down from an impractical 20+ hours on a single node to around 2 hours.
As a result of this work, we release the entire pseudonymized dataset and make the workflows and source code publicly available.
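To illustrate the distinction the abstract draws, one common way to pseudonymize a per-record user field is a keyed hash (e.g. HMAC-SHA256): unlike anonymization, which scrubs the field entirely, the same user always maps to the same stable pseudonym, so per-user structure survives, while the secret key prevents re-linking pseudonyms to identities. The sketch below is illustrative only; the key, line layout, and function names are hypothetical and not the authors' actual workflow.

```python
import hmac
import hashlib
from multiprocessing import Pool

# Hypothetical secret key: a real deployment would generate this securely
# and protect (or destroy) it after pseudonymization is complete.
SECRET_KEY = b"example-key-not-the-real-one"

def pseudonymize(identifier: str) -> str:
    """Map an identifier (e.g. a username) to a stable pseudonym.

    The keyed hash is deterministic, so repeated occurrences of the same
    user remain linkable to each other, but not to the real identity.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return "user_" + digest.hexdigest()[:12]

def pseudonymize_line(line: str) -> str:
    """Replace the leading user field of a comma-separated log line."""
    user, rest = line.split(",", 1)
    return pseudonymize(user) + "," + rest

if __name__ == "__main__":
    # Toy stand-in for the raw text; a parallelized workflow would shard
    # the input and map workers over the shards, as with this Pool.
    lines = ["alice,ls -l /scratch", "bob,top", "alice,qstat"]
    with Pool(2) as pool:
        masked = pool.map(pseudonymize_line, lines)

    # The same user yields the same pseudonym; distinct users stay distinct.
    assert masked[0].split(",")[0] == masked[2].split(",")[0]
    assert masked[0].split(",")[0] != masked[1].split(",")[0]
```

Because each line is hashed independently, the work is embarrassingly parallel, which is what makes the kind of multi-node scaling described above possible.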