Training reinforcement learning models via an adversarial evolutionary algorithm...

by Mark A Coletti, Chathika S Gunaratne, Catherine Schuman, Robert M Patton
Publication Type
Conference Paper
Book Title
51st International Conference on Parallel Processing Workshop (ICPP Workshops '22), August 29-September 1, 2022, Bordeaux, France
Publication Date
Publisher Location
Bordeaux, France
Conference Name
The Second International Workshop on Parallel and Distributed Algorithms for Decision Sciences (PDADS), 2022
Conference Location
Bordeaux, France
Conference Sponsor
51st International Conference on Parallel Processing (ICPP 2022)
Conference Date

When training reinforcement learning models for control problems, using more episodes during training usually leads to better generalizability, but more episodes also require significantly more training time. There are a variety of approaches for selecting training episodes, including fixed episodes, uniform sampling, and stochastic sampling, but all of them can leave gaps in the training landscape. In this work, we describe an approach that leverages an adversarial evolutionary algorithm to identify the worst-performing states for a given model. We then use information about these states in the next cycle of training; this process can be repeated until the desired level of model performance is met. We demonstrate this approach with the OpenAI Gym cart-pole problem. For this problem, we show that the adversarial evolutionary algorithm did not reduce the number of training episodes needed to attain model generalizability when compared with stochastic sampling, and actually performed slightly worse.
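The loop the abstract describes — evolve the initial states where the current model performs worst, then fold those states into the next training cycle — can be sketched roughly as follows. This is a toy illustration under assumed details (a 1-D stand-in environment, a policy reduced to a single gain `k`, and a simple truncation-selection evolutionary algorithm), not the authors' implementation, which used the OpenAI Gym cart-pole environment:

```python
import random

# Toy stand-in for an episode rollout: a "policy" parameterized by gain k
# tries to drive a 1-D state toward zero; the return penalizes distance
# from zero, so higher (closer to 0) is better.
def episode_return(k, x0, steps=50):
    x = x0
    total = 0.0
    for _ in range(steps):
        x = 0.9 * x - k * x  # simple closed-loop dynamics
        total -= abs(x)
    return total

# Adversarial EA: evolve initial states that MINIMIZE the policy's return,
# i.e. find the states where the current model does worst.
def worst_states(k, pop_size=20, gens=30, sigma=0.2, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(gens):
        # Truncation selection: keep the states with the lowest return.
        pop.sort(key=lambda x0: episode_return(k, x0))
        parents = pop[: pop_size // 2]
        children = [max(-1.0, min(1.0, p + rng.gauss(0.0, sigma)))
                    for p in parents]  # Gaussian mutation, clamped to state space
        pop = parents + children
    return sorted(pop, key=lambda x0: episode_return(k, x0))[:5]

# One cycle of retraining on the adversarially chosen states (here, a crude
# grid search over k); the outer loop would alternate worst_states/retrain
# until performance is acceptable.
def retrain(states):
    candidates = [i * 0.05 for i in range(1, 19)]
    return max(candidates, key=lambda k: sum(episode_return(k, s) for s in states))
```

A full cycle would then look like `k = retrain(worst_states(k))`, repeated until the model generalizes well enough; the paper's finding is that, at least on cart-pole, this adversarial selection did not beat stochastic sampling of episodes.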