Abstract
Deep learning neuroarchitecture and hyperparameter search are important in finding the configuration that maximizes learned model accuracy. However, the number of layer types, their associated hyperparameters, and the myriad ways to connect layers pose a significant computational challenge in discovering ideal model configurations. Here, we assess two approaches to neuroarchitecture search for a LeNet-style neural network: a fixed-length approach, in which a preset number of possible layers can be toggled on or off via mutation, and a variable-length approach, in which layers can be freely added or removed via special mutation operators. We found that the variable-length implementation trained better models while discovering unusual layer configurations worth further exploration.
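The contrast between the two mutation schemes can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the genome encodings, layer names, and mutation rates below are hypothetical, chosen only to show that a fixed-length genome preserves its length under mutation while a variable-length genome can grow or shrink.

```python
import random

# Hypothetical encodings (illustrative only, not from the paper):
# - fixed-length genome: list of booleans toggling preset layer slots on/off
# - variable-length genome: list of layer descriptors that mutation may
#   insert into or delete from
LAYER_POOL = ["conv3x3", "conv5x5", "maxpool", "dense"]

def mutate_fixed(genome, rate=0.1):
    """Flip each on/off bit with probability `rate`; length never changes."""
    return [not g if random.random() < rate else g for g in genome]

def mutate_variable(genome, rate=0.1):
    """Randomly delete and/or insert a layer descriptor; length may change."""
    genome = list(genome)
    if random.random() < rate and len(genome) > 1:
        del genome[random.randrange(len(genome))]       # remove a layer
    if random.random() < rate:
        pos = random.randrange(len(genome) + 1)
        genome.insert(pos, random.choice(LAYER_POOL))   # add a layer
    return genome

random.seed(0)
print(mutate_fixed([True, True, False, True]))
print(mutate_variable(["conv3x3", "maxpool", "dense"], rate=1.0))
```

Under the fixed-length scheme the search space is bounded by the preset slots, whereas the variable-length operators can reach architectures of arbitrary depth, including the unusual configurations mentioned above.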