In this case study, we describe the design and assembly of a cyber security test range built at Oak Ridge National Laboratory in Oak Ridge, TN, USA. The range provides a flexible environment for evaluating cyber security tools, particularly those involving AI/ML, under realistic conditions and with experimental controls that let us determine each tool's strengths and weaknesses. We designed the evaluations to be repeatable, so additional tools can be evaluated and compared later, and the system can be scaled up or down to match experiment size. At the time of the conference we will have completed two full-scale, national, government challenges on this range. These challenges evaluate the performance and operating costs of AI/ML-based cyber security tools for application in large, government-sized environments. We describe these evaluations to provide motivation and context for the design decisions and adaptations we have made. The first challenge measured end-point security tools against 100K malware samples chosen across a range of types. The second involves network detection of attempted penetrations and exploitations, with varying levels of covertness, in a high-volume business network. The scale of these challenges has required us to build automation systems that repeat each experiment identically for every tool. Preventing easy signs of malicious activity that the AI/ML tools could focus on has been a particularly interesting and challenging aspect of designing and executing these challenge events. After the events, the range continues to be used for other research, such as adversarial machine learning, for which the repeatability, scale, and automation developed for the national challenge events are essential.