Bringing scientific rigor to the study of AI risks
To address emerging AI threats, ORNL has established the Center for Artificial Intelligence Security Research, or CAISER. With a particular focus on cyber, biometrics, geospatial analysis, and nonproliferation, CAISER will analyze vulnerabilities, threats, and risks related to the security and misuse of AI tools in national security domains.
“We are at a crossroads. AI tools and AI-based technologies are inherently vulnerable and exploitable, which can lead to unforeseen consequences. We’re defining a new field of AI security research and committing to intensive research and development of mitigating strategies and solutions against emerging AI risks.”
- Edmon Begoli, CAISER founding director
Artificial intelligence is rapidly being incorporated into many industries and applications, including systems designed to keep our nation safe. While AI swiftly performs tasks that would take humans much longer to complete, it is hackable and exploitable. AI capabilities are advancing significantly faster than our ability to secure them from attacks. ORNL resolves to lead research into AI threats, vulnerabilities, exploitation risks, and misuses to inform reliability and robustness needs.
Initially, CAISER will focus on four national security domains that align with ORNL strengths — AI for cybersecurity, biometrics, geospatial intelligence and nuclear nonproliferation — in collaboration with national security and industry partners. By elucidating a clear, science-based picture of risks and mitigation strategies, CAISER’s research will provide greater assurance to federal partners that the AI tools they adopt are reliable and robust against adversarial attacks.
Address threats and risks to national security arising from the misuse of AI, the proliferation of AI among state and non-state actors, and superintelligent AI.
Leverage expertise and facilities across ORNL, partner organizations, and the government to serve as the focal point for collaborative engagement.
Establish science-backed solutions that protect national security assets from emerging AI safety and security risks.
Our talented researchers make the difference!
Experts in machine learning, mathematics, and natural language processing are applying the latest scientific research to securing our nation from AI threats.