Center for Artificial Intelligence Security Research

“We are at a crossroads. AI tools and AI-based technologies are inherently vulnerable and exploitable, which can lead to unforeseen consequences. We’re defining a new field of AI security research and committing to intensive research and development of mitigating strategies and solutions against emerging AI risks.”

- Edmon Begoli, ORNL’s Advanced Intelligent Systems section head and CAISER founding director

Artificial intelligence is rapidly being incorporated into many industries and applications, including systems designed to keep our nation safe. While AI swiftly performs tasks that would take humans much longer to complete, it is also hackable and exploitable. AI capabilities are advancing significantly faster than our ability to secure them from attacks. ORNL is committed to leading research that illuminates AI threats, vulnerabilities, exploitation risks, and misuses in order to inform reliability and robustness requirements.

Initially, CAISER will focus on four national security domains that align with ORNL strengths — AI for cybersecurity, biometrics, geospatial intelligence and nuclear nonproliferation — in collaboration with national security and industry partners. By elucidating a clear, science-based picture of risks and mitigation strategies, CAISER’s research will provide greater assurance to federal partners that the AI tools they adopt are reliable and robust against adversarial attacks.

Our talented researchers make the difference!

Experts in machine learning, mathematics, and natural language processing are applying the latest scientific research to securing our nation from AI threats.

Air Force Research Laboratory

AFRL contributes its capabilities to the effort to secure AI systems.

Department of Homeland Security

DHS contributes its capabilities to the effort to secure AI systems.