
Center for Artificial Intelligence Security Research

Bringing scientific rigor to the study of AI risks

To address emerging AI threats, ORNL has established the Center for Artificial Intelligence Security Research, or CAISER. With a particular focus on cyber, biometrics, geospatial analysis, and nonproliferation, CAISER will analyze vulnerabilities, threats, and risks related to the security and misuse of AI tools in national security domains.

Artificial intelligence is rapidly being incorporated into many industries and applications, including systems designed to keep our nation safe. While AI swiftly performs tasks that would take humans much longer to complete, it is hackable and exploitable. AI capabilities are advancing significantly faster than our ability to secure them from attacks. ORNL resolves to lead research that illuminates AI threats, vulnerabilities, exploitation risks, and misuses in order to inform reliability and robustness needs.

Initially, CAISER will focus on four national security domains that align with ORNL strengths — AI for cybersecurity, biometrics, geospatial intelligence and nuclear nonproliferation — in collaboration with national security and industry partners. By elucidating a clear, science-based picture of risks and mitigation strategies, CAISER’s research will provide greater assurance to federal partners that the AI tools they adopt are reliable and robust against adversarial attacks.

Our talented researchers make the difference!

Experts in machine learning, mathematics, and natural language processing are applying the latest scientific research to securing our nation from AI threats. 



Air Force Research Lab

AFRL brings its capabilities to the effort to secure AI systems.

Dept of Homeland Security

DHS brings its capabilities to the effort to secure AI systems.