
AI for national security


The development of powerful and seemingly human-like artificial intelligence can be a great boon to society. On the other hand, AI has also created new ways to do great harm.

These risks are not only about misuses of AI but also about threats to AI systems. In the wrong hands, these systems can be used to craft sophisticated misinformation, extract personal information, generate cyberattacks or mount even more extreme attacks on critical infrastructure such as the electrical grid and military systems.

ORNL is working to protect this incredible technology and make it reliable, safe, trustworthy and energy efficient.

“We are looking at how to prevent AI-based cyber defenses from being defeated by very advanced malware types,” said ORNL’s Edmon Begoli, director of the Center for AI Security Research, or CAISER. “So, we do a lot of AI for cyber operations research.”

ORNL scientists are using AI to reduce nuclear risk, secure critical assets like energy and other infrastructure, and accelerate innovation in defense manufacturing. And with the creation of CAISER, ORNL is bringing together the lab’s world-class resources to find vulnerabilities, threats and risks to national security using AI.

“Part of our work is to understand vulnerabilities in our systems, and we use very sophisticated methods to discover those vulnerabilities,” Begoli said.

ORNL-led technology is already making a major difference in combat areas in Ukraine and Gaza, where AI is used to assess damage to buildings and other structures. It is also being used to identify how large language models can be exploited to generate malware and other kinds of harmful content.


Continue reading ORNL Review: Turning AI into something we can trust