As artificial intelligence technologies improve, they increase the efficiency and capabilities of research across the scientific spectrum. Because the field is advancing so rapidly, AI tools must be developed sustainably, a guiding principle for the Department of Energy’s Oak Ridge National Laboratory throughout its 40 years of AI research. Now, its extensive array of resources is supporting the nation as it harnesses the power of these transformative technologies.
In October, President Biden signed an executive order outlining how the United States will promote safe, secure and trustworthy AI. The order establishes requirements for AI across industry, academia, national laboratories and other federal institutions. It supports the creation of standards, tools and tests to regulate the field, alongside cybersecurity programs that can find and fix vulnerabilities in critical software.
Other goals of the executive order include:
- Developing tools to understand and mitigate the risks of AI
- Establishing a pilot program to enhance training programs for scientists
- Reducing risks at the intersection of AI and chemical, biological, radiological and nuclear, or CBRN, threats
- Developing guidelines, standards and best practices for AI safety and security
- Expanding new capabilities in AI to accelerate progress and addressing the pressing need for scientific grounding in areas such as bias, transparency, security and validation
The executive order aligns well with ORNL’s AI Initiative, which supports the field’s development by connecting subject matter experts with the laboratory’s resources. “The overarching goal is to develop secure, trustworthy and energy-efficient AI for scientific discovery and experimental facilities and national security applications,” said Prasanna Balaprakash, director of AI programs at ORNL. “The initiative empowers systems that align with both the scientific objectives and goals, creating technologies that support ethical and societal goals.”
The Oak Ridge Leadership Computing Facility, or OLCF, is an important resource for the AI community because it enables researchers to tackle a large range of the most complex scientific questions and was built in part to facilitate AI applications. “The OLCF has Frontier, which is the fastest supercomputer in the world and the first to break the exascale barrier,” said Balaprakash. “Its expansive and energy-efficient power gives us the capability to train large AI models in a responsible way.”
Further, ORNL established the Center for AI Security Research, or CAISER, to address and respond to threats against AI in government and industry. The center supports basic and applied scientific research about the vulnerabilities, risks and national security threats related to AI.
“Some call national laboratories the brains of the federal government,” said ORNL’s Edmon Begoli, founding director of CAISER. “We take that responsibility seriously. We observe potential vulnerabilities in AI systems, and we work to understand those experimentally and theoretically.”
“This executive order highlights areas in which ORNL has held a strong leadership position for quite a few years,” he added. “Earlier this year, CAISER was established as one of the first organizations to research these topics in a scientific setting. We create capabilities to test and evaluate the robustness and vulnerabilities of AI tools and products.”
CAISER also conducts outreach to inform the public, policymakers and the national security community about the true promise, and the potential pitfalls, of AI. Because many people perceive AI as inherently harmful, CAISER works both to protect the public and to educate people about responsible policies.
Overall, ORNL’s community and infrastructure will support the goals and guidance set forth in the recent executive order, ensuring that this promising technology is developed to be safe, secure and trustworthy.
“AI is completely changing the way that we do science,” Balaprakash said. “It's transformative, but it is important to evaluate and develop these models in a much more systematic, rigorous and responsible way that maximizes the potential while minimizing the risks.”
UT-Battelle manages ORNL for the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. The Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science. — Reece Brown