The digital systems that keep your lights on, your water running and your flights on schedule are getting new guidance.
On Thursday, the Department of Homeland Security introduced the Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure, a set of voluntary recommendations aimed at ensuring AI technologies are deployed safely and securely across the systems that power daily life.
Developed with input from cloud providers, AI developers, critical infrastructure operators, civil society and public sector entities, the framework builds on broader policies DHS established in 2023 under White House directives, creating a shared roadmap for addressing AI's risks as well as its potential.
AI is already transforming critical infrastructure, from detecting earthquakes and optimizing energy grids to improving mail distribution. However, as these technologies integrate more deeply into the systems that underpin power, water, transportation and communications, they also introduce new vulnerabilities.
Although the new framework is voluntary, GovCon leaders are paying attention, since maintaining trusted partnerships while helping agencies deploy AI solutions requires keeping pace with evolving safety standards in critical infrastructure sectors and beyond.
Arvind Krishna, chairman and CEO of IBM, said the company is proud to support the development of the framework, which he called a “powerful tool” to help guide responsible AI deployment.
“We look forward to continuing to work with the Department to promote shared and individual responsibilities in the advancement of trusted AI systems,” Krishna said.
Marc Benioff, chair, CEO and co-founder of Salesforce, said the framework “promotes collaboration” among all key stakeholders while prioritizing trust, transparency and accountability.
“Salesforce is committed to humans and AI working together to advance critical infrastructure industries in the U.S.,” Benioff said. “We support this framework as a vital step toward shaping the future of AI in a safe and sustainable manner.”
DHS Secretary Alejandro N. Mayorkas emphasized the importance of being proactive.
“AI offers a once-in-a-generation opportunity to improve the strength and resilience of U.S. critical infrastructure, and we must seize it while minimizing its potential harms,” he said. “The Framework, if widely adopted, will go a long way to better ensure the safety and security of critical services… I urge every executive, developer, and elected official to adopt and use this Framework to help build a safer future for all.”
The guidance categorizes AI safety and security vulnerabilities in critical infrastructure into three types: attacks carried out using AI, attacks targeting AI systems, and failures in AI design and implementation.
It also addresses sector-specific vulnerabilities and offers steps for ensuring AI enhances infrastructure resilience while mitigating misuse risks.
Summary of DHS Recommendations to Stakeholders
- Cloud and Compute Providers: Secure the AI development environment by vetting suppliers, managing access, protecting data centers and monitoring anomalous activities.
- AI Developers: Emphasize a secure-by-design approach, assess AI risks, ensure alignment with human-centric values, address biases and vulnerabilities, and support independent model evaluations.
- Critical Infrastructure Operators: Maintain strong cybersecurity practices, safeguard customer data, ensure transparency in AI use, and monitor system performance, sharing insights with developers.
- Civil Society: Engage in standards development, research AI evaluations for critical infrastructure, and advocate for values and safeguards to shape AI’s societal impact.
- Public Sector Entities: Promote responsible AI adoption through statutory and regulatory action, research funding and collaboration across government levels and with international partners.