New Department of Homeland Security policies aimed at managing the risks of artificial intelligence while harnessing its benefits are part of a growing shift toward responsible AI use as the technology proliferates, according to DHS contractors.
“The policy direction announced by DHS will likely have a significant impact on how companies and teams address the needs of DHS clients,” said Rocky Thurston, CEO of DMI. “These policies emphasize responsible and ethical AI use, non-discrimination, transparency and oversight. Companies and teams working with DHS will need to align their products and services with these principles to ensure compliance with the spirit and letter of the announced policies.”
DHS uses AI for purposes ranging from combating fentanyl trafficking and child sexual exploitation to securing supply chains and critical infrastructure. But AI use can have unintended consequences, and the new policies aim to manage those risks by mandating thorough testing and oversight, especially for facial recognition. One directive also permits U.S. citizens to opt out of “specific, non-law enforcement uses.” Additionally, the policies introduce a continuous review process for AI applications.
The DHS policy announcements join a succession of government-led initiatives aimed at harnessing AI’s power while building public trust, minimizing bias and misuse, and securing data.
Earlier in September, the Biden-Harris administration announced more companies had voluntarily committed to what the White House describes as “safe, secure and trustworthy AI.” And as DHS announced its new policies, it also appointed Eric Hysen, who was already co-chairing the AI Task Force, as its first chief AI officer.
Policy Impacts
Thurston said companies and teams across the industry may need to change or update their current services or policies in response to the new DHS directives. Those could include testing to identify and mitigate unintended bias within AI systems, providing opt-out options where required, ensuring DHS-related products adhere to constitutional and legal requirements, and enhancing transparency and oversight.
The policies not only signal a shift toward more responsible AI use but also set a precedent for ethical AI use across government and industry, Thurston said.
“Companies and teams operating in the AI space should view these policies as an opportunity to demonstrate their commitment to fairness, equity and responsible AI practices,” he said. “While compliance may require adjustments, it also aligns with the growing public demand for ethical AI solutions.”
At Sev1Tech, Chief Technology Officer Hector Collazo said the new policies are vital to the ongoing work around addressing bias, data privacy, security and accountability.
“The policy direction will play a pivotal role in shaping how our team and company address the needs of DHS clients,” he said. “Our commitment to providing DHS clients with cutting-edge technologies and comprehensive support today requires continuous evaluation and adjustment in response to policy shifts.”
In response to the announced policies, Sev1Tech is actively evaluating its current services and products to ensure alignment with the new directives, he said.
“Anticipating policy changes is integral to our commitment to staying ahead of evolving regulations and standards,” Collazo said. “Sev1Tech is making the necessary adjustments to enhance our AI offerings’ compliance, security and ethical considerations. Our goal is to meet the requirements set forth in the announced policies and proactively contribute to their successful implementation, ensuring that our clients continue to benefit from the highest levels of quality and integrity in our services and products.”
Mike Horton, vice president of national security at ASRC Federal, said the company doesn’t anticipate significant changes to its current processes or support models; rather, the policy changes will add transparency to the products it supports and require documentation to demonstrate compliance.
ASRC Federal is also developing its own corporate AI ethics policy and training for anyone interacting with its AI tools or systems.
“While the policy additions may seem simply supportive in nature, they illustrate the Department’s commitment to the protection of privacy,” Horton said. “We have a responsibility to work together with our government customers to develop and handle federal AI solutions with care and intention. We commend DHS for tackling this complex issue proactively, as establishing clear guidelines will strengthen implementation and mitigate any risks.”