WashingtonExec
Information Technology

Putting AI Governance to Work

By Josh Elliot | April 29, 2020
Josh Elliot, Modzy

This column is written by Josh Elliot, head of operations at Modzy.

While governments and other organizations continue to debate and design macro-level artificial intelligence policy, regulations and ethical frameworks, it is on us to responsibly deploy AI today. If your organization has an established governance function, you’re likely already exploiting the opportunities associated with AI. If not, you might be missing out and unnecessarily putting your organization at risk. 

For some, “governance” is viewed as bureaucracy or an obstacle. To the employee jazzed about tinkering with new technologies, it may be discouraging. However, seasoned AI practitioners know that proactive governance not only protects them from the consequences of poor decisions but, when done right, spurs innovation, optimizes resources and helps realize project or organizational benefits.

Before exploring how to make that happen, let’s get on the same page.

What is AI governance? 

Let’s agree to use an industry-accepted definition from ISACA as the basis for thinking about AI governance. Good governance results in a culture of strategic planning, risk management, resource management and performance management that leads to sustained value creation. That culture is substantiated by a systematic approach that ensures stakeholder needs are considered, decision authorities establish priorities and direction, and performance and compliance are monitored.

Like other corporate processes, AI governance takes a long view. Organizations deploying AI need to provide guidelines for efforts that promote explainability, transparency and accountability — and ensure consistency into the future. When employees have clarity around scope, roles and decision authorities, and the enablers that make it effective, organizations are positioned to see the best results and respond to change.

As the scale of AI and its associated opportunities and risks grows, AI governance becomes everyone’s responsibility. Now, it’s incumbent upon organizations to provide the enablers that make it possible, even easy, for everyone to meet that expectation.

The answer to putting AI governance to work for you is simple, even elegant: embed it. 

Embedding AI governance

The concept of embedding governance into the AI ecosystem is gaining ground, and with the right processes and tools in place, organizations will be positioned to adapt easily to evolving legal frameworks and regulations. Here are five suggestions to begin embedding AI governance in your organization:

1. Define and communicate corporate AI guidelines that espouse the values and principles of your organization, as well as the decision rights and processes for AI deployment based on application context and scale of stakeholder impact. These guidelines should be developed and reviewed by an interdisciplinary group, and ultimately approved and regularly updated by the organization’s corporate governance body.

2. Create a development environment of pre-approved machine learning training data sets and tools that data scientists and technologists can access and use to hone their tradecraft and experiment with new AI techniques. Also, establish processes for proposing, evaluating and adding new data sets that are trusted, fair and respect human rights.
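As a sketch of the kind of process suggested here, a lightweight registry can track which training data sets have been proposed, reviewed and pre-approved. The class and method names below are illustrative assumptions, not part of any specific product:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

@dataclass
class DatasetRecord:
    """One entry in a registry of training data sets (illustrative)."""
    name: str
    owner: str
    proposed_on: date
    review_notes: List[str] = field(default_factory=list)

class DatasetRegistry:
    """Tracks proposed vs. pre-approved data sets for model development."""

    def __init__(self) -> None:
        self._pending: Dict[str, DatasetRecord] = {}
        self._approved: Dict[str, DatasetRecord] = {}

    def propose(self, record: DatasetRecord) -> None:
        # New data sets start as pending until reviewed for trust,
        # fairness and human-rights considerations.
        self._pending[record.name] = record

    def approve(self, name: str, note: str) -> None:
        # Promote a data set after review, keeping an audit trail.
        record = self._pending.pop(name)
        record.review_notes.append(note)
        self._approved[name] = record

    def is_approved(self, name: str) -> bool:
        return name in self._approved
```

Data scientists would then check `is_approved` before pulling a data set into an experiment, keeping the pre-approved environment self-policing.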

3. Set up automated workflows or easy-to-follow templates for your employees to propose new AI applications and create AI model design checklists that encourage good AI systems engineering principles, critical thinking around stakeholder implications and unintended consequences, and other topics like data use and intellectual property rights. This level of guidance can open the door to new possibilities and better-quality solutions as well as reduce the risk associated with shadow AI. 

4. Plan and design for modular AI model deployment and a robust API/SDK at the outset. When teams employ this design technique, it not only allows them to focus on what they like doing best, building really cool models, but also delivers other benefits: avoiding software vendor lock-in, supporting due diligence reviews of models before deployment, and making it easy to swap out or upgrade AI models in the future without extensive system redesigns. Look for AI technologies that make deploying containerized AI models intuitive and that include extensibility features enabling interoperability and reuse across the organization.
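One way to realize this modular design is to have every deployed model implement a single prediction contract, so downstream code never binds to a concrete implementation. The interface and class names below are hypothetical, not any vendor's actual SDK:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

class Model(ABC):
    """Minimal contract every deployed model container exposes."""

    @abstractmethod
    def predict(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        ...

class KeywordSentimentV1(Model):
    """Toy stand-in for a real model; swap it without touching callers."""

    def predict(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        text = inputs["text"].lower()
        return {"sentiment": 1.0 if "great" in text else 0.0}

def run_inference(model: Model, payload: Dict[str, Any]) -> Dict[str, Any]:
    # Application code depends only on the Model interface, so
    # upgrading to a hypothetical KeywordSentimentV2 requires no redesign.
    return model.predict(payload)
```

Because callers see only the `Model` interface, a due diligence review can gate each concrete implementation independently before it is deployed.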

5. Incorporate technologies to manage your AI models and monitor their performance in real time, looking for indicators of model drift, bias and adversarial attacks, while also providing role-based access and the requisite levels of logging and explainability to support audits and address questions from regulators. Model management technologies also give greater awareness of AI use across an organization, quickly revealing duplication of effort and saving time and resources.

A robust AI governance program should empower employees to focus on bigger-picture objectives, long-term goals and priorities, while also ensuring that your AI is explainable, transparent and accountable. Embedding governance enablers takes away much of the dirty work, making it easier to leverage AI across the enterprise, so employees are more likely to follow protocol and spend more time on analysis and discovery of insights.

Remember, when AI governance best practices are followed, confidence in, and use of, AI increases.

As Modzy’s head of operations, Josh Elliot brings more than 20 years of experience developing and executing business strategy, overseeing technical delivery and creating new solutions for civil, defense and commercial markets. He is passionate about driving new and evolving technologies in data science and AI with a unique partnership approach extended to both industry colleagues and customers. He is also certified in governance of enterprise information technology through ISACA. 
