Rumman Chowdhury has found two major trends as organizations try to tap into AI: ethical debt, and potential left untapped because of an inability to deal with ethical consequences.
Chowdhury’s job is unusual: she works as global lead for responsible artificial intelligence for Accenture, a Fortune Global 500 company. She explores the human component of and human agency in AI, and what actions other organizations can take to properly begin scaling the technology to its full potential.
Chowdhury holds degrees in quantitative social science and has been a practicing data scientist and AI developer since 2013, so a major part of her job is to steer clear of fear-mongering.
“There are things to raise awareness about,” she said in a March 19 media roundtable in Washington, D.C., “but I worry about if there is an overemphasis on quote, ‘here are the terrible things all these bad companies are doing to you,’ we will find people will just give up.”
For example, in terms of data privacy and security, rather than treating the collection and mining of user data as just a cost of being a digital citizen, Chowdhury wants people to feel empowered and to know they have a right to make choices about their data.
And that’s part of why she has come to Washington, D.C., from Silicon Valley: to meet with Capitol Hill staffers, policy organizations and different government organizations to emphasize the democratic need for citizens to be informed about, and make decisions on, how their data is used and how AI affects them.
“Right now, we don’t have that reciprocal pathway where companies . . . give you things or products and whatever, and you fully understand how it is impacting you,” Chowdhury said, “and we are just beginning the research in ethics in AI.”
As AI is built, embedded and institutionalized, users won’t even know whether it’s biased or discriminatory, especially given the pace of hyperpersonalization and customized experiences. People can’t know what they don’t see: they won’t know they’re being discriminated against, because the data and decision-making process aren’t visible to them. This is important to consider as companies scale AI.
And while companies are extremely interested in AI and understand its potential, Chowdhury noticed two major trends when working with clients.
One is the notion of ethical debt: if companies don’t build AI ethically from the ground up, something going wrong is at best a reputational risk, perhaps requiring system overhauls. At worst, companies can face irreparable harm to their reputation or legal liability. Just as cybersecurity needs to be baked in from the start, so do AI ethics.
The comparison matters because responsible AI, like cybersecurity, is a job where success means nothing happens.
“It’s hard to illustrate the value of something when nothing happens is the goal,” Chowdhury said.
The second trend is untapped potential. Chowdhury recognizes clients want to “do” AI, and the AI currently being done is low-hanging fruit — such as automating back-end processes and using natural language processing to understand billing.
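The “natural language processing to understand billing” work she describes is the kind of task a small, auditable model can handle. Below is a minimal sketch, assuming a scikit-learn setup; the billing notes, categories and labels are all hypothetical, invented for illustration.

```python
# Minimal sketch of "low-hanging fruit" NLP: routing free-text billing
# notes into categories. All example data and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled billing notes (in practice, many thousands of examples).
notes = [
    "invoice 4417 paid late, 2% penalty applied",
    "duplicate charge on corporate card, refund requested",
    "monthly subscription renewal processed",
    "customer disputes line item on statement",
]
labels = ["late_payment", "dispute", "renewal", "dispute"]

# TF-IDF features plus logistic regression: a deliberately simple,
# inspectable baseline rather than an opaque deep model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

print(model.predict(["charge appeared twice on the statement"]))
```

A pipeline like this is easy to inspect and explain, which is part of why such projects count as low-hanging fruit.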
“That is not the AI we imagine,” Chowdhury said, referring to futuristic AI like flying cars and scenes from science fiction movies. And company leaders want to pursue more advanced AI, but fear the “what ifs” of something going wrong, like extreme discrimination.
“And because we haven’t solved all of those problems, they are afraid to move forward with a more ambitious idea,” Chowdhury said. “They will often start ambitious and scale themselves downward, be more realistic about it.”
So part of the case for responsible AI rests on a strong business imperative.
“I strongly believe that we will not scale ambitious artificial intelligence until we have figured out how to deal with the ethical consequences,” she added.
Integrating AI and technology jobs into different industries will rely on a holistic combination of technical and strategic skills, according to Chowdhury. Companies face pressure to develop, launch and push new products to market while meeting the highest possible ethical standards, and guaranteeing those standards takes time.
Chowdhury shared the four pillars of the ethical AI launchpad, intended to prevent harmful AI consequences by enabling innovation while identifying and fixing biases:
Technical: Understanding the ethical implications of AI through explainable AI frameworks and collaborative interfaces (one basic explainability check is sketched after this list).
Reputational/Brand: Strategically understanding the reputational risk of the AI a company is considering. Sometimes the problem lies in the design and implementation of a product, not in biased data. Considerate design decisions can make a better product.
Governance: Establishing roots of accountability and traceability of algorithms, and convening high-level groups to set standards and metrics for understanding bias. This needs to be a consistent, company-wide approach to AI decisions that sets standards, benchmarks and a threshold for bias (a minimal bias-metric sketch follows this list).
Organizational: Finding the right kind of talent and data scientists, making quality data accessible for teams to work with, and “unsiloing” the organization both structurally and ideologically so different teams can work together. For example, data scientists should work with legal teams to fully understand AI legal compliance, technological potential and technical implications.
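On the first pillar, one common form of explainability is measuring which inputs actually drive a model’s predictions. Here is a minimal sketch, assuming scikit-learn’s permutation importance on synthetic data; the feature names and the data itself are hypothetical.

```python
# Hedged sketch of the Technical pillar: a simple explainability check
# that ranks which inputs drive a model's predictions.
# The data and feature names below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # columns: income, age, tenure
y = X[:, 0] + 0.1 * rng.normal(size=200) > 0    # outcome driven mostly by "income"

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# "income" should dominate here, flagging it for ethical review if it
# proxies for a protected attribute.
for name, score in zip(["income", "age", "tenure"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```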
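On the Governance pillar, “a threshold for bias” implies a metric every team computes the same way. One simple, common candidate is the demographic parity difference; this is a minimal sketch with invented predictions, groups and threshold, not a prescription for which metric to use.

```python
# Hedged sketch of the Governance pillar: one company-wide bias metric
# checked against an agreed threshold. Metric choice, data and threshold
# are all illustrative.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-outcome rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return abs(rate_a - rate_b)

# Hypothetical model decisions (1 = approved) and group membership.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

BIAS_THRESHOLD = 0.2  # an illustrative benchmark a governance body might set
gap = demographic_parity_difference(y_pred, group)
print(f"parity gap = {gap:.2f}, within threshold: {gap <= BIAS_THRESHOLD}")
```

In practice, the metric, the group definitions and the threshold would all be set by the kind of high-level governance group Chowdhury describes.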