Published by COSMOS: CEP Magazine
By Mohamad M. Nasr-Azadani, Jean-Luc Chatelain, and Randolph A. Kahn, Esq.
ChatGPT, OpenAI's generative artificial intelligence (AI) tool for the masses, was barely on the market before questions arose over whether it could be sued for defamation. According to a Reuters article, a mayor put OpenAI on notice that it was putting allegedly false information into the marketplace. He further indicated that if OpenAI failed to clean up the “false claims that he had served time in prison for bribery,” he would file the first defamation lawsuit against the company.[1]
AI is a multi-trillion-dollar affair that is transforming much of the business world and the way we do almost everything. Laws, however, are reactive, always playing catch-up with advancements in AI. The governance model introduced here helps organizations address existing laws, and potential future laws, pertaining to AI.
Across the globe, many governments and legislative bodies are actively working to regulate AI. For example, in October 2023, the Biden administration issued an executive order on “safe, secure, and trustworthy” AI.[2] Shortly thereafter, EU members passed the EU AI Act.[3] Many other AI laws under consideration across jurisdictions will impose different rules. The U.S. Federal Trade Commission (FTC), for example, has made clear that “the FTC’s law enforcement actions, studies, and guidance emphasize that the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability.”[4]
What is the GovernTrust AI Model?
While there is concern and uncertainty surrounding the use of AI and how the laws governing it will develop, organizations need guidance and governance now. The GovernTrust© AI Model (GTAIM) reflects our current thinking on governing this powerful new technology; it helps organizations implement AI with confidence that they will not run afoul of the law or create liability. As laws advance, however, the GTAIM may need to be enhanced.
Evolving and conflicting law
In the domain of privacy, creating a comprehensive policy by identifying the common attributes across the various relevant laws has been a successful approach. Similarly, as AI evolves, a comparable strategy of establishing rules that address the common legal attributes of the various laws applicable to AI will be helpful. The GTAIM helps establish those rules and build reasonableness into AI development and implementation.
GTAIM incorporates common criteria
Because AI-related laws will continue to be developed and jurisdictions will take different approaches, the GTAIM seeks to incorporate the common elements of existing AI-related laws and anticipate what may be coming in new laws.
A number of organizations have published AI frameworks (there are many others) identifying what they consider the essential elements of building trust in AI systems. The GTAIM seeks to incorporate these types of issues and concerns.
Guiding the implementation of AI
GTAIM is a governance model that can guide the implementation and use of AI technologies. In fact, GTAIM bakes into its model the same types of issues the FTC considers essential to building trust in AI (“transparent, explainable, fair, and empirically sound, while fostering accountability”). It incorporates concepts already advanced as the basis for several governmental directives on AI. The GTAIM consists of six attributes, which are addressed later.
A word of caution: AI involves many terms of art, such as “black box,” “hallucinations,” and “bias,” which have very specific meanings in the AI community. The GTAIM likewise uses a broad range of terminology for its attributes and the activities, issues, and concerns they cover; it has been updated to include additional terms and attributes important to building trust in AI systems, and there may be different ways to describe the attributes.
For starters, the word “trust” is used as a descriptor of the GTAIM because lawyers, businesspeople, customers, regulators, and privacy and technology professionals all need confidence in the AI tool and what it does. Trust is essential in every phase of the AI process because of the common (albeit fallacious) perception that AI is a “black box,” meaning that no one knows what is happening inside the AI algorithm. That makes trust in the system, the technology, the implementation, the management, the informational inputs, and the people who manage the system all the more essential.
Before we delve into each attribute, it is essential to point out that there is no one-size-fits-all approach to AI; there are many ways to attain the desired end state of addressing each attribute. In that way, the GTAIM is technologically neutral: it is generally not prudent to mandate the use of a particular technology to achieve compliance. Technologies come and go, and new technologies constantly evolve.
The GTAIM provides a framework that can be applied to AI to help improve governance. It also helps demonstrate that companies using AI seek to be good corporate citizens by acting reasonably and trying to do the right thing when implementing AI. Ultimately, there is a clear trade-off between the resources an enterprise commits (e.g., capital investment, human expertise, and sophisticated audit systems) and the level of trust it can build into its AI-backed products.
Assess and mitigate risk
While most business activities have associated risks, AI may carry more inherent risk because an algorithm makes business decisions, predicts the future, or generates new content merely by “crunching” huge amounts of data with limited human intervention. Risk management should therefore be part of the AI building and implementation process, and assessment and mitigation should translate the organization's existing risk management profile to the new AI world. In other words, just because AI is being used doesn't mean the company should become more risk averse. In fact, scientific fields such as operations research (OR) have developed to help businesses measure risk and plan ahead systematically and reliably.
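To make this concrete, here is a minimal sketch of the kind of likelihood-times-impact scoring an AI risk register might use. The risk categories, scales, and threshold are illustrative assumptions of ours, not part of the GTAIM or any particular framework:

```python
from dataclasses import dataclass

# Hypothetical example: score AI use-case risks on a classic
# likelihood x impact scale (1-5 each), then flag anything that
# exceeds a mitigation threshold. Categories and cutoff are
# illustrative assumptions only.

@dataclass
class AIRisk:
    name: str        # e.g., "biased training data"
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

MITIGATION_THRESHOLD = 12  # illustrative cutoff

register = [
    AIRisk("biased training data", likelihood=4, impact=4),
    AIRisk("hallucinated output reaches customers", 3, 5),
    AIRisk("model drift after deployment", 3, 3),
]

# Review highest-scoring risks first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    action = "mitigate now" if risk.score >= MITIGATION_THRESHOLD else "monitor"
    print(f"{risk.name}: score {risk.score} -> {action}")
```

In practice, an OR-style approach would replace these ordinal scores with measured probabilities and quantified losses, but the structure (assess, score, prioritize, mitigate) is the same.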