
Is Generative AI Trustworthy?

by Andrew Pery, AI Ethics Evangelist & Maxime Vermeir, Senior Director of AI Strategy


The recent buzz around ChatGPT raises legitimate questions about its trustworthiness. OpenAI CEO Sam Altman has himself acknowledged that ChatGPT has “shortcomings around bias”.

A recent Forbes article, “Uncovering the Different Types of ChatGPT Bias”, went even further: “Problematic forms of bias make ChatGPT’s output untrustworthy.” The article cites five categories of ChatGPT bias, which are broadly the risks to be aware of with generative artificial intelligence (AI) technologies:

  1. Sample bias
  2. Programmatic morality bias
  3. Ignorance bias
  4. Overton window bias
  5. Deference bias

The first is sample bias. Keep in mind that 60 percent of ChatGPT’s training data is scraped from the internet, with a knowledge cutoff of 2021. The model can generate convincing but entirely inaccurate results, and its filters are not yet effective at recognizing inappropriate content.

Second is what Forbes refers to as “programmatic morality bias,” which reflects software developers’ subjective opinions about what constitutes a socially acceptable response and injects their norms into the model.

Third is “ignorance bias”: the model is designed to generate natural-language responses that read like human conversation, but without any real ability to understand the meaning behind the content.

Fourth is the “Overton window bias,” whereby ChatGPT generates responses deemed socially acceptable based on its training data, which can in fact amplify bias in the absence of rigorous data governance strategies.

Fifth is “deference bias”: the tendency to trust the technology, given the “workload of most knowledge workers and the immense promise of this shiny new toy”.

The urgent need for AI regulation

To address these biases, it’s prudent to adhere to ethical principles and values in the development and regulation of AI technologies. There are four key reasons why the need for AI regulation must be addressed immediately:

  1. AI is becoming pervasive, impacting virtually every facet of our lives. AI is forecast to contribute $15.7 trillion to global gross domestic product (GDP) by the end of the decade, and a Goldman Sachs study projects that 300 million jobs may be subsumed by AI technologies such as generative AI. AI is simply too big to be governed by self-regulation.
  2. Innovators of disruptive technologies tend to release products with a “ship first and fix later” mentality in order to gain first-mover advantage. For example, while OpenAI is somewhat transparent about the potential risks of ChatGPT, it has released the tool for broad commercial use, its harmful impacts notwithstanding. Placing the burden on users and consumers who may be adversely affected by AI results is unfair.
  3. In many instances, AI harms may be uncontestable because consumers lack visibility into how AI systems work. AI regulation is needed to impose on developers much higher standards of accountability, transparency, and disclosure, ensuring that AI systems are safe and protect fundamental privacy and economic rights.
  4. AI tends to amplify bias. As David Weinberger of Harvard put it, “bias is machine learning’s original sin.” AI has been shown to amplify bias across facial recognition, employment, credit, and criminal justice, profoundly impacting and marginalizing disadvantaged groups.

Achieving the promise of AI technology requires a harmonized approach that encompasses human values of dignity, equality, and fairness.

It’s not just about technology or conformance with specific laws. As Dr. Stephen Cave, executive director of the Leverhulme Centre for the Future of Intelligence at Cambridge, put it: “AI is only as good as the data behind it, and as such, this data must be fair and representative of all people and cultures in the world.”

It is for these reasons that there is increased urgency in developing AI regulations. Chief among them is the European Union Artificial Intelligence Act, the compromise text of which was approved by the European Parliament on June 14, 2023. It is likely to become the first comprehensive AI regulation, imposing prescriptive obligations on AI providers to safeguard human rights and the safety of AI systems while promoting innovation. In the US, Senate Majority Leader Chuck Schumer recently urged the implementation of AI regulation.

It can’t be ignored, however, that artificial intelligence systems are man-made, including the data on which they are trained. The bias these systems demonstrate mirrors the bias that exists in humanity. To improve this, the responsibility lies not only with developers but also with the end users of these technologies; everyone must be held to a higher standard in order to correct the problem we have created.


What can be done to optimize ChatGPT output accuracy?

Regulations that mitigate the most obvious potential harms of generative AI are just one dimension of harnessing its potential. There are best practices and technological approaches that can improve the accuracy and reliability of generative AI-based applications.

One such approach is to leverage already proven narrow AI applications such as intelligent document processing (IDP). Narrow AI applications are proven to perform certain tasks exceptionally well, even better than humans. They use advanced machine learning techniques such as convolutional neural networks and supervised learning to recognize and extract text and data from images and documents with a high degree of accuracy.

It is important to contextualize applications of generative AI to specific use cases. For example, using intelligent document processing to classify and extract content from documents in complex processes such as loan applications, insurance claims, and patient onboarding improves the generative AI’s knowledge base, thereby improving the accuracy and reliability of the generated content. Intelligent document processing can be a valuable tool to improve the quality and accuracy of the training data and knowledge base that ChatGPT relies on.
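To make the pattern concrete, here is a minimal Python sketch of grounding a generative model’s answer in fields extracted by an IDP step. It is illustrative only: extract_fields_with_idp is a hypothetical placeholder standing in for an IDP extraction service (it does not represent ABBYY’s actual API), and the resulting prompt would be sent to whichever generative model you use.

```python
import json

def extract_fields_with_idp(document_path: str) -> dict:
    """Hypothetical IDP step: classify a document and extract key fields.

    In practice this would call an IDP service; here it returns a
    hard-coded example purely for illustration.
    """
    return {
        "document_type": "loan_application",
        "applicant_name": "Jane Doe",
        "requested_amount": "250000",
        "currency": "USD",
        "employment_status": "full-time",
    }

def build_grounded_prompt(fields: dict, question: str) -> str:
    """Constrain the generative model to answer only from extracted fields."""
    return (
        "Answer the question using ONLY the structured data below. "
        "If the answer is not present, say you do not know.\n\n"
        f"Extracted document data:\n{json.dumps(fields, indent=2)}\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    fields = extract_fields_with_idp("loan_application.pdf")
    prompt = build_grounded_prompt(
        fields, "What loan amount is the applicant requesting?"
    )
    print(prompt)  # send this prompt to the generative model of your choice
```

Constraining the prompt to verified fields in this way is one simple form of the data governance discussed above: the generated answer can be traced back to data that a proven extraction step has already validated, rather than to the model’s training corpus alone.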

We at ABBYY can add value to foundation models, particularly in the area of training data accuracy. ABBYY’s globally deployed IDP portfolio leverages advanced AI technologies such as convolutional neural networks and natural language processing to increase document classification and recognition accuracy, which can mitigate the risk of inaccurate outputs generated by foundational AI.

Ultimately, the utility of foundational AI systems will depend on contextualizing use cases by applying rigorous data governance strategies to mitigate bias, inaccurate results, copyright and privacy infringement, and harmful content. This may become even more important as legal issues relating to copyright and privacy concerns are raised in training generative AI, as evidenced by recent class action suits initiated against Google and OpenAI.

Continue reading on this particular topic in Techopedia: RDS and Trust Aware Process Mining: Keys to Trustworthy AI?

Would you like to stay up to date on the latest thought leadership from ABBYY exploring a range of topics including the latest AI regulations and developments in applied AI? We invite you to subscribe to The Intelligent Enterprise today by filling out the form on the right side of the page on desktop or below this article on mobile.


Andrew Pery

Digital transformation expert and AI Ethics Evangelist for ABBYY

Andrew Pery is an AI Ethics Evangelist at intelligent automation company ABBYY. His expertise is in artificial intelligence (AI) technologies, application software, data privacy, and AI ethics. He has written and presented several papers on the ethical use of AI and is currently co-authoring a book for the American Bar Association. He holds a Master of Laws degree with Distinction from Northwestern University Pritzker School of Law and is a Certified Information Privacy Professional (CIPP/C, CIPP/E) and a Certified Information Professional (CIP/AIIM).

Connect with Andrew on LinkedIn.


Maxime Vermeir

Senior Director of AI Strategy

With a decade of experience in product and technology, Maxime Vermeir is an entrepreneurial professional with a passion for creating exceptional customer experiences. As a leader, he has managed global teams of innovation consultants and led transformation initiatives for large enterprises. Generating insight into new technologies and how they can drive greater customer value is a core area of Maxime’s subject matter expertise. He is a trusted advisor and thought leader in his field, guiding market awareness for ABBYY’s technologies.

Connect with Max on LinkedIn.
