AI at a Crossroads: Regulate or Innovate?

by Andrew Pery, AI Ethics Evangelist
Advocates emphasize the need for oversight to prevent risks, while opponents argue that excessive oversight stifles innovation and economic growth. The debate leaves many wondering: Can AI regulation and innovation coexist?

In 2023, the world witnessed an unprecedented surge in the development and application of generative artificial intelligence (“GenAI”), marking it as the defining technology of the year. ChatGPT reached 100 million monthly active users within two months of its launch, setting a record as the fastest-growing consumer application in history. Advanced algorithms and “foundation models,” such as GPT-4, became integral tools for generating human-like text, images, and even music, pushing the boundaries of what AI could achieve.

As GenAI continues to evolve, it sparks excitement, debate, and concern because of its transformative impact on society. Thus, it should come as no surprise that the momentum for AI regulation is accelerating. I recently authored a paper with Michael Scott Simon, published by the American Bar Association, that explores the healthy tension among the parties shaping AI regulation. Advocates emphasize the need for oversight to prevent risks such as bias, misinformation, and threats to privacy and security, while opponents, particularly in the U.S., argue that excessive oversight stifles innovation and economic growth. The debate leaves many wondering: Can AI regulation and innovation coexist?

AI regulation—recent events

The European Union Artificial Intelligence Act (EU AIA) was officially adopted on May 21, 2024, as a comprehensive legislative framework designed to regulate AI technologies across the EU. The AIA, the first of its kind globally, aims to ensure that AI systems used within the EU are safe, transparent, and respect fundamental rights. The European Commission’s High-Level Expert Group on AI articulated its ambition for AI along three dimensions: AI systems should be lawful, ethical, and robust.

Notably, the final text incorporates an expanded list of prohibited AI systems that pose a potential threat to fundamental rights and democracy:

  • “[B]iometric categorisation systems that . . . deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation,”
  • “[F]acial recognition [systems that use] untargeted scraping of facial images from the internet or CCTV footage,”
  • Emotion recognition systems in the workplace and educational institutions,
  • Social scoring systems based on social behavior or personal characteristics,
  • Systems that manipulate human behavior in malicious ways, and
  • Systems that exploit the vulnerabilities of people, materially distorting their behavior in potentially harmful ways.

(Source: Artificial Intelligence Act, Regulation (EU) 2024/1689, art. 5(1), 2024 O.J.)

On October 30, 2023, President Biden issued EO 14110, entitled Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

The order outlines a government-wide approach to addressing the challenges and opportunities presented by AI. Notably, EO 14110 establishes eight guiding principles and priorities:

  1. Ensuring that AI is safe and secure
  2. Promoting responsible innovation, competition, and collaboration
  3. Supporting American workers
  4. Advancing equity and civil rights
  5. Protecting Americans’ privacy
  6. Protecting civil liberties
  7. Managing risks from the federal government’s use of AI
  8. Strengthening American leadership abroad

However, the Trump administration rescinded the Biden executive order on AI safety, driven primarily by the overriding policy objective of removing any regulatory burden that might hamper innovation or undermine U.S. dominance in AI technologies.

This rescission further complicates an already complex AI regulatory landscape that lacks consistent federal regulation, leaving individual states to fill the gap. It also appears that the Trump administration will work more closely with Big Tech to develop industry-sponsored and endorsed AI governance, which is expected to take the form of voluntary codes of conduct. Furthermore, the White House appointed former tech executive David Sacks to the newly created role of “White House AI & Crypto Czar.”

The United States has taken a more cautious approach to AI regulation compared to the European Union and other global players. This reluctance to implement comprehensive AI laws stems from a combination of economic priorities, innovation incentives, political challenges, and ideological perspectives on government intervention in technology. The U.S. leads the world in AI research and development, with major tech companies such as Google, Microsoft, OpenAI, and Meta investing billions in AI advancements. Opponents of regulation argue that stricter rules could slow innovation, making it harder for U.S. firms to maintain their competitive edge against international rivals, particularly China.

Additionally, the decentralized nature of the U.S. government presents challenges to passing comprehensive AI regulations. Unlike the European Union, which has a centralized approach to policymaking, the U.S. regulatory system involves multiple stakeholders, including Congress, federal agencies, and state governments. This fragmentation makes it difficult to reach consensus on a unified AI regulatory framework. As a result, AI governance in the U.S. is currently managed through sector-specific guidelines and voluntary standards rather than overarching federal legislation.

Impact on international trade

While the U.S. has taken a pro-innovation approach, American businesses that place AI technologies or AI-as-a-service offerings on the EU market must align their business practices with the EU Artificial Intelligence Act, regardless of whether they are located in the EU. This requirement is poised to become a potential irritant for the current U.S. administration, given its strategy to rebalance trade relations. However, the EU has considerable leverage in this regard: it is the largest U.S. trading partner. In 2023, the EU and U.S. traded over €1.5 trillion in goods and services, and together they represent almost 30% of global trade in goods and services and 43% of global GDP.

For businesses, the challenge lies in finding a middle ground: ensuring AI safety without hindering progress. It is in their strategic interest to embrace trustworthy AI best practices by adopting voluntary frameworks, ethical guidelines, and AI standards initiatives such as those developed by NIST and ISO. The future of AI governance will likely involve flexible regulations that promote innovation while mitigating risks.

Moving forward, enhancing global collaboration on AI regulation requires increased diplomatic engagement, regulatory interoperability, and mechanisms for cross-border enforcement. Establishing baseline international standards, akin to those in cybersecurity and data protection, could facilitate cooperation while allowing for regional regulatory nuances. Ultimately, ensuring AI governance that balances innovation with ethical considerations will be essential in shaping the future of AI on a global scale. A robust commitment to trustworthy AI engenders trust, thereby creating sustainable competitive advantages for all parties that embrace it.

The full version of the ABA paper, “For Two Years, We Have Been Telling You That AI Would Soon Be Regulated, and Now, It Will,” can be accessed here.
