
The Race to Harness AI:
Systems Design, Human Values, and Societal Norms

by Andrew Pery, AI Ethics Evangelist
To bridge the gap between innovation and compliance, organizations should consider a layered approach to AI regulation, one that weighs the benefits of innovation against potential adverse impacts.

As artificial intelligence (AI) becomes increasingly embedded in our lives, regulators around the world are racing to establish rules of the road, and the rapid advance of generative AI is leaving them with a fragmented legal landscape. Suddenly, AI isn't something buried in the algorithms of social media feeds or tucked away in data centers; it is generating content and art, accelerating the development of novel treatments, and helping to write software.

While its utility can be compelling, the proliferation of AI technology is reshaping socio-economic dynamics, leaving individuals increasingly subject to the power of Big Tech. Data collected about them is monetized, often without their explicit consent, and used to influence their behavior, creating new forms of economic power with demonstrated adverse impacts. The ability of these companies to influence behavior and shape public discourse means they can exert disproportionate control over societal norms and values.

There is a fundamental alignment problem between the design of AI systems and human values, ethics, and societal norms.1

1. Brian Christian, The Alignment Problem: Machine Learning and Human Values, W. W. Norton & Company, October 6, 2020.
