Regulatory Efforts in the U.S. Present an Encouraging Perspective

Andrew Pery

March 19, 2020

Government efforts to regulate AI in the United States have made significantly less progress than in the E.U. Several federal and local regulatory initiatives have been slow to move forward or were paused altogether. Many tech giants have taken it upon themselves to set forth AI ethics guidelines and task forces, and in some cases to establish self-regulation efforts. A number of researchers, academics, and organizations have likewise begun exploring and establishing guidelines for the ethical use of AI technologies. This article explores recent initiatives and efforts to establish an ethical AI framework across the following three domains:

  • Government regulatory efforts in the U.S.: At the federal, state, and local levels, multiple government entities have proposed legal regulations surrounding the development, implementation, and utilization of AI technologies. Unfortunately, many of these efforts have been slow to progress. This article explores a few recent efforts, including the Algorithmic Accountability Act of 2019, H.R. 4625, the "FUTURE of Artificial Intelligence Act of 2017," and the California Consumer Privacy Act (CCPA).
  • AI ethics initiatives from major tech companies: Global tech companies have taken it upon themselves to self-regulate or put forth doctrines proposing how they plan to adhere to ethical AI principles in the development of their own technologies. Facebook, as one example, banned discriminatory advertising on their platform. Additionally, Microsoft notably issued a public statement calling for government regulation of AI and setting forth their own responsibilities.
  • AI ethics initiatives from technology organizations and associations: Researchers, academics, associations, and non-profit organizations have also taken up efforts to contribute to the ongoing AI ethics dialogue and movement. The AI Now Institute; Center for Democracy and Technology (CDT); Partnership on AI; and Fairness, Accountability, and Transparency in Machine Learning are all key organizations making strides in creating guidelines, fostering collaborations, and facilitating initiatives that move the needle for AI ethics.

AI technologies will play an increasingly integral role in our future. As the technology evolves, advances, and proliferates, it becomes increasingly important that it is fair, transparent, unbiased, and utilized in a manner that contributes to the common good.

The full article, “Regulatory Efforts in the U.S. Present an Encouraging Perspective,” can be read on the Association for Intelligent Information Management (AIIM) website. The article is part three of a three-part series, “Ethical Use of Data for Training Machine Learning Technology,” by Andrew Pery, digital transformation expert and consultant for ABBYY.

Andrew Pery

Digital transformation expert and AI Ethics Evangelist for ABBYY

Andrew Pery is an AI Ethics Evangelist at intelligent automation company ABBYY. His expertise spans artificial intelligence (AI) technologies, application software, data privacy, and AI ethics. He has written and presented several papers on the ethical use of AI and is currently co-authoring a book for the American Bar Association. He holds a Master of Laws degree with Distinction from Northwestern University Pritzker School of Law and is a Certified Information Privacy Professional (CIPP/C, CIPP/E) and a Certified Information Professional (CIP/AIIM).

Connect with Andrew on LinkedIn.
