In a major step toward transparent and accountable AI, ABBYY, a global leader in intelligent process automation, engaged ForHumanity Europe to institute robust organizational and technological measures toward compliance with the European Union’s Artificial Intelligence Act (EU AI Act).
The project with ForHumanity involves two phases. The first is the development of a comprehensive AI Risk Management Policy in conformance with Article 9 of the EU AI Act, together with the implementation of an AI Risk Assessment register designed to demonstrate auditability and traceability of compliance obligations under the EU AIA. Together, these establish a living framework for legal, ethical, and technical AI governance.
The ABBYY AI Risk Management Policy establishes guidelines and processes for managing risks associated with artificial intelligence (AI) in compliance with applicable legal and AI risk management frameworks. It ensures responsible AI development, deployment, and governance to align with ethical, legal, and regulatory standards while mitigating risks to individuals and society. The policy aims to foster trust, transparency, and accountability in AI systems while promoting innovation and minimizing potential harm.
In particular, the ABBYY AI Risk Management Policy:
- Adheres to generally accepted AI risk management principles of transparency, accountability, fairness, privacy, security, and human oversight, as prescribed by Chapter III of the EU AIA.
- Institutes rigorous assessment of risk factors to identify potential AI-related risks across ethical, technological, security, legal, and operational dimensions.
- Provides for risk evaluation processes and applicable controls covering data governance, the conceptual soundness and fitness for purpose of AI models (assessing their logical underpinnings and alignment with the intended use case), interface and integration, pipeline and deployment environment, human-in-the-loop mechanisms, and AI system outcomes.
- Mandates the creation of an AI Risk Register based on the severity, likelihood, and detectability of AI risks, and provides auditability of appropriate risk mitigation measures, ensuring regulatory compliance, ongoing performance oversight, and timely response to incidents or failures (see the sketch after this list).
- Establishes continuous monitoring of AI system performance, risks, and compliance through automated and manual processes.
- Creates a detailed delegation of authority for designated ABBYY personnel to provide AI governance and oversight of ABBYY’s AI compliance obligations throughout the entire life cycle of the ABBYY AI portfolio and to engender a culture of trustworthy AI.
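To illustrate how a register keyed to severity, likelihood, and detectability might be structured, here is a minimal sketch in Python of an FMEA-style risk entry that derives a risk priority score for audit and mitigation tracking. The field names, scales, and threshold are illustrative assumptions, not ABBYY’s actual schema or scoring model.

```python
# Illustrative sketch only: a minimal FMEA-style AI risk register entry.
# Field names, scales, and the escalation threshold are assumptions for
# demonstration, not ABBYY's actual schema or scoring model.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskEntry:
    risk_id: str
    description: str
    dimension: str            # e.g. "ethical", "security", "legal", "operational"
    severity: int             # 1 (negligible) .. 5 (critical)
    likelihood: int           # 1 (rare) .. 5 (almost certain)
    detectability: int        # 1 (easily detected) .. 5 (hard to detect)
    mitigation: str = ""
    owner: str = ""
    last_reviewed: date = field(default_factory=date.today)

    @property
    def risk_priority(self) -> int:
        """FMEA-style risk priority number: severity x likelihood x detectability."""
        return self.severity * self.likelihood * self.detectability


# Example: register a hypothetical risk and flag it if it exceeds a review threshold.
register = [
    RiskEntry(
        risk_id="R-001",
        description="Training data drift degrades document classification accuracy",
        dimension="operational",
        severity=4,
        likelihood=3,
        detectability=2,
        mitigation="Automated drift monitoring with monthly manual review",
        owner="AI Governance Lead",
    )
]

REVIEW_THRESHOLD = 20  # assumed escalation threshold
for entry in register:
    if entry.risk_priority >= REVIEW_THRESHOLD:
        print(f"{entry.risk_id}: priority {entry.risk_priority} -> escalate for review")
```

Recording an owner and a last-reviewed date on each entry is one straightforward way to support the auditability and ongoing oversight the policy calls for.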
The ABBYY AI Risk Management Policy is designed to future-proof the delivery of ABBYY’s market-leading intelligent process automation portfolio, giving customers a pragmatic foundation they can trust for achieving both legal compliance and sustainable AI success.
To learn more, visit the ABBYY AI Trust Center to register for access to and view the ABBYY AI Risk Management Policy.
You can also register to attend a webinar on June 12th hosted by ForHumanity, Law+ Data, and ABBYY. The webinar focuses on a Model Risk Management certification scheme designed to augment the US Federal Reserve’s SR 11-7 Model Risk Management guidance by integrating robust, globally harmonized governance, oversight, and accountability for AI, Algorithmic, and Autonomous (AAA) Systems, and will share best practices for meeting AI regulatory compliance obligations through prescribed protocols that ensure auditability and traceability.