The Race to Harness AI:
Systems Design, Human Values, and Societal Norms
by Andrew Pery, AI Ethics Evangelist
As artificial intelligence (AI) becomes increasingly embedded in our lives, regulators around the world are racing to establish rules of the road. AI regulation faces fragmented legal frontiers as generative AI, the newest digital phenomenon, advances rapidly. Suddenly, AI is no longer something buried in the algorithms of social media feeds or tucked away in data centers; it is generating content and art, accelerating the development of novel treatments, and helping write software.
While its utility can be compelling, the proliferation of AI technology is reshaping socio-economic dynamics, leaving individuals increasingly subject to the power of Big Tech. Data collected about them is monetized, often without their explicit consent, and used to influence their behavior, creating new forms of economic power with demonstrable adverse impacts. The ability of these companies to influence behavior and shape public discourse means they can exert disproportionate control over societal norms and values.
There is a fundamental alignment problem between the design of AI systems and human values, ethics, and societal norms:
- First, AI shapes our interactions and mediates our experiences in ubiquitous and invisible ways, in effect putting the world on autopilot and defining the course of our lives.1
- Second, there is an overreliance on AI outcomes, which are often trusted without the human agency needed to override their adverse impacts.
- Third, AI tends to perpetuate the biases inherent in historical training data.
- Fourth, AI models are designed to optimize for efficiency rather than to understand the broader context in which they operate. This demands a more holistic approach to AI governance, one that considers not only the technical aspects of AI design but also its social, ethical, and economic dimensions.
1 Brian Christian, The Alignment Problem: Machine Learning and Human Values, W. W. Norton & Company, Oct 6, 2020
Two philosophies of AI regulation: Innovation vs. accountability
Broadly, two regulatory philosophies are emerging: innovation-focused, industry-led frameworks that prioritize competitiveness, and prescriptive, risk-based regulations that emphasize safety, ethics, and human rights. These approaches are not merely technocratic disagreements but expressions of deeper philosophical worldviews.
The innovation-centric model aligns with a libertarian or utilitarian philosophy, where technological progress is seen as a primary engine of social good. Proponents argue that minimizing regulatory friction unleashes the full potential of AI to address pressing global challenges, from climate modeling to medical diagnostics. Here, the moral imperative lies in maximizing utility, fostering economic growth, and maintaining international competitiveness. Regulation, if overly restrictive, is viewed as an impediment to innovation, which is itself regarded as a valuable asset.
In contrast, risk-based, prescriptive regulation echoes a deontological or rights-based framework, emphasizing duties, constraints, and the intrinsic worth of human dignity. From this perspective, AI is not merely a tool for optimization but a potentially disruptive force that must be ethically tamed. Here, the emphasis is on safeguarding human rights, preserving democratic values, and preventing harm, even if it means slowing technological advancement. Europe's AI Act, with its focus on risk categorization and prohibited uses, embodies this approach.
Underlying tensions in AI governance: Autonomy, trust, and power
These competing approaches reflect more profound philosophical questions.
First, how do we define human autonomy in an AI-mediated world? AI systems are increasingly making decisions that shape people's lives, from credit scoring to hiring to legal sentencing recommendations. The libertarian model risks eroding human autonomy by embedding opaque, unaccountable decision-making into daily life. Conversely, heavy-handed regulation can also limit autonomy by restricting access to empowering technologies. Striking a balance requires a nuanced understanding of autonomy, not merely as freedom from interference, but as the ability to shape one's life meaningfully in an increasingly automated world.
Second, how do we cultivate trust in technological systems? Trust is both an ethical and practical necessity for the adoption of AI. From a philosophical standpoint, trust is not blind acceptance; rather, it is founded on transparency, accountability, and shared values. A regulatory environment that neglects these dimensions may breed suspicion, while one that overregulates may stifle innovation and erode the social contract by implying that AI is inherently dangerous.
Third, who holds power, and how is it distributed? AI is a vector of power over information, labor, and even social norms. Industry-led frameworks often consolidate this power in the hands of a few tech companies, raising concerns about digital oligarchies. Conversely, state-centric regulatory models raise fears of overreach, surveillance, or stifling bureaucracy. The philosophical challenge lies in ensuring that AI governance reflects democratic principles of accountability, pluralism, and equity.
Bridging the divide: A layered approach for trustworthy AI
To bridge the gap between innovation and compliance, organizations should consider adopting a layered approach to AI governance, one that balances innovation against potential adverse impacts. This approach consists of the following elements:
- Institute a voluntary code of trustworthy AI as a base layer, providing a foundational, non-binding framework for AI development and deployment. It rests on universal principles, such as those articulated by the OECD, which emphasize fairness, transparency, human-centered design, and accountability. The OECD AI Principles strike a balance between innovation, economic progress, and ethical responsibility, making them a widely accepted foundation for industry-friendly yet socially responsible AI governance. Operationalizing them entails going beyond policy statements by establishing formal AI governance structures, implementing practical technical safeguards to mitigate downstream risks, and fostering cultural awareness throughout every stage of the AI lifecycle, thereby ensuring responsible innovation that earns public trust. A handy resource for operationalizing trustworthy AI principles is NIST AI 600-1, part of the NIST Trustworthy and Responsible AI series, a comprehensive policy document addressing the entire life cycle of generative AI systems along eleven (11) policy imperatives that span safeguarding privacy, protecting intellectual property, maintaining robust cybersecurity against adversarial attacks across the AI pipeline, mitigating bias, countering misinformation and disinformation, and guarding against nefarious AI uses.
- Integrate risk management frameworks into the AI development lifecycle, such as those from NIST or ISO, which operationalize ethical principles into practical safeguards. NIST AI RMF 1.0, released in January 2023, is the first comprehensive risk management framework explicitly designed to reduce risks associated with AI systems. The framework is voluntary and provides a structured approach to AI risk management focused on fostering trustworthiness. It is built around an end-to-end risk life cycle process comprising four core functions: govern, map, measure, and manage (a minimal sketch of how these functions might anchor an internal risk register appears after this list). The NIST AI RMF is accompanied by an online Playbook designed to help organizations operationalize AI risk management best practices.
- Undertake independent audits of AI systems, such as those developed by ForHumanity, which empower organizations to demonstrate the auditability and traceability of AI compliance obligations aligned with regulatory mandates across multiple jurisdictions. A publicly auditable process enhances stakeholder confidence and supports regulatory alignment, establishing an "infrastructure of trust" for AI and autonomous systems that affect humans, targeting key risk areas: ethics, bias, privacy, trust, and cybersecurity. ForHumanity's Independent Audit of AI Systems provides a robust, voluntary, and transparent infrastructure, supported by crowdsourced standards, third-party auditing, training, and jurisdictional flexibility, to ensure AI systems are ethical, secure, fair, and trustworthy.
- Adapt more readily to AI regulations regardless of jurisdiction or regulatory philosophy. By proactively developing voluntary codes of conduct, implementing AI risk management frameworks, and undergoing independent AI audits, organizations can comply more easily and credibly with both industry-friendly and prescriptive AI regulations as they emerge, turning compliance from a burden into a competitive advantage. Navigating jurisdictional divergence in AI regulatory frameworks is a daunting challenge, and rapidly evolving technology and regulatory flux make long-term AI strategies difficult to plan. Just consider recent developments in the U.S. and EU approaches to AI regulation. The current U.S. administration has repealed the Biden-era AI guardrails; the "Big Beautiful Bill" proposed a ten-year moratorium on state AI regulations before that provision was removed; and the administration is pressuring the EU to weaken or delay the EU AI Act and to scrap the Digital Services Act, linking such regulatory concessions to trade discussions. It remains to be seen how the EU will respond to U.S. efforts to dilute AI standards. Prominent European CEOs (Airbus, BNP Paribas, Philips) have urged Brussels to pause or simplify the AI Act, citing concerns that it could "stifle innovation" ahead of its August 2025 implementation. The EU is considering a "code of practice" to clarify requirements and ease compliance, hinting at potential retreat or recalibration.
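To make the govern, map, measure, and manage life cycle more concrete, the following minimal Python sketch shows one way an organization might keep an internal risk register keyed to those four NIST AI RMF functions. The data model, field names, and example risk entries are illustrative assumptions made for this article, not part of the NIST framework or its Playbook.

    # Illustrative sketch of an internal AI risk register organized around the four
    # NIST AI RMF 1.0 functions (govern, map, measure, manage). The data model,
    # risk entries, and field names are hypothetical examples, not part of the
    # NIST framework or its Playbook.
    from dataclasses import dataclass, field
    from enum import Enum


    class RmfFunction(Enum):
        GOVERN = "govern"    # policies, roles, and accountability structures
        MAP = "map"          # context, intended use, and affected stakeholders
        MEASURE = "measure"  # metrics and tests for identified risks
        MANAGE = "manage"    # prioritization, mitigation, and monitoring


    @dataclass
    class RiskEntry:
        risk_id: str
        description: str
        rmf_function: RmfFunction
        owner: str
        mitigations: list[str] = field(default_factory=list)
        status: str = "open"


    def summarize(register: list[RiskEntry]) -> dict[str, int]:
        """Count open risks per RMF function, e.g. for a governance dashboard."""
        counts = {fn.value: 0 for fn in RmfFunction}
        for entry in register:
            if entry.status == "open":
                counts[entry.rmf_function.value] += 1
        return counts


    if __name__ == "__main__":
        register = [
            RiskEntry("R-001", "Training data may encode historical hiring bias",
                      RmfFunction.MEASURE, owner="ml-assurance",
                      mitigations=["disparate-impact testing before each release"]),
            RiskEntry("R-002", "No documented human-override path for credit decisions",
                      RmfFunction.GOVERN, owner="risk-office"),
        ]
        print(summarize(register))  # e.g. {'govern': 1, 'map': 0, 'measure': 1, 'manage': 0}

Keying each risk to a single RMF function keeps gaps visible and gives a simple trace from identified risk to owner and mitigation, the kind of traceability the independent audits described above look for.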
Conclusion: How AI regulation and innovation can co-exist
Framing regulation and innovation as opposing forces is misleading. Regulation and innovation are complementary, not adversarial. Common-sense regulation fosters prosperity and opportunity without compromising innovation. In Why Nations Fail, Daron Acemoglu and James A. Robinson make a compelling argument that "inclusive institutions are those capable of producing greater transparency, equitable legal frameworks, and effective economic systems that produce long-term growth." Similarly, the inherent value of an inclusive AI regulatory approach is that it aligns AI innovation with democratic values of fairness, accountability, and rights protection; embeds transparency, contestability, and oversight into AI systems; and distributes the benefits of AI equitably across society.
The real choice isn't between regulation and innovation, but between unchecked, exclusionary AI systems designed in opaque corporate silos and inclusive, democratically governed AI institutions that foster innovation serving people, rights, and democracy. In the final analysis, the case for inclusive AI is not merely rooted in ethics; it reflects apparent market demand and delivers measurable competitive advantages. The Edelman Trust Barometer 2024 reveals a clear trend: 62% of global consumers say they are more likely to buy from or engage with companies they perceive as using AI responsibly and transparently.