4.10 – AI Governance

Generative Artificial Intelligence, the use of machine learning models to produce human-like text, images, and other content, has emerged as an increasingly mainstream toolset and a broadly disruptive force since the late-2022 launch of ChatGPT, followed quickly by competing offerings from all the major technology players.

This active frontier of technological innovation holds great promise but also great peril, across a variety of risk scenarios.

Most chilling is the idea of an anti-human rogue AI. Another possibility is that AI “agents” overzealously pursuing seemingly benign instructions without constraints (the canonical thought experiment is an AI instructed to maximize paperclip production) could cause real-world harm, for example by taking over social media with bots or by precipitating another flash crash that tanks financial markets.

These two risks are best mitigated by robust global safeguards. A patchwork of laws varying by country will simply lead to offshoring of operations and headquarters, as corporations engage in regulatory arbitrage.

A third risk is the intentional weaponization of AI as an instrument of warfare. In fact, this is already occurring, with AI systems autonomously selecting targets for air strikes. To mitigate this risk, one guidepost is the multinational response to nuclear technology: the Nuclear Non-Proliferation Treaty has arguably reduced the risk of nuclear war. Following that precedent, tens of thousands of leading scientists and public figures have called on the UN to ban weaponized AI.

Anne-Marie Slaughter, CEO of the think tank New America, and Fadi Chehadé, former president and CEO of ICANN, have endorsed applying to AI the principles and practices that led to limits on nuclear technology, writing that “leading scientists, technologists, philosophers, ethicists, and humanitarians from every continent must… come together to secure broad agreement on a framework for governing AI that can win support at the local, national, and global levels.”

Ian Bremmer of the Eurasia Group recently echoed the point that any credible response to AI risks must be global, saying, “If you want to govern AI the first thing you need to do is have a global conversation… because you can’t fix anything if you’re all rowing in different directions… and if we don’t we’re going to break things that are unacceptable to be broken.” The sentiment surfaced again when Chris Anderson of TED pressed Sam Altman of OpenAI on the need for a global agreement on AI safety standards. Altman responded, “Of course… A lot has been decided in small elite summits, but one of the cool things about AI… is that our AI could talk to everyone on Earth… and we can learn the collective value preference of what everybody wants.”

A central challenge is that tech companies leading the development of AI generally prefer to operate free of regulatory constraints: “Large Silicon Valley companies involved in AI software — including Google, Microsoft, Meta, Amazon Web Services, and OpenAI — have mounted pushback to proposals for comprehensive AI regulation in the EU, Canada, and California.”

Nonetheless, a variety of governmental and non-governmental organizations are attempting to meet this moment with regulation, agreements, and shared statements of principles.

The EU’s Artificial Intelligence Act, adopted in 2024, constitutes the world’s first comprehensive rulemaking on AI. It sets forth a legal framework classifying AI activities into four risk levels (unacceptable, high, limited, and minimal), banning those deemed unacceptable, imposing requirements on the two middle tiers, and leaving the minimal-risk tier unregulated.

In the US, the Biden administration issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence in late 2023, directing the US government to develop guidelines and principles and acknowledging US leadership while calling for global cooperation. The Trump administration rescinded this policy, replacing it with an order of its own focused on “enhancing America’s global AI dominance.”

The Global Partnership on Artificial Intelligence, launched in 2020, is an intergovernmental partnership among 30 governments that encourages the responsible development and use of AI. It released a declaration of principles in July 2024.

The UN’s High-Level Advisory Body on Artificial Intelligence sought input from a broad range of experts and stakeholders before releasing its September 2024 report, “Governing AI for Humanity,” which called for a globally inclusive and cooperative approach and highlighted gaps in AI governance.

The AI Action Summit convened in Paris in February 2025, aiming to establish international cooperation on AI governance with a focus on ethical, inclusive, and sustainable development. To many concerned about AI safety, however, the summit declaration seemed to neglect those risks. Over 60 countries participated, but the US and the UK declined to sign the declaration, with US Vice President JD Vance criticizing the regulatory approach and cautioning that such regulations could stifle innovation in the AI sector.

The urgent need to address safety risks stems from the potential impact of Artificial General Intelligence, seen by many as a gateway to Artificial Superintelligence. That urgency is compounded by the race among AI developers, which is at once a corporate race and a geopolitical one.

A logical first step would be for the US and China to reach an agreement based on mutual recognition of red lines that experts from both countries acknowledge should not be crossed. The issues need to be addressed in whatever forum will facilitate timely agreement. If that forum is not global, the initial agreement should then be followed by the negotiation of a global one. The choice of forum should not, however, be allowed to delay an initial agreement.

Several prominent civil society organizations are also contributing frameworks and recommendations and fostering dialogue toward responsible governance of AI, including the Partnership on AI (founded 2016), the Center for the Governance of AI (2018), the Center for AI Safety (2022), and the World Economic Forum’s AI Governance Alliance (2023).


The AI & Philanthropy Steering Committee unites donors and other leaders to guide initiatives that leverage AI for positive social impact. In addition, the AI4D Funders Collaborative is a global partnership dedicated to bridging gaps in AI access and readiness in the Global South.
