OpenAI launched ChatGPT in November 2022, amassing over a million users within five days. Six months later, the CEOs of leading AI companies, alongside numerous researchers and experts, signed a brief statement warning that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
AI’s rapid advances, and the stark warnings from its own creators, spurred reactions in world capitals. But as lawmakers rushed to shape AI’s trajectory, questions arose over whether their efforts would be enough to manage AI’s risks while capturing its benefits.
2023 Milestones in AI Policy
1. President Biden’s Executive Order on AI
President Biden signed a comprehensive executive order addressing various AI concerns, including risks posed by powerful AI systems, labor market impacts due to automation, and civil rights and privacy implications. This order instructs government agencies to develop guidance and reports, promising an impactful but gradual implementation.
2. The U.K. AI Safety Summit
Held at Bletchley Park, the wartime home of British codebreaking and early computing, this summit hosted representatives from 28 countries along with AI industry leaders. British Prime Minister Rishi Sunak led the effort to prioritize AI safety, establishing the U.K. AI Safety Institute and promoting international collaboration. Though lauded for its focus on risk mitigation, the summit drew criticism for the absence of diverse voices and for shortcomings in AI companies’ safety policies.
3. The E.U. AI Act
After two years of development and negotiations, the E.U. reached agreement on the AI Act. The deal imposes stricter regulations on foundation models, classifying those trained with the largest amounts of compute as systemically important. The act also bans certain AI applications outright and imposes stringent controls on “high-risk” uses such as law enforcement and education, with potential fines of up to 7% of global revenue for non-compliance. However, negotiations on finer details continue amid concerns about stifling innovation and possible alterations by member states before the act takes full effect in two years.
In June, U.S. Senate Majority Leader Chuck Schumer unveiled plans to champion comprehensive AI legislation in Congress. To build legislative momentum and educate policymakers about AI, Schumer has been conducting “AI Insight Forums.” The window for passing AI laws is narrowing with the impending 2024 presidential election. In parallel, U.N. Secretary-General António Guterres established an AI advisory body in October 2023, tasked with publishing two reports by the end of 2023 and August 2024, potentially shaping a new U.N. AI agency. According to U.N. tech envoy Amandeep Gill, a U.N. Summit of the Future in September 2024 could serve as a pivotal moment for the agency’s establishment if political will aligns.
FAQs
1. What key steps has the U.S. government taken to address AI concerns?
President Biden signed an executive order addressing various AI concerns, instructing government agencies to develop guidance on powerful AI systems, labor market impacts, and civil rights and privacy implications. The order promises impactful but gradual implementation.
2. What notable event took place in the U.K. regarding AI safety?
The U.K. AI Safety Summit at Bletchley Park gathered representatives from 28 countries along with AI industry leaders. Led by Prime Minister Rishi Sunak, it aimed to prioritize AI safety, including by establishing the U.K. AI Safety Institute. However, critics pointed to a lack of diverse voices and to gaps in AI companies’ safety policies.
3. Could you highlight significant elements of the E.U. AI Act and its implications?
The E.U. AI Act imposes stricter regulations on foundation models, classifying those trained with the largest amounts of compute as systemically important. It bans certain AI applications and imposes controls on high-risk uses, with potential fines of up to 7% of global revenue for non-compliance.