Artificial Intelligence On The Global Stage: EU Adopts First AI Act
As artificial intelligence (AI) continues to advance rapidly, it poses unprecedented challenges to security and safety. From Amazon’s AI recruiting system demonstrating bias against women to crashes involving Tesla’s Autopilot feature, artificial intelligence has raised ethical issues, threatened jobs and safety, and wielded the power to create and spread misinformation. Geoffrey Hinton, a computer scientist often called the “Godfather of AI,” has voiced concerns over the unchecked development of AI. Prominent technology leaders including Elon Musk and Steve Wozniak have echoed Hinton’s unease, calling for a six-month pause on development to adequately consider the risks. The European Union (EU) has been at the forefront of AI legislation. The EU began addressing the gap in legal regulatory structures for AI in 2019, when the European Commission’s strategy for AI in Europe led to the publication of the Ethics Guidelines for Trustworthy AI.
The “AI Act” was then proposed in April 2021 by the European Commission, and efforts to regulate AI have only intensified as more advanced and generative AI systems have been introduced. The Council of the EU adopted its common position on the act in December 2022, the European Parliament approved its negotiating position in June 2023, and the final text is expected to be published later this year, taking full effect by 2026 at the latest. As the world’s first comprehensive regulation of artificial intelligence, the act sets the standard for AI governance and marks a critical step in the oversight of AI from a legal and international relations perspective.
The EU’s AI Act takes a risk-based approach, categorizing all AI applications into four levels: unacceptable risk, high risk, limited risk, and minimal or no risk. Each tier is defined by the kinds of applications it encompasses, and restrictions scale with the level of risk. For example, an application deemed to pose an unacceptable risk is prohibited and may not be deployed within the EU; one such example is a system that lets law enforcement predict criminal behavior using analytics. To ease implementation, the AI Act will also include a grace period before it takes effect, giving companies and organizations time to adhere to the new guidelines. Because new technologies can be sorted into the tiered risk framework as they emerge, the legislation is designed to maintain longevity and legal integrity. Under a recent provision, foundation models, such as the ones underlying ChatGPT, are required to disclose copyrighted material used to train them. The EU’s speed in adapting to new models such as ChatGPT further demonstrates the strength of its legislative foundation to comprehensively regulate AI as it develops.
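To make the tiered structure concrete, the minimal sketch below models it as a simple classification and lookup. It is purely illustrative: the four tier names come from the Act itself, but the classification rules, example applications, and one-line obligation summaries are simplified assumptions for demonstration, not the Act’s legal text.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers described in the EU AI Act."""
    UNACCEPTABLE = "unacceptable risk"  # prohibited outright
    HIGH = "high risk"                  # permitted under strict obligations
    LIMITED = "limited risk"            # transparency obligations
    MINIMAL = "minimal or no risk"      # largely unregulated

# Hypothetical tier-to-consequence mapping; the Act's actual
# obligations are far more detailed than these summaries.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Deployment within the EU is prohibited.",
    RiskTier.HIGH: "Strict requirements and oversight before deployment.",
    RiskTier.LIMITED: "Users must be informed they are interacting with AI.",
    RiskTier.MINIMAL: "No additional obligations under the Act.",
}


def classify(application: str) -> RiskTier:
    """Toy keyword classifier: sorts a new application description into a
    tier, mirroring how emerging technologies slot into the framework."""
    text = application.lower()
    if "predict criminal behavior" in text:
        return RiskTier.UNACCEPTABLE
    if "biometric identification" in text or "recruiting" in text:
        return RiskTier.HIGH
    if "chatbot" in text:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    for app in (
        "analytics that predict criminal behavior for law enforcement",
        "AI recruiting system that screens job applicants",
        "customer-service chatbot",
        "spam filter",
    ):
        tier = classify(app)
        print(f"{app!r} -> {tier.value}: {OBLIGATIONS[tier]}")
```

The design point the sketch captures is why the framework ages well: a genuinely new application does not require new legislation, only a judgment about which existing tier it belongs to.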
The talks behind the AI Act have begun to shape the political and legal future of Europe regarding technology and data security. Margrethe Vestager, the EU’s digital chief, spoke to growing fears of AI threatening security and underscored the importance of quickly implementing a clear policy on AI. She noted that “democracy needs to show we are as fast as the technology,” and a clear regulatory framework to preserve security would do just that. The use of remote biometric identification is the latest battleground of this debate in Europe. France, Finland, the Czech Republic, and Denmark agreed that using facial recognition software to identify people in public spaces could be justified on public-security grounds. However, surveyed European citizens strongly opposed this justification and supported a ban. The Commission compromised by heavily regulating the use of the technology without banning it outright. Such discussions reveal the growing presence of artificial intelligence and technology in European political and legal affairs.
While the Act has the capacity to create groundbreaking change, it still has several drawbacks to consider. The European desire to exert regulatory influence and lead the initiative resembles the bloc’s efforts with the General Data Protection Regulation (GDPR), which effectively became the international standard for data protection. The European focus on being “first” in a race to regulate AI has drawn backlash: critics argue that Europe should focus less on rushing the process and more on ensuring that Europe’s needs are comprehensively addressed. A more collaborative approach that includes other governments in developing policy has been encouraged globally, while supporters of the Act feel that establishing a standard would inspire others to follow suit. This desire to be first, rather than necessarily the most comprehensive, is also reflected in the rhetoric of the UK’s Prime Minister Rishi Sunak, who has expressed a clear determination to secure a legacy on the world stage and treats the rise of artificial intelligence as the platform on which he will establish that prestige.
The lack of global support is also notable. The absence of China’s and Russia’s leaders from the G20 summit raised concerns about “no-shows” at Sunak’s upcoming AI summit this November, potential absences made all the more significant by the fact that Biden has already confirmed he will not attend.
Despite these possible shortcomings, the act has remained prominent in global discourse. At the UN General Assembly in 2017, only three speakers raised the topic of AI; this year, more than 20 spoke on the issue. While how best to address AI remains contested, it is undeniable that the EU AI Act has amplified the discourse on AI. The Act’s potential impact on companies is so significant that OpenAI said it might be forced to pull out of Europe depending on the final version of the act, a reaction that underscores the EU’s standing as a global leader in technology and AI regulation.
In addressing how best to handle regulation on an international scale, and whether such overarching, unified rules are even possible, the position of the U.S. must also be considered. The EU AI Act could make it more difficult for the U.S. to pass its own legislation, as companies prefer consistent rules across markets. If the U.S. were to engage more deeply with the EU’s efforts, aligning on standards and agreeing on the proposed regulations, economic division and confusion could be avoided. So far, the U.S. has taken a hands-off approach compared to the EU, relying on industry to develop its own safeguards, and steep political divisions within the U.S. Congress make the passage of AI legislation unlikely in the near future. However, with leading AI companies including Google, Microsoft, and OpenAI headquartered in the U.S., working with these companies to set a cohesive policy could help them compete against Chinese rivals and help ensure national security.
U.S. efforts to govern artificial intelligence remain far behind those of the EU and require extensive work to establish a legal framework, but aligning with the EU’s platform could benefit international integration by preventing fragmentation of markets and data-privacy standards. While how best to proceed remains highly contested, the EU’s proposal of the “AI Act” has undeniably shifted the international discourse on artificial intelligence. Bringing the issue to the forefront of discussion has raised critical concerns about the technology’s impact on security and stressed the importance of a legal framework for its regulation.