EU’s Landmark AI Act Takes Effect: Implications for U.S. Tech Giants

New Regulations Set to Transform the Landscape of AI Development and Use


The European Union’s groundbreaking artificial intelligence legislation, known as the AI Act, officially came into force on Thursday. This comprehensive regulation is set to impose significant changes on major American technology companies and other businesses involved in AI development and deployment.

What Is the AI Act?

First proposed by the European Commission in 2020, the AI Act establishes a harmonized regulatory framework for artificial intelligence across the EU. The legislation targets both large U.S. tech firms and a broad spectrum of other businesses, not limited to the tech sector.


Tanguy Van Overstraeten, head of the technology, media and telecommunications practice at law firm Linklaters in Brussels, emphasized the global significance of the legislation, describing it as “the first of its kind in the world.” The law adopts a risk-based approach, applying different regulations based on the perceived risk level of AI applications. High-risk AI systems, such as those used in autonomous vehicles, medical devices, and biometric identification, will face stringent requirements. These include rigorous risk assessments, high-quality training datasets, and mandatory documentation to ensure compliance.

Certain AI applications deemed “unacceptable” due to their risk level, such as social scoring systems and predictive policing, are banned outright under the AI Act.

Impact on U.S. Tech Companies

U.S. technology giants, including Microsoft, Google, Amazon, Apple, and Meta, are likely to be among the most affected by the new regulations. These companies have heavily invested in AI technologies, often utilizing their substantial computing infrastructure to support AI model development.

Charlie Thompson, Senior Vice President of EMEA and LATAM at Appian, noted that the AI Act’s influence extends beyond the EU. “The AI Act will likely apply to you no matter where you’re located,” he explained, highlighting the increased scrutiny these companies will face regarding their operations in the EU and their handling of EU citizen data.

Meta, for instance, has already taken steps to limit the availability of its AI models in Europe due to regulatory uncertainties. The company has refrained from releasing its LLaMa models in the EU, citing concerns about compliance with the General Data Protection Regulation (GDPR).

Eric Loeb, Executive Vice President of Government Affairs at Salesforce, suggested that the AI Act could serve as a model for other governments developing their AI policies. He praised the EU’s approach, which aims to balance innovation with safety.

Generative AI Under the AI Act

Generative AI, classified as “general-purpose” AI under the AI Act, faces specific requirements, including adherence to EU copyright law, transparency about how models are trained, and stringent cybersecurity measures. While there are exceptions for open-source models, these must meet strict criteria to qualify for the exemption, such as making their parameters, including model weights, publicly available for access, use, and modification.

Consequences for Non-Compliance

Companies that fail to comply with the AI Act could face severe penalties: fines of up to €35 million or 7% of global annual revenues for the most serious infringements, and up to €7.5 million or 1.5% of global revenues for lesser violations, depending on the nature of the breach. The European AI Office, established by the European Commission in February 2024, will oversee compliance with the Act.

Jamil Jiva, Global Head of Asset Management at fintech firm Linedata, emphasized the importance of substantial fines to ensure the effectiveness of the regulations. He noted that, similar to the GDPR, the EU aims to establish global standards with the AI Act.

Although the Act has now entered into force, most of its provisions will not apply until at least 2026, and existing generative AI systems, such as OpenAI’s ChatGPT and Google’s Gemini, have a 36-month transition period to achieve compliance.