EU AI Act Phase 2: Transparency Rules for AI Models Effective 2025

The EU AI Act's second phase, effective August 2, 2025, mandates transparency and safety for general-purpose AI models, requiring detailed documentation, training data summaries, and risk assessments for high-risk systems. This extraterritorial regulation aims to foster accountability but may challenge innovation. Global providers must adapt to avoid fines.
EU AI Act Phase 2: Transparency Rules for AI Models Effective 2025
Written by Juan Vasquez

As the European Union’s groundbreaking Artificial Intelligence Act begins to take effect in phases, a critical deadline looms for developers of general-purpose AI models. Starting August 2, 2025, providers must comply with specific provisions aimed at ensuring transparency and safety in AI systems that can perform a wide array of tasks, from generating text to analyzing data. This marks the second major enforcement milestone in the AI Act, following the initial ban on prohibited AI practices earlier this year.

The regulations target general-purpose AI (GPAI) models, which are foundational technologies like large language models that underpin applications across industries. According to a recent analysis by TechRepublic, these models must now maintain up-to-date technical documentation detailing their architecture, training processes, and evaluation methods. Providers are also required to publish summaries of the content used in training, offering insights into data sources without revealing proprietary secrets.

Transparency Mandates and Documentation Requirements

This push for documentation isn’t just bureaucratic; it’s designed to foster accountability in an era where AI decisions can influence everything from hiring to healthcare. The EU’s approach classifies GPAI models based on risk, with those posing “systemic risks”—typically models trained with computing power exceeding 10^25 floating-point operations—facing even stricter rules, including mandatory risk assessments and cybersecurity measures.

Insights from Stephenson Harwood highlight that these obligations extend to providers regardless of their location, as long as the models are deployed in the EU market. This extraterritorial reach could reshape global AI development, compelling companies like OpenAI and Google to adapt their practices or risk hefty fines up to 3% of global annual turnover.

Systemic Risk Models and Enhanced Scrutiny

For models deemed to have systemic risks, the AI Act introduces obligations like adversarial testing to identify vulnerabilities and reporting of serious incidents. The European Commission has already provided guidelines to clarify these rules, as noted in a report from Reuters, which emphasizes how such pointers help mitigate threats like misinformation or bias amplification.

These measures build on the Act’s risk-based framework, where low-risk AI faces minimal oversight, but high-risk applications, such as those in critical infrastructure, undergo rigorous conformity assessments. Industry insiders point out that while the rules aim to protect consumers, they might slow innovation, particularly for startups lacking resources to comply.

Voluntary Codes and Industry Response

To ease the transition, the EU has introduced a voluntary General-Purpose AI Code of Practice, detailed in coverage by Stibbe. This non-binding instrument offers best practices on transparency, safety, and intellectual property, encouraging providers to align with the Act ahead of full enforcement in 2026.

Major players are responding variably: Google has signed on to the code, signaling a cooperative stance, while Meta has notably declined, citing concerns over feasibility. As per TechRepublic‘s reporting on the matter, this divergence underscores tensions between regulatory compliance and competitive edges in AI.

Implications for Global AI Development

The Act’s focus on training data summaries addresses longstanding concerns about data privacy, potentially bolstering enforcement of the General Data Protection Regulation (GDPR). A piece from TechPolicy.Press argues that this could close gaps in transparency, ensuring AI models aren’t built on unethically sourced data.

Looking ahead, the August 2 deadline is just the beginning. By 2026, the full spectrum of the AI Act will apply, including to high-risk systems, prompting companies to invest in compliance teams and ethical AI frameworks. For industry leaders, this isn’t merely about avoiding penalties—it’s about building trust in AI technologies that are increasingly integral to business operations.

Challenges and Future Outlook

Critics, however, warn of overregulation stifling Europe’s tech sector compared to less encumbered markets like the U.S. or China. Yet proponents, including EU officials, view it as a model for global standards, influencing regulations worldwide.

As enforcement ramps up, monitoring bodies like the AI Office will play a pivotal role in interpreting and applying these rules, ensuring the Act evolves with technological advancements. For now, providers of GPAI models must prioritize documentation and transparency to navigate this new regulatory era effectively.

Subscribe for Updates

AITrends Newsletter

The AITrends Email Newsletter keeps you informed on the latest developments in artificial intelligence. Perfect for business leaders, tech professionals, and AI enthusiasts looking to stay ahead of the curve.

By signing up for our newsletter you agree to receive content related to ientry.com / webpronews.com and our affiliate partners. For additional information refer to our terms of service.

Notice an error?

Help us improve our content by reporting any issues you find.

Get the WebProNews newsletter delivered to your inbox

Get the free daily newsletter read by decision makers

Subscribe
Advertise with Us

Ready to get started?

Get our media kit

Advertise with Us