EU’s New GPAI Guidelines: A Compliance Roadmap for August 2, 2025—and Beyond
Key Takeaways:
- The EU has clarified the GPAI obligations and timelines.
- Enforcement—and large potential fines—are less than a year away.
- Regulatory themes are converging worldwide; a siloed approach to risk management is a liability.
- RadarFirst provides the unified, audit-ready platform you need to adapt as the rules evolve.
The European Commission has just released its Guidelines for Providers of General-Purpose AI (GPAI) Models (July 18, 2025), clarifying how Article 55 of the EU AI Act will apply when the first requirements kick in on August 2, 2025. The document outlines which models qualify as GPAI, what constitutes a “significant modification,” and when open-source developers can claim reduced duties.
For organizations building or deploying AI in the EU—and for multinationals eyeing global harmonization—these clarifications arrive just in time. Failing to meet the new obligations can result in penalties of up to €35 million or 7% of worldwide turnover once enforcement begins in 2026. Radar AI Risk™ is designed to help teams document, attest to, and audit their models in real time, so compliance becomes a routine by-product of good governance rather than a last-minute fire drill.
What the guidelines say:
- A clear definition of “general-purpose” AI – technical criteria anchored in versatility and downstream adaptability remove the guesswork about coverage.
- Significant-modification test – Only material changes (new core capabilities or large-scale retraining) make you a new provider responsible for fresh documentation; hot-fixes remain the responsibility of the original provider.
- Open-source carve-outs – OSS developers may be exempt from some Article 55 duties if they publish technical details and license terms upfront, promoting transparency without stifling innovation.
- Systemic-risk notification duty – Frontier-scale models must alert the EU AI Office and collaborate on mitigation once they cross defined compute and capability thresholds (a rough illustration of such a threshold check follows this list).
- Code-of-Practice alignment – The voluntary GPAI Code of Practice is flagged as the Commission’s preferred “how-to” manual for showing compliance evidence.
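To make the systemic-risk trigger concrete, here is a minimal sketch of the kind of screening check a governance team might run. The 10^25 FLOP training-compute figure reflects the AI Act's presumption threshold for systemic risk; the function, class, and field names are purely illustrative and are not drawn from the guidelines or any particular tool.

```python
from dataclasses import dataclass

# AI Act presumption threshold for systemic risk: cumulative training
# compute above 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

@dataclass
class GPAIModel:
    name: str
    training_compute_flops: float   # cumulative FLOPs used for training
    placed_on_eu_market: bool

def requires_ai_office_notification(model: GPAIModel) -> bool:
    """Rough screen: is this model presumptively in systemic-risk territory?

    A real classification would also weigh capability evaluations and any
    Commission designation, not compute alone.
    """
    return (
        model.placed_on_eu_market
        and model.training_compute_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
    )

# Example: a frontier-scale model crossing the compute threshold
frontier = GPAIModel("example-frontier-v1", 3e25, placed_on_eu_market=True)
print(requires_ai_office_notification(frontier))  # True -> notify the EU AI Office
```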
The ticking clock
Although the guidelines are not legally binding, they preview how regulators will interpret the law—so providers should act now. From August 2, 2025, every new GPAI model placed on the EU market must meet Article 55 duties and be ready to “informally collaborate” with the AI Office.
One year later, on August 2, 2026, the Commission's full enforcement powers take effect, bringing the possibility of multi-million-euro fines. Finally, by August 2, 2027, even “grandfathered” models (those commercialized before August 2025) must comply.
Global ripple-effects you can’t ignore
- United States — Executive Order on Safe, Secure & Trustworthy AI (Oct 30, 2023): mandates risk assessments, incident reporting, and provenance disclosures across federal agencies, all of which echo Article 55 transparency files.
- United Kingdom — AI Safety Institute guidance (2025): emphasizes systemic-risk evaluations for frontier models, mirroring the EU’s notification trigger.
- Canada — Artificial Intelligence and Data Act (AIDA, expected late 2025): will classify “high-impact” systems and require documentation and audit trails similar to those in the EU’s graded obligations.
Taken together, these regimes reveal a converging global norm: continuous, evidence-backed risk management for advanced AI.
How RadarFirst keeps you enforcement-ready
RadarFirst’s Radar AI Risk solution unifies AI system inventory, risk scoring, and cross-jurisdiction rules in a single dashboard. As soon as a model is uploaded or retrained, it is auto-classified against EU, US, UK, and Canadian criteria. Crossing a threshold, such as the EU’s systemic-risk definition, triggers an automated workflow that assembles the required technical documentation, links to red-team results, and generates one-click notifications for the EU AI Office. All documentation is centralized in a single source of truth, so by the time an auditor calls in 2026, your proof is already sealed and time-stamped.
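As a rough sketch of how such an event-driven workflow fits together (this is not RadarFirst’s actual API; every class, rule, and field name here is hypothetical), the pattern is simple: a model-registry event fans out to per-jurisdiction rules, and any rule that fires produces the documentation tasks still missing from the evidence trail.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ModelRecord:
    name: str
    training_compute_flops: float
    jurisdictions: list[str]                                  # markets where the model is deployed
    evidence: dict[str, str] = field(default_factory=dict)    # document name -> storage location

@dataclass
class Rule:
    """Maps a jurisdiction to an applicability test and the evidence it demands."""
    jurisdiction: str
    applies: Callable[[ModelRecord], bool]
    required_evidence: list[str]

# Illustrative rules only; real criteria are far richer than a compute check.
RULES = [
    Rule("EU", lambda m: m.training_compute_flops > 1e25,
         ["technical documentation", "red-team results", "AI Office notification"]),
    Rule("US", lambda m: True,
         ["risk assessment", "provenance disclosure"]),
]

def on_model_registered(model: ModelRecord) -> list[str]:
    """Runs whenever a model is uploaded or retrained; returns outstanding tasks."""
    tasks = []
    for rule in RULES:
        if rule.jurisdiction in model.jurisdictions and rule.applies(model):
            tasks += [doc for doc in rule.required_evidence if doc not in model.evidence]
    return tasks

model = ModelRecord("acme-gpai-2", 2e25, jurisdictions=["EU", "US"])
print(on_model_registered(model))
```

The design choice worth noting is that classification is re-run on every registry event, so the evidence gap list stays current as models are retrained or deployed into new markets.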
Beat the August 2, 2025 deadline
Book a 15-minute demo to see how RadarFirst transforms AI governance into a continuous, automated process.