The General Purpose AI (GPAI) Code of Practice, a set of voluntary guidelines for providers of GPAI models to comply with the EU AI Act, was published on July 10, 2025, and approved by the European Commission on August 2, 2025. Notably, this Code of Practice contains some of the AI Act’s first technical guidelines targeted at generative AI models and systems, including LLMs.
The Code of Practice primarily applies to providers of GPAI models, such as OpenAI, Anthropic, and Google, who develop and commercialize general-purpose models (such as LLMs). However, the Code also defines some guidelines for downstream deployers of these models, which cover many business use cases of LLMs.
As of August 4, 2025, several GPAI model providers, including OpenAI, Google, Anthropic, Microsoft, and Mistral, have signed the voluntary Code of Practice. Meanwhile, xAI has signed only the Safety and Security Chapter, arguing that the other Chapters are “profoundly detrimental to innovation.” Others, such as Meta, have declined to sign altogether, citing concerns over “legal uncertainties.” Despite the lack of consensus, the Code serves as an important guide to how generative AI will be regulated under the EU AI Act, and compliance deadlines are approaching: GPAI obligations apply from August 2, 2025, and fines of up to €35 million or 7% of global turnover begin to apply from August 2, 2026.
What is the GPAI Code of Practice?
The Code is a voluntary tool designed to help providers of GPAI models comply with the AI Act. The term “provider” refers to those placing GPAI models on the market. GPAI providers are distinct from “downstream providers,” who offer AI systems that integrate GPAI models. If an existing GPAI model is modified or fine-tuned, the GPAI provider’s risk assessment and documentation obligations apply only to the scope of those modifications.
There are 3 Chapters in the Code: 1) Transparency Chapter, 2) Copyright Chapter, and 3) Safety and Security Chapter. The Transparency and Copyright Chapters are relevant to all providers of GPAI models, while the Safety and Security Chapter applies only to providers of the most advanced models, or “GPAI models with systemic risk” (currently, models trained with more than 10^25 FLOPs are presumed to fall into this category). The following sections provide a summary of each Chapter and expectations for downstream providers.
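To make the 10^25 FLOP threshold concrete, here is a minimal sketch of how training compute is often estimated, using the common “6 × parameters × training tokens” approximation for dense transformers. The model figures are hypothetical, and this is not the AI Act’s official methodology:

```python
# Rough, unofficial estimate of training compute using the common
# "6 * parameters * training tokens" approximation for dense transformers.
# The AI Act presumes systemic risk above 10^25 FLOPs of training compute.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimate_training_flop(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer model."""
    return 6 * num_parameters * num_training_tokens

# Hypothetical example: a 400B-parameter model trained on 15T tokens.
flop = estimate_training_flop(400e9, 15e12)
print(f"Estimated training compute: {flop:.2e} FLOP")                  # ~3.60e+25
print("Presumed systemic risk:", flop > SYSTEMIC_RISK_THRESHOLD_FLOP)  # True
```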
Why is the Code important for downstream providers?
A downstream provider that develops services integrating GPAI models into high-risk AI systems (e.g., emotion recognition, critical infrastructure management, employee evaluation, and criminal profiling) must comply with the requirements of the EU AI Act, including technical documentation, risk assessments, and technical or policy interventions to mitigate safety and security risks. The Code can help downstream providers involved in high-risk AI systems by offering information on intended purposes, capabilities, limitations, and acceptable use policies (Transparency Chapter), as well as insights into copyright Measures (Copyright Chapter) and systemic risks (Safety and Security Chapter).
The Code is also useful for downstream providers not involved in high-risk AI systems, as information from GPAI providers helps them choose the right GPAI model for their specific application. Even for non-high-risk uses, collecting information from GPAI model providers, such as the nature of the training data and mitigation Measures against copyright-infringing outputs, helps downstream providers set clearer expectations and reduce unintended behavior in their systems and services.
| Type of provider | Applicable Measures |
|---|---|
| GPAI model providers | Subject to the Transparency and Copyright Chapters only |
| Providers of GPAI models with systemic risk | Subject to all 3 Chapters |
| Modifiers of open-source GPAI models | Subject to the Transparency and Copyright Chapters if the modification’s training compute exceeds 1/3 of the compute used to train the original model (see the sketch after this table) |
| Downstream providers | Encouraged to check the information/documentation their model provider makes available on its website or upon request |
| Downstream providers who fine-tune or modify a GPAI model | Subject to the Transparency and Copyright Chapters only for the training process/data used in the fine-tuning |
| Downstream providers integrating GPAI into high-risk AI systems | Strongly encouraged to check their model provider’s compliance with all 3 Chapters in order to stay compliant with the AI Act |
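As a worked illustration of the one-third rule in the table above, a minimal sketch with hypothetical compute figures:

```python
# Hypothetical figures: does a modifier of an open-source GPAI model take on
# provider obligations under the one-third-of-original-compute rule?

original_training_flop = 2.0e25   # compute used to train the original model
modification_flop = 8.0e24        # compute used for the fine-tuning/modification

threshold = original_training_flop / 3                                    # ~6.7e24 FLOP
print("Modifier treated as a provider:", modification_flop > threshold)  # True
```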
What are the Measures for providers of GPAI models?
Transparency Chapter
The 3 Measures of this Chapter are:
- Drawing up and keeping up-to-date Model Documentation
- Making relevant information accessible to downstream providers and the AI Office (the latter upon request)
- Ensuring the quality and integrity of the information documented
The Chapter contains a Model Documentation Form, which allows GPAI providers to record information including general details such as the provider name and release date, model properties, methods of distribution and licenses, expected use, the training process, training data, and computational resources and energy use during training. Previous versions of the Model Documentation Form are expected to be retained for 10 years after the model has been placed on the market. GPAI model providers are required to publicly disclose, on their website or through other means, contact information for requesting access to the information in the Model Documentation Form.
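For downstream teams tracking what they have received from each provider, a minimal sketch of a record mirroring the Form’s information categories; the keys paraphrase the categories listed above rather than the official field labels, and the values are hypothetical:

```python
# Illustrative only: keys paraphrase the information categories of the Code's
# Model Documentation Form; they are not the official field labels.
model_documentation = {
    "general_information": {
        "provider_name": "Example AI Ltd.",   # hypothetical provider
        "model_name": "example-llm-1",        # hypothetical model
        "release_date": "2025-09-01",
    },
    "model_properties": "architecture, modalities, context length",
    "distribution_and_licenses": "API access under a commercial license",
    "expected_use": "general-purpose text generation",
    "training_process": "pre-training plus instruction tuning",
    "training_data": "description of data sources and curation",
    "compute_and_energy": "training FLOPs and estimated energy use",
}
```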
Copyright Chapter
This Chapter contains 5 Measures for providers to implement and maintain a copyright policy, combining technical and policy-based approaches. Technical safeguards include ensuring lawful data crawling by respecting site access controls and robots.txt files, and excluding domains flagged in the EU for copyright infringement. To prevent copyright-infringing outputs from being generated in AI systems that integrate GPAI models, GPAI model providers also need to implement technical safeguards and set an acceptable use policy for downstream providers. The Chapter also outlines a Measure to establish a point of contact and a complaint submission mechanism for rights-holders. GPAI model providers are encouraged to make an up-to-date summary of their copyright policy publicly available.
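As an illustration of one such technical safeguard, here is a minimal sketch of a pre-crawl robots.txt check using Python’s standard library; the crawler name and URLs are hypothetical, and a real crawler would also need to handle site access controls and domain exclusion lists:

```python
# Check robots.txt before crawling a page, one of the technical safeguards
# the Copyright Chapter points to for lawful data collection.
from urllib.robotparser import RobotFileParser

crawler_user_agent = "ExampleGPTBot"  # hypothetical crawler name

robots = RobotFileParser("https://example.com/robots.txt")
robots.read()  # fetch and parse the site's robots.txt

url = "https://example.com/articles/some-page.html"
if robots.can_fetch(crawler_user_agent, url):
    print("Allowed to crawl:", url)
else:
    print("robots.txt disallows crawling:", url)
```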
Safety and Security Chapter
This Chapter introduces 10 Commitments, each containing several Measures, for managing the systemic risks of GPAI models. Systemic risks are defined as “risks of large-scale harm from the most advanced (state-of-the-art) models at any given point in time or from other models that have an equivalent impact”, such as “negative effects on health, safety, public security, fundamental rights, or the society as a whole”. These Measures include regular risk assessments, clear allocation of internal risk responsibilities, post-market monitoring, independent evaluations, and specific reporting timelines for different types of incidents.
Two important Measures outlined in the Chapter are the adoption of the Safety and Security Framework and the Safety and Security Model Report.
The Safety and Security Framework is an outline of the systemic risk management processes of providers of GPAI models with systemic risk. A GPAI model provider has to “confirm” (approve) its Safety and Security Framework no later than 2 weeks before placing the model on the market, and the Framework needs to be implemented throughout the model lifecycle. The Framework must also be updated whenever its adequacy is called into question, and at least every 12 months after the model is placed on the market. The Framework should include:
- A description and justification of the criteria for conducting model evaluations and how these criteria will be used throughout a model lifecycle
- A justification of the systemic risk acceptance criteria
- Estimated timelines for when a model may exceed the highest systemic risk tier reached by existing models
- Potential influences of external actors on the model development process
The Safety and Security Model Report, on the other hand, is the information that the model provider has to report to the AI Office before placing a model on the market; it also needs to be kept up to date. The Model Report contains the following information:
- Model description and behaviour (including a model’s architecture, capabilities, propensities, affordances and expected use)
- Reasons for proceeding (including justification for why the systemic risks from the model are acceptable)
- Descriptions of processes involved in systemic risk analysis and mitigation (including methods of systemic risk identification and safety and security mitigations implemented)
Downstream providers will be able to access the above information through the websites of providers of GPAI models with systemic risk, who are expected to publish summarized versions of the Framework and the Model Report online.
What are the takeaways for businesses?
While the Code is aimed at providers of GPAI models, it is also important for downstream providers who integrate GPAI models developed by another entity into their systems or services.
Key actions for downstream providers include:
- Checking whether your GPAI model provider has signed the Code of Practice (if they have not, check how they otherwise demonstrate compliance with the requirements of Articles 53 and 55 of the EU AI Act)
- Requesting and reviewing the Model Documentation Form
- Requesting any additional information needed to understand GPAI model capabilities and limitations (GPAI providers are required to provide the information no later than 14 days after receiving the request)
- Reviewing publicly available summaries of copyright policies, Safety and Security Frameworks, and Model Reports (the latter two apply only to GPAI models with systemic risk)
Alongside the Code of Practice, the European Commission has published GPAI Guidelines explaining key terms related to GPAI models in the AI Act. We recommend reviewing them together.