Brussels – Warning bells are ringing about the effective implementation of some of the pillars of the AI Act, the pioneering EU regulation on artificial intelligence that came into force in August 2024. In the spotlight is the drafting of the “Code of Best Practices,” which will play a key role in countering the “systemic risks” of AI. Last week, the four majority groups in the EU Parliament expressed concern about the watering down of the rules in favour of high-tech giants. A second warning sign came today (April 2): Reporters Sans Frontières (RSF) walked away from the drafting table, denouncing “the lack of guarantees on the right to information and the exorbitant weight of industry in the process”.
The AI Office, created within the European Commission and consisting of about 140 employees, including IT experts, administrative assistants, lawyers, policy specialists, and economists, has already produced three drafts of the Code. It is a kind of instruction manual for compliance with the legislation on cybersecurity, transparency of information to users, data used for model training, respect for copyright, and verification of all elements of systemic risk concerning freedom of information, democracy, and fundamental rights. The deadline set by the regulation for arriving at the final text is now around the corner, in early May. If the EU executive does not find the text satisfactory, it could then take on the task of setting out, by August, a set of common standards for implementing the obligations imposed on providers of general-purpose AI models with systemic risk.
In drafting it, the ad hoc office within the European Commission is joined by an Advisory Forum composed of stakeholders: representatives from industry, SMEs, start-ups, civil society, and academia. It is from this body that RSF, a leading press freedom organization, decided to disengage after the publication of the third version of the Code of Best Practices last March 11, a document that RSF says “remains largely insufficient.” According to the director general, Thibaut Bruttin, “the Code does not contain a single concrete provision to combat the proven dangers that AI poses to access to reliable information.” The conclusion is unequivocal: “We have not been heard; we will not play the part of useful idiots (‘Nous ne jouerons pas les idiots utiles’).”
This tough stance reflects criticism already clearly expressed in a letter dated March 25, addressed to the European Commission’s executive vice president in charge of Technological Sovereignty, Henna Virkkunen. The letter was signed by the Democratic Party MEP Brando Benifei, who was rapporteur of the AI Act for the European Parliament; the shadow rapporteurs of the four political groups that pushed for the adoption of the text (the European People’s Party, Social Democrats, Liberals, and Greens); and the former Spanish Secretary of State for Digitization and Artificial Intelligence, Carme Artigas, who at the time led the negotiations on the text on behalf of the Spanish Presidency of the EU Council.

The co-legislators conveyed to Virkkunen their “great concern” about the Code of Best Practices, “in which the assessment and mitigation of various risks to fundamental rights and democracy are now suddenly entirely voluntary for providers of general-purpose AI models with systemic risks.” The letter continues: “We, the co-legislators who negotiated the AI Act, stress that this was never the intention of the inter-institutional agreement.” On the contrary, “risks to fundamental rights and democracy are systemic risks that the most impactful AI providers must assess and mitigate. It is dangerous, undemocratic, and creates legal uncertainty to completely reinterpret a legal text and narrow its scope, via a Code of Good Practice, after the co-legislators have agreed upon it,” the letter further notes.
Benifei, in an interview with Askanews, pointed out two other critical elements that have emerged from the third draft of the Code: the first concerns the rules for ensuring copyright compliance, and the second the provisions on transparency and the sharing of technical and security information. In a very harsh joint statement, major European copyright associations said they could not support a Code that “undermines the objectives of the AI Act, contravenes European legislation and ignores the intentions of the legislators.” The executive chair of IMPALA (the European organization of independent music companies and national associations), Helen Smith, explained that the draft “does not provide European creators and rights holders with the tools to exercise and enforce their rights in the face of the use of information to train AI models.”
Benifei fears that “there may be a caving in to the U.S. political line” and to pressure from the American AI giants. This would be a mockery, considering that we are talking about a law the EU adopted—the first in the world—precisely to protect its citizens from the aggressive and deregulated use of AI overseas.
English version by the Translation Service of Withub