Brussels – Everything is in place for the EU Artificial Intelligence Act to become the world’s first legislation on the subject, fully in force throughout the Union within two years. After the near-unanimous approval by MEPs in the plenary session on March 13, the 27 EU governments gave their green light today (May 21) by written procedure to legislation that is a first for the Union, and on which further work on even more specific aspects could follow, as co-rapporteur Brando Benifei (PD) had confirmed in an interview with Eunews.
At this point, the dates must be watched carefully. Once the Regulation is signed and published in the Official Journal of the EU, it will enter into force 20 days later and become fully applicable after 24 months (June 2026). There are, however, some exceptions: the bans on prohibited practices will apply after six months, the codes of conduct after nine, and the rules on general-purpose AI (including governance) after 12. Obligations for high-risk systems, on the other hand, will be delayed for another year, applying after 36 months. In the meantime, given the timing of the entry into force of the new Regulation and the potential impact of new technologies on the European elections in June 2024, the EU Pact on Artificial Intelligence was launched in mid-November 2023 to anticipate the requirements on AI on a voluntary basis and ease the transition to the application of the new rules.
The risk scale of artificial intelligence
The EU Regulation provides a horizontal level of user protection, with a risk scale for regulating artificial intelligence applications across four levels: minimal, limited, high, and unacceptable. Systems with limited risk will face light transparency requirements, such as disclosing that content was generated by AI. High-risk systems will be subject to a pre-market fundamental rights impact assessment, including the obligation to register in the dedicated EU database and to submit data and technical documentation demonstrating product compliance.
The Regulation classifies as unacceptable risk, and therefore bans: cognitive behavioral manipulation systems, the untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases, emotion recognition in the workplace and educational institutions, ‘social scoring’ by governments, biometric categorization to infer sensitive data (political, religious, or philosophical beliefs, sexual orientation), and some instances of predictive policing for individuals.
Exceptions, governance and foundation models
The Regulation introduces an emergency procedure allowing law enforcement to deploy a high-risk artificial intelligence tool that has not passed the evaluation procedure, paired with a specific mechanism for the protection of fundamental rights. There are also exemptions for the use of real-time remote biometric identification systems in publicly accessible spaces, subject to judicial authorization and limited to strictly defined lists of offenses. ‘Post-remote’ use will be allowed only for the targeted search of a person convicted of or suspected of committing a serious crime, while real-time use, “limited in time and place,” will be permitted for targeted searches for victims (kidnapping, trafficking, sexual exploitation), prevention of a “specific and current” terrorist threat, and locating or identifying a person suspected of specific crimes (terrorism, human trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, environmental crimes).
Among the new arrangements already in place is the Office for Artificial Intelligence within the European Commission, which will supervise general-purpose artificial intelligence systems integrated into other high-risk systems, flanked by an advisory forum for stakeholders (representatives of industry, SMEs, start-ups, civil society, and academia). To account for the breadth of tasks artificial intelligence systems can perform – generating video, text, and images, conversing in natural language, computing, or generating computer code – and the rapid expansion of their capabilities, ‘high-impact’ foundation models (a type of generative artificial intelligence trained on a broad spectrum of generalized, unlabeled data) will have to comply with a series of transparency requirements before being placed on the market: from drafting technical documentation to complying with EU copyright law and publishing detailed summaries of the content used for training.
Innovation and Sanctions
Image created by an artificial intelligence from the prompt “robot protecting online privacy.”
To support innovation, regulatory sandboxes for artificial intelligence (controlled test environments) will make it possible to develop, test, and validate innovative systems, including under real-world conditions. To ease the administrative burden on smaller companies and protect them from pressure by dominant market players, the Regulation provides “limited and clearly specified” support actions and exemptions. Finally, regarding sanctions, any natural or legal person may file a complaint with the relevant market surveillance authority regarding non-compliance with the EU Artificial Intelligence Act. A company in breach of the Regulation will have to pay either a percentage of its global annual turnover in the previous financial year or a predetermined amount, whichever is higher: 35 million euros or 7 percent for breaches involving prohibited applications, 15 million euros or 3 percent for violations of the law’s obligations, and 7.5 million euros or 1.5 percent for supplying incorrect information. More proportionate ceilings will apply to small and medium-sized enterprises and start-ups.
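The “whichever is higher” rule for the fine ceilings can be sketched in a few lines of code. The tier names below are illustrative labels, not terms from the Regulation; the amounts and percentages are those reported above.

```python
# Illustrative sketch of the fine ceilings described in the article.
# Tier labels are invented for this example; figures are from the article.

FINE_TIERS = {
    # tier label: (fixed amount in euros, share of global annual turnover)
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the ceiling: the higher of the fixed amount and the turnover share."""
    fixed, share = FINE_TIERS[tier]
    return max(fixed, share * annual_turnover_eur)

# A company with 1 billion euros of turnover breaching a prohibited practice
# faces up to max(35M, 7% of 1B) = 70 million euros.
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```

For smaller turnovers the fixed amount dominates: with 100 million euros of turnover, 1.5 percent is 1.5 million, so the ceiling for incorrect information stays at 7.5 million euros.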
English version by the Translation Service of Withub