The European Parliament and the Council reached an agreement on December 9 on the proposed regulation on artificial intelligence (AI Act). The regulatory intervention raises several issues for the business world, starting with risk management: an approach that is not new, but one the regulation develops in greater depth and that may bureaucratize compliance, driving up costs and setting an entry threshold for adopting AI systems that is likely to be too high for SMEs, especially in Italy. These factors will have to be turned from potential obstacles into catalysts of development, so that their proper management leads, in Europe too, to the growth of tech companies capable of competing with the digital giants and, therefore, to greater European industrial influence.
Negotiating the regulation has proven very complex, given the speed at which innovations in the field of AI emerge and the need to integrate the discipline with rules already in place (e.g., the GDPR on data protection). The issue also is, and remains, politically sensitive. The need to protect the fundamental rights and freedoms of European citizens has certainly been the shared starting point. At the same time, however, the risks of over-regulation were highlighted, including the possible hindrance to freedom of innovation and, therefore, to the emergence of European companies capable of competing with the big players in the sector. Those players are, indeed, based in countries – the United States and China above all – which, in addition to a greater ability to attract investment, do not impose similarly stringent rules.
Nonetheless, it is reasonable to expect that the AI Act will act as a driver of innovation, helping build a market capable of ensuring security, trust, and development. Moreover, the EU legislation will also have effects beyond the continent's borders, since the new regulation will apply to all providers who place AI systems on the market or put them into service in the territory of the Union, even if they are established in non-member third countries: the so-called Brussels effect, i.e., the Union's capacity to unilaterally regulate global markets. Indeed, multinational companies operating within EU borders – one of the largest and most influential consumer markets – must observe its rules, and often end up voluntarily extending them to their worldwide operations to rationalize costs. For example, Meta, Google, and Microsoft adopted a single global privacy policy that complies with the European GDPR.
The approach adopted by the regulation can be described, in a nutshell, as multilevel and risk-based. The underlying principle is to make operators responsible – consistent with the compliance logic now embedded in the contemporary enterprise – for classifying AI systems into four risk categories: minimal, high, specific transparency, and unacceptable.
Most AI systems pose minimal risk to citizens' rights and security (e.g., anti-spam filters) and are therefore exempt from specific regulatory requirements, although, to foster maximum trust, the regulation states that providers may voluntarily adhere to codes of conduct consistent with what is prescribed for the high-risk category. High-risk systems are those whose impact could be very significant, such as AI applied in health care, education, or staff recruitment. For these, the regulation prescribes several stringent security and transparency measures: risk mitigation systems, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight, and a high level of robustness and cybersecurity.
In some cases – such as chatbots – there is also a specific transparency requirement intended to prevent user manipulation, so that users are aware they are interacting with AI. Think of deepfakes: images in which a person's face and movements are realistically reproduced, creating false representations that are difficult to recognize as such.
Finally, all applications posing unacceptable risk are banned, such as those capable of manipulating individuals' free will, social evaluation (so-called social scoring), or emotion recognition in the workplace or schools. The use in public spaces of remote biometric identification (RBI) systems – capable of identifying people based on physiological or behavioral characteristics – will be permitted only in exceptional cases. Specifically, "real-time" RBI may be used for the identification of victims of serious crimes (kidnapping, sexual exploitation, etc.), the prevention of a specific and present terrorist threat, or the localization of a person suspected of committing specific crimes, while "a posteriori" RBI may be used only for the targeted search of persons convicted or suspected of serious crimes.
Among the biggest question marks on the corporate side is the cost of compliance, which could pose a significant challenge, especially for SMEs and start-ups. A study published by Intellera Consulting estimates that bringing a compliant AI system to market could cost an SME around 300,000 euros, effectively barring it from the industry. It will therefore be important to develop synergies with integrated compliance systems or to offset the expense with appropriate economic and financial support. The AI Act also provides measures to support innovation, notably so-called regulatory sandboxes for real-world testing: controlled environments, established by the relevant authorities, that facilitate the development, testing, and validation of innovative AI systems before their release to the market, with priority access for smaller vendors and start-ups.
Businesses will therefore have to seize the new opportunities, balancing regulatory burdens against support measures: compliance with EU rules will enable them to develop solutions that meet the needs of European citizens and consumers, the best guarantee of reliability in an industry still riddled with uncertainties and, often, irrational fears.
*Riccardo Borsari is a lawyer and professor of criminal law at the University of Padua.
English version by the Translation Service of Withub