European Commission proposes a new AI Regulation

Published on 11 May 2021


On 21 April, the European Commission presented its long-awaited proposal for an AI Regulation. This proposal gives a central role to the fundamental rights and freedoms of EU citizens. To determine the degree of regulation, the Commission uses a risk-based approach.

In this blog: an overview of the key elements of the new AI Regulation.



With this proposal, the Commission aims to create an AI Regulation, which – like the General Data Protection Regulation (GDPR) – will be directly applicable in all European Member States. The proposal follows up on the Commission’s White Paper on Artificial Intelligence published in February 2020. The aim was to set out policy options to achieve the twin goals of facilitating the uptake of AI and mitigating the risks associated with certain AI applications. The proposed AI Regulation focuses on the latter.

Objectives of the new AI Regulation


The new AI Regulation aims to ensure that AI systems in Europe are safe, robust and reliable. The fundamental rights and freedoms of EU citizens play a central role in this regard. At the same time, a uniform regulatory framework provides businesses with the legal certainty they need to invest and innovate in AI. It also prevents diverging regulations between Member States and thus market fragmentation.

Broad definition of AI system


The proposal uses a broad definition of ‘AI system’. The term has been kept as neutral as possible in order to make it future-proof:

‘software that is developed with machine learning, logic- and knowledge-based or statistical approaches, and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.’

This broad definition is likely to be the subject of extensive discussion in the upcoming negotiations on the content of the AI Regulation. What is striking is that it is very difficult to determine where exactly the lower limit of the definition lies. In other words: how dumb does software have to be to fall outside the scope? The techniques included in the Annex are formulated very broadly. For example, almost every IT system uses “logic- and knowledge-based approaches” or “statistical approaches”.
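To illustrate the point with a hypothetical example of my own (not one taken from the proposal): the following few lines of entirely ordinary software use a “statistical approach” to generate a recommendation for a human-defined objective, and could arguably be read as falling within the definition.

```python
# Hypothetical illustration: a trivial "statistical approach" that generates
# a recommendation for a human-defined objective (keeping stock available).
# Under a literal reading of the proposed definition, even this could
# arguably qualify as an "AI system".

from statistics import mean

def reorder_recommendation(daily_sales: list[int], stock: int) -> str:
    """Recommend whether to reorder, based on average daily sales."""
    expected_demand = mean(daily_sales) * 7  # naive one-week forecast
    return "reorder" if stock < expected_demand else "stock is sufficient"

print(reorder_recommendation(daily_sales=[12, 9, 15, 11], stock=50))
```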

Scope


The draft regulation applies when AI systems are placed on the market, put into service or used in the EU. It does not cover the training of AI; systems that are trained with personal data are subject to the GDPR. The new rules will apply to:

  • Providers from within and outside the EU that market AI systems in the EU;
  • Users of AI systems who are located in the EU;
  • Providers and users of AI systems that are located outside the EU, but whose output is used in the EU.

The Commission opts for a very wide territorial scope. The third point above is particularly interesting in this respect. In my opinion, it is questionable whether this use case needs to be regulated in all circumstances.

Risk-based approach


The proposal is characterised by a risk-based approach. The Commission distinguishes between AI systems and applications that pose a minimal risk, a limited risk, a high risk or an unacceptable risk. The higher the risk, the stricter the rules.

AI with minimal to no risk


The vast majority of AI systems currently used in the EU pose a minimal or non-existent risk to citizens. Examples of such systems are spam filters and video games. The draft regulation leaves these AI systems untouched: they may be used freely, although the usual European product safety rules continue to apply.

AI with a limited risk


AI with a limited risk, on the other hand, refers to systems that pose a specific risk of manipulation. The Commission proposal identifies three such applications:

  • systems that interact with humans, e.g. chatbots;
  • systems that are used to detect emotions or to (socially) categorise someone on the basis of biometric data; and
  • systems that generate or manipulate image, video and/or audio content, i.e. deep fakes.

To prevent manipulation, these systems are subject to a transparency obligation. For example, it must be clear to users when they are dealing with a chatbot.

Deep fakes must also be recognisable as such. This would mean, for example, that the user of a deep fake AI system would have to ensure that the output is watermarked.
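The proposal does not prescribe any particular technique for this. Purely as an illustration of what such a disclosure could look like, here is a minimal sketch using the Pillow imaging library (the file names and label text are my own assumptions):

```python
# Illustrative sketch only: stamping an AI-generated image with a visible
# disclosure label, as one conceivable way to make a deep fake recognisable.
# The proposal itself does not prescribe a technique.

from PIL import Image, ImageDraw

def label_generated_image(path_in: str, path_out: str) -> None:
    image = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(image)
    # Draw the disclosure text in the bottom-left corner.
    draw.text((10, image.height - 20), "AI-generated content", fill="white")
    image.save(path_out)

label_generated_image("deepfake.png", "deepfake_labelled.png")
```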

The transparency the regulation aims to create should enable EU citizens to make informed choices. In addition, AI providers are encouraged to draw up their own sectoral codes of conduct.

High-risk AI


High-risk AI systems are subject to strict requirements. The Regulation distinguishes two groups of such systems.

Firstly, these are the AI systems listed in Annex II of the Regulation, which are subject to sectoral EU legislation. The safety components of such systems are also considered to be high-risk AI systems.

Secondly, this category includes systems that pose a high risk to the health, safety or fundamental rights of EU citizens. Broadly speaking, these are systems related to:

  • Management and operation of critical infrastructure networks, such as the electricity grid;
  • Education, for example systems deployed for assessing or admitting students;
  • Labour market, such as the use of AI for recruitment or staff screening;
  • Essential public and private services, such as using AI to assess a person’s creditworthiness;
  • Law enforcement, such as predictive policing systems;
  • Migration, asylum and border control, such as the use of AI in examining the admissibility of an asylum claim;
  • Justice, for example systems that assist a court in interpreting facts;
  • Biometric identification of natural persons in real time or remotely, such as facial recognition or fingerprint technology. Certain forms of these types of AI systems are also prohibited.

Requirements for high-risk AI


These high-risk systems are allowed on the European market, as long as they comply with a substantial set of requirements and obligations. The general requirements (Section 2) relate to:

  • compliance and adequate risk management;
  • automatically generated logs of the system (see the sketch after this list);
  • quality of the data;
  • detailed documentation;
  • transparency;
  • sufficient human supervision;
  • high level of robustness, security and resilience of systems.
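The proposal leaves the implementation of such logging open. Purely as an illustration, a provider might record each decision of a high-risk system as a timestamped, structured log entry along the following lines (all names and fields are hypothetical):

```python
# Illustrative sketch only: automatically recording a high-risk AI system's
# inputs and outputs as timestamped log entries, to support traceability.
# All names and fields are hypothetical.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system.log", level=logging.INFO,
                    format="%(message)s")

def log_decision(inputs: dict, output: str, model_version: str) -> None:
    """Append one structured record per decision of the AI system."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    logging.info(json.dumps(record))

# Example: log a single credit-scoring decision.
log_decision({"income": 42000, "age": 37}, output="approved", model_version="1.0.3")
```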

It is noteworthy that the AI Regulation emphasises the importance of human intervention: it even considers it necessary that natural persons retain control and oversight over the functioning of AI systems.

Obligations for providers of high-risk AI


In addition, the new AI Regulation imposes specific obligations on providers. Importers, distributors and representatives of foreign companies also have to comply with them. Providers of high-risk AI systems are obliged to:

  • set up and apply a quality management system;
  • establish and maintain technical documentation;
  • keep the logs generated by the AI system;
  • register the AI system in the publicly accessible EU database;
  • set up a post-market monitoring system;
  • cooperate with market surveillance authorities;
  • apply a CE marking (labelling);
  • carry out a conformity assessment and sign the declaration of conformity.

With the conformity assessment, providers must demonstrate that their system meets all requirements. This assessment is valid for five years and is usually carried out by the provider itself. This is not the case for remote biometric identification of natural persons: these AI systems must undergo a third-party conformity assessment. Annex II systems (and the safety components of such systems) follow their own procedures under existing EU legislation.

Obligations for users of high-risk AI


Furthermore, the proposed AI Regulation also imposes specific obligations on the users of these systems (Article 29). For example, users must operate high-risk AI systems in accordance with the instructions for use, monitor such use and ensure human oversight of that use. Users are also required to report malfunctions and serious incidents to the provider or distributor. Existing legal obligations, such as the GDPR, continue to apply.

Ultimately, the responsibility for AI systems lies with the person or body offering the system. It is the provider who bears the risk, and it is therefore the provider who must supervise the supply chain and the quality control procedures.

Prohibited AI with unacceptable risk


The list of prohibited practices contains a limited number of AI systems whose risk is considered unacceptable; these are prohibited outright. These systems all pose a clear threat to the safety, livelihoods and rights of EU citizens. More specifically, they are AI systems and applications that:

  • have the potential to manipulate human behaviour and thus influence or circumvent the free will of users;
  • enable social scoring by governments, i.e. mass surveillance systems that assess the trustworthiness of people based on their social behaviour or personality traits;
  • perform real-time biometric identification remotely, in a public space, for law enforcement purposes. Exceptions to this are possible under certain conditions.

It is striking that the conditions under which exceptions can be made are not particularly strict. For example, the use of real-time biometric identification systems by law enforcement is permitted in the investigation of offences for which a European arrest warrant can be issued.

Enforcement by national supervisory authorities


The Commission leaves the task of supervision to national supervisory authorities. It proposes national market surveillance authorities for this purpose, but it is up to the Member States to decide which authorities to designate. These supervisors will, among other things, have the power to impose fines. Furthermore, the Commission proposes a European Artificial Intelligence Board, in which all national regulators will participate. This board will coordinate enforcement and facilitate the consistent application of the rules.

AI total package


Besides the AI Regulation, the European Commission also presented the Coordinated Plan on Artificial Intelligence and a draft regulation on machinery products on 21 April. This AI package as a whole aims both to encourage innovation and to increase trust in AI. To support innovation, the AI Regulation itself also mentions some initiatives. For example, it encourages the use of so-called regulatory sandboxes: controlled legal environments that facilitate the development and testing of innovative AI systems. Member States must give small-scale providers and users priority access to these sandboxes.

The proposed AI Regulation is a novelty: nowhere else in the world is there regulation that specifically focuses on AI. This will probably remain the case in the coming years, as the proposal must first be considered by the Member States and the European Parliament.
