The EU and the AI Act: A guide to the core content

12/09/2024
Florian Kassel

The EU has created the world’s first legal framework for artificial intelligence (AI) on the European market.

The use of AI can bring many advantages: better forecasts, optimised processes and resource allocation, and more personalised services. AI can make a positive difference, particularly in socially relevant areas such as environmental protection, European competitiveness, health, agriculture, justice and law enforcement.

At the same time, the new technological advances also harbour risks of misuse and disadvantages for individuals and society. Against the backdrop of “rapid change and potential challenges [of AI systems], the EU is determined to achieve a balanced approach”.

What does the EU want to achieve with the new regulatory proposals?

In its legislative proposal, the European Commission states that the aim is to “strengthen the EU’s technological leadership and ensure that Europeans can benefit from new technologies developed and functioning in accordance with the values, fundamental rights and principles of the Union” (COM/2021/206 final p. 1).

In plain language: The EU is by no means against the introduction of AI systems on the European market. Rather, it is endeavouring to safeguard the freedom to innovate and to promote innovation, to limit the risks of AI systems, and to create transparency and trust among users.

What risks does the EU fear for users on the European market from AI-based products and systems?

First, the EU wants to use standardised regulation to prevent the fragmentation of EU law with regard to the Europe-wide use of AI. A standardised legal framework is essential for a functioning internal market for AI systems. If each country were to enact its own regulations, there would be a risk that, on the one hand, users would not have sufficient legal remedies against infringements of their rights and, on the other hand, providers would not be able to sell their products across borders, or only to a limited extent, due to differing national rules. The efficiency of the European market and individual legal protection would suffer as a result.

In addition, the ethical principles, fundamental rights and values of the EU are to be protected across the board. With the new regulations, the EU is striving for “a high level of data protection, digital rights and ethical standards” (COM/2021/206 final p. 2).

The EU considers biometric (real-time) remote identification systems in public spaces to be particularly critical. These systems make it possible to identify people remotely with data-driven AI based on features such as faces, skin colour or tattoos. Such systems are used in particular in law enforcement. The EU recognises the positive potential of such systems, but classifies them as high-risk and provides for special rules governing their use. Particularly in view of rights and freedoms such as equal treatment, freedom of assembly and the protection of privacy, such practices are tied to special safeguards.

How does the EU intend to implement this legal framework?

The EU has opted for a risk-based approach to implementing regulation, also known as Option 3+. The EU itself describes this implementation as “a horizontal EU legal instrument based on proportionality and a risk-based approach, complemented by a code of conduct for AI systems that do not pose a high risk” (COM/2021/206 final p. 11).

In plain language: The EU categorises AI systems into different risk groups. A binding legal framework will apply only to high-risk AI systems, while providers of AI systems that do not pose a high risk may voluntarily submit to a code of conduct. The requirements for high-risk systems will relate to data, documentation, traceability, provision of information, transparency, human oversight, robustness and accuracy.
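
To make this tiering more tangible, the following is a minimal, purely illustrative sketch of how a provider might model the risk groups and the high-risk requirement areas in an internal compliance checklist; the class, field and requirement names are our own shorthand and are not taken from the legal text.

from dataclasses import dataclass, field
from enum import Enum

# Risk tiers as described above (an illustrative model, not legal wording).
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Requirement areas named for high-risk systems.
HIGH_RISK_REQUIREMENTS = [
    "data", "documentation", "traceability", "provision of information",
    "transparency", "human oversight", "robustness", "accuracy",
]

@dataclass
class AISystem:
    name: str
    tier: RiskTier
    satisfied: set = field(default_factory=set)

    def open_obligations(self):
        """Requirement areas not yet addressed (relevant for high-risk systems only)."""
        if self.tier is not RiskTier.HIGH:
            return []
        return [r for r in HIGH_RISK_REQUIREMENTS if r not in self.satisfied]

# Hypothetical example: a CV-screening tool, which falls into the "employment" high-risk category.
tool = AISystem("cv-screening", RiskTier.HIGH, satisfied={"documentation", "transparency"})
print(tool.open_obligations())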

How is the law on artificial intelligence structured and what does it actually provide for?

The regulations provide for a risk-based approach in which systems and products are categorised according to the “degree of risk” posed by the AI or its intended use. This results in rights and obligations for operators. AI systems that endanger users in an “unacceptable manner” are to be banned. The following categories are distinguished.

Prohibited AI applications (with unacceptable risk)

AI applications are banned if they invade privacy or discriminate. If an AI application poses an unacceptable risk to people, especially children, it is prohibited under the AI Act.

This applies to:

Biometric remote identification systems that enable people in public spaces to be identified in real time or with no significant delay after the fact.

“Social scoring” systems based on biometric categorisation using sensitive characteristics (gender, religion, political orientation, citizenship, race, ethnicity, socio-economic status, personality traits).

Profiling in predictive policing (for example, location-based analysis drawing on previous criminal behaviour or assessments of acute risk potential).

Emotion recognition systems (in education, the workplace, border control and law enforcement).

Facial recognition databases (untargeted collection of facial images from the internet or surveillance footage).

Cognitive behavioural manipulation of individuals or certain vulnerable groups (for example, voice-controlled toys that encourage dangerous behaviour in children).

High-risk AI systems

If an AI application poses a high risk to the health, safety or fundamental rights of natural persons, this application falls into the high-risk group.

These are:

AI systems that are integrated into products covered by EU product safety legislation (for example vehicles, aviation, medical devices or toys).

AI systems that fall into one of eight sub-categories and are subject to mandatory registration in the EU database:

  • Management and operation of critical infrastructures
  • Education and vocational training
  • Biometric identification and categorisation of people
  • Employment, employee management and access to self-employment
  • Law enforcement
  • Access to essential private and public services
  • Interpretation and application of laws
  • Border controls, management of migration and asylum applications

Generative AI applications

These are applications that can generate text, images or other media. By law, their providers are required (a brief illustrative sketch follows the list below):

  • to disclose that the generated content was created by AI;
  • to ensure that the generation of illegal content is prevented; and
  • to publish summaries of the copyrighted data used for AI training.
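
As a purely illustrative example of the first obligation, the snippet below shows one way generated output could carry an explicit “AI-generated” disclosure as metadata. The AI Act does not prescribe any particular format; the function and field names here are hypothetical.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class GeneratedContent:
    text: str
    model_name: str
    ai_generated: bool = True   # explicit disclosure flag
    created_at: str = ""

def label_output(text: str, model_name: str) -> GeneratedContent:
    """Attach an 'AI-generated' disclosure to a piece of generated content."""
    return GeneratedContent(
        text=text,
        model_name=model_name,
        created_at=datetime.now(timezone.utc).isoformat(),
    )

content = label_output("Draft product description ...", model_name="example-model")
print(f"AI-generated: {content.ai_generated} (model: {content.model_name})")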

AI applications with limited risk

If AI applications pose only a minimal, foreseeable and limited risk to their users, they merely need to meet limited transparency obligations. When interacting with the system, users must be able to make an informed, self-determined decision as to whether they wish to continue using the application. In addition, operators must inform users that they are interacting with an AI. For such applications, companies can draw up a code of conduct to which they commit themselves.

Implementation and monitoring of the regulations of the AI Act

The EU is planning to set up an EU-wide public database in which every operator of a high-risk AI application must register. This database will enable competent authorities, users and other interested parties to check whether a particular high-risk AI system fulfils the requirements of the AI Act. AI providers will be required to provide meaningful information about their systems and their conformity assessment when registering in the database.
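
What such a database entry might contain can be pictured roughly as follows. The exact data fields will be laid down by the AI Act itself, so the keys below are only hypothetical placeholders for the kind of information a provider would submit.

import json

# Hypothetical placeholders for the kind of information a registration could contain.
registration_entry = {
    "provider": "Example GmbH",
    "system_name": "cv-screening",
    "intended_purpose": "pre-selection of job applications",
    "high_risk_category": "employment",
    "conformity_assessment_completed": True,
    "contact": "compliance@example.com",
}

print(json.dumps(registration_entry, indent=2))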

In addition, operators will be obliged to report to the competent national authorities any serious incidents or malfunctions that breach the obligation to respect fundamental rights, as well as any recalls from the market. The national authorities will be responsible for investigating these incidents and will forward all data to the Commission, which will analyse them accordingly.

Current status of the legislative process

The European Commission proposed the first European legal framework for AI applications back in April 2021. On 14 June 2023, the Members of the European Parliament adopted their negotiating position on the AI Act. Negotiations with the EU member states in the Council of the European Union will then begin on the final form of the EU AI Act. The common goal is to reach an agreement on the AI Act by the end of 2023.

Conclusion
The EU AI Act is an important step towards AI-supported innovation. The EU wants to regulate AI in order to seize opportunities and minimise risks. Companies that want to integrate AI into their business models will be given clear guidelines and ethical standards to work with. The EU is pursuing a balanced approach that protects fundamental rights while leaving room for innovation. With its risk-based regulation and a database for high-risk AI applications, the AI Act supports responsible progress and points the way to a forward-looking use of AI in Europe.