The EU has created the world’s first legal framework for artificial intelligence (AI) on the European market.
The use of AI can bring many advantages: improved forecasts, optimised processes and resource allocation, and personalised services. AI can make a positive difference, particularly in socially relevant areas such as environmental protection, European competitiveness, health, agriculture, justice and law enforcement.
At the same time, these technological advances also harbour risks of misuse and disadvantages for individuals and society. Against the backdrop of “rapid change and potential challenges [of AI systems], the EU is determined to achieve a balanced approach”.
In its legislative proposal, the European Commission states that the aim is to “strengthen the EU’s technological leadership and ensure that Europeans can benefit from new technologies developed and functioning in accordance with the values, fundamental rights and principles of the Union” (COM/2021/206 final p. 1).
In plain language: the EU is by no means against the introduction of AI systems on the European market. Rather, it endeavours to safeguard freedom and promote innovation, to limit the risks of AI systems and to create transparency and trust for users.
For one thing, the EU wants to prevent the fragmentation of EU law with regard to the Europe-wide use of AI by means of standardised regulation. A standardised legal framework is essential to ensure a functioning internal market for AI systems. If each country were to enact its own regulations, users might lack sufficient legal remedies against infringements of their rights, and providers might be unable to sell their products across borders, or only to a limited extent, due to diverging national regulations. Both the efficiency of the European market and individual legal protection would suffer as a result.
In addition, the ethical principles, fundamental rights and values of the EU are to be protected across the board. With the new regulations, the EU strives for “a high level of data protection, digital rights and ethical standards” (COM/2021/206 final p. 2).
The EU considers biometric (real-time) remote identification systems in public spaces to be particularly critical. These systems make it possible to identify people remotely using data-driven AI systems based on features such as faces, skin colour or tattoos. Such systems are frequently used in law enforcement in particular. The EU recognises the positive potential of such systems, but classifies them as high-risk and provides for special regulations for their use. Particularly against the background of rights and freedoms such as equal treatment, freedom of assembly and the protection of privacy, such practices are subject to special safeguards.
The EU has opted for a risk-based approach to implementing regulation, also known as Option 3+. The EU itself describes this implementation as “a horizontal EU legal instrument based on proportionality and a risk-based approach, complemented by a code of conduct for AI systems that do not pose a high risk” (COM/2021/206 final p. 11).
In plain language: the EU categorises AI systems into different risk groups. There will only be a legal framework for high-risk AI systems and the possibility for all providers of AI systems that do not pose a high risk to submit to a code of conduct. The requirements for high-risk systems will relate to data, documentation, traceability, provision of information, transparency, human oversight, robustness and accuracy.
The regulations provide for a risk-based approach in which systems and products are categorised according to the “degree of risk” of the AI or its intended use. This results in rights and obligations for operators. AI systems that endanger users in an “unacceptable manner” are to be banned. The following risk categories are defined.
AI applications are banned if they invade privacy or discriminate. If an AI application poses an unacceptable risk to people, especially children, it is prohibited under the AI Act.
This applies to:
Biometric remote identification systems that enable people to be identified in public spaces in real time or with only a short delay.
“Social scoring” systems based on the biometric categorisation of sensitive characteristics (gender, religion, political orientation, citizenship, race, ethnicity, socio-economic status, personality traits).
Profiling in predictive policing (for example, location-based predictions derived from previous criminal behaviour or assessments of acute danger potential).
Emotion recognition systems (in education, the workplace, border control and law enforcement).
Facial recognition databases (untargeted processing of facial images from the Internet or surveillance material).
Cognitive behavioural manipulation of individuals or certain vulnerable groups (for example, voice-controlled toys that encourage dangerous behaviour in children).
If an AI application poses a high risk to the health, safety or fundamental rights of natural persons, this application falls into the high-risk group.
These are:
System applications that are integrated into products that can be assigned to EU product safety regulations (for example: vehicles, aviation, medical devices or toys).
System applications that fall into eight sub-categories and are subject to mandatory registration in the EU database.
Generative AI applications, which can create texts, images or other media, are subject to their own transparency obligations: they must disclose that content was generated by AI, be designed so as not to generate illegal content, and publish summaries of the copyrighted data used for their training.
If AI applications pose a minimal, foreseeable and limited risk to their users, they only need to fulfil low transparency requirements. Through interaction with the system, users must be enabled to make an informed and self-determined decision as to whether they wish to continue using the application. In addition, operators must inform users that they are interacting with an AI. Companies can draw up a code of conduct for such an application to which they wish to adhere.
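The transparency duty for such limited-risk systems can be illustrated with a minimal sketch (a hypothetical chatbot wrapper; the function names and disclosure wording are our own invustration, not prescribed by the AI Act):

```python
# Illustrative sketch of the limited-risk transparency duty: before the
# interaction continues, the user is told they are talking to an AI system,
# so they can make an informed decision about continuing.
# All names and wording here are hypothetical, not taken from the AI Act.

def start_session(respond, notice="Please note: you are interacting with an AI system."):
    """Wrap a chatbot response function so the first reply carries an AI disclosure."""
    disclosed = {"done": False}

    def chat(message):
        if not disclosed["done"]:
            disclosed["done"] = True
            return notice + "\n" + respond(message)
        return respond(message)

    return chat

chat = start_session(lambda m: f"Echo: {m}")
print(chat("Hello"))   # first reply includes the disclosure notice
print(chat("Again"))   # later replies do not repeat it
```

The design choice here mirrors the Act's idea: the disclosure happens once, up front, and the user can simply stop interacting after seeing it.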
The EU is planning to set up an EU-wide public database in which every operator of a high-risk AI application must register. This database will enable competent authorities, users and other interested parties to check whether a particular high-risk AI system fulfils the requirements of the AI Act. AI providers will be required to provide meaningful information about their systems and their conformity assessment when registering in the database.
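What such a registration entry might contain can be sketched as a simple data structure. This is purely illustrative: the field names below are our own assumptions, not the schema the Act or the database will actually prescribe.

```python
from dataclasses import dataclass

# Hypothetical sketch of a high-risk registration record for the planned
# EU database. Field names are illustrative assumptions, not the Act's schema.

@dataclass
class HighRiskRegistration:
    provider: str              # who places the system on the market
    system_name: str           # identifier of the AI system
    intended_purpose: str      # meaningful information about the system's use
    conformity_assessed: bool  # has a conformity assessment been completed?

    def is_complete(self) -> bool:
        """Minimal completeness check before submission to the database."""
        return bool(self.provider and self.system_name
                    and self.intended_purpose and self.conformity_assessed)

entry = HighRiskRegistration(
    provider="Example GmbH",
    system_name="CV-Screening v2",
    intended_purpose="Ranking job applications",
    conformity_assessed=True,
)
print(entry.is_complete())  # True
```

The point of the sketch is the workflow the article describes: a provider supplies meaningful information plus conformity-assessment status, and authorities or users can then check the record against the Act's requirements.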
In addition, operators will be obliged to inform the competent national authorities of serious incidents or malfunctions that constitute a breach of fundamental rights obligations, as well as of recalls from the market. The national authorities will investigate these incidents and forward all data to the Commission, which will analyse them accordingly.
The European Commission proposed the first European legal framework for AI applications back in April 2021. On 14 June 2023, the Members of the European Parliament adopted their negotiating position on the AI Act. Negotiations will then begin with the EU member states in the Council of the European Union on the final form of the EU AI Act. The common goal is to reach an agreement on the AI Act by the end of 2023.