AI Act

The AI Act, officially known as Regulation (EU) 2024/1689, is the European Union’s first comprehensive legal framework for regulating artificial intelligence (AI). The goal: safe, transparent, and trustworthy AI that respects fundamental rights and enables innovation.

What is the EU AI Act?

The AI Act (also known as the Artificial Intelligence Act) is an EU regulation (Regulation (EU) 2024/1689) that came into force on August 1, 2024, and will be implemented in stages over the following years. It applies to a wide range of actors along the AI value chain, from providers and deployers to importers and distributors of AI systems.

Essentially, it aims to systematically regulate the risks that AI can pose to people or society—such as through discrimination, manipulation, surveillance, or misconduct.

Who must comply with the AI Act?

| Group of actors | Description | Obligations under the AI Act |
| --- | --- | --- |
| Providers | Develop AI systems or place them on the market under their own name | Risk assessment, conformity assessment, technical documentation, transparency obligations |
| Deployers (operators) | Use AI systems in their own processes (e.g., companies, government agencies) | Ensuring legally compliant use, training employees, risk management |
| Importers | Introduce AI systems from third countries into the EU market | Verification of compliance, disclosure of information to authorities |
| Distributors | Sell AI systems within the EU | Information obligations, cooperation with authorities |
| Authorized representatives | Act on behalf of a provider not established in the EU | Support with compliance processes; contact point for regulatory authorities |
| Operators of GPAI | Use or integrate so-called “general-purpose AI” (e.g., large language models such as GPT) | Transparency, disclosure of technical documentation, registration with authorities where required |

Table 1: Stakeholder group, description, and responsibilities under the AI Act

Objectives of the AI Act

The AI Act has three overarching objectives: ensuring that AI systems placed on the EU market are safe and respect fundamental rights, providing legal certainty to support investment and innovation, and strengthening trust in trustworthy AI.

It is based on a long-standing EU AI strategy and is part of the European Commission’s digital future initiative.

Risk-based approach and risk classes

A key element of the AI Act is the classification of AI systems according to risk, because different applications have different potential risks.

| Risk class | Description | Regulatory requirements |
| --- | --- | --- |
| Unacceptable risk | AI that disproportionately jeopardizes fundamental rights or safety | Prohibited |
| High risk | AI in sensitive areas (e.g., health, justice, workplace) | Strict requirements (e.g., risk management, transparency, conformity assessment) |
| Limited risk | AI that poses only minor risks | Transparency requirements (e.g., notices for users) |
| Minimal risk | AI with hardly any risks | Hardly any requirements |

Table 2: Risk classes under the AI Act

This risk classification determines which obligations providers, operators, or developers must comply with and must be taken into account as early as the design and development stages of AI systems.
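To illustrate how the classification in Table 2 drives obligations in practice, the mapping can be sketched as a simple lookup. The obligation lists below are abbreviated summaries for illustration only, not the regulation’s full requirements:

```python
from enum import Enum


class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Abbreviated per-class obligation summaries (see Table 2);
# a simplified sketch, not legal advice.
OBLIGATIONS = {
    RiskClass.UNACCEPTABLE: ["prohibited - may not be placed on the EU market"],
    RiskClass.HIGH: ["risk management", "transparency", "conformity assessment"],
    RiskClass.LIMITED: ["transparency notices for users"],
    RiskClass.MINIMAL: [],
}


def obligations_for(risk: RiskClass) -> list[str]:
    """Return the (abbreviated) obligations for a given risk class."""
    return OBLIGATIONS[risk]


print(obligations_for(RiskClass.LIMITED))
```

Because the risk class must be determined before any obligation applies, a lookup like this mirrors the order of work the regulation expects: classify first, then derive duties.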

Obligations and requirements

For providers and operators of high-risk AI

High-risk AI systems are subject to the strictest requirements, including a risk management system, data governance, technical documentation, logging, human oversight, and a conformity assessment before the system is placed on the market.

Transparency and labeling requirements

Even AI systems with lower risks are subject to disclosure requirements, e.g., when users need to know that they are interacting with AI (as with chatbots or automated decisions).

Employers’ obligation to teach AI skills

The EU AI Act creates not only technical but also organizational requirements for companies. Particularly relevant: Employers must ensure that employees have sufficient knowledge and skills in dealing with AI systems—especially when these systems are used in sensitive or regulated areas.

Specifically, this means: companies should identify which employees work with AI systems, provide role-specific training on the capabilities, limitations, and risks of those systems, and document these measures.

This training requirement arises indirectly from the requirements for the safe use, monitoring, and documentation of AI systems. Companies that invest in this at an early stage not only minimize legal risks, but also strengthen their workforce’s confidence in new technologies.

Tip: A documented training strategy can serve as proof of AI compliance during audits or inspections.

AI supervisory authority in Germany

The Federal Network Agency (Bundesnetzagentur) is designated as the central AI supervisory authority in Germany for the implementation of the EU AI Act. Its responsibilities include market surveillance of AI systems, enforcement of the regulation’s requirements, and serving as a point of contact for companies and affected persons.

In addition, the Federal Network Agency works closely with the European AI Office, which coordinates at the EU level. This ensures that uniform standards and procedures apply throughout the internal market.

Penalties for violations of the AI Act

The EU AI Act provides for substantial fines for companies that violate the provisions of the regulation. The amount of the penalty depends on the severity of the violation and the company’s global annual turnover; for each tier, whichever of the two amounts is higher applies.

The most important sanctions at a glance:

| Violation | Maximum fine |
| --- | --- |
| Prohibited practices under Article 5 | up to €35 million or 7% of the previous year’s global annual turnover |
| Violations of other provisions of the regulation | up to €15 million or 3% of the previous year’s global annual turnover |
| Failure to comply with GPAI rules | up to €15 million or 3% of the previous year’s global annual turnover |
| Providing false, incomplete, or misleading information to authorities | up to €7.5 million or 1.5% of the previous year’s global annual turnover |

Table 3: Violations and fines under the AI Act
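The fine caps in Table 3 follow a simple rule: each tier is capped at the higher of a fixed amount and a percentage of the previous year’s global annual turnover. A minimal sketch of that arithmetic (the tier names are our own shorthand, not terms from the regulation):

```python
# Fine tiers from Table 3: (fixed cap in euros, share of global annual turnover).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # Article 5 violations
    "other_provisions": (15_000_000, 0.03),
    "gpai_rules": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.015),
}


def max_fine(tier: str, annual_turnover: float) -> float:
    """Return the maximum possible fine for a violation tier, given the
    previous year's global annual turnover in euros."""
    fixed_cap, pct = FINE_TIERS[tier]
    # The regulation applies whichever of the two amounts is higher.
    return max(fixed_cap, pct * annual_turnover)


# Example: a company with €1 billion turnover violating Article 5
print(max_fine("prohibited_practices", 1_000_000_000))
```

For the €1 billion example, 7% of turnover (€70 million) exceeds the €35 million fixed cap, so the percentage determines the maximum; for small companies, the fixed cap dominates instead.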

Companies—especially those that use high-risk AI or general-purpose AI (GPAI)—should take compliance measures at an early stage to avoid financial risks and reputational damage.


Olga Fedukov completed her studies in Media Management at the University of Applied Sciences Würzburg. In eology's marketing team, she is responsible for the comprehensive promotion of the agency across various channels. Furthermore, she takes charge of planning and coordinating the content section on the website as well as eology's webinars.

Olga Fedukov, Marketing Manager, o.fedukov@eology.de, +49 9381 58290138