
Browse and search selected articles of Regulation (EU) 2024/1689 (the EU AI Act)

30 articles indexed



Article 4: AI literacy

Requires providers and deployers of AI systems to ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy, taking into account their technical knowledge, experience, education, training, and the context of use.

prohibited, high-risk, gpai, gpai-systemic, limited, minimal

Article 5: Prohibited artificial intelligence practices

Lists AI practices that are prohibited in the Union, including subliminal manipulation, exploitation of vulnerabilities, social scoring, individual predictive policing, untargeted facial scraping, emotion inference in workplace/education, biometric categorization for sensitive attributes, and real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions).

prohibited

Article 6: Classification rules for high-risk AI systems

Defines two pathways for high-risk classification: (1) AI systems that are safety components of products or are themselves products covered by Annex I Union harmonisation legislation requiring third-party conformity assessment; (2) AI systems referred to in Annex III areas, unless they do not pose a significant risk of harm to health, safety, or fundamental rights.

high-risk
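The two Article 6 pathways reduce to a short decision rule, sketched below. This is an illustrative simplification with invented flag names; each flag stands in for a legal test that in practice requires case-by-case assessment.

```python
# Illustrative sketch of the two Article 6 classification pathways.
# Flag names are invented; each stands for a legal test, not a simple boolean.

def is_high_risk(
    is_annex_i_safety_component: bool,      # pathway 1: safety component / product under Annex I law
    requires_third_party_assessment: bool,  # ... subject to third-party conformity assessment
    in_annex_iii_area: bool,                # pathway 2: listed Annex III use case
    poses_significant_risk: bool = True,    # Article 6(3) derogation when False
) -> bool:
    """Return True if the system is classified as high-risk under Article 6."""
    if is_annex_i_safety_component and requires_third_party_assessment:
        return True
    if in_annex_iii_area and poses_significant_risk:
        return True
    return False

print(is_high_risk(True, True, False))                               # True
print(is_high_risk(False, False, True, poses_significant_risk=False))  # False
```

The Article 6(3) derogation is modelled as a single default flag; the actual provision requires documented justification and registration of the assessment.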

Article 9: Risk management system

Requires establishment of a risk management system throughout the entire lifecycle of the high-risk AI system, including identification and analysis of known and foreseeable risks, estimation and evaluation of risks from intended use and reasonably foreseeable misuse, adoption of appropriate risk management measures, and testing to identify most appropriate measures.

high-risk

Article 10: Data and data governance

Requires that training, validation, and testing data sets are subject to appropriate data governance and management practices, including documentation of data sources, assessment of relevance and representativeness, examination for biases affecting health, safety, or fundamental rights, and measures for bias detection and mitigation.

high-risk

Article 11: Technical documentation

Requires preparation of technical documentation before the system is placed on the market or put into service, demonstrating compliance with high-risk requirements. Documentation must include general system description, development process details, monitoring and control mechanisms, risk management compliance, lifecycle changes, standards applied, human oversight measures, and accuracy/robustness/cybersecurity information.

high-risk

Article 12: Record-keeping

Requires high-risk AI systems to allow automatic recording of events (logs) throughout their lifetime, enabling traceability of system operation. For remote biometric identification systems, the logs must include at least the period of each use, the reference database against which input data was checked, the input data for which the search led to a match, and the natural persons involved in verifying the results.

high-risk
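The minimum log fields listed for remote biometric identification systems can be sketched as a simple record type. This is an illustrative schema only; the field names and types are invented, since the article specifies what must be recorded, not how.

```python
# Illustrative sketch of the minimum Article 12 log fields for remote
# biometric identification systems. Field names/types are invented.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UsageLogRecord:
    period_start: datetime        # start of each use of the system
    period_end: datetime          # end of each use
    reference_database: str       # database against which input data was checked
    input_data: str               # input for which the search led to a match
    verifying_persons: list[str]  # natural persons who verified the results

record = UsageLogRecord(
    period_start=datetime(2025, 3, 1, 9, 0),
    period_end=datetime(2025, 3, 1, 9, 5),
    reference_database="watchlist-v3",
    input_data="frame-000183.jpg",
    verifying_persons=["officer-42"],
)
print(record.reference_database)  # watchlist-v3
```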

Article 13: Transparency and provision of information to deployers

Requires high-risk AI systems to be designed to enable deployers to interpret output and use it appropriately. Instructions for use must include provider identity, system capabilities and limitations, intended purpose, foreseeable misuse scenarios, human oversight measures, expected accuracy/robustness/cybersecurity levels, known risks, and input data specifications.

high-risk

Article 14: Human oversight

Requires high-risk AI systems to be designed to allow effective oversight by natural persons, enabling them to fully understand system capacities and limitations, monitor for anomalies, remain aware of automation bias, correctly interpret output, decide not to use the system or override its output, and interrupt operation via a stop mechanism.

high-risk

Article 15: Accuracy, robustness and cybersecurity

Requires high-risk AI systems to achieve appropriate accuracy levels, be resilient to errors and adversarial attempts, include fail-safe mechanisms, and implement cybersecurity measures proportionate to the risks. Accuracy levels and metrics must be declared in instructions for use.

high-risk

Article 16: Obligations of providers of high-risk AI systems

Providers of high-risk AI systems shall ensure that their systems are compliant with the requirements set out in Chapter III, Section 2, have a quality management system in place, draw up technical documentation, keep logs, ensure conformity assessment is carried out, affix the CE marking, and comply with registration obligations.

high-risk

Article 17: Quality management system

Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. The system shall be documented in a systematic and orderly manner in the form of written policies, procedures, and instructions, and shall include strategies for regulatory compliance, design control, and post-market monitoring.

high-risk

Article 22: Authorised representatives of providers of high-risk AI systems

Prior to making their high-risk AI systems available on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union. The authorised representative shall perform the tasks specified in the mandate received from the provider.

high-risk

Article 26: Obligations of deployers of high-risk AI systems

Deployers must use systems in accordance with the instructions for use, ensure human oversight by competent persons, monitor operation, keep logs, perform data protection impact assessments where required, and inform affected persons and workers' representatives about their use of high-risk AI systems.

high-risk

Article 27: Fundamental rights impact assessment for high-risk AI systems

Certain deployers of high-risk AI systems (public bodies and private entities providing public services) must perform a fundamental rights impact assessment before putting the system into use, assessing impact on rights of affected persons, specific risks of harm, human oversight measures, and governance measures.

high-risk

Article 43: Conformity assessment

Defines conformity assessment procedures for high-risk AI systems: self-assessment based on internal control (Annex VI) for most Annex III systems, and third-party assessment involving a notified body (Annex VII), notably for remote biometric identification and biometric categorisation systems where harmonised standards are not applied in full.

high-risk

Article 49: Registration

Requires providers and deployers of high-risk AI systems to register in the EU database established under Article 71 before placing the system on the market or putting it into service. Registration information must be kept up to date.

high-risk

Article 50: Transparency obligations for providers and deployers of certain AI systems

Establishes transparency obligations for AI systems interacting with persons (must disclose AI nature), generating synthetic content (must mark as AI-generated), performing emotion recognition (must inform exposed persons), and performing biometric categorization (must inform exposed persons).

limited

Article 51: Classification of general-purpose AI models as general-purpose AI models with systemic risk

Defines when a GPAI model is classified as posing systemic risk: either trained with compute greater than 10^25 FLOPs (automatic classification) or designated by the Commission based on high-impact capabilities assessed against criteria including number of users, cross-border reach, and degree of autonomy.

gpai, gpai-systemic
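The 10^25 FLOP presumption in Article 51 is a plain numeric threshold, with Commission designation as the alternative route. The sketch below is illustrative only; function and constant names are invented, and the designation pathway is reduced to a single flag.

```python
# Illustrative sketch of Article 51 classification. The threshold value is
# from the article; names and the boolean designation flag are invented.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, Article 51(2)

def is_systemic_risk(training_flops: float, commission_designated: bool = False) -> bool:
    """Return True if a GPAI model is classified as posing systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD or commission_designated

print(is_systemic_risk(5e25))        # True  (automatic: above threshold)
print(is_systemic_risk(1e24))        # False (below threshold, not designated)
print(is_systemic_risk(1e24, True))  # True  (Commission designation)
```

In the Regulation the threshold only creates a presumption of high-impact capabilities; the provider can contest it under the Article 52 procedure, which a boolean cannot capture.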

Article 52: Procedure

Sets out the procedure for systemic-risk classification: a provider whose general-purpose AI model meets the Article 51 threshold shall notify the Commission without delay and in any event within two weeks, and may present arguments that the model nevertheless does not present systemic risk. The Commission may reject those arguments, designate models on its own initiative, reassess a classification on reasoned request, and shall publish a list of AI models with systemic risk.

gpai, gpai-systemic

Article 53: Obligations for providers of general-purpose AI models

All GPAI providers must draw up and keep up to date technical documentation (Annex XI), make information and documentation available to downstream providers (Annex XII), put in place a policy to comply with Union copyright law, in particular Directive (EU) 2019/790, including the text and data mining opt-out reservations expressed by rights holders, and publish a sufficiently detailed summary of the training content. Models released under a free and open-source licence meeting specific criteria are exempt from the documentation obligations unless they pose systemic risk.

gpai, gpai-systemic

Article 54: Authorised representatives of providers of general-purpose AI models

Providers of GPAI models established outside the Union must appoint an authorised representative in the Union before making the model available on the Union market.

gpai, gpai-systemic

Article 55: Obligations for providers of general-purpose AI models with systemic risk

In addition to standard GPAI obligations, providers of models with systemic risk must perform model evaluation including adversarial testing (red teaming), assess and mitigate systemic risks, track and report serious incidents to the AI Office and national authorities, and ensure adequate cybersecurity for the model and its physical infrastructure.

gpai-systemic

Article 56: Codes of practice

The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level as a means of contributing to the proper application of this Regulation, taking into account international approaches. GPAI model providers may rely on adherence to a code of practice to demonstrate compliance with obligations.

gpai, gpai-systemic

Article 72: Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems

Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the AI technology and the risks of the high-risk AI system. The system shall actively and systematically collect, document, and analyse relevant data which may be provided by deployers or collected through other sources.

high-risk

Article 73: Reporting of serious incidents

Providers of high-risk AI systems placed on the Union market shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred. The report shall be made immediately after the provider has established a causal link or reasonable likelihood of a causal link between the AI system and the serious incident.

high-risk

Article 95: Voluntary codes of conduct for non-high-risk AI systems

Providers and deployers of AI systems that are not high-risk are encouraged to create codes of conduct intended to foster the voluntary application of some or all of the requirements applicable to high-risk AI systems, adapted to the intended purpose of the systems and the lower risk involved.

minimal

Article 99: Penalties

Establishes the penalty framework for infringements of the Regulation. Maximum fines are EUR 35 million or 7% of total worldwide annual turnover, whichever is higher, for prohibited practices; EUR 15 million or 3% for non-compliance with most other obligations; and EUR 7.5 million or 1% for supplying incorrect, incomplete, or misleading information. For SMEs and startups, each ceiling is the lower of the fixed amount and the turnover-based amount; Union institutions, bodies, and agencies are subject to separate caps of up to EUR 1.5 million under Article 100.

prohibited, high-risk, gpai, gpai-systemic, limited, minimal
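The fine ceilings follow a simple max/min rule: the higher of the fixed amount and the turnover share for most operators, the lower of the two for SMEs and startups. A minimal sketch, with invented names and the amounts from the summary above:

```python
# Illustrative sketch of the Article 99 fine ceilings. Tier labels are
# invented; fixed amounts (EUR) and percentages are from the article.

FINE_TIERS = {
    "prohibited_practice": (35_000_000, 7),  # Article 5 infringements
    "other_obligation":    (15_000_000, 3),  # most other obligations
    "incorrect_info":      (7_500_000,  1),  # incorrect/misleading information
}

def max_fine(tier: str, annual_turnover: float, is_sme: bool = False) -> float:
    """Upper bound of the fine: the higher of the fixed amount and the
    turnover share, except for SMEs/startups, where the lower applies."""
    fixed, pct = FINE_TIERS[tier]
    bound = min if is_sme else max
    return bound(fixed, annual_turnover * pct / 100)

# A large provider with EUR 1 bn turnover: 7% = EUR 70 m > EUR 35 m fixed.
print(max_fine("prohibited_practice", 1_000_000_000))        # 70000000.0
# An SME committing the same infringement gets the lower of the two ceilings.
print(max_fine("prohibited_practice", 1_000_000_000, True))  # 35000000
```

The actual fine within the ceiling is set by national authorities, taking into account the criteria of Article 99(7); the function only computes the upper bound.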