Artificial intelligence (AI) is on everyone's minds. Just this week, Donald Trump ruffled feathers by re-posting pictures, many of which appear to have been created by AI, that falsely implied he had Taylor Swift's endorsement for president.
The purpose of the new AI Regulation 2024/1689 of the European Parliament and of the Council (henceforth referred to as ‘AIR’ or ‘the Regulation’) is to support innovation, improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety and fundamental rights and avoiding harmful effects of AI.
This is a tall task (reflected by the extensive nature of the Regulation which includes 180 recitals, 113 articles and 13 annexes) and it is one which the EU has taken on by means of a risk-based approach, following in the footsteps of the GDPR and cyber security regulations, in an attempt to harmonise rules while allowing for the flexibility needed to keep up with new technologies.
Risk-based approaches call for those who may be affected to evaluate their position, and those who deem their activities to be ‘high-risk’ will be subject to stricter obligations and requirements than those whose activities are ‘low-risk’.
Written by Abigail Sked
Paralegal
This is a complex regulation which will be developed in years to come by expert opinion, official guidance and secondary legislation, and so this article attempts to act as a summarised guide to the 10 areas that we consider the most crucial and curious for Spanish companies.
In short, you will need to take the following steps:
- Decide if you are affected by the Regulation.
- Decide whether the AI system you use would be considered prohibited, high-risk, general-purpose with a systemic risk, or general-purpose.
- Decide if you are the provider, importer, distributor or deployer of that AI system (you may fall into more than one category; we explain this further below, but definitions can also be found in Article 3 of the Regulation).
- Understand and fulfil your obligations depending on the type of AI system you use and your role in its interaction.
Scope: Am I affected?
If you use, develop, import, distribute or provide AI services in a professional capacity, this law probably affects you. Note that even providers established outside of the EU will be affected if their services are offered within the EU. Therefore, it’s probably most helpful to look at the areas/groups which are excluded from the scope of this Regulation:
Exclusions from the scope of the Regulation (Article 2):
- Competences of the Member States concerning national security.
- AI systems placed on the market or used exclusively for military, defence or national security purposes.
- Public authorities in a third country and international organisations where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union and provide adequate safeguards of fundamental rights and freedoms.
- Provisions on the liability of providers of intermediary services.
- AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.
- Research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service.
- Union legal acts relating to personal data protection, consumer protection and product safety.
- Natural persons using AI systems in the course of a purely personal non-professional activity.
(Definitions can be found in article 3 of the Regulation, but an important term to know is ‘deployer’ which means “a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity”.)
AI Literacy (Article 4)
Does your company provide or make use of AI such as ChatGPT? If so, the AIR requires you to take measures to ensure, to the best of your ability, a sufficient level of AI literacy among your staff and other persons dealing with the operation and use of AI systems on your behalf, taking into account their technical knowledge, experience, education and training, the context in which the AI systems are to be used, and the persons or groups of persons on whom the AI systems are to be used.
Prohibited AI practices (Article 5)
The AIR sets out certain AI practices that it deems inappropriate and therefore prohibits their development and use entirely, such as:
- Subliminal, manipulative or deceptive techniques used to impair a person’s ability to make an informed decision and thus change their behaviour.
- Exploitation of the vulnerabilities of natural persons.
- ‘Social scoring’ of natural persons based on their behaviour or characteristics leading to their detrimental/unfavourable treatment in an unjustified or disproportionate way or in a social context distinct from that in which their data was collected.
- Risk assessments of natural persons in order to assess or predict the risk of them committing a criminal offence, based solely on profiling or on assessing their personality traits and characteristics.
- Creation or expansion of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
- Inference of the emotions of a natural person in the workplace or in education institutions, except where the use of the AI system is intended to be put into place or on the market for medical or safety reasons.
- Biometric categorisation systems that individually categorise natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (there are some exceptions).
- The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the objectives set out in Article 5.
- Other AI practices which infringe other Union law.
High-risk AI systems (Articles 6-49)
Should the AI system that your company uses or develops be classified as high-risk, your interaction with that system will be subject to particularly tight requirements and obligations. A very simplified summary is that AI systems will be high-risk when they are, or provide safety features for, products subject to EU harmonisation legislation; when their purpose is to provide safety features for critical infrastructure; or when they pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by materially influencing the outcome of decision-making, especially in the case of profiling. (A simplified sketch of this classification logic appears after the list of Annex III areas below.)
A system will be classed as high risk when:
- the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I; AND
- the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation.
Annex III also lists a range of systems which will be considered high-risk. This will be particularly relevant to those who interact with AI systems in the following areas:
- Biometrics
- Critical infrastructure
- Educational and vocational training
- Employment, workers’ management and access to self-employment
- Access to and enjoyment of essential private services and essential public services and benefits.
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
If the AI system that you use or provide falls under the scope of Annex III but you do not consider it to be high-risk, you must document your reasoning for that decision.
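To make the classification logic above more concrete, here is a purely illustrative Python sketch of how the two routes to high-risk status (the Annex I safety-component route and the Annex III route) might be expressed as a decision helper. The function name and boolean flags are our own assumptions; the real assessment is a legal analysis and cannot be reduced to a handful of booleans.

```python
# Illustrative sketch only: a simplified, hypothetical decision helper mirroring
# the high-risk classification routes described above. Not legal advice.

def is_high_risk(
    safety_component_of_annex_i_product: bool,
    requires_third_party_conformity_assessment: bool,
    listed_in_annex_iii: bool,
    poses_significant_risk_to_health_safety_or_rights: bool,
) -> bool:
    """Rough approximation of the high-risk classification rules."""
    # Route 1: safety component of (or itself) a product covered by Annex I
    # harmonisation legislation AND subject to third-party conformity assessment.
    if safety_component_of_annex_i_product and requires_third_party_conformity_assessment:
        return True
    # Route 2: Annex III use cases are high-risk unless the provider documents
    # that the system does not pose a significant risk (see the note above).
    if listed_in_annex_iii:
        return poses_significant_risk_to_health_safety_or_rights
    return False


# Example: a CV-screening tool (Annex III, employment) that materially
# influences hiring decisions would come out as high-risk.
print(is_high_risk(False, False, True, True))  # True
```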
Requirements for high-risk AI systems
High-risk AI systems must comply with the requirements set out in Section 2 of the AIR, taking into account their purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. It is the provider of the AI system who is responsible for ensuring that their product is fully compliant with all applicable requirements.
- A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems (Article 9)
- High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in Article 10, including taking measures to prevent possible biases by ensuring data sets are sufficiently representative and free of errors.
- Technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements. It shall be drawn up before the system is placed on the market or put into service and it shall be kept up-to-date (Article 11 and Annex IV).
- High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system to identify risks and monitor operation. (Article 12)
- These systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately and meet their obligations. This will include clear, accessible instructions for use made available to deployers (Article 13)
- Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, and high-risk AI systems will be designed and developed in a way which allows for this (Article 14)
- Design and development which achieves an appropriate level of accuracy, robustness, and cybersecurity, and which ensures that high-risk AI systems perform consistently in those respects throughout their lifecycle (Article 15).
Obligations of providers of high-risk AI systems
There are many actors involved in the AI value chain, including importers (Article 23), distributors (Article 24), deployers (Article 26) and third parties, all of whom are subject to their own obligations. However, any one of them can also become subject to the obligations of the provider if:
- They put their name or trademark on a high-risk AI system;
- They make a substantial modification to a high-risk AI system in such a way that it remains high-risk; OR
- They modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk, in such a way that it becomes high-risk.
The obligations of high-risk AI system providers are largely set out in articles 16-22 of the AIR and include:
- Ensuring that the high-risk AI systems are compliant with the requirements set out above.
- Indicating their company and contact details
- Having a quality management system in place (Article 17)
- Keeping the relevant documentation (Article 18)
- When under their control, keeping the logs automatically generated by their high-risk AI systems (Article 19)
- Ensuring the AI system undergoes the relevant conformity assessment procedure as referred to in Article 43 prior to being placed on the market or put into service.
- Drawing up an EU declaration of conformity and affixing the CE marking of conformity to the system, packaging or documentation (Articles 47 and 48)
- Complying with registration obligations (Article 49)
- Cooperating with national competent authorities.
- Reporting any serious incidents linked to the AI system to the market surveillance authorities of the Member States where the incident occurred (Article 73)
- Ensuring that the high-risk AI system complies with accessibility requirements in accordance with Directives (EU) 2016/2102 and (EU) 2019/882.
- (In the case of providers established in third countries) Appointing, by written mandate, an authorised representative established in the Union prior to making their high-risk AI systems available on the Union market (Article 22)
Although the provider bears the brunt of the responsibility for the high-risk AI system’s compliance with the aforementioned requirements, importers and distributors have a responsibility for verifying this conformity, for not putting that conformity in jeopardy and for cooperating with relevant competent authorities.
Obligations of deployers of high-risk AI systems (Article 26)
Deployers of high-risk AI systems (i.e. those who make use of those systems for professional purposes) shall:
- Take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems.
- Assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support.
- (To the extent that they exercise control over the input data) ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.
- Monitor the operation of the high-risk AI system on the basis of the instructions for use.
- Inform the provider or distributor and relevant market surveillance authority and suspend use of the system when they have reason to consider that its use in accordance with the instructions may result in a risk to the health or safety or to fundamental rights of persons.
- Keep the logs automatically generated by that high-risk AI system to the extent such logs are under their control.
- Where applicable, use the information provided to them to comply with their obligation to carry out a data protection impact assessment.
- Cooperate with the relevant competent authorities.
Some specific use cases:
- Deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons shall inform the natural persons that they are subject to the use of the high-risk AI system.
- Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system.
- Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations and not use systems that have not been registered in the EU database.
- Where the output of a high-risk AI system is used to make decisions about persons which produce legal or similarly significant effects adversely affecting their health, safety or fundamental rights, deployers must support affected persons in their right to obtain clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken (Article 86).
- In the framework of a criminal investigation, deployers of high-risk AI systems for post-remote biometric identification are subject to specific obligations (Article 26(10)).
Fundamental rights impact assessment for high-risk AI systems (Article 27)
With the exception of those systems intended to be used as safety components of critical infrastructure, the following actors will have to carry out a fundamental rights impact assessment, following the procedure outlined in Article 27 of the AIR, prior to deploying a high-risk AI system:
- Deployers that are bodies governed by public law;
- Deployers that are private entities providing public services;
- Deployers of high-risk AI systems which are used to evaluate creditworthiness or to carry out risk assessment and pricing in relation to natural persons in the case of life and health insurance.
Transparency obligations (Article 50)
A common public concern is that we are moving towards an era in which it is almost impossible to tell whether we are interacting with human or AI-generated content. Therefore, the AIR sets out certain transparency obligations both for those who provide and those who deploy (use) certain AI systems, regardless of whether they are high-risk or not. There are some exceptions, about which more information can be found in Article 50.
- If you provide AI systems intended to interact directly with natural persons, the natural persons must be informed that they are interacting with an AI system.
- If you provide AI systems which generate synthetic audio, image, video or text, the outputs must be marked in a machine-readable format and detectable as AI generated or manipulated (a simple illustrative sketch appears at the end of this section).
- If you use an emotion recognition system or biometric categorisation system, you must inform the natural persons exposed to it of the operation of the system, and process their personal data in accordance with Union law.
- If you use an AI system that generates or manipulates image, audio or video content constituting a deep fake (i.e. content which resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful), you must disclose that the content has been artificially generated or manipulated.
- If you use an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest, you must disclose that the text has been artificially generated or manipulated, unless the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.
This information must be provided in a clear and distinguishable manner at the latest at the time of the natural person’s first interaction or exposure.
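The Regulation does not prescribe any particular marking technique for AI-generated outputs. Purely as an illustration of what a machine-readable marking could look like, the hedged Python sketch below attaches an ‘AI-generated’ declaration to an output file as a JSON sidecar; the function, file names and fields are hypothetical, and real deployments would more likely rely on recognised provenance or watermarking standards.

```python
# Illustrative sketch only: a hypothetical machine-readable "AI-generated"
# marker written as a JSON sidecar next to the output file. The field names
# and file names are invented for illustration, not prescribed by the AIR.
import json


def write_provenance_sidecar(output_path: str, generator_name: str) -> None:
    """Write a small JSON file declaring that output_path is AI-generated."""
    metadata = {
        "content": output_path,
        "ai_generated": True,
        "generator": generator_name,
    }
    with open(output_path + ".provenance.json", "w", encoding="utf-8") as f:
        json.dump(metadata, f, indent=2)


# Hypothetical usage: mark a generated image before publishing it.
write_provenance_sidecar("press_image.png", "example-image-model")
```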
General-purpose AI models
It is not just AI systems of high risk which will be subject to obligations; providers of general-purpose AI models will also have to follow certain rules. If you provide a general-purpose AI model, you must:
- Draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation.
- Draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems so that they may have a good understanding of the capabilities and limitations of the model and comply with their obligations.
- Put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790.
- Draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model.
- Cooperate as necessary with the Commission and the national competent authorities in the exercise of their competences and powers.
- (In the case of providers established in third countries) Appoint, by written mandate, an authorised representative which is established in the Union prior to placing a general-purpose AI model on the Union market.
There are some exceptions for free, open-source models. See articles 53-54.
Furthermore, general-purpose AI models which are categorised as having a systemic risk must comply with further obligations. These are models which have high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks. For reference, a general-purpose AI model shall be presumed to have high impact capabilities when the cumulative amount of computation used for its training, measured in floating point operations, is greater than 10²⁵.
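As a rough illustration of how that presumption works, the short Python sketch below compares a hypothetical cumulative training compute figure against the 10²⁵ FLOP threshold; the variable names and the example figure are assumptions made purely for illustration.

```python
# Illustrative sketch only: checking the presumption of "high impact
# capabilities" against the 10**25 FLOP training-compute threshold.

SYSTEMIC_RISK_COMPUTE_THRESHOLD_FLOP = 10**25

# Hypothetical cumulative training compute for some model (not a real figure).
cumulative_training_compute_flop = 3.2e25

presumed_high_impact = cumulative_training_compute_flop > SYSTEMIC_RISK_COMPUTE_THRESHOLD_FLOP
print(presumed_high_impact)  # True: the model would be presumed to have high impact capabilities
```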
In addition to the obligations for general-purpose AI models outlined above, general-purpose AI models with a systemic risk must also:
- Perform model evaluation in accordance with standardised protocols and tools
- Assess and mitigate possible systemic risks at Union level
- Keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them
- Ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and for the physical infrastructure of the model
Codes of Practice (Article 56)
The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level in order to contribute to the proper application of this Regulation, taking into account international approaches. These codes shall be ready at the latest by 2 May 2025.
AI regulatory sandboxes
‘AI regulatory sandbox’ means a controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision. Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which shall be operational by 2 August 2026 (Article 57)
Any testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes must fulfil the conditions outlined in Article 60 and must rely on the freely-given, informed, prior consent of the subjects of the testing.
Penalties (Articles 99-101)
Member States shall lay down the rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures. The penalties provided for are to be effective, proportionate and dissuasive and take into account the interests of SMEs, including start-ups, and their economic viability.
The AIR outlines a scale of potential administrative fines (a worked example of how the caps apply follows this list):
- For non-compliance with the prohibition of certain AI practices: Up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
- For non-compliance of members of the high-risk AI value chain and notified bodies with their obligations and requirements: Up to EUR 15,000,000 or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
- For the supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request: Up to EUR 7,500,000 or, if the offender is an undertaking, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
- Fines imposed by the Commission for providers of general-purpose AI models who intentionally or negligently infringed or failed to comply with the provisions of this Regulation: Fines not exceeding 3% of their annual total worldwide turnover in the preceding financial year or EUR 15,000,000, whichever is higher.
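To see how the ‘whichever is higher’ caps listed above work for an undertaking, here is a small, purely illustrative Python sketch; the turnover figure is invented, and the calculation only shows the upper limit of a possible fine, not the fine that would actually be imposed.

```python
# Illustrative sketch only: the upper limit of an administrative fine for an
# undertaking is the higher of a fixed amount and a percentage of worldwide
# annual turnover. The turnover below is a made-up example figure.

def max_fine_cap(fixed_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Return the fine cap: the maximum of the fixed amount and % of turnover."""
    return max(fixed_cap_eur, turnover_pct / 100 * annual_turnover_eur)


annual_turnover_eur = 1_200_000_000  # hypothetical worldwide annual turnover

# Prohibited practices: up to EUR 35,000,000 or 7% of turnover, whichever is higher.
print(max_fine_cap(35_000_000, 7, annual_turnover_eur))  # 84000000.0
```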
Other consequences of non-compliance include:
- Order by the national market surveillance authority to suspend, terminate or modify testing in real-world conditions that does not comply with AIR conditions (Article 76)
- Where the market surveillance authority deems that an AI system presents a risk to the health, safety or fundamental rights of persons, it shall require corrective actions to be taken to bring the AI system into compliance, or the withdrawal or recall of the AI system from the market (Article 79).
- Order by the market surveillance authority to put an end to formal non-compliance with obligations relating to CE markings, EU declarations of conformity, technical documentation, etc. (Article 83)
Dates of application
This AI Regulation will apply generally from 2 August 2026.
However, different dates of application apply to certain sections of the Regulation:
- Chapters I (General Provisions) and II (Prohibited AI Practices) shall apply from 2 February 2025;
- Chapter III Section 4 (Notifying authorities and notified bodies), Chapter V (General-Purpose AI Models), Chapter VII (Governance) and Chapter XII (Penalties) and Article 78 (Confidentiality) shall apply from 2 August 2025, with the exception of Article 101 (Fines for providers of general-purpose AI models);
- The classification of certain systems as high-risk AI systems under Article 6(1), with regard to safety components of products and products subject to Union harmonisation legislation, and the corresponding obligations in this Regulation shall apply from 2 August 2027.
Looking for advice on compliance with EU regulations and/or how your company processes personal data? Contact us to discuss your queries: