This summer, the European Union (EU) released a new regulation governing the development and use of artificial intelligence (AI), designed to protect EU citizens.
The Artificial Intelligence Act (AI Act), officially titled “Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act),” can be accessed in multiple languages and formats on EUR-Lex, the official website for EU legal documents.
At 144 pages, with 13 chapters, 113 articles and 13 annexes, this document can be quite overwhelming. But don’t worry: this blog post breaks it down a bit and will give you a basic understanding of what it’s all about.
Disclaimer: The reader is responsible for ensuring compliance with all relevant laws and regulations. InterWorks does not offer legal advice nor guarantee that its services or products will ensure compliance with any laws or regulations.
Agenda
- When Will This be Enforced? (Timeline)
- Who Needs to Know? (Scope and Personas)
- What is This? (Risk Categories)
- What Needs to be Done? (Actions)
- Why Should You Care? (Penalties)
- Wrap Up
When Will This be Enforced? (Timeline)
Wondering if you need to act now or can wait? This section outlines key dates and deadlines.
Proposed by the Commission in April 2021 and agreed upon by the European Parliament and the Council in December 2023, the EU AI Act entered into force on August 1, 2024.
The following dates indicate when the respective regulations will take effect based on the risk category:
- February 2025 (after 6 months): Prohibitions on AI practices that pose an unacceptable risk.
- August 2025 (after 1 year): Regulations for General Purpose AI (GPAI) models placed on the market from that date on.
- August 2026 (after 2 years): Regulations for high-risk AI systems as listed in Annex III.
- August 2027 (after 3 years): Regulations for high-risk AI systems as listed in Annex I, as well as the necessary steps for providers of General Purpose AI models placed on the market before August 2, 2025.
Who Needs to Know? (Scope and Personas)
Yes, this is legal material, but it’s essential for everyone involved in AI — including users.
Scope
The AI Act applies to AI systems used in a professional context inside the European Union (EU), including AI systems located outside of the EU whose output is used in the EU.
Exemptions for the AI Act regulations include:
- Purely personal, non-professional activities by individuals
- Scientific research or development
- Military, defense or national security purposes
Personas
There are general requirements for AI systems, but most of the regulations define specific obligations depending on the parties involved, or personas. The two primary ones are:
- Provider: A person that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark.
- Deployer: A person that uses an AI system in a professional activity; depending on the system, its use may affect other persons as well.
There are also the Importer and Distributor personas. They are distinct from the Provider in that they have no involvement in the development of the AI system; they are only responsible for the later steps of the supply chain.
What is This? (Risk Categories)
Here’s how to determine whether your AI system falls under these regulations.
The AI Act divides artificial intelligence into categories based on the risk it might pose to the user or the population in general. These risk categories then determine what actions need to be taken to reduce or, ideally, eliminate any harm that could be done. There are three main risk categories you need to be aware of:
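Before diving into each category, here is a minimal Python sketch of how you might model this three-tier structure in your own compliance tooling. Everything in it (the enum names and the example mappings) is my own illustrative assumption; classifying a real system always requires reading the Act itself.

```python
from enum import Enum

class RiskCategory(Enum):
    """Illustrative labels for the AI Act's main risk categories."""
    UNACCEPTABLE = "prohibited entirely (chapter 2, article 5)"
    HIGH_RISK = "allowed with strict requirements (chapter 3, annexes I and III)"
    GPAI_SYSTEMIC = "general purpose AI with systemic risk (chapter 5, article 51)"

# Hypothetical examples of how use cases might map to categories.
# A real classification always requires legal review of the Act itself.
examples = {
    "social scoring of citizens": RiskCategory.UNACCEPTABLE,
    "AI safety component in a medical device": RiskCategory.HIGH_RISK,
    "large language model above the compute threshold": RiskCategory.GPAI_SYSTEMIC,
}

for use_case, category in examples.items():
    print(f"{use_case} -> {category.name}: {category.value}")
```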
1. Unacceptable Risk ❌
The absolute No-Gos. If you do one of these, read the entire article and get busy right away.
This category describes AI practices that pose an unacceptable risk and will be prohibited entirely. These prohibitions already take effect in February 2025. The full list of AI practices in this category is laid out in chapter 2, article 5; they include:
- Behavior Manipulation: AI systems that exploit vulnerabilities in individuals, such as those that manipulate behavior or exploit emotional responses. For example, an AI system used to spread false or biased information to influence voter decisions or incite panic or violence.
- Social Scoring: AI systems that implement social scoring, such as those used to evaluate the trustworthiness of individuals.
- Untargeted Scraping of Facial Images: AI systems used to create or expand a facial recognition database by broadly scraping facial images from the internet or CCTV footage.
- Real-Time Biometric Identification: Technologies used for real-time biometric identification in public spaces, which could infringe on privacy and civil liberties.
2. High-Risk AI 🔥
The risky, but necessary things.
High-Risk AI describes AI systems that might pose a high risk of harm to the health and safety or the fundamental rights of people. These systems are divided into two subcategories based on their intended use case or area. The AI Act describes this category in chapter 3, articles 6-7, and the areas are listed in annexes I and III.
Annex I covers areas in which AI systems are products, or are used as safety components in products, that are regulated under specified Union legislation. These products include:
- Medical devices
- Toys
- Transportation, lifts and aviation
Annex III describes areas in which AI systems are implemented; these areas include:
- Remote biometric identification systems (in case they are not already prohibited due to an unacceptable risk).
- Critical infrastructure, like road traffic or the supply of water, gas, heating or electricity.
- Education, employment, and asylum or border control management.
3. General Purpose AI Models with Systemic Risk
AI on a large scale, having a high impact and wide reach.
General Purpose AI models — for example, Large Language Models (LLMs) — are classified as having systemic risk when they have high-impact capabilities on the EU market due to their reach, or a potential negative effect on public health, safety, public security, fundamental rights or society as a whole.
The impact capabilities will be assessed by the Commission based on specific factors, such as:
- The model’s design.
- The quality or size of its training data.
- The computational power required for training.
(For the techies: when the cumulative amount of computation used for its training exceeds 10^25 floating point operations.)
More details on this category can be found in chapter 5, article 51 and annex XIII.
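For a rough feel of what that threshold means in practice, the sketch below uses the common approximation of about 6 x parameters x training tokens for transformer training compute. The heuristic and the example model sizes are my own assumptions; only the 10^25 FLOP threshold comes from the Act.

```python
# Rough check against the AI Act's 10^25 FLOP training-compute threshold.
# Uses the common ~6 * parameters * tokens estimate for transformer training;
# the model sizes below are invented illustration values, not real systems.
THRESHOLD_FLOPS = 1e25  # from chapter 5, article 51 of the AI Act

def training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute in floating point operations."""
    return 6 * n_parameters * n_training_tokens

for name, params, tokens in [
    ("small model", 7e9, 2e12),         # 7B parameters, 2T tokens
    ("frontier model", 1.5e12, 15e12),  # 1.5T parameters, 15T tokens
]:
    flops = training_flops(params, tokens)
    flag = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.2e} FLOPs ({flag} the 1e25 threshold)")
```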
What Needs to be Done? (Actions)
Now that you have more clarity on how to categorize your AI system, you need to know what actions to take. There are numerous requirements, too many to cover fully here, so I will highlight some of the most important ones to give you a sense of the overall direction.
General Requirements for High-Risk AI Systems
The general requirements are listed in chapter 3, section 2; the following gives you a summary of what they entail:
- Risk management system: The Act mandates a comprehensive risk management process to continuously identify, assess, mitigate and monitor risks throughout the system’s lifecycle.
- Data and data governance: The data used for developing the AI model shall be of high quality, relevant and representative to minimize bias and ensure accuracy, with clear documentation on data sources and processing methods.
- Technical documentation: Technical documentation shall be drawn up to ensure transparency and enable authorities to assess compliance with safety, accuracy and ethical standards throughout the AI system’s lifecycle.
- Record-keeping: The AI system shall automatically keep records and logs to ensure traceability, allowing authorities to monitor and verify its compliance with safety and transparency standards (see the sketch after this list).
- Transparency and provision of information to deployers: The AI system shall be accompanied by clear and comprehensive information regarding its intended purpose, capabilities and limitations, along with any necessary instructions for safe and responsible use.
- Human oversight: The AI system shall be designed to allow effective human oversight, enabling operators to monitor and intervene in the system’s operations to prevent or minimize risks to health, safety and fundamental rights.
- Accuracy, robustness and cybersecurity: AI systems shall be designed and developed to ensure high accuracy and reliability while implementing measures to protect against cybersecurity risks throughout their lifecycle.
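To make the record-keeping requirement a bit more tangible, here is a minimal sketch of automatic, traceable decision logging. The Act does not prescribe a log format; the fields, the hashing choice and the model name below are purely my own assumptions.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of automatic record-keeping for an AI system's decisions.
# The Act requires traceable logs; the exact fields here are illustrative.
logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_prediction(model_version: str, user_input: str, output: str) -> None:
    """Write one traceable, timestamped record per model decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the input so the log stays traceable without storing raw
        # personal data in plain text.
        "input_sha256": hashlib.sha256(user_input.encode()).hexdigest(),
        "output": output,
    }
    logging.info(json.dumps(record))

log_prediction("credit-scoring-v1.3", "applicant profile ...", "score=0.72")
```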
Obligations Based on Personas
In addition to these general requirements for high-risk AI systems, there are special obligations depending on the personas involved.
- Providers are, for instance, required to:
- Implement a quality management system to ensure compliance with the regulation.
- Monitor system performance after deployment and address any issues that arise to ensure compliance and safety.
- Deployers must also ensure the following:
- High-risk AI systems are used in accordance with instructions and information made available by the provider.
- Effective human oversight is in place, conducted by a trained individual with adequate support.
- Accurate records of the system’s use are maintained.
- Ongoing monitoring of the system’s performance, reporting of any issues to the provider, and compliance with applicable safety and ethical standards.
- Importers and Distributors shall, for example, ensure that:
- The provider has drawn up the required technical documentation.
- The product bears the required CE mark and is accompanied by the EU declaration of conformity. (These two are required for any system operating within the European Economic Area (EEA), declaring that the product meets all legal requirements.)
More on these persona differentiations can be found in chapter 3, section 3, and further regulations regarding things like notifications, certifications and CE markings are listed in chapter 3, sections 4 and 5.
Regulations for GPAI Models with Systemic Risk
Additional regulations for providers of GPAI models posing a systemic risk as outlined in chapter 5, article 55 include:
- Model Evaluation and Testing: Conducting standardized, state-of-the-art evaluations — including adversarial testing — to identify and mitigate systemic risks.
- Risk Assessment at Union Level: Identifying and mitigating risks that may arise across the EU from the model’s development, deployment or use.
- Incident Tracking and Reporting: Documenting and reporting serious incidents and corrective actions promptly to the AI Office and relevant authorities.
- Cybersecurity Measures: Ensuring robust cybersecurity for both the model and its physical infrastructure.
Specific Transparency Obligations
In addition to the risk categories, there are transparency obligations for AI systems that directly interact with a person and/or generate audio, image, video or text output. These shall make it clear to the user that the content is artificially generated. More details on this can be found in chapter 4, article 50.
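As a hedged illustration, a text-generating system could satisfy this by attaching a clear notice to its output. The label wording and the mechanism below are my own assumptions, not mandated text:

```python
# Illustrative sketch: prepending an AI-generated-content disclosure to output.
# The AI Act requires that users can recognize artificially generated content;
# the exact label text and delivery mechanism here are my own assumptions.
AI_DISCLOSURE = "Note: The following content was generated by an AI system."

def with_disclosure(generated_text: str) -> str:
    """Attach a clear machine-generation notice to AI-produced text."""
    return f"{AI_DISCLOSURE}\n\n{generated_text}"

print(with_disclosure("Here is a summary of your monthly sales figures ..."))
```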
Why Should You Care? (Penalties)
Why should we care? Could we just ignore this?
Nope. Like most laws and regulations, this one backs up non-compliance with penalties. And given the sensitivity of this topic and the intent to protect EU citizens, these penalties are substantial, so I strongly recommend ensuring compliance.
I am focusing on the main categories here (a quick calculation example follows the list). The full text regarding all fines and penalties can be found in chapter 12 of the AI Act.
- Non-compliance with the prohibited AI practices (article 5)
- Up to EUR 35 million.
- Or 7% of worldwide annual turnover, whichever is higher.
- Non-compliance with most other regulations (e.g. for high-risk AI systems, GPAI models, or transparency obligations)
- Up to EUR 15 million.
- Or 3% of worldwide annual turnover, whichever is higher.
- The supply of incorrect, incomplete, or misleading information to notified bodies or national competent authorities in reply to a request
- Up to EUR 7.5 million.
- Or 1% of worldwide annual turnover, whichever is higher.
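The “whichever is higher” rule boils down to a simple maximum, as this short sketch shows. The turnover figure is invented for illustration; only the caps and percentages come from chapter 12.

```python
# The fine is the higher of a fixed cap and a percentage of worldwide
# annual turnover. Caps and percentages are from chapter 12 of the AI Act;
# the example turnover is invented.
def max_fine(cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine under the 'whichever is higher' rule."""
    return max(cap_eur, turnover_pct * annual_turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2 billion worldwide annual turnover
print(f"Prohibited practices: up to EUR {max_fine(35e6, 0.07, turnover):,.0f}")
# -> EUR 140,000,000 (7% of turnover exceeds the EUR 35 million cap)
```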
Beyond these penalties, it is crucial to be aware of the status of your AI system. For instance, cloud platforms like the Snowflake AI Data Cloud already restrict the use of their AI functionality for systems classified as prohibited or high-risk, as outlined in their Acceptable Use Policy in Chapter II D.
Wrap Up
This article offers a general overview of the AI Act, which may evolve over time. I’ve highlighted some key sections to help you understand what to focus on and to navigate the official document more easily. It is by no means a replacement for legal counsel.
I highly recommend familiarizing yourself with the official regulations and identifying any areas that might apply to you. If your product or work could potentially fall into one of the restricted categories, it’s wise to take action now to avoid fines or other penalties — and to ensure a safe environment that protects the people of the European Union.