AI Act – how is risk classified? Risk levels in the Artificial Intelligence Act
The EU AI Act classifies artificial intelligence systems into four risk levels: unacceptable, high, medium (limited), and low (minimal). Each category carries different regulations and requirements that organizations developing or using AI systems must comply with. What do these risk levels cover? We discuss them in this article, along with examples.
The AI Act, or Artificial Intelligence Act, is an important step in the development of effective and responsible regulatory frameworks for artificial intelligence in Europe. This law aims to create a level playing field for all businesses while protecting the rights and interests of citizens. One of the key issues is the classification of artificial intelligence systems by risk.
What is the AI Act?
The AI Act (AIA), or Artificial Intelligence Act, is a European Union regulation proposed by the European Commission in April 2021. It is the first and, at the same time, the most comprehensive legal framework aimed at regulating the use of artificial intelligence systems on the one hand and reducing the risks associated with them on the other. There is also a third aspect: the AI Act aims to facilitate investment and innovation in this field.
Risk classifications under the Artificial Intelligence Act
A priority in drafting the act was to classify artificial intelligence systems, which was done on the basis of a risk analysis. This approach made it possible to create horizontal regulations, meaning rules that apply across all sectors.
Four risk levels for artificial intelligence systems were established: unacceptable, high, medium (limited), and low (minimal). We briefly discuss them below with examples. It should also be noted that, depending on the classification, one of three outcomes applies: the system may be banned outright, additional requirements may be imposed, or users interacting with the AI must simply be informed of that interaction.
Important: Risk classifications are still subject to minor changes.
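To make the structure of the classification concrete, here is a minimal, illustrative Python sketch of how an organization might tag the AI systems in its inventory with one of the four levels and look up the corresponding type of obligation. The level names and the one-line obligation summaries are our own shorthand for this example, not wording from the regulation.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"   # the "medium" / transparency level
    MINIMAL = "minimal"   # the "low" level

# Simplified mapping of risk level -> type of obligation under the AI Act.
# These summaries are shorthand for illustration, not legal text.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskLevel.HIGH: "Allowed only after meeting conformity requirements "
                    "(risk management, documentation, human oversight).",
    RiskLevel.LIMITED: "Allowed with transparency duties: inform users they "
                       "are interacting with AI, label generated content.",
    RiskLevel.MINIMAL: "No mandatory obligations; voluntary codes of conduct recommended.",
}

def obligation_for(level: RiskLevel) -> str:
    """Return the (simplified) obligation attached to a risk level."""
    return OBLIGATIONS[level]

if __name__ == "__main__":
    # Example: an internal inventory of AI systems with an assumed classification.
    inventory = {
        "CV screening tool": RiskLevel.HIGH,
        "Customer service chatbot": RiskLevel.LIMITED,
        "Spam filter": RiskLevel.MINIMAL,
    }
    for name, level in inventory.items():
        print(f"{name}: {level.value} -> {obligation_for(level)}")
```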
Unacceptable Risk
This is the highest level of risk. Artificial intelligence systems in this category will be banned in the EU. The category covers eight types of AI applications.
AI applications associated with unacceptable risk include those operating in areas such as:
- Subliminal manipulation: techniques that alter a person’s behavior without their knowledge in a way that may cause them harm. An example would be a system that influences people to vote for a particular political party without their knowledge or consent.
- Exploitation of vulnerability: exploiting a person’s social or economic situation, age, or physical or mental capacity in a way that leads them toward harmful behavior. An example would be a toy with a voice assistant that encourages children to do dangerous things.
- Biometric categorization of individuals based on sensitive features: includes gender, ethnic origin, political orientation, religion, sexual orientation, and philosophical beliefs.
- General-purpose social scoring: using AI systems to evaluate individuals based on their personal traits, behaviors, and social actions, such as online purchases or social media interactions. The concern is that someone might be denied a job or a loan because of a social score that is unjustified or unrelated to the context.
- Real-time biometric identification in public spaces: remote biometric identification systems in publicly accessible spaces will be banned, including ex-post identification. Exceptions are made for narrowly defined law enforcement purposes (subject to judicial authorization and supervision), such as preventing terrorism or searching for perpetrators and suspects of serious crimes and for crime victims (e.g., human trafficking, sexual abuse, armed robbery, environmental crime).
- Emotional state assessment of individuals: inferring the emotions of people in the workplace or in education. Such systems may still be allowed as high-risk applications if they serve safety purposes (e.g., detecting whether a driver is falling asleep).
- Predictive policing: assessing the likelihood of an individual committing a crime in the future based on personal characteristics.
- Facial image collection: scraping facial images from the internet or CCTV footage to create facial recognition databases, in violation of human rights.
High Risk
Classifying AI systems as high-risk was one of the most controversial aspects of the act, as it places significant burdens on organizations. It is also the most heavily regulated area, because such systems can negatively impact human health and safety, fundamental rights, or the environment, especially when they fail or are misused. Before AI systems in this risk category can be placed on the market and operated within the EU, they must meet specific requirements.
What are high-risk AI systems? They include all systems used as safety components of regulated products such as medical devices, elevators, vehicles, or machinery. In addition, the law identifies further areas where stand-alone AI systems can be classified as high-risk.
What areas are considered high-risk under the Artificial Intelligence Act? They include the following (see the sketch after this list):
- Biometric systems and systems based on biometrics, such as biometric identification and categorization of individuals;
- Management and operation of critical infrastructure, e.g., traffic management and energy supply;
- Education and vocational training, including the assessment of students in educational institutions;
- Management of employment and workforce, e.g., recruitment, performance evaluation, or task allocation;
- Access to essential services, both private and public benefits, such as credit scoring and dispatching emergency services;
- Law enforcement, e.g., evaluating the credibility of evidence or criminal analysis;
- Migration management, asylum, and border control, e.g., assessing the security risk of an individual or handling applications for asylum, visas, or residence permits;
- Administration of justice and democratic processes, such as assisting in interpreting and investigating facts, laws, and the application of the law or political campaigns.
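As a first triage step, an organization could check whether a planned use case touches any of these areas. Below is a simplified, illustrative Python sketch of such a checklist; the area names and keywords are paraphrased for this example and are not the exact wording of the act.

```python
# Simplified checklist of the high-risk areas listed above.
# Keywords are paraphrased for illustration, not quoted from the regulation.
HIGH_RISK_AREAS = {
    "biometrics": ["biometric identification", "biometric categorization"],
    "critical infrastructure": ["traffic management", "energy supply"],
    "education": ["student assessment", "exam scoring"],
    "employment": ["recruitment", "performance evaluation", "task allocation"],
    "essential services": ["credit scoring", "emergency dispatch"],
    "law enforcement": ["evidence evaluation", "criminal analysis"],
    "migration and border control": ["asylum application", "visa application",
                                     "security risk assessment"],
    "justice and democratic processes": ["legal interpretation", "political campaign"],
}

def matching_high_risk_areas(use_case_description: str) -> list[str]:
    """Return the high-risk areas whose keywords appear in a use-case description."""
    text = use_case_description.lower()
    return [
        area
        for area, keywords in HIGH_RISK_AREAS.items()
        if any(keyword in text for keyword in keywords)
    ]

# Example: a CV-screening tool would be flagged under "employment".
print(matching_high_risk_areas("AI tool supporting recruitment and CV screening"))
```

A keyword match like this is only a prompt for a proper legal assessment; it cannot replace one.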
Medium Risk
The third level is limited (medium) risk, which covers AI systems that carry a risk of manipulation or deception. AI systems in this category must be transparent: people must be informed that they are interacting with AI (unless this is obvious), and any deepfakes must be clearly labeled as such. This is especially important for generative AI systems and the content they produce. The category includes, for example, chatbots and deepfakes.
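In practice, the transparency duty at this level comes down to disclosing the AI interaction and labeling generated content. The Python sketch below illustrates one possible way to do this; the function and field names are hypothetical and chosen only for illustration.

```python
# Minimal sketch of the transparency duty: disclose the AI interaction
# and label generated media. Names are hypothetical, for illustration only.

AI_DISCLOSURE = "You are chatting with an AI assistant."

def wrap_chatbot_reply(reply_text: str, first_message: bool) -> str:
    """Prepend an AI disclosure to the first reply of a conversation."""
    if first_message:
        return f"{AI_DISCLOSURE}\n\n{reply_text}"
    return reply_text

def label_generated_media(metadata: dict) -> dict:
    """Attach an 'AI-generated' marker to the metadata of synthetic content."""
    labeled = dict(metadata)
    labeled["ai_generated"] = True
    labeled["label"] = "This content was generated or manipulated by AI."
    return labeled

print(wrap_chatbot_reply("Hello! How can I help you today?", first_message=True))
print(label_generated_media({"filename": "campaign_clip.mp4"}))
```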
Low Risk
This is the lowest risk level, also referred to as minimal risk. It covers all AI systems that do not fall under the categories above, e.g., spam filters. No restrictions or mandatory obligations are imposed on these systems, although it is recommended to follow general principles such as human oversight, non-discrimination, and fairness.
Also check: How to manage risks associated with artificial intelligence?
GPAI also falls under the AI Act
The original version of the AI Act focused on risk classification based on how artificial intelligence systems are used and did not cover general-purpose AI (GPAI) models, such as those from OpenAI or Aleph Alpha. This gap was quickly addressed. It matters because GPAI models are not limited to a single use: they have a wide range of applications and can perform a variety of intellectual tasks that reflect human cognitive abilities. Unlike specialized AI systems, GPAI aims to mimic human intelligence across many kinds of actions.
For general-purpose AI (GPAI) models, the EU law distinguishes two classes of risk: non-systemic and systemic, based largely on the computational power used to train the model. All GPAI models are subject to transparency obligations, which become more stringent in the case of systemic risk, i.e., when the model has particularly high-impact capabilities. Creators of GPAI models will be required to provide relevant information to downstream providers who use these models in high-risk applications. Their duties include creating and maintaining technical documentation, developing policies for respecting copyright law, and providing a detailed summary of the content used to train the GPAI model.
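The compute criterion can be made concrete: the act presumes systemic risk when the cumulative compute used for training exceeds 10^25 floating-point operations. The sketch below estimates training compute with the common rule of thumb FLOPs ≈ 6 × parameters × training tokens; that approximation is not part of the regulation, and the example model is hypothetical.

```python
# Rough sketch of the systemic-risk compute test. The AI Act presumes systemic
# risk above 1e25 FLOPs of cumulative training compute. The 6 * N * D estimate
# is a standard approximation for dense transformer models, not legal text.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training compute for a dense transformer-style model."""
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """Compare the estimate against the AI Act's presumption threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical example: a 70-billion-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated compute: {flops:.2e} FLOPs")                         # ~8.4e23 FLOPs
print("Presumed systemic risk:", presumed_systemic_risk(70e9, 2e12))   # False
```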
It is worth noting that publicly available open-source models can avoid more stringent requirements if their license allows access, use, modification, and distribution of the model and its parameters. This rule applies as long as there is no connection to high-risk or prohibited applications, nor any risk of manipulation.
How to manage risk and prepare for the AI Act?
Artificial intelligence systems will be tightly regulated on the EU market, especially those with high risk. This, in turn, imposes significant burdens on organizations to comply with EU requirements. However, in practice, every system will need to ensure transparency and security, as well as limit the risk of manipulation or fraud. Even if a company uses or develops AI systems with low risk, it is recommended to follow general principles and requirements.
It’s already a good time to prepare for the AI Act regulation. We can help you with this!
We will prepare and conduct an audit to verify your artificial intelligence system's compliance with EU legislation, check how risks are monitored, and assess the model's transparency and security. Contact us now to mitigate potential risks early and ensure a positive outcome in an EU audit. We will help you fully understand the AI development process and prepare for the AI Act.