The EU AI Act High-Risk Classification. A Practical Guide for Engineering Teams

I don’t like to think about how many engineering teams first discovered the EU AI Act when someone from the legal department forwarded them a summary.

You could say that this Act isn’t primarily a legal instrument, but rather an engineering specification with legal implications. I think this distinction is important because legal teams read it looking for potential liabilities, while engineering teams should read it looking for architectural requirements. And these requirements arise much earlier in the development process than we might think.

This is a practical guide to the part of the Act that most directly affects engineering decisions: the risk classification system, and specifically what it means to build a system that is classified as high risk.

The classification system in simple terms

The EU AI Act organizes AI systems into four risk levels.

Unacceptable Risk refers to all those systems that are outright prohibited: social scoring by governments, real-time biometric surveillance in public spaces, AI that exploits psychological vulnerabilities. If you’re developing something like this in the private sector, you have much more serious problems than legal compliance.

High Risk is the level where most enterprise engineering teams should focus their attention. Systems in this category are permitted, but subject to mandatory requirements before implementation. The list is specific and should be read carefully rather than assuming your system isn’t on it.

Limited Risk mainly applies to systems with transparency obligations: for example, chatbots must disclose that they are AI, and deepfakes must be labeled. The requirements are less stringent, but not nonexistent.

Minimal Risk encompasses most AI applications. There are no mandatory requirements beyond existing legislation, although voluntary codes of conduct apply.

Almost all of the engineering work concentrates at the high-risk level, and everything that follows refers to that category.

What is considered high risk?

According to the EU AI Act, high-risk systems are divided into two groups.

The first group comprises AI systems used as safety components in products already covered by European product safety legislation: machinery, medical devices, vehicles, and aviation equipment. If an AI system is integrated into a product that already requires CE marking, that system automatically inherits the high-risk category.

The second group is more relevant to most business teams. It consists of a list of specific application areas defined in Annex III of the Act:

  • Biometric identification and categorization of natural persons.

  • Management and operation of critical infrastructure: water, gas, electricity, and transport.

  • Education and vocational training: specifically, systems that determine access to educational institutions or assess students. This includes automated grading, admissions selection, and performance evaluation tools.

  • Employment and workforce management: resume screening, promotion decisions, performance tracking, and task assignment systems.

  • Access to essential public and private services: credit scoring, insurance risk assessment, and emergency service dispatch.

  • Law enforcement: risk assessment tools, evidence evaluation, and crime prediction.

  • Migration, asylum, and border control: risk assessment, document verification, and application processing.

  • Administration of justice and democratic processes: AI that assists in judicial decisions and applies the law to the facts.

It’s a long list, and many of the systems being developed today fall within it: educational platforms that assess student performance, human resources systems that select candidates, credit scoring tools, and more.
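As a first triage, the Annex III areas above can be turned into a simple lookup. This is only an illustrative sketch: the category names below paraphrase the list in this article, not the Act’s legal wording, and a match only flags a candidate for proper legal review.

```python
# Illustrative triage helper. Category names paraphrase Annex III areas;
# they are NOT the Act's legal wording. A match means "get legal review",
# not "this is definitively high risk".
ANNEX_III_AREAS = {
    "biometrics": "Biometric identification and categorisation",
    "critical_infrastructure": "Management of critical infrastructure",
    "education": "Education and vocational training",
    "employment": "Employment and workforce management",
    "essential_services": "Access to essential public and private services",
    "law_enforcement": "Law enforcement",
    "migration": "Migration, asylum and border control",
    "justice": "Administration of justice and democratic processes",
}

def high_risk_candidate(domain: str) -> bool:
    """Return True if the system's domain matches an Annex III area."""
    return domain in ANNEX_III_AREAS

# An automated grading tool falls under "education", so it is a candidate.
high_risk_candidate("education")  # → True
```

The value of even a toy mapping like this is that it forces each product team to name the domain their system operates in, which is exactly the question the classification turns on.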

What does High-Risk classification really require?

This is where the implications of this classification become clearest from an engineering perspective.

Risk management system

There must be a documented and continuous process for identifying and mitigating risks throughout the system’s lifecycle. This is not a one-time assessment at implementation, but rather an ongoing process with defined review cycles.

Data governance

Training, validation, and testing datasets must be documented. Data collection practices, preprocessing steps, known limitations, and potential biases must be recorded. For systems that use retrieval-augmented generation (RAG) instead of fine-tuning, this extends to the retrieval corpus: its origin, maintenance, and who controls it.
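One lightweight way to start is a per-dataset provenance record kept under version control. A minimal sketch as a Python dataclass follows; the field names are my own, not a template prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Minimal provenance record for one dataset split.

    Field names are illustrative, not mandated by the Act.
    """
    name: str
    purpose: str                # "training", "validation", or "testing"
    source: str                 # where the data came from
    collected: str              # collection period
    preprocessing: list = field(default_factory=list)      # steps applied
    known_limitations: list = field(default_factory=list)
    suspected_biases: list = field(default_factory=list)
    steward: str = ""           # who maintains and controls the corpus

# Hypothetical example record:
record = DatasetRecord(
    name="loan-applications-v3",
    purpose="training",
    source="internal CRM export",
    collected="2022-01 to 2024-06",
    preprocessing=["deduplication", "PII removal"],
    known_limitations=["underrepresents applicants under 25"],
    steward="data-platform team",
)
```

For RAG systems, the same record shape applies to the retrieval corpus, with `steward` answering the “who controls it” question.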

Technical documentation

A detailed technical file describing the system design, development process, capabilities, and limitations must be created. This file must be maintained and updated. This is what a conformity assessment body reviews. It must be written as a document that anyone can read and understand, not as mere internal notes.

Transparency and information provision

Every system must be accompanied by instructions for use. These must explain the system’s purpose, the level of accuracy it achieves, its known limitations, the circumstances under which it might fail, and the role of human oversight.

Human oversight measures

The system must be designed to allow human operators to monitor its operation, understand its results, intervene when necessary, and override or shut it down. This is not a user-interface feature but an architectural requirement: the oversight capability must be built into the system, not bolted on later.
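One way to make oversight architectural rather than cosmetic is to route every consequential decision through a review gate that can confirm, override, or block it before it takes effect. A minimal sketch under that assumption; the names, threshold, and escalation policy are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    subject_id: str
    outcome: str        # model's proposed outcome
    confidence: float
    rationale: str      # explanation surfaced to the operator

def decide(
    proposed: Decision,
    needs_review: Callable[[Decision], bool],
    human_review: Callable[[Decision], Optional[Decision]],
) -> Decision:
    """Route a decision through human oversight before it takes effect.

    The operator can confirm (return the same decision), override
    (return a different one), or leave it blocked (return None).
    """
    if needs_review(proposed):
        reviewed = human_review(proposed)
        if reviewed is None:
            raise RuntimeError("decision blocked pending human review")
        return reviewed            # the operator's verdict wins
    return proposed

# Low-confidence decisions are escalated; nothing is final until the
# gate returns — the override is structural, not a button in the UI.
d = Decision("app-42", "reject", 0.61, "low income-to-debt ratio")
result = decide(
    d,
    needs_review=lambda dec: dec.confidence < 0.8,
    human_review=lambda dec: Decision(
        dec.subject_id, "manual_review", 1.0, "operator escalated"
    ),
)
```

The point of the pattern is that intervention actually changes system behavior: the downstream pipeline only ever sees the gate’s output, never the raw model output.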

Accuracy, robustness, and cybersecurity

The system must achieve levels of accuracy appropriate for its purpose, be resilient to errors and inconsistencies, and be protected against malicious manipulation. Each of these aspects requires defined, documented metrics, not just engineering judgment.
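“Defined metrics” can be enforced mechanically, for example with a release gate that refuses to deploy when a measurement falls below the threshold documented in the technical file. A hypothetical sketch; the metric names and numbers are placeholders, not values from the Act.

```python
# Hypothetical release gate. Thresholds belong in the technical file;
# metric names and numbers here are placeholders for illustration.
THRESHOLDS = {
    "accuracy": 0.92,            # documented minimum for intended purpose
    "robustness_noise": 0.85,    # accuracy under input perturbation
    "adversarial_recall": 0.80,  # detection of manipulated inputs
}

def release_gate(measured: dict) -> list:
    """Return the metrics that fail their documented threshold."""
    return [
        name for name, floor in THRESHOLDS.items()
        if measured.get(name, 0.0) < floor
    ]

failures = release_gate({
    "accuracy": 0.94,
    "robustness_noise": 0.81,
    "adversarial_recall": 0.83,
})
# robustness_noise is below its floor, so this release would be blocked.
```

A gate like this also produces exactly the evidence a conformity assessment asks for: a record of which thresholds were defined and whether each release met them.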

Conformity assessment

Before implementation, high-risk systems must undergo a conformity assessment, either a self-assessment in accordance with legal requirements or a third-party assessment, depending on the scope. Technical documentation is the primary input for this assessment.

The timeline

The AI Act came into force in August 2024, and its requirements become mandatory for high-risk Annex III systems (HR, banking, education) in August 2026. For AI that is a safety component of an already regulated product (such as a medical device), the deadline extends to August 2027.

While this may seem like plenty of time, I believe it isn’t, for two reasons.

First, documentation and risk management requirements apply retroactively to systems already in operation once a “substantial change” occurs (and let’s be realistic: in the AI world, retraining a model on new data, substantially changing a prompt, or updating weights will generally count as a substantial change).

If you created a high-risk system before the provisions of the Act came into effect, you will still need to adapt it before the deadline. Retroactive compliance is more difficult than adapting from the outset: it involves documenting decisions made months or years ago, often without the necessary records to do so accurately.

Second, the human oversight requirements typically necessitate architectural changes. A system not designed for human intervention doesn’t become one simply by adding a “reject this output” button to the interface. Meaningful human oversight, as required by law, means that operators can understand what the system is doing and why, intervene at appropriate times, and that such intervention actually affects the system’s behavior. Adapting this to a system designed for autonomous operation is costly.

A practical first step

Before assessing compliance requirements, it is necessary to determine if your system presents a high risk. This determination is not always obvious.

First, you should read the application domains listed in Annex III and see if your system fits any of them. The definitions are broader than they initially appear. For example, an AI system that “assesses and ranks” student results is different from a teacher grading exams: the Act treats automated assessment differently from human assessment, and that distinction is what drives the classification.

If your system could present a high risk, the next step is to establish the documentation, starting with a written description of what the system does, what data it uses, how decisions are made, and where human oversight exists or does not exist. This document becomes the foundation for everything else.
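That first written description can be as simple as a structured record checked into the repository. A sketch follows; the section names and example content are my own, not a prescribed template.

```python
# Illustrative skeleton for the initial system description.
# Section names are not a mandated template — adapt to your process.
SYSTEM_DESCRIPTION = {
    "purpose": "Ranks loan applications for manual underwriting",
    "annex_iii_area": "Access to essential private services (credit scoring)",
    "data_sources": ["internal CRM export", "credit bureau feed"],
    "decision_logic": "Gradient-boosted model; score thresholds set quarterly",
    "human_oversight": {
        "exists": True,
        "where": "All rejections reviewed by an underwriter before notice",
        "gaps": ["approvals are fully automated today"],
    },
    "known_limitations": ["sparse data for thin-file applicants"],
}
```

Honestly recording the `gaps` entry is the useful part: the places where oversight does not yet exist are where the architectural work described above will land.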

With this documentation in place, teams find the conformity assessment process far easier to navigate, because they can readily answer questions about their systems.

Conclusion

The high-risk classification under the EU AI Act is not a compliance issue that can be delegated to the legal department.

The requirements (risk management, data governance, technical documentation, human oversight) are engineering problems that require engineering solutions. Most of them should be integrated from the initial design stage.

Author
Raúl Ferrer
Published at
2025-03-18
License
CC BY-NC-SA 4.0
