
Risk Analysis for AI Systems: How to Classify Them According to the AI Act

Arthur
09.07.2025

How companies can correctly classify their AI systems in accordance with the AI Act – while minimizing regulatory, ethical, and safety risks.

What is risk analysis in the context of the AI Act?

Risk analysis is the foundation of all AI compliance under the AI Act. It systematically assesses the risks an AI system poses to people, society, and fundamental rights – for example, discrimination, lack of transparency, erroneous decisions, or power asymmetries.

It includes, among other things:

  • Identification of risks: What damage could occur and to whom?
  • Probability and impact assessment: How serious is the risk?
  • Assessment of the risk class: Which category does the system fall into according to the AI Act?
  • Derivation of protective measures: What specific steps do we take to minimize risks?

Example: An HR tool that automatically pre-sorts applicant profiles could systematically discriminate against people based on age, origin, or gender – without malicious intent, but with serious consequences. Risk analysis identifies and addresses such problems at an early stage.
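To make the list above concrete, here is a minimal sketch of how a single risk register entry could be modeled, assuming a simple likelihood × severity scoring. The class, fields, and ratings are illustrative, not prescribed by the AI Act:

```python
from dataclasses import dataclass, field
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class RiskEntry:
    """One identified risk of an AI system, scored as likelihood x severity."""
    description: str        # What damage could occur?
    affected_group: str     # Who would be affected?
    likelihood: Level       # How probable is the harm?
    severity: Level         # How serious would the impact be?
    measures: list[str] = field(default_factory=list)  # Protective measures

    @property
    def score(self) -> int:
        return self.likelihood * self.severity  # 1 (negligible) .. 9 (critical)

# The HR example from above
bias_risk = RiskEntry(
    description="Systematic discrimination by age, origin, or gender",
    affected_group="Job applicants",
    likelihood=Level.MEDIUM,
    severity=Level.HIGH,
    measures=["Fairness audit of training data", "Human review of rejections"],
)
print(bias_risk.score)  # 6 -> prioritize mitigation
```

The score covers the first two steps of the list (identification plus probability and impact assessment); the measures field then feeds the derivation of protective measures.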


Why the Classification of AI Systems is Crucial

The risk classification determines everything else:

  • What obligations do I have to fulfill (transparency, audits, documentation)?
  • What technical and organizational measures are required?
  • How do I prove to authorities and partners that my system is being operated responsibly?

Correct classification prevents:

  • Fines and liability risks
  • Reputational damage due to misconduct
  • Investment losses from systems that later prove impermissible

In short: Without clear classification, there is no secure future for AI systems in companies.

The Four Risk Categories Under the EU AI Act

The AI Act classifies AI systems into four categories based on their area of application, impact on people, and potential risk:

1. Unacceptable risk

Prohibited applications, e.g.:

  • Social scoring systems
  • Emotion recognition in the workplace or in schools
  • Covert manipulation of behavior

May not be used in the EU.

2. High risk

Areas of application with a significant impact on human rights or security, e.g.:

  • Biometric identification
  • Creditworthiness checks
  • Personnel decisions
  • Critical infrastructure

Strict requirements for documentation, data quality, monitoring, and human control.

3. Limited risk

Systems that do not significantly interfere with fundamental rights but trigger transparency obligations, e.g.:

  • Chatbots
  • AI-based recommendation systems

Users must be able to recognize that they are interacting with AI.

4. Minimal risk

E.g., spam filters, AI-based text recognition, or spell checkers.

No specific obligations, but voluntary standards and best practices are recommended.
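As a rough orientation, the examples above can be collected into a small lookup sketch. All entries are illustrative, and keyword matching is of course no substitute for a legal assessment against the AI Act and its annexes:

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "prohibited - may not be used in the EU"
    HIGH = "strict documentation, data quality, monitoring, human control"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations; voluntary best practices"

# Illustrative mapping of the examples from this section
CATEGORY_BY_USE_CASE = {
    "social scoring": RiskCategory.UNACCEPTABLE,
    "emotion recognition at work": RiskCategory.UNACCEPTABLE,
    "biometric identification": RiskCategory.HIGH,
    "creditworthiness check": RiskCategory.HIGH,
    "personnel decisions": RiskCategory.HIGH,
    "chatbot": RiskCategory.LIMITED,
    "recommendation system": RiskCategory.LIMITED,
    "spam filter": RiskCategory.MINIMAL,
}

print(CATEGORY_BY_USE_CASE["personnel decisions"].value)
```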

Core Elements of a Risk Assessment Under the EU AI Act

According to the EU AI Act, the following aspects are central to conducting a legally compliant risk analysis:

  • Purpose-specific system description: What is the system supposed to do? For whom?
  • Analysis of the data basis: Where does the training data come from, and how was it prepared?
  • Model behavior & interpretation of results: Are the results comprehensible and fair?
  • Technical security: How is the system prevented from being manipulated or misdirected?
  • Governance & control: Who is responsible, how is operation organized?

→ Also required: complete documentation – e.g., technical files, risk reports, and test reports – especially for high-risk systems.
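As an illustration, these core elements could be captured in a simple record that is exported into the technical documentation. The structure and example values below are hypothetical:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class RiskDossier:
    """Skeleton for the core elements of an AI Act risk analysis."""
    purpose: str             # What is the system supposed to do, and for whom?
    data_basis: str          # Origin and preparation of the training data
    model_behavior: str      # Comprehensibility and fairness of the results
    technical_security: str  # Safeguards against manipulation or misuse
    governance: str          # Responsibilities and operational controls

dossier = RiskDossier(
    purpose="Pre-selection of applicant profiles for SMEs",
    data_basis="Historical hiring data, deduplicated and pseudonymized",
    model_behavior="Ranking criteria logged and explained per decision",
    technical_security="Access control, input validation, audit logging",
    governance="HR lead accountable; quarterly model review",
)

# Export as part of the technical documentation
print(json.dumps(asdict(dossier), indent=2))
```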

Methods for Conducting an Effective Risk Assessment

Depending on the size of the company and the type of system, different methods can be combined. The following are recommended, among others:

  • Scenario analysis: What happens if the system makes the wrong decision? Who is affected?
  • FMEA (Failure Mode and Effects Analysis): What errors can occur, and how serious would the consequences be? (see the sketch below)
  • FTA (Fault Tree Analysis): Visual derivation of error chains and cause-and-effect relationships
  • AI Impact Assessment / DPIA: Data protection impact assessment (especially for personal data)
  • Questionnaire-based self-assessment: Templates and tools help to make structured initial assessments

Tools such as the heyData AI Compliance Tool or open frameworks (e.g., from the OECD or NIST) provide practical templates for companies.
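To show how FMEA prioritizes, the following sketch ranks hypothetical failure modes of the HR example by their Risk Priority Number (RPN = severity × occurrence × detectability, each rated 1–10); the modes and ratings are invented for illustration:

```python
# FMEA scores each failure mode on severity (S), occurrence (O), and
# detectability (D); the Risk Priority Number is RPN = S * O * D.
failure_modes = [
    # (description,                         S, O, D)
    ("Model rejects qualified applicants",  8, 5, 6),
    ("Training data drifts over time",      6, 7, 4),
    ("Rejection criteria not explainable",  7, 4, 8),
]

# Rank by RPN, highest first, to prioritize mitigation
for description, s, o, d in sorted(
    failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True
):
    print(f"RPN {s * o * d:>3}: {description}")
```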

Challenges and Solutions in Classifying AI Systems

Challenges:

  • Rapid technological advancement: a system's risk profile can change with new features
  • Borderline cases in classification: Not every system can be clearly categorized
  • Lack of internal expertise: Especially in medium-sized companies without AI specialists

Solutions:

  • Development of an internal AI governance process
  • Establishment of an AI classification team comprising tech, legal, and ethics experts
  • Use of regularly updated assessment guidelines and tools
  • Collaboration with specialized compliance partners such as heyData

Step-by-Step Example: Risk Assessment of an AI-Based Hiring Tool

Case: AI-supported application screening tool for SMEs

  1. System definition
    → Automates the pre-selection of applicant profiles based on resumes
  2. Risk identification
    → Potential discrimination based on gender, origin, or age
    → Non-transparent criteria for rejection
  3. Classification according to the AI Act
    → High-risk system (personnel decisions under Annex III)
  4. Risk assessment
    → What criteria does the model use?
    → How traceable are the decisions?
  5. Risk mitigation measures
    → Human-in-the-loop control (see the sketch after this list)
    → Fairness audits of training data
    → Documentation & regular model review
  6. Monitoring & reporting
    → Use of a dashboard for monitoring
    → Mandatory documentation for supervisory authorities
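As a sketch of the human-in-the-loop control from step 5: the tool never rejects automatically; negative or low-confidence outcomes are routed to a human reviewer, and every decision is logged for the monitoring in step 6. The threshold, labels, and sample scores are illustrative assumptions:

```python
def decide_with_oversight(model_score: float, threshold: float = 0.8) -> str:
    """Route rejections and low-confidence cases to a human reviewer."""
    if model_score >= threshold:
        return "shortlist"     # Confident positive: proceed, but still log it
    return "human_review"      # Never auto-reject: a person decides

# Every decision is logged for the supervisory documentation (step 6)
for applicant, score in [("A-102", 0.91), ("A-103", 0.42)]:
    outcome = decide_with_oversight(score)
    print(f"{applicant}: score={score:.2f} -> {outcome}")
```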

Best Practices for AI Act Compliance

  • Start early
    Embed risk analysis in product development (privacy and ethics by design)
  • Document instead of improvising
    Every AI system should have a “digital risk dossier.”
  • Train employees
    Create understanding for compliance, ethical principles, and classification criteria
  • Seek external expertise
    An outside perspective is particularly valuable for high-risk systems
  • Integrate technology and compliance
    Involve tech teams in regulatory responsibility (DevOps → DevComOps)

Conclusion: Risk-Based Compliance Is the Key to Responsible AI

The future of AI in Europe depends on trust – and trust stems from transparency and security. The AI Act provides clear guardrails. Companies that conduct structured risk assessments, document their processes, and maintain ongoing oversight are best positioned to succeed.

Classification is not just a regulatory checkbox – it is a strategic tool: it helps avoid missteps, improve product quality, and build trust with users, partners, and regulators alike.

Those who act now gain more than legal certainty – they gain a real competitive edge.

Important: The content of this article is for informational purposes only and does not constitute legal advice. The information provided here is no substitute for personalized legal advice from a data protection officer or an attorney. We do not guarantee that the information provided is up to date, complete, or accurate. Any actions taken on the basis of the information contained in this article are at your own risk. We recommend that you always consult a data protection officer or an attorney with any legal questions or problems.