Whitepaper on the EU AI Act

EU AI Act for SMEs: How to correctly classify your AI tools

Key points at a glance
- Risk-based approach: The AI Act regulates not the technology itself but its intended use. The higher the risk, the stricter the obligations.
- Four risk categories: Prohibited, High-Risk, Subject to Transparency Requirements, and Minimal Risk. The classification determines all compliance obligations.
- High-risk pitfall: AI in recruiting or credit decisions is considered high-risk and requires risk management systems and human oversight.
- Role matters: Users (deployers) and developers (providers) bear different responsibilities, and significant modifications can unintentionally shift roles.
- Inventory requirement: A central register of all AI tools, similar to the GDPR record of processing activities, becomes indispensable proof of compliance.
- GDPR synergies: Existing data protection structures can be directly extended to meet AI Act requirements, thereby saving effort.
Introduction
Artificial intelligence is no longer a topic of the future – it is part of everyday work. From chatbots in customer service to automated invoice processing and AI-powered recruiting tools: many small and medium-sized enterprises use AI systems today, often without being aware of the regulatory implications. With the entry into force of the EU AI Act, the legal framework is fundamentally changing. The regulation introduces a binding risk classification that precisely determines which compliance obligations your company will face.
But what does this mean in practice for SMEs? Do you now have to document every small automation? How can you reliably determine whether a tool you use qualifies as high-risk AI? And why does systematically recording your AI landscape suddenly become a critical safeguard for management? This article explains the logic behind risk classification in the EU AI Act, shows why this categorization forms the strategic core of corporate compliance, and raises awareness of the growing importance of transparent governance.
Why risk classification at all?
The EU AI Act follows a consistent risk-based regulatory approach. This means: it does not regulate the technology itself, but rather its specific use case and the associated risk potential. This logic is already familiar to many entrepreneurs from data protection, where particularly sensitive data (such as health data) is subject to stricter rules than general contact data.
The underlying idea is pragmatic: a simple AI spam filter carries different risks than a system that decides on credit applications or automatically pre-screens candidates. The higher the potential risk to fundamental rights, safety, and people's health, the stricter the requirements for the system and its operation. For you as an entrepreneur, this means you must clearly understand which category the AI tools you use or offer fall into.
Risk classification is far more than a bureaucratic exercise – it is the central control mechanism of the regulation. It determines whether you must fulfill extensive documentation obligations, whether complex conformity assessments are required, or whether a system is even completely prohibited in the EU. Without proper classification, you simply cannot assess the legal risks your company is taking.
The four risk categories in the EU AI Act
As it currently stands, the EU AI Act distinguishes four main categories that you need to know:
| Risk level | Definition according to the regulation | Practical examples for SMEs | Legal requirements & obligations |
| --- | --- | --- | --- |
| Unacceptable risk (Prohibited AI) | Systems that severely violate fundamental rights or act in a manipulative way. | Social scoring, real-time remote biometric identification, manipulative behavioral influence. | Strict prohibition: use, placing on the market, and putting into operation are prohibited in the EU. |
| High risk (High-risk AI) | Systems with significant risk potential for safety or fundamental rights. | AI-powered recruiting (CV ranking), creditworthiness assessments, management of critical infrastructure. | Strict compliance: risk management, high data quality, human oversight, and comprehensive technical documentation. |
| Transparency risk (Specific obligations) | Systems that interact with humans or generate content. | Chatbots in customer service, generative AI (e.g. ChatGPT), deepfakes. | Disclosure obligation: users must be actively informed that they are interacting with AI. |
| Minimal risk | Applications without significant risk potential. | Spam filters, spell checkers, simple recommendation algorithms. | No specific obligations: no new requirements arise from the AI Act; the GDPR remains fully applicable. |
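For teams that track their tools in an internal inventory script or database, the four tiers can be modeled directly. The following Python sketch is purely illustrative: the enum labels and the example mapping are our own shorthand for the table above, not a legal determination.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (labels are our own shorthand)."""
    PROHIBITED = "unacceptable risk"
    HIGH_RISK = "high risk"
    TRANSPARENCY = "transparency obligations"
    MINIMAL = "minimal risk"

# Illustrative mapping of the table's examples to tiers. A real
# classification always requires a case-by-case legal assessment.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring": RiskTier.PROHIBITED,
    "cv ranking in recruiting": RiskTier.HIGH_RISK,
    "creditworthiness assessment": RiskTier.HIGH_RISK,
    "customer service chatbot": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}
```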
High-risk AI: When does it become relevant for SMEs?
For SMEs, the category of “high-risk AI” is the biggest challenge. The legislator defines these systems in two ways. On the one hand, it includes AI systems that are built as safety components into products already subject to EU safety regulations (e.g. machinery or medical devices). On the other hand, Annex III of the regulation lists specific areas of application.
A typical example for SMEs can be found in employment and HR. Do you use software that scans, evaluates, and ranks CVs for interviews? Such a tool significantly affects access to employment and is therefore almost always classified as high-risk AI. In this case, you must ensure that the system has undergone a risk management process and that human oversight is guaranteed.
As a user (deployer) of such tools, you are responsible for their intended use and monitoring during operation. You must ensure that the input data is appropriate for your company context and that serious incidents or malfunctions are reported immediately.
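To make the two routes to "high risk" concrete, here is a minimal screening sketch in Python. The set of Annex III areas shown is a partial, illustrative excerpt, and the function name is our own; treat it as a first filter for your inventory, not a substitute for legal review.

```python
# Partial, illustrative excerpt of Annex III application areas.
ANNEX_III_AREAS = {
    "employment and hr",            # e.g. CV scanning and ranking
    "creditworthiness assessment",
    "critical infrastructure",
}

def is_potentially_high_risk(is_safety_component: bool, application_area: str) -> bool:
    """Rough screening along the two routes described above:
    (1) safety component of a product under existing EU safety law, or
    (2) an application area listed in Annex III."""
    return is_safety_component or application_area.lower() in ANNEX_III_AREAS

# Example: a CV-ranking tool used by HR.
print(is_potentially_high_risk(False, "Employment and HR"))  # True
```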
Provider or user: Your role makes the difference
A common stumbling block is role allocation. The EU AI Act strictly distinguishes between the “provider” and the “user” (deployer).
The provider is the party that develops or commissions the development of an AI system in order to place it on the market under its own name. As a provider, you bear the full burden of conformity assessment, technical documentation, and CE marking.
The user (often referred to in the law as “operator” or deployer) is the company that uses an AI system under its own responsibility. For most SMEs, this is the standard role: subscribing to HR software or using an AI writing assistant. But be careful: if you modify an existing system so significantly that its purpose changes, you may legally become a “provider” with all associated obligations.
Why systematic documentation becomes mandatory
The reality in many companies is a fragmented AI landscape: marketing uses tools for content creation, HR uses software for applicant management, and IT relies on automated security analysis. This often happens without a central overview.
Without structured documentation, as a managing director you simply do not know where high-risk AI is in use or where transparency obligations are being violated. The EU AI Act turns this lack of transparency into a significant liability risk. Systematic documentation serves as proof of compliance for supervisory authorities, enables proactive risk assessment, and clarifies internal responsibilities. Similar to the GDPR’s record of processing activities, a register of AI systems becomes an essential management tool.
AI Systems Catalog: The new structure for your AI governance
A structured approach to handling this complexity is the "AI Systems Catalog." This central register of all AI systems used in the company is methodologically modeled on the GDPR's record of processing activities. Instead of relying on manual, scattered lists that quickly become outdated in day-to-day business, a central catalog creates clarity.
An effective catalog records not only the name of the tool but also the specific use case, risk classification, affected groups of people, and the responsible person within the company. By using specialized compliance platforms such as heydata, this AI management can be seamlessly integrated into existing data protection structures. This saves time and ensures that synergies between GDPR and the AI Act are not left unused.
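A catalog entry can be kept as a simple structured record. The following Python dataclass is one possible schema: the field names follow the elements mentioned above, but the AI Act does not prescribe any particular format, and the tool and person names in the example are invented.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the AI systems catalog (schema is our suggestion)."""
    tool_name: str
    use_case: str                # the specific purpose, not just the product name
    risk_tier: str               # one of the four tiers discussed above
    role: str                    # "provider" or "deployer", cf. the role section above
    affected_groups: list[str] = field(default_factory=list)
    responsible_person: str = ""
    last_reviewed: date = field(default_factory=date.today)

# Hypothetical example entry for the recruiting scenario:
recruiting_tool = AISystemRecord(
    tool_name="CV-Ranker X",
    use_case="Pre-screening and ranking of applicant CVs",
    risk_tier="high risk",
    role="deployer",
    affected_groups=["job applicants"],
    responsible_person="Head of HR",
)
```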
AI literacy: The new competence obligation
A frequently overlooked but central aspect of the AI Act is the requirement for AI literacy. Companies are obliged to take measures to ensure an appropriate level of AI competence among their staff. This means employees working with AI must understand how these systems function, their limitations, and the risks associated with their use.
For SMEs, this means that training and awareness programs must become a fixed part of governance. Risk classification is of little value if the employees actually using the tools cannot recognize the risks of a "hallucinating" AI or a biased algorithm. This is a major lever for liability prevention: well-trained staff are the best line of defense against compliance violations.
Connection to data protection: Leveraging synergies
Since many AI systems process personal data, GDPR and the AI Act overlap significantly. This is no coincidence but intentional: both frameworks protect the fundamental rights of citizens. For SMEs, this is good news, because those who already have a functioning data protection management system do not have to start from scratch.
An AI recruiting tool, for example, falls under both laws: the GDPR governs the protection of applicant data, while the AI Act assesses the fairness and transparency of the algorithm. Integrated documentation, as enabled by modern platforms, prevents duplicate work and ensures consistency. If your data protection officer already uses established processes for risk analysis, these can ideally be extended to meet the requirements of the AI Act.
What SMEs should expect in practice
The requirements depend strongly on your role and the risk profile of your tools. If you only deploy third-party tools, you mainly need to fulfill transparency and monitoring obligations: you must request information from your providers and ensure that usage aligns with internal policies. If, however, you develop AI solutions yourself (as a provider), significant documentation and certification burdens apply.
The biggest hurdle is often getting started. Many SMEs underestimate the time required for a reliable inventory. "Waiting and seeing" is a risky strategy here, as the AI Act's obligations take effect in stages and the prohibitions are among the first to apply.
Conclusion: Transparency as the foundation for trust
The EU AI Act fundamentally changes the rules for using technology in companies. Risk classification is far more than a legal necessity – it is the foundation for responsible and future-proof action. For SMEs, this means: manual lists and ad hoc decisions are no longer sufficient. Systematic AI management is not a bureaucratic burden but a strategic safeguard against liability risks and reputational damage.
Those who start now to systematically document their AI landscape, clearly define roles, and rely on modern compliance solutions will build trust with customers and partners. In the end, it will not be those who adopt AI the fastest who succeed, but those who master it most safely and transparently.
FAQ
Do SMEs need to create a separate risk analysis for every tool?
For minimal-risk systems, this is not necessary. However, for high-risk applications, as a deployer you are required to assess the risks for your specific use case. Your provider should supply the necessary technical baseline information.
What happens in case of misclassification?
Incorrect classification can lead to violations of legal obligations, which may result in fines. However, your documentation of due diligence is crucial: if you can justify why you chose a particular classification, you significantly reduce your risk.
Does the AI Act also apply to free open-source tools?
Yes. The price of a tool is irrelevant for regulation. What matters is the purpose for which the AI is used in a business context.
Are manual lists sufficient for documentation?
They may be sufficient for an initial overview. However, once you use multiple tools or high-risk systems, manual lists quickly become confusing and error-prone. Specialized software offers significantly greater legal certainty.
Who in the company is responsible for AI classification?
Ideally, it is a collaboration between IT, the legal department (or external data protection consultants), and the respective departments using the tool. However, ultimate responsibility for compliance always lies with management.
Important: The content of this article is for informational purposes only and does not constitute legal advice. The information provided here is no substitute for personalized legal advice from a data protection officer or an attorney. We do not guarantee that the information provided is up to date, complete, or accurate. Any actions taken on the basis of the information contained in this article are at your own risk. We recommend that you always consult a data protection officer or an attorney with any legal questions or problems.


