
Transparency and Legal Basis: The Achilles’ Heel of Many AI Providers

Why a lack of openness and legal clarity poses major risks, and how to build user trust.
Why Trust Is the Prerequisite for AI Adoption
Artificial intelligence is transforming business models, processes, and entire industries. But the more influence AI systems have, whether in lending decisions, hiring, or healthcare, the more one factor comes into focus: trust.
Users and organizations will only adopt AI in the long term if they understand how decisions are made, what data is used, and on what legal basis the data is processed. Without this clarity, innovation quickly becomes a reputational risk.
Transparency Requirements: What the AI Act and GDPR Demand
Both the EU AI Act and the General Data Protection Regulation (GDPR) require that AI systems are transparent and that the legal basis for data processing is clearly defined.
Specifically:
- Users must know when they’re interacting with an AI system.
- The origin of the data used to train and operate the AI must be disclosed.
- The purpose of data processing must be clearly stated.
- There must be a valid legal basis under Article 6 GDPR (e.g. consent, contract, legitimate interest).
- If personal data is involved, additional information under Articles 13 and 14 GDPR is required, including recipients, storage periods, and user rights (see the sketch after this list).
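To make these duties concrete, here is a minimal TypeScript sketch of the transparency information a provider would need to keep on hand. All type and field names (ProcessingNotice, LegalBasis, and so on) are illustrative assumptions, not a standardized schema.

```typescript
// Illustrative sketch of the transparency information required under
// Articles 13 and 14 GDPR, held as a structured object. Type and field
// names are assumptions, not a standardized schema.

type LegalBasis =
  | "consent"              // Art. 6(1)(a)
  | "contract"             // Art. 6(1)(b)
  | "legitimate_interest"; // Art. 6(1)(f)

interface ProcessingNotice {
  controller: { name: string; contact: string }; // who is responsible
  purposes: string[];                            // why the data is processed
  legalBasis: LegalBasis;                        // the Article 6 ground relied on
  dataCategories: string[];                      // what data is used
  recipients: string[];                          // who receives the data
  storagePeriod: string;                         // how long data is kept
  userRights: string[];                          // access, erasure, objection, ...
  aiInteractionDisclosed: boolean;               // users are told they face an AI system
}

// Example instance for a recommendation engine in an online shop:
const notice: ProcessingNotice = {
  controller: { name: "Example Shop GmbH", contact: "privacy@example-shop.test" },
  purposes: ["personalized product recommendations"],
  legalBasis: "consent",
  dataCategories: ["purchase history", "browsing behavior"],
  recipients: ["hosting provider", "analytics processor"],
  storagePeriod: "24 months after last customer activity",
  userRights: ["access", "rectification", "erasure", "objection"],
  aiInteractionDisclosed: true,
};

console.log(JSON.stringify(notice, null, 2));
```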
Failure to comply can result in fines, product bans, or significant loss of user trust.
Why Many AI Providers Struggle with Compliance
In practice, transparency is often the weakest point in AI offerings. Common pitfalls include:
- Opaque or undisclosed training data
- Blurred responsibilities between developers, API providers, and hosting platforms
- Missing or invalid consent for the use of personal data
- “Black box” models that do not explain how results are generated
Example:
An AI provider trains a product recommendation engine for online shops using real customer data. But users were never informed about the use of AI or their rights.
Result: Violation of the GDPR—and possibly unfair competition laws.
Transparency as a Strategic Advantage: What Users Expect
Modern customers—whether individuals or companies—expect more than just functional algorithms. They want to:
- Understand how decisions are made
- Trust that their data won’t be misused
- Retain control through the right to object or request explanations
Transparency is not just about compliance—it’s key to market acceptance.
According to EY’s 2025 study:
- 71% of surveyed businesses expect full disclosure of training data sources
- 65% want a legal assessment of how AI technologies are used
- 58% would not adopt AI systems that rely on black-box decisions
Legal Basis: No Compliant Product Without Clear Legal Grounds
The GDPR mandates a specific legal basis for all processing of personal data. For AI systems, this applies both during training and in production use.
Possible legal bases under Article 6 GDPR include:
- Consent (Art. 6(1)(a)) – ideal for voluntary use, but only valid if informed and revocable
- Contract performance (Art. 6(1)(b)) – e.g. for AI tools delivering promised services
- Legitimate interest (Art. 6(1)(f)) – possible only after proper balancing of interests and a documented risk analysis
- Special categories of data (Art. 9 GDPR), such as health data or biometrics, are not an Article 6 basis of their own: processing them additionally requires an exception under Art. 9(2) and extra safeguards
Key issue: Many providers assume legitimate interest applies—without proper justification. This is legally risky and highly vulnerable to enforcement action.
Best Practices for AI Providers: How to Ensure Transparency and Legal Compliance
To build trust among users, partners, and regulators, AI companies should implement these guiding principles:
Transparency Measures
- Disclosure obligation: Clearly inform users when they interact with AI (see the sketch after this list)
- Promote explainability: Use visual logic trees or “Why this result?” buttons
- Reveal training data sources – even if anonymized or synthetic
- Document system limitations: Make it clear where the model might fail or is not suitable
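As a rough illustration of the first two points, here is a minimal TypeScript sketch of an in-product AI disclosure plus a plain-language "Why this result?" explanation. The Recommendation type and explainResult function are hypothetical names, and the signal-based explanation is just one possible approach.

```typescript
// Minimal sketch of user-facing transparency for a recommendation system.
// The Recommendation shape and explanation logic are illustrative assumptions.

interface Recommendation {
  productId: string;
  score: number;
  signals: { feature: string; weight: number }[]; // top factors behind the score
}

// Disclosure shown wherever AI output appears, not buried in the privacy policy.
const AI_DISCLOSURE =
  "These recommendations are generated by an AI system. " +
  "Select 'Why this result?' for an explanation.";

// Plain-language answer for a "Why this result?" button.
function explainResult(rec: Recommendation): string {
  const topSignals = [...rec.signals]
    .sort((a, b) => b.weight - a.weight)
    .slice(0, 3)
    .map((s) => s.feature)
    .join(", ");
  return `This item was ranked highly (score ${rec.score.toFixed(2)}) mainly because of: ${topSignals}.`;
}

console.log(AI_DISCLOSURE);
console.log(
  explainResult({
    productId: "sku-123",
    score: 0.87,
    signals: [
      { feature: "similar past purchases", weight: 0.5 },
      { feature: "items viewed this session", weight: 0.3 },
      { feature: "seasonal popularity", weight: 0.1 },
    ],
  })
);
```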
Secure Legal Basis
- Explicitly define the legal basis (consent, contract, legitimate interest)
- Ensure data minimization and purpose limitation
- Maintain a record of processing activities (ROPA) and document risk assessments
- Use standardized processes to collect and store valid consents, as sketched below
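One way to implement the consent point is an auditable consent log that records both grant and withdrawal. The sketch below assumes a simple in-memory store; in production this would be a durable database, and all names (ConsentRecord, grantConsent, and so on) are illustrative.

```typescript
// A minimal sketch of standardized consent handling, assuming a simple
// in-memory store; in production this would be a durable, auditable database.
// All names (ConsentRecord, grantConsent, ...) are illustrative.

interface ConsentRecord {
  userId: string;
  purpose: string;      // purpose limitation: one record per processing purpose
  legalText: string;    // the exact wording the user saw (informed consent)
  grantedAt: string;    // ISO timestamp of the grant
  withdrawnAt?: string; // set on revocation (consent must be revocable)
}

const consentStore: ConsentRecord[] = [];

// Record a consent together with the text the user actually agreed to.
function grantConsent(userId: string, purpose: string, legalText: string): void {
  consentStore.push({ userId, purpose, legalText, grantedAt: new Date().toISOString() });
}

// Mark all active consents for this user and purpose as withdrawn.
function withdrawConsent(userId: string, purpose: string): void {
  for (const rec of consentStore) {
    if (rec.userId === userId && rec.purpose === purpose && !rec.withdrawnAt) {
      rec.withdrawnAt = new Date().toISOString();
    }
  }
}

// Processing code should check for a currently valid consent before using data.
function hasValidConsent(userId: string, purpose: string): boolean {
  return consentStore.some(
    (rec) => rec.userId === userId && rec.purpose === purpose && !rec.withdrawnAt
  );
}

grantConsent("user-42", "ai-recommendations", "I agree that my purchase history ...");
console.log(hasValidConsent("user-42", "ai-recommendations")); // true
withdrawConsent("user-42", "ai-recommendations");
console.log(hasValidConsent("user-42", "ai-recommendations")); // false
```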
Communication & UX
- Embed transparency directly in the user interface, not just the privacy policy
- Provide AI factsheets or “model cards” (similar to NIST’s AI documentation standards); a minimal sketch follows this list
- Integrate feedback mechanisms for users to flag unclear or incorrect results
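A model card can be kept as a structured object and rendered directly in the UI. The following is a hedged sketch: the ModelCard fields follow common model-card practice but are not an official schema, and all concrete values are invented for illustration.

```typescript
// Hedged sketch of an "AI factsheet" / model card as a structured object
// that can be rendered in the product UI. Fields follow common model-card
// practice; they are illustrative, not an official schema.

interface ModelCard {
  modelName: string;
  version: string;
  intendedUse: string;           // what the system is for
  outOfScopeUse: string[];       // where the model is not suitable
  trainingDataSources: string[]; // disclosed even if anonymized or synthetic
  knownLimitations: string[];    // where the model might fail
  feedbackContact: string;       // channel for flagging unclear or wrong results
}

const card: ModelCard = {
  modelName: "shop-recommender",
  version: "2.1.0",
  intendedUse: "Ranking products for logged-in shop customers",
  outOfScopeUse: ["credit decisions", "age verification"],
  trainingDataSources: ["anonymized purchase histories (2022-2024)", "synthetic browsing sessions"],
  knownLimitations: ["cold-start users receive generic rankings"],
  feedbackContact: "ai-feedback@example-shop.test",
};

console.log(JSON.stringify(card, null, 2));
```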
Conclusion: Those Who Don’t Clarify Will Be Left Behind
The regulatory landscape is clear: transparency and a sound legal basis are not optional; they are mandatory. But beyond compliance, they are a competitive edge.
AI providers who embrace clear documentation, user-centered design, and legal safeguards will gain the trust of customers, regulators, and investors alike.
The Achilles’ heel of many AI companies is also the opportunity for future market leaders.
Important: The content of this article is for informational purposes only and does not constitute legal advice. The information provided here is no substitute for personalized legal advice from a data protection officer or an attorney. We do not guarantee that the information provided is up to date, complete, or accurate. Any actions taken on the basis of the information contained in this article are at your own risk. We recommend that you always consult a data protection officer or an attorney with any legal questions or problems.


