Transparency Obligations for AI Systems under EU Law

Transparency as a regulatory principle

Transparency is a foundational principle of European digital regulation and plays a central role in the EU Artificial Intelligence Act (EU AI Act). The Regulation proceeds on the assumption that individuals must be able to recognise when and how artificial intelligence is used, particularly where AI systems interact directly with humans or influence decisions affecting them.

Unlike the governance obligations applicable to high-risk AI systems, transparency obligations apply horizontally to specific use cases across different risk categories. Their purpose is not to prohibit or condition AI deployment, but to ensure awareness, traceability and informed interaction.

Transparency under the EU AI Act is therefore closely linked to concepts of autonomy, accountability and trust in AI-enabled environments.


Addressees of transparency obligations

Transparency obligations under the EU AI Act are primarily directed at providers and deployers of AI systems, depending on the specific use case.

  • Providers are generally responsible for designing systems in a way that enables transparency and for supplying the necessary information and instructions.
  • Deployers are responsible for ensuring that transparency measures are effectively implemented in practice, particularly in user-facing contexts.

As with other parts of the Regulation, the allocation of obligations follows a functional approach, based on actual control over the system’s design and use rather than contractual labels.


Key transparency-related use cases

The EU AI Act specifies several typical application scenarios in which transparency obligations apply. These obligations are triggered not by the technical sophistication of the system, but by the nature of the interaction and its potential impact on individuals.

AI systems interacting with natural persons (e.g. chatbots)

Where an AI system is designed to interact directly with individuals, the natural persons concerned must be informed that they are interacting with an AI system, unless this is obvious from the circumstances and context of use.

This obligation applies, for example, to:

  • conversational agents and chatbots,
  • virtual assistants,
  • automated customer service systems.

The objective is to prevent deceptive or misleading interactions and to allow individuals to adjust their expectations and behaviour accordingly.
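In practice, such a disclosure is typically surfaced before or with the first AI response. The following is a minimal, hypothetical sketch of how a deployer might ensure this in a chat interface; the class and message wording are illustrative and not prescribed by the Regulation.

```python
# Hypothetical sketch of interaction transparency: the disclosure is recorded
# before the first AI reply, so the user knows from the outset that no human
# agent is involved. Names (ChatSession, DISCLOSURE) are illustrative only.

DISCLOSURE = "You are chatting with an AI-based assistant, not a human agent."

class ChatSession:
    def __init__(self):
        self.disclosed = False
        self.transcript = []

    def send(self, user_message: str) -> str:
        # Surface the disclosure no later than the first AI response.
        if not self.disclosed:
            self.transcript.append(("system", DISCLOSURE))
            self.disclosed = True
        reply = self._generate(user_message)  # placeholder for the model call
        self.transcript.append(("assistant", reply))
        return reply

    def _generate(self, message: str) -> str:
        return f"Echo: {message}"  # stand-in for a real model backend

session = ChatSession()
session.send("Hello")
print(session.transcript[0][1])  # the disclosure precedes the first reply
```

The design point is simply that the disclosure is tied to the session state rather than left to the user interface alone, so it cannot be skipped.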


Emotion recognition and biometric categorisation systems

AI systems used for emotion recognition or biometric categorisation are subject to heightened transparency obligations.

Individuals exposed to such systems must be informed of:

  • the use of AI-based analysis, and
  • the general purpose of the system.

These obligations reflect the particularly intrusive nature of biometric and affective technologies and their potential impact on privacy and dignity.


AI-generated or manipulated content

The EU AI Act also addresses transparency in relation to AI-generated or AI-manipulated content, including synthetic audio, video or images.

Where such content is generated or altered by AI, transparency obligations aim to ensure that recipients are not misled as to the origin or authenticity of the material, subject to narrowly defined exceptions.

This aspect of transparency intersects with broader concerns relating to disinformation and trust in digital communications.
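One common implementation pattern is to attach a machine-readable label to generated content at the point of creation. The sketch below illustrates the idea under stated assumptions; the field names and generator identifier are hypothetical and are not mandated by the Regulation.

```python
# Illustrative sketch only: wrapping AI-generated content in a record that
# discloses its synthetic origin, so downstream recipients are not misled.
# All field names are hypothetical, not taken from the EU AI Act.
import json
from datetime import datetime, timezone

def label_generated_content(payload: bytes, media_type: str) -> dict:
    """Return disclosure metadata for a piece of AI-generated content."""
    return {
        "media_type": media_type,
        "ai_generated": True,                    # explicit origin flag
        "generator": "example-model-v1",         # hypothetical identifier
        "labelled_at": datetime.now(timezone.utc).isoformat(),
        "size_bytes": len(payload),
    }

record = label_generated_content(b"\x89PNG...", "image/png")
print(json.dumps(record, indent=2))
```

A label of this kind is only one possible technique; watermarking and provenance standards pursue the same objective at the level of the media file itself.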


Relationship to GDPR transparency requirements

The transparency obligations under the EU AI Act operate alongside, not in place of, transparency requirements under the General Data Protection Regulation (GDPR).

While both regimes share common objectives, their focus differs:

  • GDPR transparency centres on the processing of personal data, requiring information about data use, legal bases and data subject rights.
  • EU AI Act transparency focuses on the use of AI as such, regardless of whether personal data is processed.

In practice, AI systems that process personal data may be subject to overlapping transparency obligations under both frameworks. These obligations are complementary and must be addressed in a coordinated manner to avoid gaps or inconsistencies.
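The coordination of the two regimes can be reduced to a simple rule: the AI Act disclosure turns on the use of AI, the GDPR disclosure on the processing of personal data, and both may apply at once. A minimal sketch, assuming hypothetical flags and notice wording of my own invention:

```python
# Hypothetical sketch: assembling a coordinated notice where the EU AI Act
# and the GDPR impose overlapping transparency obligations. The triggers and
# wording are illustrative simplifications, not legal text.

def build_notice(uses_ai: bool, processes_personal_data: bool) -> list[str]:
    notice = []
    if uses_ai:
        # AI Act-style disclosure: triggered by the use of AI as such.
        notice.append("This service uses an AI system to interact with you.")
    if processes_personal_data:
        # GDPR-style disclosure: triggered by personal data processing.
        notice.append("Your personal data is processed; see the privacy notice "
                      "for purposes, legal bases and your rights.")
    return notice

print(build_notice(uses_ai=True, processes_personal_data=True))
```

The point of combining both statements in one notice is to avoid the gaps or inconsistencies that arise when the two disclosures are drafted in isolation.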


Form and timing of transparency disclosures

The EU AI Act does not prescribe a single format for transparency disclosures. Instead, disclosures must be clear, accessible and proportionate to the context of use.

Relevant considerations include:

  • the target audience,
  • the complexity of the AI system,
  • the potential consequences of AI use for individuals.

Transparency information must be provided at an appropriate time, typically before or at the moment of interaction, to ensure that it can meaningfully inform individual behaviour or decisions.


Legal consequences of non-compliance

Failure to comply with transparency obligations constitutes a breach of the EU AI Act and may result in enforcement action by competent authorities.

Possible consequences include:

  • corrective measures and compliance orders,
  • restrictions on the use of the AI system,
  • administrative fines.

While transparency obligations are generally less onerous than the requirements applicable to high-risk AI systems, non-compliance can nonetheless give rise to significant regulatory and reputational risks.


Regulatory function of transparency obligations

Transparency obligations serve a distinct regulatory function within the EU AI Act. They are designed to address information asymmetries between AI system operators and affected individuals and to support informed participation in AI-mediated environments.

By mandating disclosure in defined scenarios, the Regulation seeks to foster trustworthy AI deployment without imposing disproportionate compliance burdens on low-risk applications.


Conclusion

Transparency obligations under EU law ensure that individuals are aware of AI use in contexts where such awareness is legally and ethically significant. These obligations apply across various AI applications, including chatbots, biometric systems and AI-generated content, and operate alongside GDPR transparency requirements.

Although transparency obligations do not prohibit AI deployment, they represent an essential element of the EU’s approach to responsible and accountable artificial intelligence.


Notice

The information provided on this page is for general informational purposes only and does not constitute legal advice.
