
How to Implement Transparency and Disclosure for High-Risk AI Under the EU AI Act

Article 13 requires that high-risk AI systems be transparent in a way that enables deployers (those putting the system into use) to understand and correctly interpret the system's outputs. Transparency under the EU AI Act is not about publishing a notice. It is about enabling informed, accountable use of AI systems by the people who deploy them.

EU AI Act Reference

Article 13 requires providers to ensure that high-risk AI systems are designed and developed in such a way that their operation is sufficiently transparent to enable deployers to understand the system's capabilities and limitations and to interpret its outputs correctly. Article 13(3) specifies the minimum content for the instructions for use that providers must supply. Article 50 adds separate disclosure obligations for AI systems that interact with natural persons, including chatbots, and for AI-generated content.

What Transparency Means Under the EU AI Act

Transparency in the EU AI Act has a specific meaning that differs from general usage. It is not simply "telling users that AI is involved." Article 13 transparency is aimed at the deployer, the organisation that takes a high-risk AI system from a provider and puts it into use in a professional context. The deployer must be given enough information to use the system correctly, oversee it effectively, and meet their own compliance obligations.

The Act identifies three distinct transparency audiences, each with different requirements:

TECHNICAL DOCUMENTATION: for regulators and conformity assessment bodies

Detailed technical documentation under Article 11 covering system design, training methodology, performance metrics, test results, risk management outcomes, and the post-market monitoring plan. This does not need to be public but must be available to market surveillance authorities on request.

INSTRUCTIONS FOR USE: for deployers (organisations using the AI system)

Operational documentation required by Article 13(3): system identity and version, capabilities and limitations, intended purpose and conditions of use, performance metrics and accuracy levels, known biases or errors, human oversight requirements, and contraindications for use. Its purpose is to enable deployers to use the system correctly and oversee it effectively.

USER-FACING DISCLOSURE: for natural persons subject to AI decisions

Article 50 requires disclosure to individuals when they are interacting with an AI system (e.g., a chatbot) and, in high-risk contexts, information about AI-influenced decisions that affect them. This is different from Article 13 transparency. Article 50 is about disclosure to end users, not deployers.

Step-by-Step: Implementing Transparency Requirements

Step 1: Document the AI System's Capabilities and Limitations

For each high-risk AI system, produce a capabilities and limitations document that a non-technical deployer can read and act on. This should cover: what the system does (intended purpose in plain language), what inputs it requires and what outputs it produces, the accuracy level it achieves and how this was measured, what conditions or populations the system performs well on, and where it should not be used (contraindications). This document is the foundation of the instructions for use required by Article 13(3) and is also useful for human oversight training.
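
One way to keep this document consistent across systems and versions is to hold it as structured data and render the plain-language version from it. Below is a minimal sketch in Python; the schema and the example values are illustrative assumptions, not a format prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class CapabilitiesAndLimitations:
    """Structured record behind the plain-language capabilities document.

    The field names are our own; Article 13 dictates the substance,
    not this particular schema.
    """
    system_name: str
    version: str
    intended_purpose: str                 # what the system does, in plain language
    inputs: list[str]                     # what the system requires
    outputs: list[str]                    # what it produces and what each output means
    accuracy: str                         # headline metric and how it was measured
    validated_populations: list[str]      # conditions/populations it performs well on
    contraindications: list[str]          # where the system should not be used

doc = CapabilitiesAndLimitations(
    system_name="CV screening assistant",  # illustrative example system
    version="2.3.0",
    intended_purpose="Ranks applications for human review; never rejects anyone itself.",
    inputs=["CV text (English)", "role profile"],
    outputs=["relevance score 0-100", "top matching criteria"],
    accuracy="87% agreement with a recruiter panel on a 2,000-CV holdout set",
    validated_populations=["English-language CVs for office-based roles"],
    contraindications=["Non-English CVs", "roles requiring licences the system cannot verify"],
)
```

Keeping the record as data also makes the completeness and version checks in steps 4 and 5 straightforward to automate.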

Step 2: Create User-Facing AI Notices

Article 50 requires disclosure when individuals interact with AI systems or when AI-generated content could mislead them. For chatbots and AI assistants, there must be a clear and timely notice that the individual is interacting with an AI system, unless this is already obvious to a reasonably well-informed person. For AI-generated content (images, audio, video), watermarking or labelling requirements apply. For high-risk AI influencing decisions about individuals, ensure your data protection and fair processing notices (required under GDPR) are updated to reflect AI processing. These notices must be in plain language and must be provided before or at the time the interaction occurs, not buried in terms and conditions.
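
Where notices are delivered by software rather than by a person, the disclosure can travel with the content itself. The sketch below attaches a plain-language AI label to a generated-content payload; the envelope format and wording are our own illustration, not a mandated format, and durable media (images, audio, video) would additionally need watermarking or embedded provenance metadata.

```python
import json
from datetime import datetime, timezone

def label_ai_generated(content: bytes, media_type: str) -> dict:
    """Wrap AI-generated content in an envelope that discloses its origin.

    Illustrative structure: Article 50 requires that the AI origin is
    disclosed, not this specific field layout.
    """
    return {
        "media_type": media_type,
        "ai_generated": True,
        "disclosure": "This content was generated by an AI system.",
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_bytes": len(content),  # the content itself travels separately
    }

print(json.dumps(label_ai_generated(b"\x89PNG...", "image/png"), indent=2))
```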

Step 3: Implement Explainability for Consequential Decisions

For high-risk AI systems that inform decisions affecting individuals, including credit decisions, hiring decisions, medical diagnoses, and benefit eligibility assessments, individuals have rights to meaningful information about automated decision-making under the GDPR (Article 22, together with the "meaningful information about the logic involved" provisions of Articles 13 to 15). Implement explanations that are: specific to the individual case (not generic system descriptions), expressed in plain language, accurate in describing the factors that influenced the output, and actionable where relevant (e.g., "improving your credit utilisation ratio would increase your score"). Log the explanation generated for each decision alongside the decision itself. This is needed both for subject access requests and for regulatory investigations.
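
The per-decision logging is the most automatable part of this step. Below is a minimal sketch, assuming the model exposes per-factor contributions (for example via SHAP values); the record shape and the JSONL file store are illustrative choices, not requirements.

```python
import json
import uuid
from datetime import datetime, timezone

def record_decision(applicant_id: str, outcome: str,
                    factors: list[tuple[str, float]]) -> dict:
    """Persist the decision and its case-specific explanation together.

    `factors` pairs a plain-language factor with its signed contribution
    to the output. The explanation is stored exactly as generated at
    decision time: regenerating it later against a retrained model
    would be misleading.
    """
    record = {
        "decision_id": str(uuid.uuid4()),
        "applicant_id": applicant_id,
        "outcome": outcome,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "explanation": [
            {"factor": name, "contribution": round(weight, 3)}
            for name, weight in factors
        ],
    }
    # Simple append-only store for illustration; a real system would use
    # tamper-evident storage with retention controls.
    with open("decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

record_decision("app-123", "referred for manual review",
                [("credit utilisation above 90%", -0.41), ("stable income history", 0.18)])
```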

Step 4: Produce Instructions for Use for Deployers

Article 13(3) sets out the minimum content for instructions for use. As a provider of a high-risk AI system, you must supply deployers with documentation covering: the system's identity, version, and purpose; performance on relevant benchmarks including accuracy across different groups; known biases or limitations; description of input data requirements; description of any changes that could affect performance; the necessary human oversight measures; hardware and infrastructure requirements; and data logging obligations. Where you are both provider and deployer (building and deploying your own high-risk AI), this documentation still serves as an internal compliance record.
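
Because Article 13(3) sets minimum content, a release-gate check that every required topic is present and non-empty is cheap insurance against shipping incomplete instructions. A sketch, assuming the instructions are maintained as a structured document; the key names below are our own shorthand for the Act's topics.

```python
# Our own shorthand keys for the Article 13(3) minimum content;
# the Act specifies the substance, not these identifiers.
REQUIRED_SECTIONS = {
    "identity_and_version",
    "intended_purpose",
    "performance_and_accuracy",        # including accuracy across groups
    "known_biases_and_limitations",
    "input_data_requirements",
    "changes_affecting_performance",
    "human_oversight_measures",
    "hardware_and_infrastructure",
    "logging_obligations",
}

def missing_sections(instructions: dict) -> list[str]:
    """Return the required topics that are absent or left empty."""
    return sorted(
        key for key in REQUIRED_SECTIONS
        if not str(instructions.get(key, "")).strip()
    )

missing = missing_sections({"identity_and_version": "CV screening assistant v2.3.0"})
if missing:
    raise SystemExit(f"Instructions for use incomplete; missing: {', '.join(missing)}")
```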

Step 5: Keep Documentation Current Through Model Updates

Transparency documentation becomes misleading if it describes a version of the system that no longer exists. Establish a documentation update process that is triggered whenever a material change is made to the AI model: retraining, significant hyperparameter changes, changes to input or output schemas, or changes to the deployment infrastructure that affect performance. Version control your documentation alongside your model versions. Deployers who rely on outdated documentation may make incorrect oversight decisions. If harm results, providers who failed to update documentation bear responsibility for the gap.
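
A lightweight way to enforce this is a release-pipeline check that refuses to ship a model whose documentation was written for a different build. A sketch, assuming the model registry and the documentation front matter each carry a version string; both conventions are our own.

```python
def docs_current(model_version: str, doc_model_version: str) -> bool:
    """True only if the documentation describes exactly this model build.

    Any mismatch means deployers would be reading about a system that
    no longer exists, so the release should be blocked.
    """
    return model_version == doc_model_version

model_version = "2.4.0"   # e.g. read from the model registry
doc_version = "2.3.0"     # e.g. read from the instructions-for-use front matter

if not docs_current(model_version, doc_version):
    raise SystemExit(
        f"Blocked: model {model_version} cannot ship with documentation "
        f"written for {doc_version}. Update the instructions for use first."
    )
```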

Transparency at the API Layer

For AI systems exposed via APIs, a practical starting point for transparency is the API specification itself. Documenting what each AI API endpoint does, what inputs it accepts, what outputs it returns, which model version it calls, and what error conditions are possible provides a machine-readable transparency layer that complements human-readable documentation.

API-level documentation should include: endpoint purpose and intended use, required and optional input fields and their types, output schema and semantics (including what each field means), error codes and their meanings, versioning and deprecation policy, and rate limits or usage constraints. This level of API transparency benefits compliance (by making the system's behaviour inspectable) as well as integration quality and oversight effectiveness.
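
As an illustration of how little extra work this can take, here is a hedged sketch using FastAPI and Pydantic, which derive an OpenAPI specification directly from the code; the endpoint, fields, and figures are invented for the example.

```python
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI(
    title="Credit scoring AI API",  # illustrative high-risk system
    version="2.3.0",
    description="Advisory scoring only; see the instructions for use before deploying.",
)

class ScoreRequest(BaseModel):
    applicant_id: str = Field(..., description="Deployer's internal applicant reference")
    monthly_income_eur: float = Field(..., ge=0, description="Gross monthly income in EUR")
    credit_utilisation: float = Field(..., ge=0, le=1, description="Share of available credit in use")

class ScoreResponse(BaseModel):
    score: float = Field(..., ge=0, le=1, description="Estimated probability of default")
    model_version: str = Field(..., description="Exact model build that produced this score")
    explanation: list[str] = Field(..., description="Plain-language factors behind the score")

@app.post(
    "/v1/credit-score",
    response_model=ScoreResponse,
    summary="Score a credit application",
    description="Output is advisory: a human must make the final decision.",
)
def credit_score(req: ScoreRequest) -> ScoreResponse:
    # Placeholder logic; a real implementation would call the versioned model.
    score = min(1.0, 0.1 + 0.5 * req.credit_utilisation)
    return ScoreResponse(
        score=score,
        model_version="2.3.0",
        explanation=[f"Credit utilisation of {req.credit_utilisation:.0%} raised the estimate."],
    )
```

FastAPI serves the generated specification at /openapi.json, so deployers and auditors get a machine-readable description of inputs, outputs, constraints, and the declared model version without a separate documentation step.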

Article 50 Chatbot Disclosure: Practical Implementation

If your organisation deploys an AI system that interacts with humans, such as a chatbot, virtual assistant, or automated response system, Article 50 requires that users are informed they are interacting with an AI. This requirement applies at the start of the interaction, not only if the user asks.

Practical implementation: display a clear label such as "You are chatting with an AI assistant" at the beginning of every conversation session. Do not embed this disclosure in lengthy terms. It must be prominent and immediately visible. The exception applies only where it would be "obvious to a reasonably well-informed person" that they are interacting with AI, which is a narrow exception that should not be assumed.
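
Below is a minimal sketch of building the disclosure into the session object itself, so no conversation can begin without it; the class shape and message text are illustrative.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

class ChatSession:
    """Chat session that cannot start without the Article 50 disclosure."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.transcript: list[dict] = []
        # The disclosure is the first message of every session,
        # emitted before any user input is processed.
        self.transcript.append({"role": "system", "text": AI_DISCLOSURE})

    def user_message(self, text: str) -> None:
        assert self.transcript[0]["text"] == AI_DISCLOSURE, \
            "Disclosure must precede any user interaction"
        self.transcript.append({"role": "user", "text": text})

session = ChatSession("user-42")
session.user_message("What are your opening hours?")
```

Keeping the disclosure in the transcript also gives you per-session evidence that it was actually shown.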
