
How to Build an AI System Inventory for EU AI Act Compliance

You cannot govern what you cannot see. Before any other compliance activity can begin, organisations must produce a complete, accurate, and continuously maintained inventory of every AI system they develop, deploy, or use. This guide explains what to capture, how to discover AI systems systematically, and how to keep the inventory current.

EU AI Act Reference

Article 3 defines what constitutes an "AI system" for the purposes of the Regulation. Article 9 requires that providers of high-risk AI systems implement a risk management system, which presupposes a complete inventory of systems in scope. Article 49 requires high-risk AI systems to be registered in the EU database before they are placed on the market or put into service.

Why Inventory is the Foundational Compliance Activity

Every obligation in the EU AI Act, including risk classification, technical documentation, logging, human oversight, and conformity assessment, applies at the level of individual AI systems. Without knowing which AI systems exist, you cannot determine which obligations apply, assign accountability, or demonstrate compliance to a regulator.

In practice, AI systems proliferate rapidly and often without centralised oversight. A procurement team may subscribe to an AI-powered contract analysis tool. An engineering team may integrate a large language model via API. A customer service team may deploy a chatbot. Each of these constitutes an AI system under Article 3 and may carry compliance obligations. A static snapshot of AI tools taken at one point in time will be outdated within weeks in most organisations.

What Counts as an "AI System" Under Article 3

Article 3(1) of the EU AI Act defines an AI system as a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

In practical terms, the definition is broad: it captures any machine-based system with a learned or inferential component, whether built in-house or procured. Rules-based systems that apply strictly deterministic logic without any learned model component are generally not considered AI systems under the Act, though the boundary can be ambiguous in practice. When in doubt, include the system in your inventory and apply a classification assessment.

Key Inventory Data Fields

A useful AI inventory should capture, at minimum, the following fields for each system:

| Field | Description |
| --- | --- |
| System name | The internal and/or vendor name of the AI system |
| Vendor / provider | Whether the system is internally built, a third-party SaaS tool, or an API-based model |
| Model / technology | Underlying model or technology (e.g., GPT-4, BERT, custom classifier) |
| Risk tier | Classification under the EU AI Act: prohibited, high-risk, limited risk, or minimal risk |
| Use case | What the system does and the decisions or outputs it produces |
| Data inputs | What data the system processes, including any personal data or sensitive categories |
| Data outputs | What the system produces: predictions, scores, decisions, content, etc. |
| Owner | Named individual accountable for this system's compliance |
| Deployment date | When the system was first deployed or made available to users |
| Affected persons | Who is subject to outputs from this system (employees, customers, third parties) |
| Regulatory status | Whether conformity assessment is required; registration status in the EU database |
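These fields can be modelled as a simple record type so the inventory is machine-checkable rather than a spreadsheet convention. A minimal sketch in Python; the class name, field names, and the `UNCLASSIFIED` default are illustrative assumptions, not terms defined by the Act:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """EU AI Act risk tiers, plus a default for systems awaiting assessment."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"
    UNCLASSIFIED = "unclassified"


@dataclass
class AISystemRecord:
    """One row of the AI inventory, mirroring the fields in the table above."""
    system_name: str
    vendor: str                    # internally built, SaaS vendor, or API provider
    model_technology: str          # e.g. "GPT-4", "BERT", "custom classifier"
    use_case: str
    data_inputs: list[str]         # including any personal or sensitive data
    data_outputs: list[str]        # predictions, scores, decisions, content
    owner: str                     # named individual accountable for compliance
    deployment_date: date
    affected_persons: list[str]    # employees, customers, third parties
    risk_tier: RiskTier = RiskTier.UNCLASSIFIED
    conformity_assessment_required: bool = False
    eu_database_registered: bool = False
```

Making `risk_tier` default to `UNCLASSIFIED` keeps newly discovered systems visible in the inventory before the legal assessment has been completed.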

Step-by-Step: Building Your AI Inventory

Step 1: Define the Scope

Your inventory must cover all AI systems for which your organisation acts as a provider (you developed or deployed it) or a deployer (you use it in a professional context). This includes: internally built models, third-party SaaS tools with AI features, AI APIs your software calls, AI models embedded in products you sell, and AI features provided by cloud platforms (such as AI services within AWS, Azure, or Google Cloud). Agree on scope boundaries before discovery begins and document any deliberate exclusions with justification.

Step 2: Discover All AI Systems

Use multiple discovery channels in parallel. API traffic analysis is particularly effective: many AI integrations manifest as outbound API calls to model providers such as OpenAI, Anthropic, Cohere, Hugging Face, or cloud AI services. Monitoring outbound API traffic can reveal integrations that no other method would surface, including unofficial integrations made by individual developers. Supplement this with code scanning of your source code repositories for known AI library imports and API endpoint patterns, vendor questionnaires sent to business units and teams, and finance and procurement data to identify AI-related software spend. Each method has blind spots; using all of them together produces a more complete picture.
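The code-scanning channel can be approximated with a repository scan for known AI library imports and provider endpoint URLs. A minimal sketch, assuming Python sources; the library names and endpoint hostnames are illustrative starting points, not an exhaustive signature set:

```python
import re
from pathlib import Path

# Illustrative signatures; extend with the libraries and providers your teams use.
AI_IMPORT_PATTERN = re.compile(
    r"^\s*(?:import|from)\s+(openai|anthropic|cohere|transformers|langchain)\b",
    re.MULTILINE,
)
AI_ENDPOINT_PATTERN = re.compile(
    r"https?://(?:api\.openai\.com|api\.anthropic\.com|api\.cohere\.ai)[^\s\"']*"
)


def scan_repository(repo_root: str) -> list[dict]:
    """Return candidate AI integrations found in Python sources under repo_root."""
    findings = []
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for match in AI_IMPORT_PATTERN.finditer(text):
            findings.append({"file": str(path), "signal": f"import {match.group(1)}"})
        for match in AI_ENDPOINT_PATTERN.finditer(text):
            findings.append({"file": str(path), "signal": f"endpoint {match.group(0)}"})
    return findings
```

A scan like this only covers code you host; it will not see SaaS tools or integrations living outside your repositories, which is why the other channels remain necessary.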

Step 3: Classify Each System by Risk Tier

Once you have a list of candidate AI systems, each must be classified against the EU AI Act's risk tiers. Apply the prohibited use check (Article 5) first. Then assess whether the system falls under Annex III high-risk categories. Systems not classified as prohibited or high-risk should be assessed for limited-risk transparency obligations. See the risk classification guide for the full methodology.
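The ordering of these checks can be captured in a small function. This is only a sketch of the decision sequence: the three boolean inputs stand in for the substantive legal assessments (Article 5, Annex III, and the transparency obligations), which code cannot automate:

```python
def classify_risk_tier(
    is_prohibited_use: bool,
    in_annex_iii_category: bool,
    has_transparency_trigger: bool,
) -> str:
    """Apply the EU AI Act risk checks in order of severity.

    Prohibited uses are checked first; a system that is both prohibited and
    Annex III high-risk is still prohibited. Anything not caught by the
    earlier checks falls through to minimal risk.
    """
    if is_prohibited_use:
        return "prohibited"
    if in_annex_iii_category:
        return "high-risk"
    if has_transparency_trigger:
        return "limited-risk"
    return "minimal-risk"
```

Encoding the order explicitly matters because the tiers are not mutually exclusive assessments: the first applicable, most severe tier wins.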

Step 4: Assign Ownership and Accountability

Every AI system in the inventory must have a named owner: an individual accountable for its compliance. This is not merely a formality: the owner must understand the system's capabilities, limitations, and risk classification, and must be responsible for ensuring that applicable obligations (technical documentation, logging, human oversight, etc.) are fulfilled. For third-party SaaS AI tools, the owner is typically the business unit that contracted for it.
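Ownership coverage is easy to verify automatically. A minimal sketch that flags inventory entries without a named owner, assuming the inventory rows are dictionaries with the field names used earlier (the key names are illustrative):

```python
def find_unowned_systems(inventory: list[dict]) -> list[str]:
    """Return names of inventory entries with no accountable owner assigned."""
    return [
        entry["system_name"]
        for entry in inventory
        if not entry.get("owner", "").strip()
    ]


# Example: an empty or missing "owner" field marks the entry as unowned.
inventory = [
    {"system_name": "contract-analysis-saas", "owner": "Procurement Lead"},
    {"system_name": "support-chatbot", "owner": ""},
]
```

Running a check like this on a schedule, and treating any non-empty result as a compliance gap, turns the ownership requirement into an enforceable control rather than a convention.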

Step 5: Automate Ongoing Discovery

A static inventory becomes inaccurate quickly. New AI integrations are added continuously, by developers, by SaaS vendors silently adding AI features, and by procurement. Continuous monitoring of outbound API traffic provides near-real-time visibility into new AI endpoints being called, enabling the inventory to be updated promptly. Automated alerting when a new AI API endpoint is first detected gives compliance teams an opportunity to review and classify the system before it reaches significant usage. Scheduled periodic reviews (at minimum quarterly) should supplement automated discovery.
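The alert-on-first-detection behaviour can be sketched as a set comparison between hostnames observed in outbound traffic and the endpoints already reviewed in the inventory. The hostname list is an illustrative assumption, and a real deployment would feed this from traffic logs and wire the result into an alerting channel:

```python
# Hostnames of known AI model providers; extend as your monitoring matures.
KNOWN_AI_HOSTS = {"api.openai.com", "api.anthropic.com", "api.cohere.ai"}


def detect_new_ai_endpoints(
    observed_hosts: set[str],
    already_inventoried: set[str],
) -> set[str]:
    """Return AI provider hosts seen in outbound traffic but not yet inventoried."""
    return (observed_hosts & KNOWN_AI_HOSTS) - already_inventoried


new_endpoints = detect_new_ai_endpoints(
    observed_hosts={"api.openai.com", "cdn.example.com", "api.anthropic.com"},
    already_inventoried={"api.openai.com"},
)
# Each host in new_endpoints should trigger a compliance review before
# usage of the integration grows.
```

A deny-list of known provider hosts will miss self-hosted models and new vendors, so this check complements, rather than replaces, the quarterly manual review.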

Common Mistakes to Avoid

Regulatory Consequences of an Incomplete Inventory

The EU AI Act creates direct legal obligations, such as registration in the EU database under Article 49 and risk management under Article 9, that depend on knowing which AI systems you have. A system missing from the inventory is a system whose obligations are going unmet.
