How to Build an AI System Inventory for EU AI Act Compliance
You cannot govern what you cannot see. Before any other compliance activity can begin, organisations must produce a complete, accurate, and continuously maintained inventory of every AI system they develop, deploy, or use. This guide explains what to capture, how to discover AI systems systematically, and how to keep the inventory current.
Article 3 defines what constitutes an "AI system" for the purposes of the Regulation. Article 9 requires that providers of high-risk AI systems implement a risk management system, which presupposes a complete inventory of systems in scope. Article 49 requires high-risk AI systems to be registered in the EU database before they are placed on the market or put into service.
Why an Inventory Is the Foundational Compliance Activity
Every obligation in the EU AI Act, including risk classification, technical documentation, logging, human oversight, and conformity assessment, applies at the level of individual AI systems. Without knowing which AI systems exist, you cannot determine which obligations apply, assign accountability, or demonstrate compliance to a regulator.
In practice, AI systems proliferate rapidly and often without centralised oversight. A procurement team may subscribe to an AI-powered contract analysis tool. An engineering team may integrate a large language model via API. A customer service team may deploy a chatbot. Each of these constitutes an AI system under Article 3 and may carry compliance obligations. A static snapshot of AI tools taken at one point in time will be outdated within weeks in most organisations.
What Counts as an "AI System" Under Article 3
Article 3(1) of the EU AI Act defines an AI system as a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
In practical terms, this includes:
- Machine learning models, including classifiers, regression models, and generative models
- Large language models (LLMs) used for text generation, summarisation, or question answering
- Computer vision systems used for image classification, object detection, or facial recognition
- Recommendation engines that personalise content, products, or decisions
- Predictive scoring systems used in HR, credit, fraud detection, or risk assessment
- AI features embedded within SaaS products (e.g., an AI-powered search or summarisation feature in a productivity tool)
Rules-based systems that apply strictly deterministic logic without any learned model component are generally not considered AI systems under the Act, though the boundary can be ambiguous in practice. When in doubt, include the system in your inventory and apply a classification assessment.
Key Inventory Data Fields
A useful AI inventory should capture, at minimum, the following fields for each system:
| Field | Description |
|---|---|
| System name | The internal and/or vendor name of the AI system |
| Vendor / provider | Whether this is internally built, a third-party SaaS tool, or an API-based model |
| Model / technology | Underlying model or technology (e.g., GPT-4, BERT, custom classifier) |
| Risk tier | Classification under the EU AI Act: prohibited, high-risk, limited-risk, or minimal-risk |
| Use case | What the system does and the decisions or outputs it produces |
| Data inputs | What data the system processes, including any personal data or sensitive categories |
| Data outputs | What the system produces: predictions, scores, decisions, content, etc. |
| Owner | Named individual accountable for this system's compliance |
| Deployment date | When the system was first deployed or made available to users |
| Affected persons | Who is subject to outputs from this system (employees, customers, third parties) |
| Regulatory status | Whether conformity assessment is required; registration status in EU database |
Step-by-Step: Building Your AI Inventory
Define the Scope
Your inventory must cover all AI systems for which your organisation acts as a provider (you develop the system, or have it developed, and place it on the market or put it into service under your own name) or a deployer (you use the system under your authority in a professional context). This includes: internally built models, third-party SaaS tools with AI features, AI APIs your software calls, AI models embedded in products you sell, and AI features provided by cloud platforms (such as AI services within AWS, Azure, or Google Cloud). Agree on scope boundaries before discovery begins and document any deliberate exclusions with justification.
Discovery: Find All AI Systems
Use multiple discovery channels in parallel. API traffic analysis is particularly effective: many AI integrations manifest as outbound API calls to model providers such as OpenAI, Anthropic, Cohere, Hugging Face, or cloud AI services. Monitoring outbound API traffic can reveal integrations that no other method would surface, including unofficial integrations made by individual developers. Supplement this with code scanning of your source code repositories for known AI library imports and API endpoint patterns, vendor questionnaires sent to business units and teams, and finance and procurement data to identify AI-related software spend. Each method has blind spots; using all of them together produces a more complete picture.
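The code-scanning channel can be sketched as a simple repository scan for known AI library imports and API endpoint patterns. The pattern lists below are illustrative examples only and should be extended with the providers your organisation actually uses:

```python
import re
from pathlib import Path

# Illustrative patterns; extend to match your stack.
AI_IMPORT_PATTERNS = [
    r"\bimport\s+openai\b",
    r"\bimport\s+anthropic\b",
    r"\bfrom\s+transformers\b",   # Hugging Face
    r"\bimport\s+cohere\b",
]
AI_ENDPOINT_PATTERNS = [
    r"api\.openai\.com",
    r"api\.anthropic\.com",
    r"aiplatform\.googleapis\.com",
]

def scan_repo(repo_root: str) -> dict[str, list[str]]:
    """Return {file path: [matched patterns]} for candidate AI integrations."""
    patterns = [re.compile(p) for p in AI_IMPORT_PATTERNS + AI_ENDPOINT_PATTERNS]
    hits: dict[str, list[str]] = {}
    for path in Path(repo_root).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        matched = [p.pattern for p in patterns if p.search(text)]
        if matched:
            hits[str(path)] = matched
    return hits
```

A scan like this surfaces candidates, not conclusions: each hit still needs human review to confirm whether an actual AI system is in use and to populate the inventory fields.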
Classify Each System by Risk Tier
Once you have a list of candidate AI systems, each must be classified against the EU AI Act's risk tiers. Apply the prohibited use check (Article 5) first. Then assess whether the system falls under Annex III high-risk categories. Systems not classified as prohibited or high-risk should be assessed for limited-risk transparency obligations. See the risk classification guide for the full methodology.
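The ordering of these checks can be expressed as a small decision function. This sketch encodes only the sequence described above; the boolean inputs stand in for the legal assessments themselves, which require human judgment:

```python
def classify_risk_tier(prohibited_use: bool,
                       annex_iii_category: bool,
                       interacts_with_persons: bool) -> str:
    """Apply the EU AI Act checks in order: Article 5 first, then
    Annex III, then limited-risk transparency obligations.

    The inputs are the outcomes of the corresponding legal assessments;
    this function encodes only their precedence.
    """
    if prohibited_use:          # Article 5: prohibited practices
        return "prohibited"
    if annex_iii_category:      # Annex III: high-risk categories
        return "high-risk"
    if interacts_with_persons:  # e.g. chatbots, AI-generated content
        return "limited-risk"   # transparency obligations apply
    return "minimal-risk"
```

The precedence matters: a system matching an Annex III category is high-risk even if it also interacts with people, so the transparency check must come after the high-risk check.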
Assign Ownership and Accountability
Every AI system in the inventory must have a named owner: an individual accountable for its compliance. This is not merely a formality: the owner must understand the system's capabilities, limitations, and risk classification, and must be responsible for ensuring that applicable obligations (technical documentation, logging, human oversight, etc.) are fulfilled. For third-party SaaS AI tools, the owner is typically the business unit that contracted for it.
Automate Ongoing Discovery
A static inventory becomes inaccurate quickly: developers add new AI integrations, SaaS vendors silently introduce AI features, and procurement brings in new tools. Continuous monitoring of outbound API traffic provides near-real-time visibility into new AI endpoints being called, enabling the inventory to be updated promptly. Automated alerting when a new AI API endpoint is first detected gives compliance teams an opportunity to review and classify the system before it reaches significant usage. Scheduled periodic reviews (at minimum quarterly) should supplement automated discovery.
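The alerting step can be sketched as a comparison of observed outbound hostnames against the inventory. The host list is an illustrative assumption; in practice the observed hosts would come from your egress proxy, DNS logs, or network monitoring:

```python
# Known AI provider hostnames (illustrative; extend for your environment).
AI_PROVIDER_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "api.cohere.ai",
    "api-inference.huggingface.co",
}

def detect_new_ai_endpoints(observed_hosts, inventory_hosts):
    """Return AI provider hosts seen in outbound traffic but absent from
    the inventory -- candidates for compliance review and classification."""
    ai_hosts = {h for h in observed_hosts if h in AI_PROVIDER_HOSTS}
    return sorted(ai_hosts - set(inventory_hosts))
```

Each host this returns represents a potential unregistered AI integration; routing the result to an alerting channel closes the loop between network monitoring and the inventory process.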
Common Mistakes to Avoid
- Only inventorying first-party systems. Many organisations focus on AI systems they have built themselves, overlooking third-party SaaS tools that contain AI features. The EU AI Act's obligations for deployers apply regardless of whether you built the system.
- Not updating after new procurement. New AI tools are procured continuously. Without a mechanism to flag AI-related purchases for inventory inclusion, the register quickly becomes incomplete.
- Missing API integrations. AI integrations made via API (calling an LLM API directly from application code) are often invisible to procurement and vendor management processes. These are among the easiest integrations to add and the easiest to miss in a manual inventory exercise.
- Treating the inventory as a one-off project. An inventory completed once and not maintained provides diminishing value. Plan for ongoing maintenance from the outset.
Regulatory Consequences of an Incomplete Inventory
The EU AI Act creates direct legal obligations that depend on knowing which AI systems you have. Specifically:
- Article 49 requires that high-risk AI systems are registered in the EU database before being placed on the market or put into service. You cannot comply with this if you do not know which of your systems are high-risk.
- Article 9 requires a documented risk management system for each high-risk AI system. An incomplete inventory means some systems will have no risk management process in place.
- National market surveillance authorities may request evidence of your AI inventory as part of an investigation or audit. Inability to provide a complete and current inventory is itself evidence of governance failure.