Deployers of high-risk AI systems face eight obligations under Article 26: use in accordance with Instructions for Use, human oversight by competent persons with genuine organisational authority, input data relevance and representativeness, monitoring system operation, informing the provider of serious incidents, keeping automatically generated logs for at least six months, informing workers and representatives before deployment in employment contexts, and conducting a Fundamental Rights Impact Assessment under Article 27. Two additional obligations cover EU database registration for public authority deployers under Article 49(3) and AI literacy for all deployers under Article 4. The FRIA, meaningful human oversight with override authority, and recognising provider status escalation under Article 25 are the three areas that most commonly catch organisations unprepared. The provider-deployer distinction determines documentation requirements: providers prepare a full AISDP while deployers prepare a smaller compliance record. The AI Act supply chain includes importers who verify third-country provider compliance, distributors who check CE marking and documentation, and authorised representatives maintaining documentation on behalf of non-EU providers. Practical configurations range from dual provider-deployer status for self-developed systems to pure deployer status for procured tools.
Article 26 of the EU AI Act imposes eight distinct obligations on deployers of high-risk AI systems, each with its own legal basis, documentation requirement, and enforcement exposure reaching EUR 15 million or 3 per cent of global annual turnover.
The first obligation requires the deployer to use the system in accordance with the provider's Instructions for Use under Article 26(1). The deployer must operate within the boundaries defined by the IFU, not using the system for purposes, populations, or contexts not covered. Documentation includes a record of IFU receipt and acknowledgement, internal operating procedures aligned to the IFU, and a deviation register.
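The deviation register described above can be kept as simple structured records. A minimal sketch in Python; the field names and `IfuDeviation` type are illustrative assumptions, not terms prescribed by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class IfuDeviation:
    """One entry in a deployer's IFU deviation register (illustrative fields)."""
    detected_on: date
    description: str        # how actual use departed from the Instructions for Use
    affected_scope: str     # purpose, population, or context outside the IFU boundary
    corrective_action: str
    resolved: bool = False

# Example entry: use outside the population covered by the IFU.
register: list[IfuDeviation] = [
    IfuDeviation(
        detected_on=date(2025, 3, 1),
        description="Model applied to applicants under 18",
        affected_scope="Population not covered by the IFU",
        corrective_action="Excluded under-18 applicants; notified provider",
    )
]
```

A register like this pairs naturally with the IFU receipt record: each entry documents when use departed from the IFU boundary and what was done about it.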
Three obligations consistently cause the greatest difficulty in practice, and organisations should prioritise their preparation accordingly.
The Fundamental Rights Impact Assessment has no direct predecessor in EU law. The GDPR's Data Protection Impact Assessment covers data processing risks, but the FRIA covers the impact on the full range of Charter of Fundamental Rights protections, extending well beyond non-discrimination to encompass human dignity under Article 1, the right to an effective remedy under Article 47, and the solidarity rights in employment contexts under Articles 31 and 32. Most organisations have DPIA experience; almost none have FRIA experience.
Article 3 defines the provider as the entity that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name or trademark. The deployer is the entity that uses the system under its authority. The obligations differ substantially between the two roles, and the documentation requirements reflect this difference.
A deployer that only uses a system supplied by a separate provider does not prepare a full AI System Documentation Package. The provider prepares the AISDP; the deployer prepares a separate, smaller set of deployer-side documentation in the deployer compliance record. This distinction is fundamental to understanding which obligations apply and what documentation is required.
Five practical configurations illustrate how the provider-deployer distinction operates across different organisational arrangements and their compliance implications.
When a bank develops its own credit scoring model, it holds both provider and deployer roles simultaneously, requiring both a full AISDP and a deployer-side FRIA. When a hospital procures a diagnostic imaging tool from an EU vendor, the hospital is deployer only and prepares the deployer compliance record while the vendor bears provider obligations.
Does a deployer need to prepare a full AISDP? No, unless the deployer triggers provider status under Article 25. A deployer that only uses a provider-supplied system prepares a smaller deployer compliance record.
What happens when a deployer fine-tunes a general-purpose model? Fine-tuning for a high-risk use case triggers provider status under Article 25(1)(b). The organisation must then prepare a full AISDP, conduct a conformity assessment, and register in the EU database.
Must deployers inform workers before using AI in employment contexts? Yes. Article 26(7) requires deployers to inform workers and their representatives before putting the system into service in an employment context.
How many obligations do deployers of high-risk AI systems face? Eight obligations under Article 26 plus AI literacy (Article 4) and EU database registration for public authorities (Article 49(3)), each carrying penalties of up to EUR 15 million or 3% of global turnover.
Which deployer obligations cause the most difficulty? The Fundamental Rights Impact Assessment (no precedent in EU law), human oversight with genuine authority (not just training), and recognising when deployer status escalates to provider under Article 25.
How do the provider and deployer roles differ? The provider develops and markets the system under its name, preparing a full AISDP. The deployer uses the system under its authority, preparing a smaller compliance record.
Which actors make up the AI Act supply chain? Providers, deployers, importers (Article 23), distributors (Article 24), and authorised representatives (Article 22) for third-country providers.
The second obligation requires human oversight by competent persons under Article 26(2) and Article 14. The deployer must assign oversight to individuals with the competence, training, and authority to exercise it effectively. Documentation includes a human oversight policy, operator role descriptions, training and certification records, and override and escalation procedures.
The third obligation under Article 26(4) requires the deployer to ensure input data is relevant and sufficiently representative, to the extent the deployer exercises control over the input data. The fourth obligation under Article 26(5) requires monitoring the system's operation on the basis of the IFU, informing the provider of any serious risk, and suspending use of the system where such a risk arises. The fifth obligation, also under Article 26(5), requires notifying the provider without undue delay of any malfunction, serious incident, or risk affecting compliance.
The sixth obligation under Article 26(6) requires the deployer to retain automatically generated logs for a period appropriate to the intended purpose, at minimum six months, with documented storage and access controls, retention schedules, and deletion procedures. The seventh obligation under Article 26(7) requires informing workers and their representatives before putting the system into service in an employment context. The eighth obligation under Article 27 requires conducting a Fundamental Rights Impact Assessment before putting the system into service, notifying the market surveillance authority of the results, and, for public authority deployers, publishing a summary.
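The six-month log retention minimum can be enforced mechanically in a deletion job. A hedged sketch; the `may_delete` helper and the day-count approximation of six months are assumptions for illustration, not statutory terms:

```python
from datetime import date, timedelta

# Statutory floor: logs kept at least six months (Article 26(6)),
# approximated here as 183 days for illustration.
MIN_RETENTION = timedelta(days=183)

def may_delete(log_created: date, today: date,
               purpose_retention: timedelta = MIN_RETENTION) -> bool:
    """A log may be deleted only once the longer of the statutory minimum
    and the purpose-appropriate retention period has elapsed."""
    retention = max(MIN_RETENTION, purpose_retention)
    return today - log_created >= retention

# A log from 1 January 2025 cannot be deleted on 1 June (about 151 days)...
assert may_delete(date(2025, 1, 1), date(2025, 6, 1)) is False
# ...but can be deleted once the six-month minimum has passed.
assert may_delete(date(2025, 1, 1), date(2025, 7, 15)) is True
```

Wiring a check like this into the deletion procedure gives the retention schedule a documented, testable enforcement point.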
Two additional obligations apply to specific categories: public authority deployers and Union institutions must register in the EU database under Article 49(3), and all deployers must ensure AI literacy of staff under Article 4 regardless of risk tier.
Human oversight in practice is more demanding than most organisations anticipate. Article 26(2) requires deployers to assign human oversight to individuals with the necessary competence, training, and authority. The word "authority" carries significant weight: an operator who has been trained to use the system but has no organisational authority to override its recommendations does not exercise human oversight in the statutory sense. The oversight pyramid and break-glass procedures address the operational reality of making oversight effective.
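The point that training alone does not satisfy Article 26(2) can be made concrete in an assignment check. A sketch under assumed role attributes; the `Operator` type and its flags are illustrative, not drawn from the Act:

```python
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    trained: bool
    certified: bool
    can_override: bool   # genuine organisational authority to reject outputs
    can_escalate: bool   # access to a break-glass escalation path

def qualifies_for_oversight(op: Operator) -> bool:
    """Article 26(2) oversight requires competence AND authority:
    a trained operator without override power does not qualify."""
    competent = op.trained and op.certified
    authoritative = op.can_override and op.can_escalate
    return competent and authoritative

# Trained but without authority: not oversight in the statutory sense.
trained_only = Operator("A. Analyst", trained=True, certified=True,
                        can_override=False, can_escalate=False)
assert qualifies_for_oversight(trained_only) is False
```

A check like this, run when oversight roles are assigned, documents that each named overseer holds both the competence and the authority the statute requires.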
Knowing when you have become a provider is the third critical challenge. Article 25 defines three circumstances in which a deployer's status escalates to that of a provider, carrying the full weight of provider obligations including preparing a full AISDP, conducting a conformity assessment, and signing a Declaration of Conformity. The most common trigger in practice is fine-tuning a general-purpose AI model for a high-risk use case. The provider-deployer distinction provides the complete decision framework.
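The escalation logic above reduces to a simple any-trigger check. A sketch in Python; the three flags paraphrase the Article 25(1) circumstances, and the exact statutory wording differs:

```python
def becomes_provider(rebrands_under_own_name: bool,
                     makes_substantial_modification: bool,
                     changes_purpose_to_high_risk: bool) -> bool:
    """Any single Article 25(1) trigger escalates a deployer to provider,
    with full provider obligations: AISDP, conformity assessment,
    Declaration of Conformity."""
    return (rebrands_under_own_name
            or makes_substantial_modification
            or changes_purpose_to_high_risk)

# The common case from the text: fine-tuning a general-purpose model
# for a high-risk use case (a modification trigger).
assert becomes_provider(False, True, False) is True
# Pure use of a provider-supplied system, unmodified: deployer only.
assert becomes_provider(False, False, False) is False
```

Running this question explicitly for each system in the inventory forces the role determination before documentation planning begins.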
Deployers do not always procure AI systems directly from the provider. The AI Act defines two intermediary roles in the supply chain. Importers under Article 23 are entities established in the EU that place a third-country provider's system on the EU market, and they must verify that the provider has completed the conformity assessment, drawn up the technical documentation, and affixed the CE marking. Distributors under Article 24 make the system available on the market without modifying it, verifying that the system bears the CE marking and is accompanied by the Declaration of Conformity and Instructions for Use. Authorised representatives under Article 22 are appointed by third-country providers to maintain a copy of the technical documentation and cooperate with competent authorities on the provider's behalf.
When a recruitment agency licenses a US-built CV screening tool through a UK importer, the US developer is the provider, the UK importer verifies compliance, and the agency as deployer directs information requests to the authorised representative. When an insurer fine-tunes a foundation model for claims fraud detection, the fine-tuning triggers provider status under Article 25, requiring the insurer to prepare a full AISDP despite having procured the base model externally.
When a local authority uses a procured welfare eligibility tool, it is a public authority deployer with additional obligations: EU database registration under Article 49(3) and FRIA notification and publication requirements. Each configuration creates a distinct compliance pathway, and organisations must accurately determine their role before planning their documentation strategy.
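The configurations above map to distinct documentation sets, which can be captured as a simple lookup. The configuration labels and document names below follow the examples in the text; the structure itself is an illustrative assumption:

```python
# Compliance pathways for the configurations discussed in the text.
PATHWAYS: dict[str, dict] = {
    "self_developed": {                  # e.g. bank's own credit scoring model
        "roles": ("provider", "deployer"),
        "docs": ["full AISDP", "deployer-side FRIA"],
    },
    "procured_eu_vendor": {              # e.g. hospital's diagnostic imaging tool
        "roles": ("deployer",),
        "docs": ["deployer compliance record"],
    },
    "fine_tuned_foundation_model": {     # Article 25 escalation, e.g. insurer
        "roles": ("provider", "deployer"),
        "docs": ["full AISDP", "conformity assessment",
                 "EU database registration"],
    },
    "public_authority_procured": {       # e.g. welfare eligibility tool
        "roles": ("deployer",),
        "docs": ["deployer compliance record",
                 "FRIA notification and publication",
                 "EU database registration (Art. 49(3))"],
    },
}

def required_docs(configuration: str) -> list[str]:
    """Return the documentation set for a given deployment configuration."""
    return PATHWAYS[configuration]["docs"]
```

Determining which row an organisation sits in is the first step; the documentation strategy follows from the row, not the other way around.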
CTO of Standard Intelligence. Leads platform engineering and contributes to the PIG series technical content.