ISO 42001 - List of Deliverables
These are the “must-have” artefacts for setting up AI governance through an AI management system (AIMS).
AI Management System Scope & Boundaries: document which parts of the organisation, which AI systems, and which lifecycle phases are included.
Context of the Organization: formal assessment of internal/external issues, interested parties (stakeholders), requirements, constraints.
Leadership commitment & AI policy: top management must show leadership, commit to the AIMS, establish an overarching AI policy aligned with strategic direction.
Roles, Responsibilities & Authorities: assign accountable owners and define authorities for AI governance, risk, and operations.
AI policy (more specific): A documented policy addressing responsible use of AI, aligning with ethics, transparency, accountability, fairness.
Integration with existing management systems: e.g., if you already have ISO 27001, integrate the AIMS rather than build in isolation.
Periodic items and procedures around planning and risk.
AI-specific Risk Criteria & Appetite: define what constitutes acceptable AI risk (ethical, legal, operational, safety, bias).
AI Impact Assessment Procedure: evaluate the potential consequences of AI systems for individuals, groups, and society.
Risk Assessment for AI Systems: identify, assess, and prioritise risks specific to AI (data quality, bias, model drift, misuse, safety).
Risk Treatment / Mitigation Planning: for each identified risk, define and document controls, mitigation measures, and the residual risk; see the risk-register sketch after this list.
Objectives & Targets for AI management: set measurable objectives (e.g., bias reduction target, accuracy threshold, transparency metrics).
Planning of Changes: process to handle changes in context, in AI systems, in regulatory environment.
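To make the risk deliverables concrete, here is a minimal sketch of an AI risk register entry, assuming a simple likelihood × impact scoring model. ISO 42001 does not prescribe a scoring scheme, so the scales, the appetite threshold, and the field and class names below are illustrative assumptions to adapt to your own documented risk criteria.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical 1-5 scales and appetite threshold; adapt to your own risk criteria.
RISK_APPETITE_THRESHOLD = 9  # inherent scores above this require treatment


@dataclass
class AIRiskEntry:
    """One row of an AI risk register (illustrative structure only)."""
    risk_id: str
    description: str
    category: str          # e.g. "bias", "data quality", "model drift", "misuse"
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    owner: str
    treatment: str = ""    # planned controls / mitigation
    residual_likelihood: int | None = None
    residual_impact: int | None = None
    review_date: date | None = None

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    @property
    def residual_score(self) -> int | None:
        if self.residual_likelihood is None or self.residual_impact is None:
            return None
        return self.residual_likelihood * self.residual_impact

    def needs_treatment(self) -> bool:
        """Compare the inherent score against the documented risk appetite."""
        return self.score > RISK_APPETITE_THRESHOLD


# Example usage
risk = AIRiskEntry(
    risk_id="AI-R-001",
    description="Training data under-represents a customer segment, causing biased outcomes",
    category="bias",
    likelihood=4,
    impact=4,
    owner="Head of Data Science",
    treatment="Re-sample training data; add fairness metrics to the release gate",
    residual_likelihood=2,
    residual_impact=3,
)
print(risk.score, risk.needs_treatment(), risk.residual_score)  # 16 True 6
```

Keeping inherent and residual scores on the same entry makes it easy to demonstrate that treatment brings each risk back within the defined appetite.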
Procedures, training, documentation, and resources that support the AIMS.
Resource Allocation: ensure human, technical, data, and tooling resources for AI governance and operations.
Competence, Training & Awareness: train staff on the AI policy, risk, ethics, the system lifecycle, and their specific roles.
Communication: internal and external communication about the AI management system with stakeholders and other interested parties.
Documented Information (Policies, Procedures, Records): maintain, control, update, store documentation supporting the AIMS.
Infrastructure/Data/Technology Support: ensure the data governance and infrastructure needed to support the AI lifecycle (data quality, access, model management).
Procedures & periodic activities for the actual development, deployment, monitoring of AI systems.
Operational Planning & Control: define how AI systems are designed, developed, deployed, monitored, maintained.
AI System Lifecycle Management: cover the full lifecycle from concept to retirement: design, build, test, deploy, maintain, retire.
Data Governance for AI Systems: data quality, data selection, bias checks, and traceability of the data used in AI; see the data-quality sketch after this list.
Monitoring & Control of AI System Use: procedures for how systems are used, oversight of third-party components, and monitoring for model drift and unintended outcomes.
Supplier/Third-Party Management: if you use external models, data, services – controls over those relationships.
Change Control: controlling planned changes to AI systems, handling unintended changes, verifying outcomes.
Incident & Deviation Handling: detect, document, handle non-conformities, unintended negative impacts of AI systems, corrective actions.
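As an illustration of what the data governance deliverable can translate into in practice, below is a minimal sketch of pre-release data checks, assuming pandas is available. The thresholds (MAX_MISSING_RATE, MAX_DUPLICATE_RATE, MIN_GROUP_SHARE) and the function names are hypothetical and would be set from your own data governance criteria.

```python
import hashlib
import pandas as pd

# Hypothetical thresholds; set these from your own data governance criteria.
MAX_MISSING_RATE = 0.05      # at most 5% missing values per column
MAX_DUPLICATE_RATE = 0.01    # at most 1% duplicate rows
MIN_GROUP_SHARE = 0.10       # each protected group should be >= 10% of rows


def dataset_fingerprint(df: pd.DataFrame) -> str:
    """Stable hash of the data, recorded for traceability of training sets."""
    row_hashes = pd.util.hash_pandas_object(df, index=True).values
    return hashlib.sha256(row_hashes.tobytes()).hexdigest()


def data_quality_report(df: pd.DataFrame, protected_attr: str) -> dict:
    """Coarse data-quality and representation checks before training or release."""
    missing_rate = df.isna().mean()                 # per-column share of missing values
    duplicate_rate = df.duplicated().mean()         # share of duplicated rows
    group_share = df[protected_attr].value_counts(normalize=True)

    return {
        "fingerprint": dataset_fingerprint(df),
        "columns_over_missing_limit": missing_rate[missing_rate > MAX_MISSING_RATE].to_dict(),
        "duplicate_rate_ok": bool(duplicate_rate <= MAX_DUPLICATE_RATE),
        "under_represented_groups": group_share[group_share < MIN_GROUP_SHARE].to_dict(),
    }


# Example usage with a toy dataset
df = pd.DataFrame({
    "age": [25, 31, None, 45, 52, 38],
    "gender": ["f", "m", "m", "m", "m", "m"],
    "outcome": [1, 0, 1, 0, 0, 1],
})
print(data_quality_report(df, protected_attr="gender"))
```

The fingerprint gives you a record that can be attached to a model card or release note, so each trained model is traceable to a specific data snapshot.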
Procedures and periodic items to ensure the AIMS remains effective, relevant, and compliant.
Non-conformity Response & Corrective/Preventive Actions: process to address deviations, document and correct them, prevent recurrence.
Continual Improvement of AIMS: seek opportunities to improve the AI management system, adapt to new technologies, new risks, regulatory changes.
Review & Update of Risk, Impact Assessments & Treatment Plans: at planned intervals and whenever significant changes occur.
Compliance Monitoring & Review of External Requirements: keep track of new AI regulations, market developments, update policies and controls accordingly.
Here are the items that recur on a periodic basis (e.g., quarterly, annually, on change) and should be part of your monitoring cadence:
Review of internal/external context and stakeholder requirements (annually or on change)
Review of AI policy alignment, objectives and targets (annually)
Competency and awareness training refresh (at least annually, ideally more frequently)
Data and model quality performance review (quarterly or aligned with model release cycles)
Risk assessment refresh (annually and when AI systems or context change)
Impact assessment updates (on new systems or major changes)
Internal audits of the AIMS (semi-annual or annual)
Management review meetings (at least annually)
Supplier/third-party reviews (annually or per contract)
Monitoring of model drift, bias, and accuracy metrics (continuous/ongoing); see the sketch after this list
Non-conformity/corrective action reviews (ongoing)
Improvement initiatives implementation and tracking (ongoing)
Compliance/regulation watch and update of controls (ongoing)
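For the continuous monitoring items (drift, bias, accuracy), here is a minimal sketch assuming NumPy, binary predictions, and a population stability index (PSI) as the drift measure. The thresholds are hypothetical stand-ins for the targets documented in your AIMS objectives, and the function names are illustrative.

```python
import numpy as np

# Hypothetical targets; in practice these come from your documented AIMS
# objectives (accuracy threshold, bias-reduction target, drift limit).
MIN_ACCURACY = 0.90
MAX_PARITY_GAP = 0.10   # max gap in positive-prediction rate between groups
MAX_PSI = 0.20          # PSI above ~0.2 is commonly read as significant drift


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a current sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


def monitoring_check(y_true, y_pred, groups, ref_scores, live_scores) -> dict:
    """Compare live metrics against documented targets and flag breaches."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    accuracy = float((y_true == y_pred).mean())
    rates = {g: float(y_pred[groups == g].mean()) for g in np.unique(groups)}
    parity_gap = max(rates.values()) - min(rates.values())
    drift = psi(np.asarray(ref_scores), np.asarray(live_scores))
    return {
        "accuracy_ok": accuracy >= MIN_ACCURACY,
        "bias_ok": parity_gap <= MAX_PARITY_GAP,
        "drift_ok": drift <= MAX_PSI,
        "metrics": {"accuracy": accuracy, "parity_gap": parity_gap, "psi": drift},
    }
```

Any breach flagged here should feed the non-conformity and corrective-action process described above rather than being handled ad hoc.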
ISO 42001 also includes a reference set of controls in Annex A; organizations determine which controls are applicable, implement them, and justify any exclusions in a Statement of Applicability. The control areas include:
Policies related to AI systems
Internal organization
Resources for AI systems
Assessing impacts of AI systems
AI system lifecycle
Data for AI systems
Information for interested parties of AI systems
Use of AI systems
Third-party and customer relationships