The world’s first international standard on AI management systems, commonly referred to as ISO 42001, was officially published in December 2023. The standard had been under development since August 2021 by the ISO/IEC JTC 1/SC 42 committee, which is also responsible for 50+ other AI standards.
Now that ISO 42001 is published, organizations developing or using AI systems can align their AI governance processes with an international standard, and eventually receive an ISO 42001 certification through a third-party audit.
What is ISO 42001?
Officially, ISO 42001 is an “international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System within organizations”.
In general, a management system refers to the internal policies, processes, and structures that an organization creates to govern part of its business (well-known examples from other domains are listed later in this article). In this case, an artificial intelligence management system (AIMS) governs the use or development of AI within an organization to ensure quality and the fulfillment of organizational objectives.
In the following sections, we will provide concrete examples and comparisons to help you understand the scope of ISO 42001.
Why is the release of ISO 42001 an important event?
During the recent wave of AI standards development, many similar governance-oriented documents have been published by:
- Industry representatives seeking a common language,
- Professional services firms creating their own certifications, and
- Government agencies taking the first step toward AI regulation.
However, the official publication of standards by ISO/IEC, an international authority in the field, commands attention across countries and sectors. ISO standards consolidate years of expert work, are widely accepted by industry, and are often certifiable and auditable.
In regulated industries such as medical devices and automobiles, ISO standards also provide technical specifications and a pathway for companies and products to meet regulatory requirements; with the EU AI Act, AI systems now join this category.
How is ISO 42001 different from the other AI-related standards?
The SC 42 committee has already published dozens of technical documents focusing on different aspects of AI development. For example, ISO/IEC TS 4213:2022, which focuses on the assessment of machine learning classification performance, can be applied to quantitatively measure the performance of AI products.
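As a rough illustration of what such a quantitative assessment might report, the sketch below computes a few standard classification metrics with scikit-learn. The labels, predictions, and choice of metrics are toy assumptions of ours, not a protocol prescribed by TS 4213.

```python
# A toy example of quantitative classification assessment; the data and
# the metric selection are illustrative, not prescribed by ISO/IEC TS 4213.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # ground-truth labels (toy data)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]  # model predictions (toy data)

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
    "f1": f1_score(y_true, y_pred),
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```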
In contrast to product-level standards, ISO 42001 takes an organization-level approach to AI: as a management system standard (MSS), it covers principles of good governance and practical ways to manage AI-related risks, ensuring that adequate preventive and remedial measures are in place. Establishing a management system and complying with ISO 42001 involves:
- A company-wide review of the different use cases, processes, and policies for AI development and use.
- Validating or updating these existing processes, policies, systems, tools, and documentation.
- Providing evidence that the existing or updated processes are actually being followed at the product or service level.
To confirm that the organization-level management system principles are followed, a sizable amount of product- or project-level evidence also needs to be provided. This includes per-project assessments and documentation on internal tooling, evaluation, monitoring, and other technical infrastructure (more on that below).
What is the structure of ISO 42001?
ISO 42001 joins the family of other management system standards that have been published over the years. Some of these well-known standards include:
- ISO 27001: Information Security Management System
- ISO 13485: Medical Device Quality Management System
- ISO 37001: Anti-Bribery Management System
- ISO 14001: Environmental Management System
All management system standards, ISO 42001 included, follow the same document structure, which has been tried and tested over the years to stay flexible and apply to organizations across industries:
- Clause 1: Scope
- Clause 2: Normative references
- …
- Clause 10: Improvement
- Annexes
Most of the practical requirements are contained in the annexes, which list the controls that the organization must implement, such as the three below (each illustrated with a brief code sketch after this list):
- B.6.2.4 AI system verification and validation: requires developers to implement a standardized evaluation protocol for AI models and datasets.
- B.6.2.8 AI system recording of event logs: requires developers to maintain a logging system that records production input data, output predictions, and potential outliers.
- B.7.4 Quality of data for AI systems: requires developers to define and measure training and production data quality, and to consider the impact of data bias on performance.
(If you’re interested in a solution to automate the requirements above, consider using Citadel Lens and Citadel Radar.)
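To give a flavor of control B.6.2.4, here is a minimal sketch of a release gate that validates a model’s evaluation results against fixed thresholds. The `EvalReport` fields, metric names, and thresholds are hypothetical; ISO 42001 leaves the concrete protocol to the organization.

```python
# A minimal sketch of a verification-and-validation gate in the spirit of
# control B.6.2.4. Fields, metrics, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class EvalReport:
    accuracy: float              # overall test-set accuracy
    worst_group_accuracy: float  # accuracy on the weakest data subgroup

def validate(report: EvalReport,
             min_accuracy: float = 0.90,
             min_group_accuracy: float = 0.85) -> bool:
    """Return True only if the model clears every release threshold."""
    checks = {
        "overall accuracy": report.accuracy >= min_accuracy,
        "worst-group accuracy": report.worst_group_accuracy >= min_group_accuracy,
    }
    for name, passed in checks.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return all(checks.values())

# A model that passes the overall bar but fails the subgroup bar is rejected.
validate(EvalReport(accuracy=0.93, worst_group_accuracy=0.81))
```

In practice, such a gate could run automatically before every release, with the thresholds versioned as part of the AIMS documentation.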
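Control B.6.2.8 calls for recording production events. A minimal sketch of structured prediction logging might look as follows; the JSON event schema and the confidence-based outlier rule are our own assumptions, not part of the standard.

```python
# A minimal sketch of production event logging in the spirit of control
# B.6.2.8. The event schema and outlier rule are illustrative assumptions.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai_events")

def log_prediction(features: dict, prediction: float, confidence: float) -> None:
    event = {
        "timestamp": time.time(),
        "input": features,            # production input data
        "prediction": prediction,     # output prediction
        "outlier": confidence < 0.6,  # hypothetical low-confidence flag
    }
    log.info(json.dumps(event))

log_prediction({"age": 42, "amount": 130.0}, prediction=1.0, confidence=0.55)
```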
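Finally, for control B.7.4, a data quality check can start as simply as measuring missingness and group representation, the latter serving as a crude proxy for data bias. The column names and toy data below are hypothetical.

```python
# A minimal sketch of data quality checks in the spirit of control B.7.4:
# per-column missingness plus group representation as a crude bias indicator.
import pandas as pd

df = pd.DataFrame({
    "feature_a": [1.0, 2.0, None, 4.0],
    "group": ["A", "A", "A", "B"],  # a protected or demographic attribute
    "label": [1, 0, 1, 0],
})

missing_rate = df.isna().mean()                           # data completeness
group_balance = df["group"].value_counts(normalize=True)  # representation
print(missing_rate, group_balance, sep="\n")
```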
Organization-level Requirements
These requirements apply to the whole organization and its processes:
- B.2.2 AI policy
- B.2.3 Alignment with other organizational policies
- B.2.4 Review of the AI policy
- B.3.2 AI roles and responsibilities
- B.3.3 Reporting of concerns
- B.5.2 AI system impact assessment process
- B.6.1.2 Objectives for responsible development of AI system
- B.6.1.3 Processes for responsible design and development of AI systems
- B.7.2 Data for development and enhancement of AI system
- B.7.3 Acquisition of data
- B.8.3 External reporting
- B.8.4 Communication of incidents
- B.8.5 Information for interested parties
- B.9.2 Processes for responsible use of AI
- B.9.3 Objectives for responsible use of AI system
- B.10.2 Allocating responsibilities
- B.10.3 Suppliers
- B.10.4 Customers
Project-level Requirements
These requirements apply to specific projects within the organization:
- B.4.2 Resource documentation
- B.4.3 Data resources
- B.4.4 Tooling resources
- B.4.5 System and computing resources
- B.4.6 Human resources
- B.5.3 Documentation of AI system impact assessments
- B.5.4 Assessing AI system impact on individuals and groups of individuals
- B.5.5 Assessing societal impacts of AI systems
- B.6.2.2 AI system requirements and specification
- B.6.2.3 Documentation of AI system design and development
- B.6.2.4 AI system verification and validation
- B.6.2.5 AI system deployment
- B.6.2.6 AI system operation and monitoring
- B.6.2.7 AI system technical documentation
- B.6.2.8 AI system recording of event logs
- B.7.4 Quality of data for AI systems
- B.7.5 Data provenance
- B.7.6 Data preparation
- B.8.2 System documentation and information for users
- B.9.3 Objectives for responsible use of AI system
- B.9.4 Intended use of the AI system
Does ISO 42001 cover all technical aspects of various AI processes?
ISO 42001 sits at the center of the responsible AI standards ecosystem, which includes other standards that go deeper into specific aspects of AI management, use, and development. ISO 42001 aims to strike a balance between being prescriptive about the requirements for an AIMS and remaining flexible about how the AIMS is implemented in a particular organization.
Since ISO 42001 itself does not address the details of specific AI applications, it delegates the more technical details to more narrowly focused standards and other “generally accepted frameworks”. Some examples of other standards referenced in ISO 42001 are:
- ISO 5259 series on Data Quality
- ISO 23894 AI Risk Management
- ISO 24029-1 Assessment of the robustness of neural networks
What is the process for ISO 42001 certification?
To obtain an ISO 42001 certification, an organization must successfully pass an audit conducted by a certification body. However, the ISO 42006 standard defining the requirements for such bodies is currently under development, which means that as of January 2024, no certifications can be granted yet.
In the meantime, the standard can already be used for voluntary assessments, both internal and third-party. An early gap analysis based on the published standard can significantly accelerate preparations for official certification once it becomes available.
Based on known timelines for other management system standards, the audit can be expected to take anywhere from several months to a year, depending on the number of people involved in the AI lifecycle processes; the organization’s role as an AI provider, developer, or user; and the complexity of the AIMS.
According to the current draft of ISO 42006, the audit process will likely align with the established sequence of an internal audit (initial assessment of nonconformities with a window for improvement), followed by a two-stage external audit (an accredited body reviewing relevant documentation and evidence), with annual recertification.
How Citadel AI tools help with ISO 42001 preparation
Citadel AI’s technology helps organizations streamline their AI testing and governance processes, and automate compliance with AI standards and guidelines. Our products, Lens and Radar, can:
- Automatically fulfill some of the most technically demanding requirements of ISO 42001
- Help engineering teams validate their models and datasets against international standards
- Provide easy reporting and guidance to get on the certification track as quickly as possible
Citadel AI is trusted by world-leading organizations such as the British Standards Institution. At this critical time, when AI standards and regulations are maturing, we believe we can help you streamline compliance, improve AI reliability, and navigate this evolving landscape.