
Rajiv Dattani

Feb 15, 2025

4 min read

Framework comparison


How AIUC-1 compares to NIST AI RMF, EU AI Act, ISO 42001 and more

AIUC-1 is designed to be:

  • Customer-focused. We prioritize requirements that enterprise customers demand and vendors can pragmatically meet, increasing confidence without adding unnecessary compliance burden.

  • Adaptable. We update AIUC-1 as regulation, AI progress, and real-world deployment experience evolve.

  • Transparent. We keep a public changelog and share our lessons.

  • Forward-looking. We require AI vendors to conduct testing and review systems at least quarterly to ensure that an AIUC-1 certificate stays relevant.

  • Insurance-enabling. We emphasize the risks that lead to direct harms and financial losses.

  • Predictable. We review the standard in partnership with our technical contributors and push updates on January 1, April 1, July 1, and October 1 of each year.

In practice, this means that AIUC-1 builds on other AI frameworks including the EU AI Act, the NIST AI RMF, ISO 42001, and MITRE ATLAS. The regular update cadence means AIUC-1 also reflects changes to these frameworks.

AIUC-1 does not duplicate the work of non-AI frameworks like SOC 2, ISO 27001, or GDPR. Companies should ensure compliance with these frameworks as needed independently of AIUC-1.

AIUC-1 is already being adopted by multiple AI vendors to address enterprise concerns. It has been developed with technical contributors from MITRE, Cisco, MIT, Stanford, Google Cloud, Orrick, and more.

AIUC-1 is built on:

✅ EU AI Act

✅ NIST AI RMF

✅ ISO 42001

✅ MITRE ATLAS

✅ OWASP Top Ten

✅ Regional AI legislation

✅ Sector-specific AI legislation

✅ OECD AI Principles

AIUC-1 does not duplicate:

- SOC 2

- EU GDPR

- AIDA

- ISO 27001

- CSA AI Controls Matrix
More detail on how each of these frameworks is addressed by AIUC-1 is available in the “crosswalk” section of each requirement.


AIUC-1 operationalizes emerging AI legislation and best practices 


EU AI Act

EU regulation classifying AI systems by risk levels (minimal, limited, high, unacceptable) with corresponding compliance obligations

Operationalizes the EU AI Act by aligning with its requirements. Certification against AIUC-1 is a strong step towards compliance with the EU AI Act, as it:

+ Enables compliance for minimal and limited risk systems

+ Enables compliance for high-risk systems only if specific control activities are met (AIUC can help guide AI companies through this process)

+ Provides documentation for internal conformity assessments of high-risk systems, as required in Annex VI

NIST AI RMF

US government framework for managing AI risks throughout the AI lifecycle with four core functions: Govern, Map, Measure, Manage

Operationalizes NIST’s AI RMF. Certification against AIUC-1: 

+ Translates NIST’s high-level actions into specific, auditable controls

+ Provides concrete implementation guidance for key areas such as harmful output prevention, third-party testing, and risk management practices

ISO 42001

International standard for AI management systems (AIMS) covering responsible AI development and deployment

Aligns with ISO 42001. Certification against AIUC-1:

+ Incorporates the majority of controls from ISO 42001

+ Translates ISO's management system approach into concrete, auditable requirements

+ Extends ISO 42001 with third-party testing requirements for e.g. hallucinations and jailbreak attempts

+ Addresses additional key concerns such as AI failure plans and AI-specific system security

MITRE ATLAS

Knowledge base of adversarial tactics, techniques and mitigation strategies for machine learning systems, similar to MITRE ATT&CK for cybersecurity

Integrates MITRE ATLAS. Certification against AIUC-1: 

+ Incorporates ATLAS mitigation strategies in requirements and controls

+ Strengthens robustness against the adversarial tactics and techniques identified in ATLAS

+ Goes beyond ATLAS’s focus on security alone




OWASP Top Ten for LLM and GenAI

Curated list of the most critical security threats to LLM and GenAI systems

Integrates OWASP’s Top Ten for LLM and GenAI. Certification against AIUC-1: 

+ Addresses the Top Ten threats with concrete requirements and controls

+ Goes beyond OWASP’s focus on security alone

Regional US regulation

E.g. California SB-1001, New York City Local Law 144, Colorado AI Act

Simplifies compliance with regional regulation. AIUC can help guide AI companies through the process of meeting California's SB-1001, Colorado's AI Act, and New York City Local Law 144 via optional requirements.

AIUC-1 already addresses top concerns in emerging regional regulations such as discrimination and bias, human-in-the-loop, and data handling.

Sector-specific regulation

E.g. HIPAA, Fair Credit Reporting Act, Fair Housing Act, FTC guidance on AI & algorithms

Simplifies compliance with AI requirements in sector-specific regulation. Certification against AIUC-1: 

+ Prepares organizations to comply with e.g. FTC guidance on AI & algorithms

+ Addresses top concerns in sector-specific regulations such as discrimination and bias, human-in-the-loop, monitoring and logging, third-party interactions, and data handling in base requirements

+ Offers AI companies optional add-on requirements for relevant use cases (e.g. for financial transactions, PII handling) 

OECD AI Principles

First intergovernmental AI standard (2019, updated 2024) with five principles for trustworthy AI adopted by 47+ countries

Operationalizes OECD’s AI Principles. Certification against AIUC-1: 

+ Translates OECD’s five principles into concrete, auditable requirements

+ Addresses additional key areas such as third-party testing, AI failure plans and adversarial resilience

The following frameworks are outside the scope of AIUC-1 

Ensuring that AIUC-1 does not duplicate existing work


SOC 2

Leading cybersecurity standard

Certification against AIUC-1:

+ Extends SOC 2 Security controls specifically for AI systems (e.g. jailbreak attempts)

+ Extends SOC 2 Privacy controls specifically for AI systems (e.g. data used for model training) 

+ Extends SOC 2 Availability controls specifically for AI systems (e.g. system reliability/hallucinations)

+ Avoids duplicating SOC 2's existing requirements on general cybersecurity best practices

Additional ISO standards, including ISO 27001 and ISO 42006

International standards for e.g. information security management systems (ISMS) 

Certification against AIUC-1:

+ Focuses on ISO 42001, which is specific to AI systems

+ Extends several ISO 27001 controls into the AI domain, including the confidentiality-integrity-availability (CIA) triad

AIUC is also following the development of ISO 42006 closely.

EU GDPR

European data protection regulation with AI-relevant provisions on automated decision-making, profiling, and data subject rights

AIUC-1 does not duplicate GDPR; companies should ensure GDPR compliance independently of AIUC-1.

AIDA

Canada's proposed Artificial Intelligence and Data Act regulating AI systems based on impact assessments and risk mitigation

AIDA has not yet been passed. AIUC-1 incorporates similar principles of risk mitigation, risk assessment, transparency, and incident notification, so AIUC can provide guidance on meeting AIDA once it passes.

CSA AI Controls Matrix

Cloud Security Alliance's AI Controls Matrix providing security controls framework specifically designed for AI/ML systems

Certification against AIUC-1: 

+ Addresses key controls for AI vendors from the AICM such as adversarial robustness, system transparency, and documentation of criteria for cloud & on-prem processing

+ Carries a significantly lower compliance burden than CSA's AICM due to its targeted focus on top enterprise AI concerns

+ Avoids duplicating controls in areas where CSA is industry-leading such as data center infrastructure, physical server security, and other domains outside of the AIUC-1 core scope

AIUC-1 is continuously updated, in collaboration with our network of Technical Contributors and experts from leading institutions in AI safety, security and reliability, as new legislation, frameworks, threat patterns and best practices emerge. This keeps the standard current and comprehensive, and eases compliance with applicable frameworks.

Move with confidence
