Announcement

Rune Kvist

Jul 17, 2025

3 min read

Launching AIUC-1: 'SOC 2 for AI agents'

AIUC-1 is the world’s first AI agent security, safety and reliability standard. Think ‘SOC 2 for AI agents’. AIUC-1 speeds up AI adoption by helping enterprises answer: “Can we trust this AI agent?” 

Today, enterprises struggle to adopt AI with confidence. AI agents are black boxes, the technology moves fast, and there is no established playbook for evaluating AI risks. As a result, enterprises either sit in analysis paralysis - or move forward recklessly and make headlines with their AI agent failures. For example, Air Canada's chatbot hallucinated a refund policy, Google's Gemini produced images of people of color in Nazi uniforms, and Workday's AI recruiter discriminated against candidates.

AIUC-1 is designed to let enterprises adopt AI with confidence. The standard covers all major areas of enterprise risk: data & privacy, security, safety, reliability, accountability and society. That includes, for example, hallucinations, harmful outputs, jailbreaks, IP infringement, training data practices, red-teaming and incident response. AIUC-1 was built with trusted institutions including PwC, Orrick, MITRE, Stanford and MIT, and informed by feedback from 500+ enterprise risk leaders such as CISOs from Google Cloud and MongoDB. AIUC-1 operationalizes principles found across high-level frameworks like NIST's AI RMF, the EU AI Act, and MITRE ATLAS.

For more detail on each of the principles:

  1. Data & Privacy: Protect against data leakage, IP leakage, and training on user data without consent

  2. Safety: Prevent harmful AI outputs and brand risk through testing, monitoring and safeguards

  3. Security: Protect against adversarial attacks like jailbreaks and prompt injections, as well as unauthorized tool calls

  4. Reliability: Prevent hallucinations and unreliable tool calls to business systems

  5. Accountability: Assign accountability, enforce oversight, create emergency responses and vet suppliers

  6. Society: Prevent AI from enabling societal harm through cyberattacks or national security risks

To certify against the standard, AI companies must implement 50+ technical, operational and legal safeguards - and submit their AI to frequent, rigorous, third-party technical testing to show the safeguards’ effectiveness. The testing includes advanced adversarial attempts to jailbreak models, create harmful content or leak data based on the latest AI security research. Like SOC 2, the certificate comes with an independent audit report.

The core benefits for enterprises are:
  • Confidence. All controls are designed to let enterprises adopt with confidence: independent audits and technical testing show whether safeguards actually work.

  • Clarity. AIUC-1 captures how to detect, prevent and mitigate all major enterprise risks.

  • Compliance. AIUC-1 enables compliance with existing and new regulatory frameworks.

  • Current. AIUC-1 adapts regularly as new AI capabilities and threats emerge.

  • Speed. AIUC-1 attestation reports gather most decision-relevant data in one place.

AIUC-1 is already helping enterprises adopt from high-growth AI companies today. 

Move with confidence
