Announcement
Rune Kvist
Jul 17, 2025
3 min read
Launching AIUC-1: 'SOC 2 for AI agents'
AIUC-1 is the world’s first AI agent security, safety and reliability standard. Think ‘SOC 2 for AI agents’. AIUC-1 speeds up AI adoption by helping enterprises answer: “Can we trust this AI agent?”
AIUC-1 is the world’s first AI agent security, safety and reliability standard. Think ‘SOC 2 for AI agents’. AIUC-1 speeds up AI adoption by helping enterprises answer: “Can we trust this AI agent?”
Today, enterprises struggle to adopt AI with confidence. AI agents are black boxes, the technology moves fast, and there is no established playbook for evaluating AI risks. As a result, enterprises either sit in analysis paralysis - or move forward recklessly, generating news headlines with their AI agent failures. For example, AirCanada’s chatbot hallucinated their refund policy, Google’s Gemini produced images of people of color in Nazi uniforms and Workday’s AI recruiter discriminated against candidates.
AIUC-1 is designed to let enterprises adopt AI with confidence. The standard covers all major areas of enterprise risk across data & privacy, security, safety, reliability, accountability and society. For example, hallucinations, harmful outputs, jailbreaks, IP infringement, training data practices, red-teaming and incident response. AIUC-1 was built with trusted institutions like PwC, Orrick, MITRE, Stanford, MIT and based on feedback from 500+ enterprise risk leaders such as CISOs from Google Cloud and MongoDB. AIUC-1 operationalizes principles found across high-level frameworks like NIST’s AI RMF, EU AI Act, and MITRE’s ATLAS.
For more detail on each of the principles:
Principle | Description |
---|---|
| Protect against data leakage, IP leakage, and training on user data without consent |
| Prevent harmful AI outputs and brand risk through testing, monitoring and safeguards |
| Protect against adversarial attacks like jailbreaks and prompt injections as well as unauthorized tool calls |
| Prevent hallucinations and unreliable tool calls to business systems |
| Assign accountability, enforce oversight, create emergency responses and vet suppliers |
| Prevent AI from enabling societal harm through cyberattacks or national security risks |
To certify against the standard, AI companies must implement 50+ technical, operational and legal safeguards - and submit their AI to frequent, rigorous, third-party technical testing to show the safeguards’ effectiveness. The testing includes advanced adversarial attempts to jailbreak models, create harmful content or leak data based on the latest AI security research. Like SOC 2, the certificate comes with an independent audit report.
The core benefits for enterprises are:
Confidence. All controls are designed to enable enterprises to adopt with confidence. AIUC-1 enables independent audits. Technical testing shows if safeguards work.
Clarity. AIUC-1 captures how to detect, prevent and mitigate all major enterprise risks.
Compliance. AIUC-1 enables compliance with existing and new regulatory frameworks.
Current. AIUC-1 adapts regularly as new AI capabilities and threats emerge.
Speed. AIUC-1 attestation reports gather most decision-relevant data in one place.
AIUC-1 is already helping enterprises adopt from high-growth AI companies today.
Latest articles
Research
MIT x AIUC-1: AI-Proofing The Board and C-suite
Dr. Keri Pearlson, Principal Research Scientist at MIT, launches a multi-company research project..
Read more
Research
Stanford Trustworthy AI Research x AIUC partner on AIUC-1
Stanford Professor Dr. Sanmi Koyejo on real-world AI risk for enterprises
Read more
Research
Orrick x AIUC partner on AIUC-1
AIUC and top AI law firm Orrick, Herrington & Sutcliffe have partnered to create AIUC-1.
Read more