Pillar C: Cybersecurity of AI Systems · C7

AI Governance & Risk

EU AI Act compliance, NIST AI RMF, AI risk assessment, model cards, algorithmic auditing, AI incident response.

Part of Pillar C: Cybersecurity of AI Systems, which groups the disciplines that share methods, tools, and threat models with AI Governance & Risk.

What is AI Governance & Risk?

AI governance and risk management is the discipline of establishing organizational frameworks, policies, and processes to ensure AI systems are developed and deployed responsibly, ethically, and in compliance with evolving regulations. As AI capabilities accelerate, so does the regulatory landscape — the EU AI Act, NIST AI Risk Management Framework, and sector-specific regulations are creating binding requirements for AI transparency, accountability, and risk assessment.

The EU AI Act classifies AI systems by risk level (unacceptable, high, limited, minimal) and imposes corresponding obligations. High-risk systems require conformity assessments, technical documentation, human oversight provisions, and post-market monitoring. The NIST AI RMF provides a voluntary but widely adopted framework organized around four functions: Govern, Map, Measure, and Manage — helping organizations establish AI risk management practices regardless of regulatory jurisdiction.
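To make the two structures above concrete, here is a minimal sketch in Python of how an organization might encode the EU AI Act's four risk tiers and attach obligations to each. The tier names follow the Act; the obligation lists are illustrative and not exhaustive, and the identifiers (AIActRiskTier, obligations_for) are our own assumptions rather than an official schema.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk tiers used by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # heaviest obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no mandatory obligations

# Illustrative, non-exhaustive obligations per tier; the Act itself is the
# authoritative source for the full requirements.
OBLIGATIONS = {
    AIActRiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the market"],
    AIActRiskTier.HIGH: [
        "conformity assessment",
        "technical documentation",
        "human oversight provisions",
        "post-market monitoring",
    ],
    AIActRiskTier.LIMITED: ["transparency notice (e.g. disclose AI interaction)"],
    AIActRiskTier.MINIMAL: [],
}

def obligations_for(tier: AIActRiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for obligation in obligations_for(AIActRiskTier.HIGH):
        print(obligation)
```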

Model cards, datasheets for datasets, and algorithmic impact assessments are becoming standard governance artifacts. They document model capabilities, limitations, intended uses, evaluation metrics, and fairness assessments. Algorithmic auditing — both internal and third-party — provides independent validation that AI systems meet their stated objectives without unacceptable bias or harm. As AI governance matures, it is becoming as fundamental to enterprise risk management as cybersecurity governance was a decade ago.
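As a rough illustration of what such an artifact captures, the sketch below models a model card's core fields as a Python dataclass. The field names and example values are hypothetical and chosen only for illustration; real model cards follow published templates rather than this ad-hoc structure.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model card record; fields mirror the artifacts described above."""
    model_name: str
    intended_uses: list[str]
    out_of_scope_uses: list[str]
    limitations: list[str]
    evaluation_metrics: dict[str, float]             # e.g. {"accuracy": 0.91}
    fairness_assessment: dict[str, float] = field(default_factory=dict)
    # e.g. a disparity metric per protected attribute

# Hypothetical example values, for illustration only.
card = ModelCard(
    model_name="loan-approval-classifier-v3",
    intended_uses=["pre-screening of consumer credit applications"],
    out_of_scope_uses=["final lending decisions without human review"],
    limitations=["trained on 2019-2023 data; weaker on thin-file applicants"],
    evaluation_metrics={"accuracy": 0.91, "auc": 0.95},
    fairness_assessment={"demographic_parity_difference_gender": 0.03},
)

print(card.model_name, card.evaluation_metrics)
```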

Why it matters

Regulatory frameworks like the EU AI Act are creating legally binding AI obligations with significant penalties. Organizations without AI governance programs face regulatory risk, reputational damage, and potential market exclusion.

AI governance provides the policy and compliance framework within which all AI security and safety activities operate. It connects technical AI risk controls to board-level accountability, regulatory compliance, and stakeholder trust.

Key topics

EU AI Act risk classification and obligations
NIST AI Risk Management Framework (AI RMF 1.0)
Model cards and documentation requirements
Algorithmic impact assessments
Third-party AI auditing and assurance
AI ethics boards and governance structures
Bias detection and fairness metrics
Transparency and explainability requirements
AI incident reporting and post-market monitoring
Sector-specific AI regulations (healthcare, finance, HR)


Curated resources

Authoritative sources we ground AI Governance & Risk questions in — frameworks, research, guides, and tools.

NIST · framework

NIST AI Risk Management Framework (AI 100-1)

The authoritative framework for managing AI risks. Defines four core functions: Govern, Map, Measure, Manage. Essential reading for anyone building or deploying AI systems.

NIST · framework

NIST Cybersecurity Framework 2.0

Updated cybersecurity framework with six core functions: Govern, Identify, Protect, Detect, Respond, Recover.

European Union · framework

EU AI Act

The European Union's comprehensive AI regulation. Classifies AI systems by risk level and sets requirements for high-risk systems.

ISO · framework

ISO/IEC 42001 — AI Management System

International standard for establishing and maintaining an AI management system. Includes 39 controls across 10 areas.

Google · framework

Google Secure AI Framework (SAIF)

Google's conceptual framework for securing AI systems. Covers supply chain, data governance, and deployment security.

WEF · research

World Economic Forum Global Cybersecurity Outlook

Annual survey of cyber leaders on resilience, workforce, geopolitics, and emerging tech including AI. Excellent for leadership and strategy questions.

NIST · framework

NIST AI RMF 1.0 + Playbook

(See cross-cutting.md for details.) The primary AI governance framework for US context. Questions should test practical application of Govern/Map/Measure/Manage, not just recall.

European Parliament · framework

EU AI Act — High-Risk System Requirements

(See cross-cutting.md.) For C7 specifically: conformity assessments, technical documentation requirements, post-market monitoring, fundamental rights impact assessments. Detailed compliance questions.

Responsible AI Institute · research

Responsible AI Institute — RAI Certification

Certification program for responsible AI. Assessment criteria across fairness, explainability, accountability, robustness. Emerging industry certification.

Stanford Institute for Human-Centered AI · research

Stanford HAI — AI Index Report (Annual)

Comprehensive annual data on AI progress: research output, investment, policy, public opinion, technical performance. The best source for quantitative AI landscape questions.

Gartner · research

Gartner Hype Cycle for AI (2024)

Positions AI security technologies on the hype cycle. Useful for questions about technology maturity, adoption timelines, and distinguishing hype from operational readiness.

Gartner · research

Gartner Market Guide for AI Trust, Risk and Security Management (AI TRiSM)

Market categorization of AI security tools: model monitoring, adversarial robustness, privacy, compliance. Useful for understanding the vendor landscape without favoring specific vendors.

Certifications that signal this domain

Credentials whose blueprint meaningfully covers this domain. Core means centrally covered; also touched means present in the blueprint but not the primary focus.

Core coverage

AAIA · Expert · ISACA · Official page →

Advanced in AI Audit

ISACA specialization for AI Audit. First certification worldwide specifically for auditing AI systems. Requires active CISA (or comparable audit certification). Three domains: AI Governance & Risk, AI Operations, AI Auditing.

AAIR · Expert · ISACA · Official page →

Advanced in AI Risk

ISACA specialization for AI risk management, in beta since April 2026. Requires an active ISACA or equivalent certification. Focus areas: AI Risk Governance, AI Risk Program Management, and AI Life Cycle Risk Management.

AAISM · Expert · ISACA · Official page →

Advanced in AI Security Management

ISACA specialization for AI Security Management. Requires active CISM or CISSP. Focus on AI Governance & Program Management, AI Risk Management, and AI Technologies & Controls. For security leaders managing AI risks.

AIGP · Professional · IAPP · Official page →

Artificial Intelligence Governance Professional

AI risk, governance, and regulatory literacy (EU AI Act, NIST AI RMF).

CAIP · Professional · CertNexus · Official page →

Certified AI Practitioner

CertNexus certification for AI/ML practitioners. First AI certification with ANAB/ISO 17024 accreditation. Vendor-neutral and focused on ML engineering (supervised/unsupervised learning, deep learning, NLP). Not security-specific, but a useful AI literacy foundation for security professionals.

CRAGE · Professional · EC-Council · Official page →

Certified Responsible AI Governance & Ethics

EC-Council certification for responsible AI governance and ethics, launched in February 2026. Focus on oversight, risk management, regulatory alignment (NIST AI RMF, ISO 42001), and accountability across the AI lifecycle.

CRAI · Professional · ISACA · Official page →

ISACA Certified in Risk of Artificial Intelligence (emerging)

AI risk management and governance — emerging blueprint, expect revisions.

ISO 42001 LA · Professional · PECB · Official page →

PECB ISO/IEC 42001 Lead Auditor

PECB certification for auditing AI Management Systems against ISO/IEC 42001. The complementary credential to the Lead Implementer. Demand is growing as third-party AI audits and regulatory requirements expand.

ISO 42001 LI · Professional · PECB · Official page →

PECB ISO/IEC 42001 Lead Implementer

The PECB ISO/IEC 42001 Lead Implementer certificate qualifies professionals to establish and lead an AI Management System (AIMS) according to the international standard ISO/IEC 42001 within an organization, analogous to the well-known ISO 27001 Lead Implementer in the ISMS domain. It is the implementation-oriented counterpart to the Lead Auditor and targets individuals responsible for AIMS rollout. Strengths: strong anchoring in the ISO framework, international recognition as a compliance reference for AI governance, and a practical focus on project management and implementation. Weaknesses: PECB is a commercial provider with less market recognition than IAPP or CompTIA, and the certificate requires substantial professional experience, so it is not an entry-level credential. The market for ISO 42001-compliant AIMS implementations is still young, which currently limits demand for the certificate.

Also touched

CCISO · Leadership · EC-Council · Official page →

Certified Chief Information Security Officer

Executive leadership — governance, program management, finance, and strategic planning for security.

CIPM · Professional · IAPP · Official page →

Certified Information Privacy Manager

Running a privacy program end-to-end.

CIPP/C · Professional · IAPP · Official page →

Certified Information Privacy Professional / Canada

Canadian privacy-law expertise — PIPEDA, provincial regimes (Quebec Law 25, Alberta/BC PIPA), and federal sectoral rules.

CIPP/E · Professional · IAPP · Official page →

Certified Information Privacy Professional / Europe

GDPR and European privacy law expertise.

CIPP/US · Professional · IAPP · Official page →

Certified Information Privacy Professional / United States

US federal and state privacy-law expertise.

CRISC · Professional · ISACA · Official page →

Certified in Risk and Information Systems Control

Enterprise risk identification, assessment, and response, plus IT controls.
