AI Governance & Risk
EU AI Act compliance, NIST AI RMF, AI risk assessment, model cards, algorithmic auditing, AI incident response.
What is AI Governance & Risk?
AI governance and risk management is the discipline of establishing organizational frameworks, policies, and processes to ensure AI systems are developed and deployed responsibly, ethically, and in compliance with evolving regulations. As AI capabilities accelerate, so does the regulatory landscape — the EU AI Act, NIST AI Risk Management Framework, and sector-specific regulations are creating binding requirements for AI transparency, accountability, and risk assessment.
The EU AI Act classifies AI systems by risk level (unacceptable, high, limited, minimal) and imposes corresponding obligations. High-risk systems require conformity assessments, technical documentation, human oversight provisions, and post-market monitoring. The NIST AI RMF provides a voluntary but widely adopted framework organized around four functions: Govern, Map, Measure, and Manage — helping organizations establish AI risk management practices regardless of regulatory jurisdiction.
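The tiered structure described above can be sketched as a simple lookup. A minimal, illustrative sketch follows: the four tier names come from the Act itself, but the obligation lists here are paraphrased and non-exhaustive, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g. social scoring)
    HIGH = "high"                  # heavy pre- and post-market obligations
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # no mandatory obligations

# Illustrative (non-exhaustive) obligations per tier, paraphrased from the Act.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "conformity assessment",
        "technical documentation",
        "human oversight provisions",
        "post-market monitoring",
    ],
    RiskTier.LIMITED: ["transparency obligations (e.g. disclose AI interaction)"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

In practice, determining which tier a system falls into is itself a legal analysis (Annex III of the Act lists high-risk use cases); the mapping above only captures the consequence of that classification.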
Model cards, datasheets for datasets, and algorithmic impact assessments are becoming standard governance artifacts. They document model capabilities, limitations, intended uses, evaluation metrics, and fairness assessments. Algorithmic auditing — both internal and third-party — provides independent validation that AI systems meet their stated objectives without unacceptable bias or harm. As AI governance matures, it is becoming as fundamental to enterprise risk management as cybersecurity governance was a decade ago.
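To make the model card artifact concrete, here is a minimal sketch of one as a structured record. The field names loosely follow Mitchell et al.'s "Model Cards for Model Reporting" paper; real templates (for example, Hugging Face model cards) differ, and the example model and its values are hypothetical.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal model card capturing the governance fields discussed above."""
    model_name: str
    intended_uses: list[str]
    out_of_scope_uses: list[str]
    limitations: list[str]
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    fairness_assessment: str = ""

    def to_json(self) -> str:
        """Serialize the card for publication alongside the model."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical example: a credit-risk model documented for governance review.
card = ModelCard(
    model_name="loan-default-classifier-v2",
    intended_uses=["pre-screening of loan applications with human review"],
    out_of_scope_uses=["fully automated credit decisions"],
    limitations=["trained on 2019-2023 data; may not reflect current market"],
    evaluation_metrics={"auc": 0.87, "demographic_parity_gap": 0.03},
    fairness_assessment="Parity gap measured across protected attributes.",
)
```

The value of the artifact is less the schema than the discipline: out-of-scope uses and limitations force explicit statements that auditors and regulators can later check against actual deployment.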
Why it matters
Regulatory frameworks like the EU AI Act are creating legally binding AI obligations with significant penalties. Organizations without AI governance programs face regulatory risk, reputational damage, and potential market exclusion.
AI governance provides the policy and compliance framework within which all AI security and safety activities operate. It connects technical AI risk controls to board-level accountability, regulatory compliance, and stakeholder trust.
Govern & Direct
Set direction, own risk, shape policy, govern AI/quantum programs, work with people and narrative.
Other domains in this layer
Key topics
Standards and frameworks
Curated resources
Authoritative sources we ground AI Governance & Risk questions in — frameworks, research, guides, and tools.
NIST AI Risk Management Framework (AI 100-1)
The authoritative framework for managing AI risks. Defines four core functions: Govern, Map, Measure, Manage. Essential reading for anyone building or deploying AI systems.
NIST Cybersecurity Framework 2.0
Updated cybersecurity framework with six core functions: Govern, Identify, Protect, Detect, Respond, Recover.
EU AI Act
The European Union's comprehensive AI regulation. Classifies AI systems by risk level and sets requirements for high-risk systems.
ISO/IEC 42001 — AI Management System
International standard for establishing and maintaining an AI management system. Includes 39 controls across 10 areas.
Google Secure AI Framework (SAIF)
Google's conceptual framework for securing AI systems. Covers supply chain, data governance, and deployment security.
World Economic Forum Global Cybersecurity Outlook
Annual survey of cyber leaders on resilience, workforce, geopolitics, and emerging tech including AI. Excellent for leadership and strategy questions.
NIST AI RMF 1.0 + Playbook
The primary AI governance framework in the US context, with a companion Playbook of suggested actions. Questions should test practical application of Govern/Map/Measure/Manage, not just recall.
EU AI Act — High-Risk System Requirements
Covers the obligations for high-risk systems specifically: conformity assessments, technical documentation requirements, post-market monitoring, and fundamental rights impact assessments. Supports detailed compliance questions.
Responsible AI Institute — RAI Certification
Certification program for responsible AI. Assessment criteria across fairness, explainability, accountability, robustness. Emerging industry certification.
Stanford HAI — AI Index Report (Annual)
Comprehensive annual data on AI progress: research output, investment, policy, public opinion, technical performance. The best source for quantitative AI landscape questions.
Gartner Hype Cycle for AI (2024)
Positions AI security technologies on the hype cycle. Useful for questions about technology maturity, adoption timelines, and distinguishing hype from operational readiness.
Gartner Market Guide for AI Trust, Risk and Security Management (AI TRiSM)
Market categorization of AI security tools: model monitoring, adversarial robustness, privacy, compliance. Useful for understanding the vendor landscape without favoring specific vendors.
Certifications that signal this domain
Credentials whose blueprint meaningfully covers this domain. Core means centrally covered; also touched means present in the blueprint but not the primary focus.
Core coverage
Advanced in AI Audit
ISACA specialization for AI Audit. First certification worldwide specifically for auditing AI systems. Requires active CISA (or comparable audit certification). Three domains: AI Governance & Risk, AI Operations, AI Auditing.
Advanced in AI Risk
ISACA specialization for AI risk management, in beta since April 2026. Requires an active ISACA or equivalent certification. Focuses on AI Risk Governance, AI Risk Program Management, and AI Life Cycle Risk Management.
Advanced in AI Security Management
ISACA specialization for AI Security Management. Requires active CISM or CISSP. Focus on AI Governance & Program Management, AI Risk Management, and AI Technologies & Controls. For security leaders managing AI risks.
Artificial Intelligence Governance Professional
AI risk, governance, and regulatory literacy (EU AI Act, NIST AI RMF).
Certified AI Practitioner
CertNexus certification for AI/ML practitioners. First AI certification with ANAB/ISO 17024 accreditation. Vendor-neutral, focused on ML engineering (Supervised/Unsupervised Learning, Deep Learning, NLP). Not security-specific, but AI literacy foundation for security professionals.
Certified Responsible AI Governance & Ethics
EC-Council certification for responsible AI governance and ethics. Focus on oversight, risk management, regulatory alignment (NIST AI RMF, ISO 42001), and accountability across the AI lifecycle. Newly launched in February 2026.
ISACA Certified in Risk of Artificial Intelligence (emerging)
AI risk management and governance — emerging blueprint, expect revisions.
PECB ISO/IEC 42001 Lead Auditor
PECB certification for auditing AI Management Systems according to ISO/IEC 42001. Complementary to Lead Implementer. Growing demand through third-party AI audits and regulatory requirements.
PECB ISO/IEC 42001 Lead Implementer
The PECB ISO/IEC 42001 Lead Implementer certificate qualifies professionals to establish and lead an AI Management System (AIMS) according to ISO/IEC 42001 within an organization, analogous to the well-known ISO 27001 Lead Implementer in the ISMS domain. It is the implementation-oriented counterpart to the Lead Auditor and targets individuals responsible for AIMS rollout. Strengths: strong anchoring in the ISO framework, international recognition as a compliance reference for AI governance, and a practical focus on project management and implementation. Weaknesses: PECB is a commercial provider with less market recognition than IAPP or CompTIA, and the certificate requires substantial professional experience, so it is not an entry-level credential. The market for ISO/IEC 42001-compliant AIMS implementations is still young, which currently limits demand for the certificate.
Also touched
Certified Chief Information Security Officer
Executive leadership — governance, program mgmt, finance, and strategic planning for security.
Certified Information Privacy Manager
Running a privacy program end-to-end.
Certified Information Privacy Professional / Canada
Canadian privacy-law expertise — PIPEDA, provincial regimes (Quebec Law 25, Alberta/BC PIPA), and federal sectoral rules.
Certified Information Privacy Professional / Europe
GDPR and European privacy law expertise.
Certified Information Privacy Professional / United States
US federal and state privacy-law expertise.
Certified in Risk and Information Systems Control
Enterprise risk identification, assessment, and response + IT controls.
Browse all certifications → pick a cert on the interactive map to highlight every domain it covers.
Education and certifications
More in Cybersecurity of AI Systems
See how your AI Governance & Risk skills stack up
301 questions available. Compete head-to-head or run a quick speed quiz to benchmark yourself.