AI Governance & Risk — Domain 1
The largest exam domain (33%) covers the governance structures, frameworks, regulations, and risk approaches used to deploy AI responsibly and audit it effectively.
NIST AI Risk Management Framework (AI RMF 1.0)
| Aspect | Detail |
|---|---|
| Published by | U.S. National Institute of Standards and Technology (NIST), January 2023 |
| Structure | Core (4 functions) + Profiles (current state vs. target) + Tiers (1–4, indicating rigor of practice) |
| Voluntary | Yes — not legally binding; designed for any organization developing or deploying AI |
| AI Trustworthiness | Seven characteristics: Valid & reliable, Safe, Secure & resilient, Accountable & transparent, Explainable & interpretable, Privacy-enhanced, Fair with harmful bias managed |
| GOVERN vs. the others | GOVERN is foundational — it's the only function that underpins and enables the other three (MAP, MEASURE, MANAGE) |
| Key companion document | NIST AI RMF Playbook — provides informative actions for each function subcategory |
EU AI Act — Risk-Based Classification
Four tiers scale obligations to the risk an AI system poses: Unacceptable (prohibited outright), High Risk (conformity assessment, registration, documentation), Limited (transparency duties), and Minimal (largely unregulated).
AI Governance Frameworks & Standards
Key frameworks, standards, and principles that form the backbone of AI governance globally — all frequently tested in the AAIA.
OECD's 5 AI Principles
1. Inclusive growth, sustainable development, and well-being
2. Human-centred values and fairness
3. Transparency and explainability
4. Robustness, security, and safety
5. Accountability
| Framework | Type | Mandatory? | Primary Focus | Key Concept |
|---|---|---|---|---|
| NIST AI RMF | Risk framework | No (US voluntary) | AI risk management lifecycle | GOVERN → MAP → MEASURE → MANAGE |
| ISO/IEC 42001 | Mgmt system standard | No (certifiable) | AI management system (AIMS) | AI System Impact Assessment (AISIA) |
| EU AI Act | Regulation (law) | Yes (EU) | Risk-based AI classification | 4 risk tiers; conformity assessment for High Risk |
| OECD AI Principles | Principles / policy | No (normative) | Values for trustworthy AI | 5 principles, adopted by 42+ countries |
| IEEE 7000 Series | Technical standards | No | Ethics in engineering design | Value Elicitation; ethics in SDLC |
AI Risk Management
Understanding the types, sources, and management approaches for AI-specific risks — from model failure to third-party exposure.
Six Dimensions Where AI Risk Differs from Traditional IT Risk
| Dimension | Traditional IT Risk | AI-Specific Risk |
|---|---|---|
| Determinism | Systems behave predictably given same inputs | AI outputs may vary; probabilistic and context-dependent |
| Explainability | Logic is auditable in code | Complex models (deep learning) may be "black boxes" |
| Change management | Updates are planned, version-controlled | Models can drift silently with no code change |
| Data dependency | Data integrity is important but bounded | Training data quality fundamentally shapes all outputs |
| Bias risk | Limited — systems do what they're programmed to do | High — bias in data or design produces systematically unfair outcomes |
| Emergent behavior | Not applicable | AI may develop unexpected behaviors not in original design |
AI Ethics, Privacy & Data Governance
The ethical and data governance pillars of responsible AI — fairness, transparency, accountability, privacy by design, and data quality.
| Area | Key Audit Question | What to Look For |
|---|---|---|
| Fairness | Has the organization tested for demographic disparities in model outputs? | Bias testing reports, fairness metrics, remediation plans |
| Transparency | Can the organization explain how the AI model works to affected stakeholders? | Model cards, system documentation, explainability tools (LIME/SHAP) |
| Accountability | Is there a named AI owner responsible for model performance and compliance? | RACI matrices, AI governance committee charters, role descriptions |
| Privacy | Does training data collection comply with applicable privacy laws? | Privacy impact assessments (PIAs), consent records, data licensing |
| Human oversight | Are humans able to review, override, or reject AI decisions? | Human-in-the-loop controls, override logs, escalation procedures |
| Data governance | Is training data quality assessed and documented before model training? | Data quality reports, lineage documentation, version control for datasets |
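The fairness row above can be made concrete with a simple selection-rate comparison. The sketch below applies the four-fifths rule, a common screening heuristic for disparate impact; the group names, decision data, and threshold are all illustrative, not drawn from any real audit.

```python
# Minimal disparate-impact screen using the "four-fifths rule":
# each group's selection rate should be at least 80% of the
# highest group's rate. All data below is hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 model decisions."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # True = passes; False = selection rate below 80% of the best group's.
    return {g: r / best >= threshold for g, r in rates.items()}

decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 selected = 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 selected = 0.25
}
print(four_fifths_check(decisions))
# group_b's rate is one third of group_a's, well under 80% -> flagged
```

An auditor would expect this kind of computation, run against real model outputs rather than toy data, to appear in the organization's bias testing reports.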
Memory Hooks
High-yield mnemonics and patterns to lock in AI Governance & Risk concepts for the AAIA.
| Fact | Answer |
|---|---|
| The only certifiable ISO standard for AI management systems | ISO/IEC 42001 (published Dec 2023) |
| NIST AI RMF foundational function | GOVERN — underpins Map, Measure, and Manage |
| First intergovernmental AI standard | OECD AI Principles (2019), adopted by 42+ countries |
| EU AI Act: highest risk tier (not prohibited) | High Risk — requires conformity assessment, registration, documentation |
| GDPR provision most relevant to consequential AI decisions | Article 22 — right not to be subject to solely automated decisions with significant effects (entails human review) |
| AI model degrades with no code changes — what is this? | Model drift (data drift / concept drift) |
| AI risk that the acquiring org retains even when using vendors | Third-party AI risk — accountability cannot be outsourced |
| Fairness impossibility theorem | Cannot satisfy all fairness metrics simultaneously — must make a documented, defensible choice |
| Document showing a model's intended use, limitations, and performance | Model card |
| Privacy built into AI architecture from design, not added later | Privacy by Design (Ann Cavoukian, 7 principles) |
Flashcards & Study Advisor
Question-and-answer study cards, followed by Study Advisor tips for targeted guidance by topic area.
What are the 4 functions of the NIST AI RMF, and which is foundational?
GOVERN, MAP, MEASURE, MANAGE. GOVERN is foundational — it establishes the culture, policies, and accountability that enable the other three. Mnemonic: "Good Managers Measure More."
Name the 4 EU AI Act risk tiers and give one example for each.
Unacceptable (banned — social scoring), High Risk (conformity assessment — recruitment AI, credit scoring), Limited (transparency — chatbots, deepfakes), Minimal (voluntary — spam filters, game AI).
What is ISO/IEC 42001 and how does it differ from NIST AI RMF?
ISO 42001 is a certifiable AI Management System (AIMS) standard (Dec 2023) — like ISO 27001 for AI. NIST AI RMF is a voluntary risk framework, not certifiable. ISO 42001 = structured management system; NIST = risk lifecycle approach.
What is the FATE framework, and what is the difference between transparency and explainability?
FATE = Fairness, Accountability, Transparency, Explainability. Transparency = can you explain how the system works (documentation, process)? Explainability = can you justify a specific decision to the affected person? A documented black box can be transparent but not explainable.
What is model drift, and why is it a unique AI risk compared to traditional IT?
Model drift = degradation of model performance because real-world data changes relative to training data. Unique because it happens silently with no code changes — traditional IT systems don't degrade without modification. Continuous monitoring is the primary control.
What does GDPR Article 22 require for consequential AI decisions?
Article 22 grants individuals the right not to be subject to solely automated decisions with significant legal or similar effects. Organizations must provide: human review capability, the ability to contest the decision, and meaningful information about the decision logic.
A vendor's AI system has no model card. What risk category is this, and why does accountability still rest with the buyer?
Third-party AI risk. Accountability cannot be outsourced — the acquiring organization is responsible for AI outcomes regardless of vendor arrangement. Without a model card, the buyer cannot assess limitations, biases, or appropriate use cases — a critical governance gap.
What is the "fairness impossibility theorem" and why does it matter for AI auditors?
It is mathematically impossible to satisfy all fairness metrics simultaneously when base rates differ between groups (demographic parity, equal opportunity, and equalized odds conflict). Auditors assess whether the organization made a documented, defensible choice of which fairness criterion is appropriate for the specific use case, not whether it achieved all metrics.
Study Advisor
NIST AI RMF Tips
- GOVERN is the anchor: Every AAIA question about culture, policies, roles, or accountability structures maps to GOVERN. If an organization lacks AI governance, MAP/MEASURE/MANAGE are ineffective.
- MAP = context, not controls: MAP is about understanding the AI system's purpose, stakeholders, and potential impacts before risk treatment — not about implementing controls.
- MEASURE ≠ monitoring: MEASURE is about analyzing and quantifying risk (testing, benchmarks, evaluations). Ongoing operational monitoring is part of MANAGE.
- Profiles and Tiers: A Profile captures current vs. target state. Tiers (1–4) measure how rigorously an organization implements the framework — Tier 4 is most rigorous. Tiers are NOT risk levels.
- Voluntary but influential: NIST AI RMF is voluntary in the US, but regulators and courts increasingly reference it as the standard of care for AI risk management.
EU AI Act Tips
- High Risk examples to memorize: Employment/recruitment, credit scoring, critical infrastructure, law enforcement, migration/border control, administration of justice, medical devices. If it affects fundamental rights → likely High Risk.
- Conformity assessment ≠ certification: High-risk AI requires conformity assessment (documentation + testing + registration), but not necessarily third-party certification in all cases — some can be self-assessed.
- Prohibited vs. High Risk: Prohibited = banned outright (social scoring, real-time biometric ID in public with narrow exceptions). High Risk = allowed with strict controls. This distinction is frequently tested.
- General Purpose AI (GPAI): The EU AI Act also covers foundation models and GPAI — they have their own transparency and systemic risk requirements separate from the application-level risk tiers.
- Extraterritorial reach: Like GDPR, the EU AI Act applies to any provider whose AI system is used in the EU, regardless of where the provider is based.
ISO 42001 & OECD Tips
- ISO 42001 key requirement: AI System Impact Assessment (AISIA) — organizations must assess the potential impact of AI systems on individuals and society. This is analogous to the DPIA in GDPR.
- HLS (High-Level Structure): ISO 42001 follows the same clause structure as ISO 27001 and ISO 9001, enabling integrated management systems. Know: Clauses 4 (context), 6 (planning), 8 (operations), 9 (evaluation), 10 (improvement).
- OECD ≠ legally binding: OECD AI Principles are normative (aspirational), not law. But they influenced the EU AI Act, US executive orders, and most national AI strategies.
- OECD principle 4 — Robustness: AI systems must be secure and safe throughout their lifecycle — this directly maps to the AAIA audit concern of continuous monitoring for model drift and adversarial attacks.
- Framework stacking: Organizations often implement multiple frameworks. A mature AI governance program might use NIST AI RMF for risk management, ISO 42001 for the management system, OECD for ethical principles, and the EU AI Act for regulatory compliance.
AI Risk & Bias Tips
- Model drift is silent: Unlike software bugs that throw errors, model drift degrades performance gradually and invisibly. The control is continuous performance monitoring with statistical thresholds and alerts.
- Proxy discrimination: Using variables that correlate with protected characteristics (zip code → race, name → ethnicity) is a form of measurement bias even when the protected attribute is excluded from the model.
- Third-party accountability: In every AAIA scenario involving vendor AI, the acquiring organization retains accountability. "The vendor is responsible" is never the right audit answer.
- Fairness metrics conflict: If a question asks which fairness metric to apply, the answer is: it depends on the use case and harm being prevented. There is no universally correct metric — document the rationale.
- Data governance for AI: Representativeness is the most AI-specific data quality dimension — traditional IT data quality frameworks (accuracy, completeness, timeliness) don't fully capture the need for demographic and scenario coverage in training data.
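The continuous-monitoring control for silent drift can be sketched as a Population Stability Index (PSI) check, one common statistical threshold for drift alerts. The data, bin count, and the 0.2 alert threshold below are illustrative conventions, not requirements of any framework.

```python
# Minimal Population Stability Index (PSI) drift monitor.
# Compares the distribution of a feature (or score) at training
# time against what the model sees in production.
import math
import random

def psi(reference, current, bins=10):
    """Higher PSI = bigger distribution shift; ~0.2+ is a common alert level."""
    lo = min(min(reference), min(current))
    hi = max(max(reference), max(current))
    width = (hi - lo) / bins or 1.0

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-4) for c in counts]

    ref_p, cur_p = proportions(reference), proportions(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref_p, cur_p))

random.seed(0)
training = [random.gauss(0.0, 1.0) for _ in range(5000)]    # data at training time
production = [random.gauss(0.8, 1.0) for _ in range(5000)]  # mean shifted in production

print(psi(training, training[:2500]))  # same distribution: PSI stays low
print(psi(training, production))       # shifted mean: PSI exceeds 0.2 -> alert
```

In an audit, the evidence to look for is that such a metric runs on a schedule, has a defined threshold, and routes alerts to a named owner, since no code change will ever announce the drift.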
AAIA Exam Strategy
- Domain 1 is 33% — highest priority: Governance & Risk is the biggest single domain. Invest significant prep time here, especially on NIST AI RMF, EU AI Act, and ISO 42001 distinctions.
- Framework identification questions: "Which framework/standard addresses X?" — Know: NIST = risk management lifecycle, ISO 42001 = certifiable management system, EU AI Act = law/risk tiers, OECD = normative principles.
- Scenario-based questions: Most AAIA questions are scenario-based. For governance questions, always ask: Is there a policy? Is it operationalized? Is there oversight? Does residual risk fit the appetite?
- Accountability is non-delegable: In any scenario involving vendors, third parties, or outsourced AI, the acquiring/deploying organization retains ultimate accountability. This is tested repeatedly.
- AAIA = auditor's perspective: Unlike the CISA, which is broader, the AAIA always frames questions from the auditor's view — your job is to assess, test, and report on controls, not to implement or recommend specific technologies.