AAIA Exam Prep · Domain 1 · FlashGenius

AI Governance & Risk

NIST AI RMF, EU AI Act, ISO 42001, OECD Principles, AI risk management, ethics, and data governance — 33% of the AAIA exam.

Exam weight: 33% · Frameworks covered: 5+ · Practice questions: 10 · Flashcards: 8

AI Governance & Risk — Domain 1

The largest exam domain (33%) covers the governance structures, frameworks, regulations, and risk approaches used to deploy AI responsibly and audit it effectively.

Auditor's Lens: AI governance is not just about having policies — it's about whether those policies are operationalized, monitored, and enforced. For the AAIA, always ask: Is the governance structure adequate? Are risks identified and managed? Are there controls, and are they effective?

NIST AI Risk Management Framework (AI RMF 1.0)

GOVERN
Establishes the culture, policies, processes, and accountability structures needed for responsible AI. Sets the "tone at the top" for AI risk management.
Key themes: roles, policies, incentives, culture, oversight mechanisms
MAP
Identifies the context, business purpose, potential impacts, and stakeholders for an AI system. Establishes the AI risk landscape before deployment.
Key themes: context, purpose, affected parties, risk identification
MEASURE
Analyzes, assesses, and tracks identified AI risks using quantitative and qualitative methods. Evaluates trustworthiness characteristics of AI systems.
Key themes: risk analysis, metrics, testing, evaluation, benchmarks
MANAGE
Allocates resources and implements risk response plans. Prioritizes and treats identified AI risks through controls, mitigation, transfer, or acceptance.
Key themes: risk response, treatment, residual risk, monitoring
NIST AI RMF — Key Points for AAIA
Aspect | Detail
Published by | U.S. National Institute of Standards and Technology (NIST), January 2023
Structure | Core (4 functions) + Profiles (current state vs. target) + Tiers (1–4, indicating rigor of practice)
Voluntary? | Yes — not legally binding; designed for any organization developing or deploying AI
AI trustworthiness characteristics | Valid & reliable, Safe, Secure & resilient, Accountable & transparent, Explainable & interpretable, Privacy-enhanced, Fair with harmful bias managed
GOVERN vs. the others | GOVERN is foundational — it's the only function that underpins and enables the other three (MAP, MEASURE, MANAGE)
Key companion document | NIST AI RMF Playbook — provides suggested actions for each function's subcategories

EU AI Act — Risk-Based Classification

Unacceptable Risk — Prohibited
Requirement: Complete prohibition — these AI systems may not be deployed in the EU
Social scoring by governments, real-time biometric ID in public spaces (with narrow exceptions), manipulation of human behavior, exploitation of vulnerabilities, emotion recognition in workplaces/schools, AI that infers sensitive attributes from biometrics
High Risk — Strict Requirements
Requirement: Conformity assessment, registration, technical documentation, human oversight, transparency, accuracy, and robustness requirements
Critical infrastructure, education/employment screening, credit scoring, law enforcement, migration/asylum, administration of justice, medical devices, recruitment AI, remote biometric identification
Limited Risk — Transparency Obligations
Requirement: Users must be informed they are interacting with an AI system
Chatbots, deepfake generators, AI-generated content — must disclose AI nature to users; emotion recognition systems must disclose operation
Minimal / No Risk — Largely Unregulated
Requirement: Encouraged (not required) to adopt voluntary codes of conduct
Spam filters, AI in video games, recommendation systems, most B2B AI tools with no significant impact on fundamental rights
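The four-tier logic above can be sketched as a simple lookup. This is a study aid only: the use-case keywords and function names are illustrative assumptions, and real classification depends on the Act's annexes and legal analysis, not string matching.

```python
# Study-aid sketch of the EU AI Act's four risk tiers (not legal advice).
# Tier assignments mirror the examples in the text above.

TIER_EXAMPLES = {
    "unacceptable": {"social scoring", "workplace emotion recognition"},
    "high": {"recruitment screening", "credit scoring", "law enforcement"},
    "limited": {"chatbot", "deepfake generator"},
    "minimal": {"spam filter", "game ai"},
}

def classify_use_case(use_case: str) -> str:
    """Return the risk tier for a known example use case, else 'unclassified'."""
    normalized = use_case.strip().lower()
    for tier, examples in TIER_EXAMPLES.items():
        if normalized in examples:
            return tier
    return "unclassified"
```

For exam drilling, the key distinction is between "unacceptable" (banned outright) and "high" (permitted with conformity assessment) — the two tiers most often confused.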

AI Governance Frameworks & Standards

Key frameworks, standards, and principles that form the backbone of AI governance globally — all frequently tested in the AAIA.

Framework Families: Know the difference between risk frameworks (NIST AI RMF — how to manage risk), management system standards (ISO 42001 — how to structure an AI program), regulations (EU AI Act — what is legally required), and principles (OECD — what values AI should embody).
ISO/IEC 42001 — AI Management System Standard
International Organization for Standardization (ISO) — published December 2023
Standard
Provides requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS). Analogous to ISO 27001 for information security — it's an auditable, certifiable standard for responsible AI.
Clause 4: Context of organization (stakeholders, AI policy). Clause 6: Planning (risk/opportunity identification, AI objectives). Clause 8: Operation (AI system impact assessments). Clause 9: Performance evaluation. Clause 10: Improvement.
⚡ Exam Hook: ISO 42001 is the ONLY certifiable international standard specifically for AI management systems. It follows the High-Level Structure (HLS/Annex SL) of ISO management standards, making it compatible with ISO 27001 and ISO 9001. An AI system impact assessment (AISIA) is a key ISO 42001 requirement.
OECD AI Principles
Organisation for Economic Co-operation and Development — adopted 2019, updated 2024
Principles
① Inclusive growth, sustainable development & well-being ② Human-centred values & fairness ③ Transparency & explainability ④ Robustness, security & safety ⑤ Accountability. Endorsed by G20; foundational to many national AI strategies.
Invest in AI R&D; foster inclusive digital infrastructure; shape an enabling policy environment; build human capacity; pursue international cooperation. Not a compliance checklist — a normative guide.
⚡ Exam Hook: OECD AI Principles were the FIRST intergovernmental standard on AI. The 5 principles are often tested by scenario — e.g., a system that crashes unpredictably violates "robustness, security & safety."

OECD's 5 AI Principles

1
Inclusive Growth & Well-being
AI should benefit people and the planet; foster inclusive growth and human well-being as the primary goal.
2
Human-Centred Values & Fairness
Respect rule of law, human rights, democratic values. Address fairness, non-discrimination, dignity, and autonomy.
3
Transparency & Explainability
Stakeholders should be able to understand AI outcomes. Meaningful information about AI systems and decisions must be disclosed.
4
Robustness, Security & Safety
AI systems must function reliably and securely throughout their lifecycle. Risks must be continually assessed and managed.
5
Accountability
Organizations and individuals developing or deploying AI must be accountable for its proper functioning and compliance with these principles.
IEEE 7000 Series — Ethically Aligned Design
Institute of Electrical and Electronics Engineers
Standard
IEEE 7000: Model process for addressing ethical concerns in system design. IEEE 7001: Transparency of autonomous systems. IEEE 7010: Well-being metrics for AI. Focus on embedding ethics into engineering processes from design onwards.
Value Elicitation (identifying stakeholder values), Ethical Risk Assessment, Value Prioritization, and embedding ethics through the system development lifecycle. Less prescriptive than ISO 42001.
⚡ Exam Hook: IEEE standards focus on the engineering process of building ethical AI; ISO 42001 focuses on the management system for operating AI responsibly. Both are complementary — IEEE shapes what you build; ISO 42001 shapes how you manage it.
AI Governance Framework Quick Comparison
Framework | Type | Mandatory? | Primary Focus | Key Concept
NIST AI RMF | Risk framework | No (US voluntary) | AI risk management lifecycle | GOVERN → MAP → MEASURE → MANAGE
ISO/IEC 42001 | Mgmt system standard | No (certifiable) | AI management system (AIMS) | AI System Impact Assessment (AISIA)
EU AI Act | Regulation (law) | Yes (EU) | Risk-based AI classification | 4 risk tiers; conformity assessment for High Risk
OECD AI Principles | Principles / policy | No (normative) | Values for trustworthy AI | 5 principles, adopted by 42+ countries
IEEE 7000 Series | Technical standards | No | Ethics in engineering design | Value Elicitation; ethics in SDLC

AI Risk Management

Understanding the types, sources, and management approaches for AI-specific risks — from model failure to third-party exposure.

AI Risk vs. Traditional IT Risk: AI systems introduce novel risk dimensions — model drift, training data bias, explainability gaps, and emergent behaviors — that traditional IT risk frameworks were not designed to capture. The AAIA tests whether candidates understand these distinctions.

Six AI Risk Categories

🤖
Model Risk
Risk from incorrect, incomplete, or poorly performing AI models producing adverse outcomes. Includes model drift, overfitting, underfitting, and hallucination.
Example: Credit model approves risky loans because training data is outdated
🗄️
Data Risk
Risks from biased, incomplete, low-quality, or poisoned training data. Includes data lineage failures and unauthorized data use in model training.
Example: Facial recognition trained only on certain demographics underperforms for others
⚙️
Operational Risk
Risk of AI system failures, unexpected behaviors, or integration failures in production. Includes system outages, dependency failures, and model degradation over time.
Example: AI-powered fraud detection system goes offline during peak transaction period
⚖️
Ethical & Bias Risk
Risk of AI producing discriminatory, unfair, or harmful outcomes. Includes disparate impact across protected groups, dignity violations, and autonomy erosion.
Example: Hiring AI consistently deprioritizes candidates from certain zip codes
🤝
Third-Party AI Risk
Risks from AI systems developed, hosted, or maintained by vendors and partners. Includes black-box models, vendor lock-in, and limited audit access.
Example: Organization cannot audit a purchased AI decision engine's logic or training data
📋
Regulatory & Compliance Risk
Risk of violating AI-related regulations (EU AI Act, GDPR, sector-specific rules). Includes failure to meet documentation, transparency, or human oversight requirements.
Example: Deploying a high-risk EU AI Act system without completing conformity assessment
AI Risk Assessment Process
Applied to AI systems within an audit or governance context
Process
① Identify AI systems and use cases in scope ② Classify by risk tier (EU AI Act model or internal taxonomy) ③ Assess inherent risk (likelihood × impact for each risk type) ④ Evaluate controls (design and operating effectiveness) ⑤ Determine residual risk and whether it is within appetite.
Assess: training data quality and provenance, model validation documentation, explainability of outputs, human oversight mechanisms, change management for model updates, and monitoring for drift and anomalies.
⚡ Exam Hook: The audit risk model (Inherent Risk × Control Risk = Detection Risk) applies to AI audits. High-risk AI systems with weak controls require more extensive testing. Residual risk must align with the organization's stated risk appetite.
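The likelihood × impact and residual-risk steps above can be sketched numerically. The 1–5 scales and the control-effectiveness discount are illustrative assumptions commonly seen in risk registers, not an AAIA-prescribed formula.

```python
# Illustrative sketch of the inherent -> residual risk calculation.
# Scales (1-5) and the linear control discount are assumptions.

def inherent_risk(likelihood: int, impact: int) -> int:
    """Inherent risk on a 1-25 scale: likelihood x impact, each rated 1-5."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def residual_risk(inherent: int, control_effectiveness: float) -> float:
    """Risk remaining after controls; effectiveness runs 0.0 (none) to 1.0 (full)."""
    assert 0.0 <= control_effectiveness <= 1.0
    return inherent * (1.0 - control_effectiveness)

def within_appetite(residual: float, appetite: float) -> bool:
    """Step 5: compare residual risk to the organization's stated appetite."""
    return residual <= appetite

# Example: likely, high-impact AI system with strong controls.
score = inherent_risk(likelihood=4, impact=5)                 # 20
remaining = residual_risk(score, control_effectiveness=0.7)   # ~6.0
```

The point for auditors is the sequence, not the arithmetic: controls reduce inherent risk to residual risk, and residual risk is judged against appetite.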
Third-Party AI Risk Management
Vendor & Supply Chain AI Risk
Risk Domain
Organizations cannot simply transfer AI risk to vendors — they retain accountability. Key issues: right-to-audit contract clauses, access to model cards/documentation, vendor financial stability, and subcontractor AI use.
Vendor due diligence process, AI-specific contract clauses (audit rights, incident notification, SLAs), ongoing monitoring of vendor performance, and review of model cards, system cards, and AI transparency reports provided by vendors.
⚡ Exam Hook: "Model cards" (standardized documentation of a model's intended use, limitations, and performance) and "system cards" are key vendor transparency artifacts. Absence of these is an audit finding. The acquiring organization always retains accountability for AI outcomes regardless of vendor arrangement.
AI Risk vs. Traditional IT Risk — Key Differences
Dimension | Traditional IT Risk | AI-Specific Risk
Determinism | Systems behave predictably given same inputs | AI outputs may vary; probabilistic and context-dependent
Explainability | Logic is auditable in code | Complex models (deep learning) may be "black boxes"
Change management | Updates are planned, version-controlled | Models can drift silently with no code change
Data dependency | Data integrity is important but bounded | Training data quality fundamentally shapes all outputs
Bias risk | Limited — systems do what they're programmed to do | High — bias in data or design produces systematically unfair outcomes
Emergent behavior | Not applicable | AI may develop unexpected behaviors not in original design

AI Ethics, Privacy & Data Governance

The ethical and data governance pillars of responsible AI — fairness, transparency, accountability, privacy by design, and data quality.

Ethics ≠ Compliance: Ethical AI goes beyond legal compliance. A system can be technically lawful but still produce unfair, harmful, or untrustworthy outcomes. AAIA auditors assess both regulatory compliance AND whether AI systems uphold ethical principles in practice.
Core AI Ethics Principles
Synthesized from NIST AI RMF, OECD, EU AI Act, IEEE
Ethics
Fairness: Absence of unjustified discrimination across groups. Accountability: Clear ownership and answerability for AI decisions. Transparency: Openness about how AI works and what data it uses. Explainability: Ability to describe how a specific decision was reached in understandable terms.
Safety: AI operates reliably without causing harm. Privacy: AI respects individuals' data rights. Inclusivity: AI benefits and works for all groups, not just majority populations. Human oversight: Humans retain meaningful ability to override or correct AI decisions.
⚡ Exam Hook: In audit practice, "transparency" is assessed through documentation (can someone explain the model?); "explainability" is assessed through outputs (can a specific decision be justified to an affected person?). These are different — a system can be transparent (documented) but not explainable (complex black box).
Algorithmic Fairness & Bias
AI Fairness Testing & Bias Mitigation
Fairness
Historical bias: Training data reflects past discrimination. Representation bias: Certain groups underrepresented in training data. Measurement bias: Proxies used that correlate with protected characteristics. Aggregation bias: One model applied to heterogeneous groups.
Demographic parity: Equal positive prediction rates across groups. Equal opportunity: Equal true positive rates. Equalized odds: Equal TPR and FPR. Individual fairness: Similar individuals treated similarly. No single metric works for all use cases.
⚡ Exam Hook: The "fairness impossibility theorem" — it is mathematically impossible to satisfy all fairness metrics simultaneously. Auditors must understand which fairness criteria are most appropriate for the specific use case and whether the organization has made a defensible, documented choice.
Privacy in AI Systems — GDPR & Privacy by Design
GDPR (EU Regulation 2016/679); Privacy by Design (Ann Cavoukian)
Privacy
Article 22: Right not to be subject to solely automated decision-making with significant effects — requires human review. Recital 71: Right to explanation of automated decisions. Data minimization: Use only data necessary for the stated AI purpose. Purpose limitation: Don't repurpose training data for unrelated AI models.
① Proactive, not reactive ② Privacy as default ③ Privacy embedded in design ④ Full functionality (positive-sum) ⑤ End-to-end security ⑥ Visibility and transparency ⑦ Respect for user privacy. Applied to AI: privacy controls built into the model architecture, not bolted on afterward.
⚡ Exam Hook: GDPR Article 22 is the most AAIA-relevant privacy provision — it requires human oversight for consequential automated decisions. A purely automated AI making decisions on credit, employment, or insurance without human review may violate Article 22.
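The human-oversight control described above can be sketched as a routing gate: consequential decisions are held for human review rather than auto-finalized. The domain list and status strings are illustrative assumptions, not GDPR terminology.

```python
# Sketch of a human-in-the-loop gate in the spirit of GDPR Article 22:
# consequential automated decisions require human review before taking
# effect. Domain names and status labels are assumptions.

CONSEQUENTIAL_DOMAINS = {"credit", "employment", "insurance"}

def route_decision(domain: str, model_decision: str) -> dict:
    """Return a decision record, flagging whether human review is required."""
    needs_review = domain in CONSEQUENTIAL_DOMAINS
    return {
        "domain": domain,
        "model_decision": model_decision,
        "status": "pending_human_review" if needs_review else "auto_finalized",
    }
```

As an audit control, the evidence trail matters as much as the gate itself: override logs and escalation records show the review is meaningful, not a rubber stamp.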
Data Governance for AI
Training Data Quality, Lineage & Provenance
Data
Accuracy: Data correctly represents reality. Completeness: No critical gaps. Consistency: No contradictions across sources. Timeliness: Data is current and not stale. Representativeness: Training data adequately covers all target groups and scenarios.
Auditors verify that the organization can trace: where training data came from (provenance), how it was transformed (lineage), whether consent/licensing was obtained, and what version of data was used to train which model version. This is a key audit evidence requirement.
⚡ Exam Hook: Data governance for AI is broader than for traditional systems — it includes training data, validation data, and inference-time data. "Garbage in, garbage out" is not just an IT axiom in AI — biased training data produces systematically biased models that cannot be corrected without data remediation.
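The representativeness dimension above lends itself to a simple check: compare the demographic mix of the training set against a reference population and flag large gaps. The 5% tolerance and group labels are illustrative assumptions.

```python
# Sketch of a training-data representativeness check. The tolerance
# threshold and group labels are illustrative assumptions.

from collections import Counter

def representation_gaps(training_groups: list, reference_shares: dict,
                        tolerance: float = 0.05) -> dict:
    """Return groups whose training-set share deviates from the reference
    population share by more than the tolerance."""
    counts = Counter(training_groups)
    total = len(training_groups)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps
```

A non-empty result is an audit lead, not a verdict — the follow-up question is whether the skew is justified and documented, or a source of representation bias.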
AI Ethics & Privacy — Key Audit Questions
Area | Key Audit Question | What to Look For
Fairness | Has the organization tested for demographic disparities in model outputs? | Bias testing reports, fairness metrics, remediation plans
Transparency | Can the organization explain how the AI model works to affected stakeholders? | Model cards, system documentation, explainability tools (LIME/SHAP)
Accountability | Is there a named AI owner responsible for model performance and compliance? | RACI matrices, AI governance committee charters, role descriptions
Privacy | Does training data collection comply with applicable privacy laws? | Privacy impact assessments (PIAs), consent records, data licensing
Human oversight | Are humans able to review, override, or reject AI decisions? | Human-in-the-loop controls, override logs, escalation procedures
Data governance | Is training data quality assessed and documented before model training? | Data quality reports, lineage documentation, version control for datasets

Practice Quiz

10 AAIA-style questions on AI Governance & Risk. Select an answer to see instant feedback.

Question 1 of 10
The NIST AI Risk Management Framework (AI RMF) includes four core functions. Which function is considered foundational because it enables and underpins all the other three?
A Map
B Measure
C Manage
D Govern
GOVERN is the foundational function — it establishes the culture, policies, processes, and accountability structures that enable the other three functions (Map, Measure, Manage) to operate effectively. Without governance, risk identification and management are ad hoc and unsustainable.
Question 2 of 10
Under the EU AI Act, which of the following AI applications would be classified as "high risk" and subject to strict conformity assessment requirements?
A A spam filter for email
B An AI chatbot for customer service
C An AI system used for employee recruitment and selection
D An AI recommendation engine for a streaming service
AI systems used in employment, including recruitment and selection, are explicitly listed as high-risk under the EU AI Act Annex III. They require conformity assessment, technical documentation, human oversight, and registration before deployment. Spam filters and recommendation engines are minimal/no risk.
Question 3 of 10
Which international standard provides a certifiable management system framework specifically for AI, analogous to ISO 27001 for information security?
A ISO/IEC 27001
B ISO/IEC 42001
C ISO/IEC 31000
D NIST AI RMF
ISO/IEC 42001 (published December 2023) is the first certifiable international standard for AI Management Systems (AIMS). It follows the High-Level Structure (HLS) of other ISO management standards, making it compatible with ISO 27001 and ISO 9001. NIST AI RMF is a framework, not a certifiable standard.
Question 4 of 10
An organization discovers that its AI hiring tool consistently scores candidates from certain zip codes lower than equally qualified candidates from other areas. This is BEST described as which type of AI bias?
A Aggregation bias
B Measurement bias
C Representation bias
D Historical bias
This is measurement bias — zip code is a proxy variable that correlates with protected characteristics (race, socioeconomic status). Using proxies that are correlated with protected attributes introduces bias even when the protected attribute itself is not directly used. This is also sometimes called proxy discrimination.
Question 5 of 10
GDPR Article 22 is particularly relevant to AI auditors because it:
A Requires all AI training data to be stored in the EU
B Prohibits the use of AI in financial services
C Grants individuals the right not to be subject to solely automated decisions with significant effects, requiring human review
D Mandates that all AI systems must be explainable at a technical level
GDPR Article 22 grants individuals the right not to be subject to solely automated decision-making that produces significant legal or similarly significant effects (e.g., credit decisions, hiring). Organizations must provide meaningful human oversight and review mechanisms. This is a key audit control to verify for consequential AI systems.
Question 6 of 10
During an audit, you find that a vendor-provided AI system has no model card and the organization cannot obtain documentation about the model's training data or known limitations. What is the MOST significant risk this represents?
A Operational risk — the model may be slow
B Third-party AI risk — lack of transparency limits the organization's ability to identify, assess, and manage model risks
C Data risk — the training data may be stored insecurely
D Regulatory risk only — the vendor is non-compliant with GDPR
The absence of model documentation is primarily a third-party AI risk issue — without model cards or documentation, the acquiring organization cannot assess model limitations, potential biases, or appropriate use cases. The organization retains accountability for AI outcomes regardless of the vendor arrangement, making opacity a critical governance gap.
Question 7 of 10
The OECD AI Principles were significant because they were the:
A First legally binding international AI regulation
B First intergovernmental standard on AI, adopted by over 40 countries
C First AI management system standard certifiable by third parties
D First framework developed specifically for AI auditors
The OECD AI Principles (2019) were the first intergovernmental standard on AI, adopted by 42+ countries including all G20 members. They are not legally binding but normative, and they influenced the EU AI Act and many national AI strategies. ISO 42001 is the certifiable standard; the EU AI Act is the first major legally binding AI regulation.
Question 8 of 10
An AI model that was performing well 18 months ago is now producing increasingly inaccurate results despite no code changes. This is MOST likely caused by:
A Algorithmic bias introduced by developers
B A cyberattack on the model's parameters
C Model drift — the real-world data distribution has changed since training
D Inadequate testing during initial deployment
Model drift (also called data drift or concept drift) occurs when the statistical properties of the input data in production change relative to the training data. Unlike traditional software, AI models can degrade silently without any code changes — a key AI-specific operational risk. Continuous monitoring is the primary control.
Question 9 of 10
Privacy by Design, as applied to AI systems, primarily means:
A Encrypting all AI model weights and parameters
B Embedding privacy controls into AI system architecture from the outset, rather than adding them afterward
C Using only publicly available data for AI training
D Conducting a privacy impact assessment after model deployment
Privacy by Design means integrating privacy protections proactively and by default into the system design — data minimization built into the model, purpose limitation defined before data collection, anonymization in the pipeline. It's about building privacy in, not bolting it on after deployment. A PIA after deployment is reactive, not Privacy by Design.
Question 10 of 10
When assessing training data quality for an AI system, which dimension ensures that the data adequately covers all target population subgroups and real-world scenarios the model will encounter?
A Accuracy
B Timeliness
C Consistency
D Representativeness
Representativeness is the data quality dimension that ensures training data adequately covers the full diversity of populations and scenarios the model will encounter in production. Unrepresentative training data is a primary source of algorithmic bias — models trained on non-representative samples underperform for underrepresented groups.

Memory Hooks

High-yield mnemonics and patterns to lock in AI Governance & Risk concepts for the AAIA.

🧭
NIST AI RMF — 4 Functions
The four NIST AI RMF functions in order: GOVERN → MAP → MEASURE → MANAGE. GOVERN is the foundation; the other three are the lifecycle of risk management.
Mnemonic: "Good Managers Measure More" — Govern, Map, Measure, Manage. GOVERN underpins all three M's.
🇪🇺
EU AI Act — 4 Risk Tiers
Risk tiers from highest to lowest: Unacceptable (Banned) → High (Conformity assessment) → Limited (Transparency) → Minimal (Voluntary). High-risk = employment, credit, law enforcement, critical infrastructure.
Mnemonic: "Unhappy Humans Like Minimal Regulation" — Unacceptable, High, Limited, Minimal
⚖️
AI Ethics — FATE Framework
Core AI ethics principles: Fairness, Accountability, Transparency, Explainability. Remember: Transparency = can you explain the system? Explainability = can you justify a specific decision to the affected person?
Mnemonic: "FATE decides AI's destiny" — Fairness, Accountability, Transparency, Explainability
🌍
OECD's 5 AI Principles
Inclusive growth → Human-centred values → Transparency → Robustness/Safety → Accountability. First-ever intergovernmental AI standard (2019). Not legally binding — normative.
Mnemonic: "I Have To Remain Accountable" — Inclusive, Human-centred, Transparency, Robustness, Accountability
📊
AI Risk Types — 6 Categories
The 6 AI risk types: Model, Data, Operational, Ethical/Bias, Third-party, Regulatory. Model drift is the key Model risk; bias in training = Data risk; vendor black-box = Third-party risk.
Mnemonic: "My Dog Often Eats The Rug" — Model, Data, Operational, Ethical, Third-party, Regulatory
🔒
GDPR Article 22 — The AI Provision
Article 22 = right not to be subject to solely automated decisions with significant effects. Requires human review for AI making consequential decisions (credit, hiring, insurance). Key control: Human-in-the-loop (HITL).
Mnemonic: "Article 22 = humans must review consequential AI decisions." If the AI alone decides your fate → Article 22 kicks in.
High-Yield AAIA Facts — AI Governance & Risk Domain
Fact | Answer
The only certifiable ISO standard for AI management systems | ISO/IEC 42001 (published Dec 2023)
NIST AI RMF foundational function | GOVERN — underpins Map, Measure, and Manage
First intergovernmental AI standard | OECD AI Principles (2019), adopted by 42+ countries
EU AI Act: highest risk tier (not prohibited) | High Risk — requires conformity assessment, registration, documentation
GDPR provision most relevant to consequential AI decisions | Article 22 — right to human review of automated decisions
AI model degrades with no code changes — what is this? | Model drift (data drift / concept drift)
AI risk that the acquiring org retains even when using vendors | Third-party AI risk — accountability cannot be outsourced
Fairness impossibility theorem | Cannot satisfy all fairness metrics simultaneously — must make a documented, defensible choice
Document showing a model's intended use, limitations, and performance | Model card
Privacy built into AI architecture from design, not added later | Privacy by Design (Ann Cavoukian, 7 principles)

Flashcards & Study Advisor

Click any card to flip it. Use the Study Advisor for targeted guidance by topic area.

AI Governance

What are the 4 functions of the NIST AI RMF, and which is foundational?

Answer

GOVERN, MAP, MEASURE, MANAGE. GOVERN is foundational — it establishes the culture, policies, and accountability that enable the other three. Mnemonic: "Good Managers Measure More."

EU AI Act

Name the 4 EU AI Act risk tiers and give one example for each.

Answer

Unacceptable (banned — social scoring), High Risk (conformity assessment — recruitment AI, credit scoring), Limited (transparency — chatbots, deepfakes), Minimal (voluntary — spam filters, game AI).

AI Framework

What is ISO/IEC 42001 and how does it differ from NIST AI RMF?

Answer

ISO 42001 is a certifiable AI Management System (AIMS) standard (Dec 2023) — like ISO 27001 for AI. NIST AI RMF is a voluntary risk framework, not certifiable. ISO 42001 = structured management system; NIST = risk lifecycle approach.

AI Ethics

What is the FATE framework, and what is the difference between transparency and explainability?

Answer

FATE = Fairness, Accountability, Transparency, Explainability. Transparency = can you explain how the system works (documentation, process)? Explainability = can you justify a specific decision to the affected person? A documented black box can be transparent but not explainable.

AI Risk

What is model drift, and why is it a unique AI risk compared to traditional IT?

Answer

Model drift = degradation of model performance because real-world data changes relative to training data. Unique because it happens silently with no code changes — traditional IT systems don't degrade without modification. Continuous monitoring is the primary control.

Privacy

What does GDPR Article 22 require for consequential AI decisions?

Answer

Article 22 grants individuals the right not to be subject to solely automated decisions with significant legal or similar effects. Organizations must provide: human review capability, the ability to contest the decision, and meaningful information about the decision logic.

AI Risk

A vendor's AI system has no model card. What risk category is this, and why does accountability still rest with the buyer?

Answer

Third-party AI risk. Accountability cannot be outsourced — the acquiring organization is responsible for AI outcomes regardless of vendor arrangement. Without a model card, the buyer cannot assess limitations, biases, or appropriate use cases — a critical governance gap.

AI Fairness

What is the "fairness impossibility theorem" and why does it matter for AI auditors?

Answer

It is mathematically impossible to satisfy all fairness metrics simultaneously (e.g., demographic parity, equal opportunity, and equalized odds conflict). Auditors assess whether the organization made a documented, defensible choice of which fairness criterion is appropriate for the specific use case — not whether it achieved all metrics.

Ready for the Full AAIA Deck?

Access hundreds of AI governance, risk, and audit flashcards, practice tests, and study tools on FlashGenius.

Unlock Full Practice Tests on FlashGenius →

Study Advisor

NIST AI RMF
EU AI Act
ISO 42001 & OECD
AI Risk & Bias
Exam Strategy

NIST AI RMF Tips

  • GOVERN is the anchor: Every AAIA question about culture, policies, roles, or accountability structures maps to GOVERN. If an organization lacks AI governance, MAP/MEASURE/MANAGE are ineffective.
  • MAP = context, not controls: MAP is about understanding the AI system's purpose, stakeholders, and potential impacts before risk treatment — not about implementing controls.
  • MEASURE ≠ monitoring: MEASURE is about analyzing and quantifying risk (testing, benchmarks, evaluations). Ongoing operational monitoring is part of MANAGE.
  • Profiles and Tiers: A Profile captures current vs. target state. Tiers (1–4) measure how rigorously an organization implements the framework — Tier 4 is most rigorous. Tiers are NOT risk levels.
  • Voluntary but influential: NIST AI RMF is voluntary in the US, but regulators and courts increasingly reference it as the standard of care for AI risk management.

EU AI Act Tips

  • High Risk examples to memorize: Employment/recruitment, credit scoring, critical infrastructure, law enforcement, migration/border control, administration of justice, medical devices. If it affects fundamental rights → likely High Risk.
  • Conformity assessment ≠ certification: High-risk AI requires conformity assessment (documentation + testing + registration), but not necessarily third-party certification in all cases — some can be self-assessed.
  • Prohibited vs. High Risk: Prohibited = banned outright (social scoring, real-time biometric ID in public with narrow exceptions). High Risk = allowed with strict controls. This distinction is frequently tested.
  • General Purpose AI (GPAI): The EU AI Act also covers foundation models and GPAI — they have their own transparency and systemic risk requirements separate from the application-level risk tiers.
  • Extraterritorial reach: Like GDPR, the EU AI Act applies to any provider whose AI system is used in the EU, regardless of where the provider is based.

ISO 42001 & OECD Tips

  • ISO 42001 key requirement: AI System Impact Assessment (AISIA) — organizations must assess the potential impact of AI systems on individuals and society. This is analogous to the DPIA in GDPR.
  • HLS (High-Level Structure): ISO 42001 follows the same clause structure as ISO 27001 and ISO 9001, enabling integrated management systems. Know: Clauses 4 (context), 6 (planning), 8 (operation), 9 (performance evaluation), 10 (improvement).
  • OECD ≠ legally binding: The OECD AI Principles are non-binding recommendations (soft law), not enforceable regulation. But they influenced the EU AI Act, US executive orders, and most national AI strategies.
  • OECD principle 4 — Robustness: AI systems must be secure and safe throughout their lifecycle — this directly maps to the AAIA audit concern of continuous monitoring for model drift and adversarial attacks.
  • Framework stacking: Organizations often implement multiple frameworks. A mature AI governance program might use NIST AI RMF for risk management, ISO 42001 for the management system, OECD for ethical principles, and the EU AI Act for regulatory compliance.

AI Risk & Bias Tips

  • Model drift is silent: Unlike software bugs that throw errors, model drift degrades performance gradually and invisibly. The control is continuous performance monitoring with statistical thresholds and alerts.
  • Proxy discrimination: Using variables that correlate with protected characteristics (zip code → race, name → ethnicity) is a form of measurement bias even when the protected attribute is excluded from the model.
  • Third-party accountability: In every AAIA scenario involving vendor AI, the acquiring organization retains accountability. "The vendor is responsible" is never the right audit answer.
  • Fairness metrics conflict: If a question asks which fairness metric to apply, the answer is: it depends on the use case and harm being prevented. There is no universally correct metric — document the rationale.
  • Data governance for AI: Representativeness is the most AI-specific data quality dimension — traditional IT data quality frameworks (accuracy, completeness, timeliness) don't fully capture the need for demographic and scenario coverage in training data.
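A common statistical-threshold control for the "model drift is silent" problem above is the Population Stability Index (PSI), which compares a production score distribution against the training-time baseline. The sketch below is illustrative only — the binning scheme, floor value, and alert thresholds are assumptions, not a prescribed standard:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a current sample.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 likely drift.
    (Illustrative sketch: equal-width bins over the baseline's range.)"""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def frac(data, i):
        # Fraction of data in bin i, floored at 1e-6 to avoid log(0).
        in_bin = sum(1 for x in data
                     if edges[i] <= x < edges[i + 1]
                     or (i == bins - 1 and x == hi))
        return max(in_bin / len(data), 1e-6)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

baseline = [i / 100 for i in range(100)]           # training-time score distribution
shifted  = [min(1.0, s + 0.3) for s in baseline]   # simulated upward drift

print(psi(baseline, baseline) < 0.1)   # stable against itself
print(psi(baseline, shifted) > 0.25)   # the shift trips the alert threshold
```

In practice the baseline is frozen at validation time, PSI is computed on a rolling window of production scores, and breaches of the threshold trigger alerts — which is what an auditor should look for as evidence of continuous performance monitoring.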

AAIA Exam Strategy

  • Domain 1 is 33% — highest priority: Governance & Risk is the biggest single domain. Invest significant prep time here, especially on NIST AI RMF, EU AI Act, and ISO 42001 distinctions.
  • Framework identification questions: "Which framework/standard addresses X?" — Know: NIST = risk management lifecycle, ISO 42001 = certifiable management system, EU AI Act = law/risk tiers, OECD = normative principles.
  • Scenario-based questions: Most AAIA questions are scenario-based. For governance questions, always ask: Is there a policy? Is it operationalized? Is there oversight? Does residual risk fit the appetite?
  • Accountability is non-delegable: In any scenario involving vendors, third parties, or outsourced AI, the acquiring/deploying organization retains ultimate accountability. This is tested repeatedly.
  • AAIA = auditor's perspective: Unlike CISA, which is broader, AAIA always frames questions from the auditor's view — your job is to assess, test, and report on controls, not to implement or recommend specific technologies.