AIGP Exam · Domain I of IV · IAPP Certification

Foundations of AI Governance

Master the conceptual bedrock of the AIGP exam — AI types, machine learning approaches, the AI lifecycle, risk categories, and the responsible AI principles that underpin all governance frameworks.

Narrow AI vs AGI · ML Approaches · AI Lifecycle · Risk Categories · Responsible AI · OECD Principles
Domain I (AIGP BOK) · 3 AI Capability Tiers · 7 AI Lifecycle Stages · 5 OECD AI Principles · 6 Core Risk Categories

Foundations of AI Governance

Domain I establishes the conceptual vocabulary that every other AIGP domain builds on. You cannot govern what you do not understand — this domain covers what AI is, how it works, where it can go wrong, and the ethical principles that shape responsible governance.

Why Governance? AI systems make decisions at a scale, speed, and complexity that no individual human overseer can match. A hiring algorithm can screen thousands of applicants in seconds — and encode discrimination at the same scale. Governance exists to ensure AI serves human values, remains accountable, and causes no unjustified harm.

🤖 AI & ML Concepts

Understanding the difference between AI types, how machine learning models are trained, and what generative AI and foundation models actually do — the technical vocabulary the AIGP exam assumes you know.

🔄 AI Lifecycle

Governance doesn't start at deployment — it must be embedded from the first line of problem definition through decommissioning. Each stage introduces different risks and requires different controls.

⚖️ Responsible AI

The ethical principles — fairness, transparency, accountability, explainability, safety, and human oversight — that form the normative foundation of every AI governance framework tested on the AIGP exam.

🗺️ Domain I Coverage — What Gets Tested
Topic Area | Key Concepts
AI capability types | Narrow AI (ANI), Artificial General Intelligence (AGI), Superintelligence (ASI)
Machine learning approaches | Supervised, unsupervised, reinforcement learning; deep learning; foundation models
Generative AI | LLMs, diffusion models, hallucination, prompt engineering, fine-tuning
AI lifecycle | 7 stages from problem definition to retirement; governance touchpoints at each stage
AI risk categories | Safety, privacy, bias/fairness, security, accountability, transparency, autonomy
Responsible AI principles | Fairness, transparency, accountability, explainability, safety, human oversight
International AI principles | OECD AI Principles (5), UNESCO Recommendation, IEEE Ethically Aligned Design
Key terminology | Algorithm, model, training, inference, bias, drift, hallucination, interpretability
📍 Where AI Governance Sits

AI governance sits at the intersection of technology, law, ethics, and organizational risk management. An AI governance professional must be fluent in all four domains — understanding enough technical detail to identify risk, enough law to ensure compliance, enough ethics to spot value misalignment, and enough risk management to prioritize controls.

🧩 AI vs ML vs Deep Learning

AI is the broad field of systems that simulate human intelligence.

Machine Learning (ML) is a subset of AI — systems that learn from data without being explicitly programmed for each decision.

Deep Learning is a subset of ML using multi-layered neural networks — the engine behind image recognition, NLP, and generative AI.

AI & Machine Learning Fundamentals

The AIGP exam expects you to understand AI at a conceptual level — not to build models, but to govern them. That requires knowing what type of AI you're governing and how it learns.

AI Capability Tiers

Tier 1
Narrow AI (ANI)
✓ All AI Today
Designed for ONE specific task. Highly capable within its domain but cannot transfer knowledge to other tasks. Every AI system currently deployed commercially is ANI.
e.g., spam filters, image recognition, ChatGPT, fraud detection, recommendation engines
Tier 2
Artificial General Intelligence (AGI)
◎ Theoretical
An AI system with human-level cognitive ability across ALL domains — reasoning, creativity, learning, and common sense. Can apply knowledge flexibly across tasks like a human can.
No deployed system qualifies. Subject of active research and intense governance debate.
Tier 3
Artificial Superintelligence (ASI)
◌ Hypothetical
An AI that surpasses human intelligence across all domains, including scientific creativity and social intelligence. Remains entirely hypothetical. Central to long-term AI safety discourse.
Does not exist. Governance concern: loss of meaningful human oversight.

Machine Learning Approaches

Supervised Learning
"Learn from labeled examples"
Model is trained on labeled input-output pairs. It learns to predict the output for new inputs. Most common ML approach in practice.
📌 Examples: email spam detection, credit scoring, medical diagnosis, image classification
Unsupervised Learning
"Find hidden patterns in unlabeled data"
No labeled outputs — model discovers structure, clusters, or anomalies in raw data on its own. Used for exploration and pattern discovery.
📌 Examples: customer segmentation, anomaly detection, topic modeling, recommendation engines
Reinforcement Learning
"Learn by trial, error, and reward"
Agent learns by taking actions in an environment and receiving reward or penalty signals. Optimizes long-term cumulative reward.
📌 Examples: game-playing AI (AlphaGo), robotic control, trading bots, autonomous vehicles
Self-Supervised / Foundation Models
"Pretrained at massive scale"
Trained on enormous unlabeled datasets to learn general representations. Then fine-tuned for specific tasks. The basis of all large language models (LLMs).
📌 Examples: GPT-4, Claude, Gemini, DALL-E, Stable Diffusion
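The supervised/unsupervised distinction is easiest to see side by side. Below is a minimal sketch — assuming Python with scikit-learn, which the AIGP exam itself never requires — showing the same dataset handled by a supervised learner (labels provided) and an unsupervised learner (labels withheld).

```python
# Minimal contrast of supervised vs. unsupervised learning on one dataset.
# scikit-learn is an assumption of this sketch, not an exam requirement.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: labeled input-output pairs -> predict labels for new inputs.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised prediction:", clf.predict(X[:1]))

# Unsupervised: same features, NO labels -> discover structure (clusters).
km = KMeans(n_clusters=3, n_init=10).fit(X)
print("unsupervised cluster:", km.predict(X[:1]))
```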
🔠 Generative AI — Key Concepts for Governance
Concept | Definition | Governance Implication
Large Language Model (LLM) | Foundation model trained on vast text to generate, summarize, translate, and reason in natural language | Risk of hallucination, copyright infringement, bias amplification, and misuse at scale
Hallucination | AI generates plausible-sounding but factually incorrect information with apparent confidence | High-stakes use cases (medical, legal) require human review; transparency obligations apply
Fine-tuning | Adapting a pretrained foundation model for a specific task using smaller, domain-specific datasets | Fine-tuning data quality, bias, and provenance must be governed — not just the base model
Prompt Engineering | Crafting inputs to guide model outputs toward desired behavior without retraining | Prompt injection attacks; adversarial prompting; output quality and safety depend on prompt design
Multimodal AI | AI that processes and generates multiple data types — text, images, audio, video — in combination | Expanded harm surface: deepfakes, synthetic media, cross-modal bias
🔍 Explainability vs Interpretability

These terms are often confused but have distinct meanings in AI governance contexts. Both matter for accountability and regulatory compliance.

📐 Model Complexity Trade-off

Simple models (linear regression, decision trees) are highly interpretable but less accurate for complex tasks. Complex models (deep neural networks, LLMs) achieve higher accuracy but are "black boxes" — difficult to interpret, requiring external explainability tools (SHAP, LIME).

Explainability
"Why did the model make THIS decision?"
Post-hoc explanation of a specific output. Answers: what features drove this prediction? Why was this person denied a loan? Can be generated for black-box models using tools like SHAP or LIME.
e.g., "The loan was denied because income was below threshold and debt-to-income ratio exceeded 40%."
Interpretability
"How does the model work overall?"
Intrinsic property of the model's structure — how inputs map to outputs globally. Simple models (decision trees) are inherently interpretable. Neural networks are not — they require external tools.
e.g., A decision tree where you can trace every branch manually — the logic is visible in the structure itself.
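To make the distinction concrete, here is a hedged sketch of post-hoc explainability using the SHAP library mentioned above (assumed installed alongside scikit-learn; the model and data are purely illustrative). The random forest is not inherently interpretable, so SHAP attributes each individual prediction to input features — the "why THIS decision" question.

```python
# Post-hoc explainability for a black-box model, sketched with SHAP
# (assumed installed: pip install shap scikit-learn).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes one specific prediction to input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])
print(shap_values)  # per-feature contributions for this single prediction
```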

The AI Lifecycle & Governance Touchpoints

Governance must be embedded throughout the entire AI lifecycle — not bolted on at deployment. Each stage introduces distinct risks and requires specific controls. The AIGP exam tests governance at every stage.

Stage 1
Problem Definition & Design
Governance first. Define purpose, scope, risk level, and human oversight requirements.
Stage 2
Data Collection & Preparation
Data quality, consent, bias audits, privacy assessment, labeling standards.
Stage 3
Model Selection & Training
Algorithm choice, fairness constraints, training documentation, version control.
Stage 4
Validation & Testing
Bias testing, red-teaming, performance benchmarks, safety checks, explainability review.
Stage 5
Deployment
Access controls, human oversight integration, user disclosure, rollback plans.
Stage 6
Monitoring & Maintenance
Drift detection, incident response, performance logging, ongoing bias review.
Stage 7
Retirement & Decommission
Data disposal, model archiving, documentation of outcomes, transition planning.

Key Exam Rule: Governance applies from Stage 1 — problem definition — not from deployment. The most consequential governance decisions (what the AI is for, what data it can use, what oversight is required) are made before a single line of code is written. Late-stage governance is reactive; early-stage governance is preventive.

📋 AI Lifecycle — Governance Controls by Stage
Stage | Key Governance Activity | Output / Artifact
Problem Definition | AI use case risk classification; define intended use and prohibited uses | Use Case Assessment, Risk Tier Classification
Data Collection | Privacy impact assessment; consent verification; bias audit on source data | Data Provenance Record, DPIA, Bias Assessment
Training | Document model architecture, hyperparameters, training data version | Model Card, Technical Documentation
Validation & Testing | Fairness testing across protected groups; adversarial testing; performance thresholds | Validation Report, Red Team Findings
Deployment | Human oversight design; user notification and disclosure; access controls | Deployment Authorization, User Notice
Monitoring | Drift detection; incident logging; periodic re-evaluation | Monitoring Dashboard, Incident Reports
Retirement | Safe data disposal; archive model documentation; communicate to affected users | Decommission Plan, Final Audit Report
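One way to internalize the table is as a stage gate: no stage closes until its artifacts exist. The sketch below is a hypothetical Python illustration of that idea — the artifact names mirror the table, but nothing here is a prescribed implementation.

```python
# Stage-gated governance checklist mirroring the table above.
# All names are illustrative, not drawn from any standard.
REQUIRED_ARTIFACTS = {
    "problem_definition": ["use_case_assessment", "risk_tier_classification"],
    "data_collection":    ["data_provenance_record", "dpia", "bias_assessment"],
    "training":           ["model_card", "technical_documentation"],
    "validation":         ["validation_report", "red_team_findings"],
    "deployment":         ["deployment_authorization", "user_notice"],
    "monitoring":         ["monitoring_dashboard", "incident_reports"],
    "retirement":         ["decommission_plan", "final_audit_report"],
}

def gate(stage: str, produced: set) -> bool:
    """A stage may close only when every required artifact exists."""
    missing = [a for a in REQUIRED_ARTIFACTS[stage] if a not in produced]
    if missing:
        print(f"{stage}: blocked, missing {missing}")
        return False
    return True

gate("problem_definition", {"use_case_assessment"})  # blocked: no risk tier yet
```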
📉 Model Drift — Why Monitoring Never Stops

Data drift: The statistical distribution of input data changes after deployment (e.g., customer behavior shifts post-pandemic). The model wasn't trained on this new distribution.

Concept drift: The relationship between inputs and outputs changes (e.g., what constitutes fraudulent behavior evolves). The model's predictions become systematically wrong even on similar inputs.

Governance implication: Continuous monitoring with predefined performance thresholds and re-training triggers is mandatory for high-stakes AI systems.
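As a concrete illustration of data-drift monitoring, the sketch below compares a feature's training-time distribution against production inputs with a two-sample Kolmogorov–Smirnov test (assuming Python with NumPy/SciPy; the data and the alert threshold are illustrative, not regulatory standards).

```python
# Data-drift check via a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)    # training-time inputs
production_feature = rng.normal(loc=0.4, scale=1.0, size=5000)  # shifted post-deployment

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # predefined trigger -> investigate / consider retraining
    print(f"data drift detected (KS={stat:.3f}, p={p_value:.2e})")
```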

📄 Model Cards & Datasheets

Model Card: A short document accompanying a trained model that describes its intended use, performance across demographic groups, limitations, and known failure modes. Developed by Google.

Datasheet for Datasets: Standardized documentation of a training dataset's motivation, composition, collection process, preprocessing, and recommended uses. Promotes transparency and accountability in data-driven AI.
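A model card can be machine-readable as well as human-readable. The sketch below is a hypothetical minimal schema — the field names loosely echo the Model Cards proposal, but the exact structure is this example's assumption.

```python
# Minimal machine-readable Model Card sketch. The schema is illustrative.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    prohibited_uses: list
    performance_by_group: dict   # e.g., accuracy per demographic group
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="credit-risk-v3",                      # hypothetical system
    intended_use="Consumer credit pre-screening with human review",
    prohibited_uses=["employment decisions", "fully automated denial"],
    performance_by_group={"group_a": 0.91, "group_b": 0.84},  # gap worth flagging
    known_limitations=["not validated on thin-file applicants"],
)
print(card)
```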

AI Risk Categories & Harms

AI risk is multi-dimensional. Governance professionals must understand each category, its causes, and the controls that mitigate it. The AIGP exam presents scenario-based questions that require mapping a harm to its risk type and appropriate response.

⚠️ Safety Risk
Physical or psychological harm caused by AI system failures, errors, or misuse. Especially critical for autonomous systems and high-stakes decision contexts.
e.g., Autonomous vehicle failure, medical AI misdiagnosis, AI-generated disinformation causing panic
🔒 Privacy Risk
Unauthorized collection, use, or re-identification of personal data. AI can infer sensitive attributes from seemingly innocuous data at scale.
e.g., Facial recognition surveillance, re-identification from anonymized datasets, behavioral profiling
⚖️ Bias & Fairness Risk
Systematic errors producing discriminatory outcomes for protected groups. Can arise from training data, model design, or deployment context.
e.g., Hiring algorithm scoring women lower than equally qualified men, recidivism tool producing racially disparate results
🛡️ Security Risk
Attacks targeting AI systems to manipulate behavior, steal models or data, or cause failures. Unique attack vectors compared to traditional software.
e.g., Adversarial inputs, model inversion attacks, data poisoning, prompt injection
🏛️ Accountability Gap
Unclear or absent responsibility for AI-caused harms. Diffuse supply chains (data providers, developers, deployers) make attribution difficult.
e.g., Third-party AI vendor causes harm — who is responsible: the vendor, the deploying organization, or both?
🔍 Transparency & Explainability Risk
Inability to explain AI decisions to affected individuals or regulators. Reduces trust, prevents appeal, and may violate legal rights.
e.g., Black-box credit denial with no explanation provided; unexplainable parole recommendation
🔎 Types of AI Bias — Where It Enters the System
Bias Type | Where It Originates | Example
Historical bias | Training data reflects past societal discrimination | Job ad algorithm deprioritizes women because historical hiring data was male-dominated
Representation bias | Training data underrepresents certain groups | Facial recognition trained mostly on lighter-skinned faces performs worse on darker-skinned faces
Measurement bias | Proxy variables introduced during data collection | Using zip code as a proxy for creditworthiness encodes racial segregation patterns
Aggregation bias | One model applied to groups with different characteristics | Medical model trained on predominantly male data applied equally to female patients
Deployment bias | Model used in a context different from what it was trained for | Resume-screening tool trained on tech roles applied to non-tech hiring
Feedback loop bias | Model outputs influence future training data | Predictive policing tool increases arrests in over-patrolled areas → more data → more predictions → more policing
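The most basic audit that surfaces representation and aggregation bias is simply disaggregating performance by group. A sketch on synthetic data (every number here is invented for illustration):

```python
# Per-group performance audit on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["a", "b"], size=1000, p=[0.9, 0.1])  # group b underrepresented
# Simulate a model that is less accurate on the underrepresented group:
correct = np.where(group == "a", rng.random(1000) < 0.92,
                                 rng.random(1000) < 0.75)

for g in ("a", "b"):
    mask = group == g
    print(f"group {g}: n={mask.sum():4}  accuracy={correct[mask].mean():.2f}")
# A large per-group accuracy gap is a governance finding, not just a metric.
```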
🔐 AI-Specific Security Threats

Adversarial Inputs: Carefully crafted inputs designed to fool the model — a stop sign with stickers that an autonomous vehicle misclassifies.

Data Poisoning: Injecting malicious data into the training set to corrupt model behavior — training a spam filter to pass certain phishing emails.

Model Inversion: Querying a model repeatedly to reconstruct sensitive training data — recovering personal information from a medical model.

Prompt Injection: Embedding malicious instructions in user inputs to override the LLM's intended behavior or safety controls.
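Adversarial inputs are easiest to demystify on a linear model, where the worst-case perturbation direction is known in closed form. A hedged sketch (scikit-learn assumed; attacking a deep network is more involved, but the principle — tiny, targeted input changes flipping the output — is the same):

```python
# FGSM-style adversarial perturbation against a linear classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[:1]
original = model.predict(x)[0]
w = model.coef_[0]
# For a linear model, moving each feature against the chosen class's weight
# sign is the most efficient attack per unit of L-infinity budget.
direction = np.sign(w) * (-1 if original == 1 else 1)

eps = 0.0
while model.predict(x + eps * direction)[0] == original and eps < 10:
    eps += 0.05
print(f"prediction flipped from class {original} at eps={eps:.2f}")
```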

🎯 Risk Assessment Approaches for AI

Risk Tiering: Classify AI use cases by potential harm severity and scope. High-risk = more controls. This is how the EU AI Act structures its obligations.

AI Impact Assessment: Pre-deployment analysis of potential harms across population groups, similar to a Privacy Impact Assessment but broader in scope.

Red Teaming: Adversarial testing where a dedicated team attempts to make the AI system fail, produce harmful outputs, or be manipulated — identifies risks before deployment.
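Risk tiering can be pictured as a severity-by-scope matrix. The function below is purely illustrative — the category labels are invented for this sketch and are not the EU AI Act's actual tiers.

```python
# Illustrative severity x scope risk-tiering matrix (hypothetical labels).
def risk_tier(severity: str, scope: str) -> str:
    high_severity = {"severe", "irreversible"}
    broad_scope = {"population-wide", "protected-group"}
    if severity in high_severity and scope in broad_scope:
        return "high — full controls, human oversight, pre-deployment review"
    if severity in high_severity or scope in broad_scope:
        return "medium — targeted controls and monitoring"
    return "low — baseline documentation"

print(risk_tier("severe", "population-wide"))  # -> high tier
```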

Responsible AI Principles

Responsible AI is the normative core of AI governance. These principles — drawn from international consensus — define what "good AI" looks like and provide the ethical baseline that laws and frameworks operationalize.

Principles ≠ Rules: Responsible AI principles are not checklists — they are values that must be interpreted and applied to specific contexts. The same principle (e.g., fairness) may require different technical implementations depending on the use case, affected population, and legal context.

The 5 OECD AI Principles (2019)

Significance: The OECD AI Principles (2019) were the first internationally agreed-upon, government-backed standards for responsible AI — adopted by 42+ countries. They are non-binding but highly influential, shaping the EU AI Act, US AI policy, and every major national AI strategy.

1. Inclusive growth, sustainable development and well-being
2. Human-centered values and fairness
3. Transparency and explainability
4. Robustness, security and safety
5. Accountability

Core Responsible AI Principles — Exam Definitions

Principle | Definition | Practical Application
Fairness | AI must not discriminate unjustifiably against individuals or groups based on protected characteristics | Bias audits across demographic groups; fairness metrics (equal opportunity, demographic parity)
Transparency | Organizations must be open about what AI systems do, how they work, and when AI is being used | User notices, model cards, public disclosures of AI use in consequential decisions
Accountability | Someone must be responsible for AI outcomes — humans cannot hide behind algorithmic decisions | Clear ownership of AI systems; audit trails; grievance and redress mechanisms
Explainability | AI decisions, especially consequential ones, must be explainable to affected individuals and regulators | Explainability tools (SHAP, LIME); human-readable justifications for automated decisions
Safety | AI must not cause physical or psychological harm; must be designed with fail-safes and risk mitigation | Safety testing, red teaming, human-in-the-loop for high-risk decisions
Human Oversight | Meaningful human control must be maintained, especially for high-impact AI decisions | Human-in-the-loop design; override mechanisms; prohibition on fully automated high-stakes decisions
Privacy | AI must respect individuals' rights to control their personal data and be free from undue surveillance | Data minimization, purpose limitation, consent management, anonymization
Beneficence | AI should produce positive outcomes for individuals, society, and the environment | Impact assessments; benefit analysis; equitable access to AI benefits
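The fairness row above names two metrics worth being able to compute. The sketch below calculates both on synthetic data — demographic parity compares selection rates while equal opportunity compares true-positive rates, and the two can disagree for the same model.

```python
# Two fairness metrics on synthetic outcomes (all data invented).
import numpy as np

rng = np.random.default_rng(2)
group = rng.choice(["a", "b"], size=2000)
y_true = rng.integers(0, 2, size=2000)   # ground truth: qualified or not
y_pred = rng.integers(0, 2, size=2000)   # model's approve/deny decisions

def selection_rate(mask):       # demographic parity compares these
    return y_pred[mask].mean()

def true_positive_rate(mask):   # equal opportunity compares these
    positives = mask & (y_true == 1)
    return y_pred[positives].mean()

for g in ("a", "b"):
    m = group == g
    print(f"group {g}: selection_rate={selection_rate(m):.2f}  "
          f"TPR={true_positive_rate(m):.2f}")
```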

Essential AI Governance Terminology

Algorithm
A set of rules or instructions a computer follows to solve a problem or make a decision
Model
The output of training an ML algorithm on data — a mathematical function that maps inputs to predictions
Training Data
The dataset used to teach an ML model — its quality, diversity, and representativeness directly shape model behavior
Inference
Using a trained model to make predictions or generate outputs on new, unseen data
Bias
Systematic error in model outputs that produces unfair, inaccurate, or discriminatory results for certain groups
Model Drift
Degradation in model performance over time as real-world data distributions change from training conditions
Hallucination
Generative AI producing plausible-sounding but factually incorrect or fabricated information
Foundation Model
A large model trained on broad data and adaptable to many tasks via fine-tuning or prompting (e.g., GPT-4, Claude)
Overfitting
Model memorizes training data too closely — performs well on training data but poorly on new, unseen data (a short demonstration follows this glossary)
Human-in-the-Loop
Design pattern requiring human review or approval before AI decisions take effect — especially for high-stakes outcomes
Red Teaming
Adversarial testing where experts attempt to find failures, harmful outputs, or safety gaps before deployment
Model Card
Standardized documentation of a model's intended use, performance, limitations, and ethical considerations
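The overfitting demonstration promised above: an unconstrained decision tree memorizes its training set but loses accuracy on held-out data (scikit-learn assumed; the dataset is synthetic, with label noise added so the gap is visible).

```python
# Overfitting made visible: perfect train accuracy, lower test accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, flip_y=0.2,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # no depth limit
print("train accuracy:", tree.score(X_tr, y_tr))  # ~1.00 — memorized
print("test accuracy: ", tree.score(X_te, y_te))  # noticeably lower
```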

Practice Quiz — Foundations of AI Governance

10 AIGP-style scenario questions. Each question is followed by the correct answer and a short explanation.

Question 1 of 10
Which type of AI is designed for a SINGLE specific task and represents virtually all AI systems commercially deployed today?
A. Artificial General Intelligence (AGI) — human-level reasoning across all domains
B. Narrow AI (ANI) — capable within a specific domain only
C. Artificial Superintelligence (ASI) — beyond human-level intelligence
D. Symbolic AI — rule-based expert systems
Correct answer: B. Narrow AI (ANI) is designed for one specific task and cannot transfer that capability to other domains. Every AI system deployed commercially today — spam filters, LLMs, image recognition, fraud detection — is ANI. AGI and ASI remain theoretical and hypothetical respectively.
Question 2 of 10
A streaming platform's recommendation engine analyzes viewing patterns without predefined labels to group users with similar tastes. Which machine learning approach does this describe?
A. Supervised Learning — it learns from labeled user preferences
B. Reinforcement Learning — it receives rewards for correct recommendations
C. Unsupervised Learning — it discovers patterns in unlabeled behavioral data
D. Transfer Learning — it adapts a pretrained model to this specific task
Correct answer: C. Unsupervised learning finds hidden structure in unlabeled data — in this case, clustering users based on behavioral patterns without being told which groups exist. Supervised learning would require labeled examples of "this user likes this genre." Reinforcement learning requires reward signals, not just data patterns.
Question 3 of 10
At which stage of the AI lifecycle should governance controls FIRST be applied?
A. Model training — when the technical risks first materialize
B. Deployment — when real users are affected
C. Monitoring — once the model is in production
D. Problem definition and design — before development begins
Correct answer: D. Governance must begin at problem definition — the first stage of the AI lifecycle. This is where the most consequential decisions are made: what the AI will do, what data it can use, who will oversee it, and what risks are acceptable. Late governance is reactive and costly; early governance is preventive.
Question 4 of 10
A hiring algorithm trained on 10 years of historical employee data consistently ranks female candidates lower than equally qualified male candidates. This MOST likely reflects which type of AI bias?
A. Deployment bias — the model is being used in the wrong context
B. Overfitting — the model memorized training data too closely
C. Historical bias — the training data reflects past discriminatory hiring practices
D. Model drift — the model's performance has degraded over time
Correct answer: C. Historical bias occurs when training data reflects past societal discrimination. If the company historically hired fewer women, that pattern is encoded in the training data — and the model learns to replicate it. This is exactly what happened with Amazon's real-world hiring tool, which was scrapped in 2018.
Question 5 of 10
Which responsible AI principle focuses on ensuring that AI-driven decisions can be understood and communicated to the individuals they affect?
A. Fairness — AI must not discriminate against protected groups
B. Safety — AI must not cause physical or psychological harm
C. Explainability — AI decisions must be understandable to affected parties
D. Accountability — someone must be responsible for AI outcomes
Correct answer: C. Explainability refers to the ability to communicate why an AI system reached a specific decision — in terms meaningful to the affected individual. It differs from interpretability (understanding the model's internal mechanics). Explainability obligations are commonly linked to GDPR Article 22, which restricts solely automated decisions with legal or similarly significant effects.
Question 6 of 10
A generative AI model confidently states a specific court ruling as precedent in a legal brief, but the ruling does not exist. This is an example of which AI phenomenon?
A. Model drift — the model's performance has degraded since training
B. Adversarial attack — a bad actor manipulated the model's output
C. Overfitting — the model memorized its training data too precisely
D. Hallucination — the model generated plausible but fabricated content
Correct answer: D. Hallucination occurs when generative AI produces confidently stated but factually incorrect or entirely fabricated content. This is a well-documented failure mode of LLMs — two lawyers were sanctioned in 2023 for submitting AI-generated briefs citing nonexistent cases. Governance controls include human review requirements for high-stakes outputs.
Question 7 of 10
What is the key distinction between explainability and interpretability in AI systems?
A. They are the same concept used interchangeably across all frameworks
B. Explainability addresses WHY a specific decision was made; interpretability addresses HOW the model works overall
C. Interpretability applies only to neural networks; explainability applies to simpler models
D. Explainability is a technical metric; interpretability is a legal standard
Correct answer: B. Explainability = post-hoc justification of a specific decision ("why was this loan denied?"). Interpretability = understanding the model's internal structure globally ("how does this model map inputs to outputs?"). Simple models (decision trees) are inherently interpretable. Black-box models require external explainability tools like SHAP or LIME.
Question 8 of 10
Six months after deployment, a credit risk model begins producing inconsistent predictions for similar applicant profiles. Investigation reveals the economic conditions the model was trained on no longer reflect current reality. This is BEST described as:
A. Model hallucination
B. Adversarial data poisoning
C. Concept drift — the relationship between inputs and outputs has changed
D. Overfitting during original training
Correct answer: C. Concept drift occurs when the underlying relationship between input features and the target outcome changes after deployment. In this case, economic conditions have shifted, so the model's learned patterns no longer hold. This is distinct from data drift (input distribution changes) and requires ongoing monitoring with clear re-training triggers.
Question 9 of 10
Which responsible AI principle requires that organizations — not algorithms — must answer for AI-caused harms, and that mechanisms for human oversight and redress must exist?
A. Transparency
B. Fairness
C. Explainability
D. Accountability
Correct answer: D. Accountability is the principle that human actors — developers, deployers, organizations — cannot hide behind algorithmic decisions. Someone must be responsible for AI outcomes. Accountability requires audit trails, clear ownership, human oversight mechanisms, and grievance processes for affected individuals. It is the 5th OECD AI Principle.
Question 10 of 10
The OECD AI Principles (2019) were significant in international AI governance primarily because they:
A. Created binding legal obligations on all member countries to enact specific AI legislation
B. Replaced existing data protection laws with a unified AI regulatory framework
C. Established the first internationally agreed-upon, government-backed standards for responsible AI development
D. Mandated that all AI systems must be explainable before deployment in any member state
Correct answer: C. The OECD AI Principles are non-binding but represent the first internationally agreed-upon, government-endorsed standards for responsible AI. Adopted by 42+ countries, they established a shared normative vocabulary for AI governance that directly influenced the EU AI Act, US AI policy, and national AI strategies worldwide. They are not legally binding and do not replace existing laws.

Review explanations above for any missed questions.

Memory Hooks & AI Advisor

Lock in the most exam-tested foundations concepts. The AI Advisor section below provides focused deep-dive guidance by category.

🤖
AI Capability Tiers — NAS
Narrow AI = all AI today (single-task). AGI = theoretical (human-level reasoning across domains). Superintelligence = hypothetical (beyond human). Governance exam only asks about ANI and the risks of the others.
"Now AI Succeeds" — Narrow today, AGI theory, Super hypothetical
📚
ML Approaches — SUR
Supervised = labeled data, predicts outcomes. Unsupervised = unlabeled data, finds patterns. Reinforcement = reward-based trial and error. Foundation models = self-supervised pretraining at massive scale.
"Students Usually Respond well" — Supervised, Unsupervised, Reinforcement
🔄
AI Lifecycle — Governance from Stage 1
7 stages: Problem Definition → Data → Training → Validation → Deployment → Monitoring → Retirement. Governance starts at Stage 1, NOT deployment. Most important decisions (purpose, scope, oversight) are made at the START.
"Please Don't Make Very Dumb Mistakes Repeatedly"
⚖️
Bias Types — Where It Enters
Historical (past discrimination in data), Representation (underrepresented groups), Measurement (bad proxy variables), Aggregation (one model for diverse groups), Deployment (wrong context), Feedback Loop (model outputs contaminate future data).
"History Rarely Makes Accurate Decisions Forward"
🔍
Explain vs Interpret
Explainability = WHY this specific decision was made (post-hoc, per decision). Interpretability = HOW the model works overall (intrinsic to model structure). Black-box models need external tools (SHAP, LIME) for explainability.
"Explain = this decision; Interpret = this model"
🌍
OECD 5 Principles — IHTRS
Inclusive growth, Human-centered values & fairness, Transparency & explainability, Robustness, security & safety, Accountability. First internationally agreed non-binding AI standards (2019). 42+ countries adopted.
"I Help Teams Run Accountability" — Inclusive, Human, Transparent, Robust, Accountable
👁️
Hallucination
Generative AI produces confident but false content — fabricated citations, invented facts, nonexistent legal cases. Not a bug — an emergent property of probabilistic text generation. Governance: require human review for high-stakes LLM outputs.
"Confident ≠ Correct — always verify LLM facts"
📉
Model Drift — Two Types
Data drift: Input distribution changes (new patterns the model hasn't seen). Concept drift: The input-output relationship itself changes (what was true when trained is no longer true). Both require monitoring and re-training triggers.
"Data drifts in; Concepts drift away — monitor both"

Flashcards

Each card below pairs a question with its answer.

AI Types

What distinguishes Narrow AI from AGI, and which exists today?

Answer

Narrow AI (ANI) handles ONE specific task — all AI today. AGI reasons across ALL domains like a human — theoretical only. No AGI exists commercially.

ML Approaches

What are the 3 core ML approaches and their defining characteristic?

Answer

Supervised = labeled data → predictions. Unsupervised = unlabeled data → patterns. Reinforcement = rewards → optimized behavior over time.

AI Lifecycle

At which lifecycle stage does governance FIRST apply, and why?

Answer

Stage 1: Problem Definition. The most consequential decisions — purpose, scope, oversight design — are made here. Late governance is reactive; early governance prevents harm.

Bias

What is historical bias and why is it common in AI hiring tools?

Answer

Historical bias: training data reflects past discrimination. If past hiring was male-dominated, the model learns to replicate that pattern — even without using gender as an explicit feature.

Explainability

What is the difference between explainability and interpretability?

Answer

Explainability = WHY this specific decision was made (post-hoc, per-decision). Interpretability = HOW the model works overall (global, model-structural). SHAP/LIME provide explainability for black-box models.

OECD Principles

What are the 5 OECD AI Principles and are they legally binding?

Answer

1. Inclusive Growth 2. Human-Centered Values & Fairness 3. Transparency & Explainability 4. Robustness, Security & Safety 5. Accountability. Non-binding but first internationally agreed AI standards (42+ countries, 2019).

Hallucination

What is AI hallucination and what governance control addresses it?

Answer

Hallucination: generative AI produces confident but factually wrong or fabricated outputs. Control: mandatory human review for high-stakes LLM outputs; user disclosures about AI limitations.

Model Drift

What is the difference between data drift and concept drift?

Answer

Data drift: statistical distribution of inputs changes after deployment. Concept drift: the underlying relationship between inputs and outputs changes. Both require continuous monitoring and re-training triggers.

AI Advisor

Focused exam guidance, organized into five categories:

AI & ML Concepts
AI Lifecycle & Governance
AI Risk Categories
Responsible AI Principles
Key Terminology

AI & ML Concepts

  • All deployed AI is Narrow AI: Every real-world AI system — including GPT-4, Claude, and image recognition — is ANI. AGI is theoretical. The AIGP exam won't ask you to design AGI, but may ask what governance challenges it would pose.
  • Supervised learning = most common: Labeled data + prediction task. If someone describes a model that was "trained with examples of correct outputs," that's supervised learning.
  • Unsupervised ≠ uncontrolled: Unsupervised doesn't mean ungoverned. Clustering algorithms that segment customers without labels can still encode bias in their grouping criteria.
  • Foundation models shift governance complexity: A single foundation model underlies thousands of applications. Governance failures upstream (in pretraining data) propagate to all downstream uses.
  • Hallucination is structural, not a bug: LLMs generate text probabilistically — the next most likely token. They have no internal truth-checking mechanism. Governance must assume hallucination risk exists in all LLM outputs.
  • Explainability vs Interpretability: Explainability answers "why THIS decision?" — post-hoc, per-instance. Interpretability answers "how does THIS model work?" — global, structural. Black-box models need external XAI tools (SHAP, LIME) for explainability.
  • Simple models sacrifice accuracy for transparency: A linear regression is fully interpretable but may be less accurate than a deep neural network. This tradeoff is a governance decision — high-stakes decisions may require interpretable models even at accuracy cost.

AI Lifecycle & Governance

  • Governance starts at Stage 1: Problem definition is where the highest-leverage governance decisions are made. If an AI use case is fundamentally harmful, no amount of post-deployment monitoring fixes it.
  • Data stage is the most common bias entry point: Training data quality, representativeness, labeling accuracy, and consent are all governance obligations at Stage 2 — before any model is trained.
  • Model Cards are governance artifacts: Not just technical documents. A Model Card that honestly describes limitations, performance across demographic groups, and known failure modes is a transparency and accountability tool.
  • Retirement is a governance stage: Decommissioning an AI system requires data disposal plans, user notification, model archiving, and a final audit. The AIGP exam may test whether you recognize retirement as part of the governance lifecycle.
  • Monitoring is never optional for high-risk AI: Concept drift and data drift can cause a well-designed model to produce harmful outcomes months after deployment. Continuous monitoring with predefined alerting thresholds is mandatory governance practice.
  • Human-in-the-loop placement matters: Human review BEFORE a decision takes effect (meaningful oversight) differs fundamentally from human review after harm has occurred (reactive oversight). Governance frameworks require the former for high-stakes decisions.

AI Risk Categories

  • Bias is both a technical and ethical failure: Algorithmic bias isn't just a bad prediction — it's a potential civil rights violation. Disparate impact on protected classes can create legal liability even when discrimination is unintentional.
  • Historical bias is the most common: Most real-world bias originates in training data that reflects historical patterns of societal discrimination. Cleaning the data doesn't automatically remove the bias — the patterns are encoded in the statistical relationships.
  • Feedback loops amplify bias over time: When a model's outputs influence future training data (predictive policing → more arrests in targeted areas → more data confirming predictions), bias compounds. Governance must break these loops — see the toy simulation after this list.
  • Adversarial attacks are AI-specific: Traditional software security doesn't address model inversion, data poisoning, or adversarial inputs. AI security governance requires AI-specific threat modeling.
  • Accountability gaps are organizational: "The algorithm decided" is not a legal defense. Governance frameworks explicitly require organizations to identify accountable humans for all AI-driven outcomes.
  • Privacy risk is broader than GDPR: AI can infer sensitive attributes (health, sexual orientation, political views) from innocuous data at scale. Privacy governance for AI goes beyond consent — it requires purpose limitation, data minimization, and re-identification risk assessment.
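The toy simulation referenced above: two areas with identical ground-truth rates, where patrol allocation follows arrest counts and arrest counts follow patrol. Every number is invented; the lock-in effect is the point.

```python
# Feedback-loop sketch: identical true incident rates, yet whichever area
# pulls ahead early in arrest counts gets locked in. All values illustrative.
import numpy as np

rng = np.random.default_rng(3)
patrol = np.array([0.5, 0.5])   # start with even patrol allocation
arrests = np.zeros(2)

for _ in range(20):
    arrests += rng.poisson(10 * patrol)       # arrests scale with patrol, not crime
    leader = int(arrests[0] < arrests[1])
    patrol = np.array([0.2, 0.2])
    patrol[leader] = 0.8                      # model sends patrol to the "hot" area

print("arrest share per area:", np.round(arrests / arrests.sum(), 2))
# Despite identical ground truth, one area ends up dominating the data.
```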

Responsible AI Principles

  • OECD Principles are non-binding but foundational: 42+ countries adopted them in 2019. They provide the normative vocabulary used by the EU AI Act, US AI policy, and virtually every national AI strategy. The exam treats them as the shared baseline.
  • Principles require interpretation: "Fairness" means different things statistically — demographic parity vs. equal opportunity vs. predictive parity can all be defined as fair but are mathematically incompatible. Context determines which applies.
  • Accountability ≠ Transparency: Transparency means being open about what the AI does. Accountability means someone is responsible for what it does. A system can be transparent (fully documented) but still have no accountable owner.
  • Human oversight has degrees: Human-in-the-loop (humans approve each decision) vs. human-on-the-loop (humans can override) vs. human-in-command (humans can shut down) represent different oversight levels. High-risk AI requires the first — see the sketch after this list.
  • UNESCO Recommendation (2021): Broader than OECD — 193 member states. Introduces environmental sustainability and right to privacy as explicit AI governance concerns alongside the OECD principles.
  • Beneficence vs Non-maleficence: Responsible AI requires both — designing AI to produce positive outcomes AND to avoid harm. The absence of active harm is not sufficient; the system must also deliver genuine benefit.
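The sketch referenced in the oversight bullet above: the three oversight degrees expressed as a simple decision gate. Names, thresholds, and policy here are all hypothetical assumptions, not a framework's actual requirements.

```python
# Oversight degrees as a decision gate (illustrative policy only).
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "human approves each decision before it takes effect"
    ON_THE_LOOP = "human monitors and can override after the fact"
    IN_COMMAND = "human can halt or decommission the whole system"

def required_oversight(risk_tier: str) -> Oversight:
    # Hypothetical policy: high-risk decisions need pre-decision approval.
    return Oversight.IN_THE_LOOP if risk_tier == "high" else Oversight.ON_THE_LOOP

def decide(ai_recommendation: str, risk_tier: str, human_approved: bool) -> str:
    if required_oversight(risk_tier) is Oversight.IN_THE_LOOP and not human_approved:
        return "held for human review"   # the decision does NOT take effect yet
    return ai_recommendation

print(decide("deny loan", risk_tier="high", human_approved=False))
```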

Key Terminology

  • Algorithm vs Model: An algorithm is the method (e.g., random forest). A model is the output of running that algorithm on specific training data. Multiple models can use the same algorithm but behave differently based on data.
  • Inference: Using a trained model to make predictions on new data. Most AI harm occurs at inference time — during deployment — not during training. Governance at inference requires monitoring, logging, and access controls.
  • Overfitting vs Generalization: An overfit model memorizes training data and fails on new inputs. Good generalization is a governance concern — a model that only works on training data provides false confidence and fails in production.
  • Foundation model ≠ fine-tuned model: The base foundation model is pretrained at scale. Fine-tuning adapts it for a specific task. Governance must address both — base model risks AND risks introduced by fine-tuning data and process.
  • Red teaming: Adversarial testing by a dedicated team attempting to break the AI system — finding safety failures, harmful outputs, bias, security vulnerabilities — before deployment. Now required by several AI frameworks and regulations.
  • Model Card: Standardized documentation of a model's intended use, performance characteristics, limitations, and demographic performance disparities. Increasingly required by regulators and treated as a governance artifact, not just a technical document.
  • Human-in-the-loop: A system design where a human must review and approve AI outputs before they take effect. Critical for high-stakes domains (hiring, lending, healthcare, criminal justice). Contrasts with fully automated decision-making.

Reinforce Your AIGP Foundations

Deepen your understanding with targeted AIGP flashcard decks on FlashGenius — covering all four AIGP domains.

Unlock Full Flashcard Deck on FlashGenius →