AIGP Exam · Domain II of IV · IAPP Certification

Laws, Standards & Frameworks

Understand how the global regulatory landscape applies to AI — the EU AI Act's risk tiers, GDPR's automation rights, NIST AI RMF, ISO/IEC 42001, and international governance instruments.

EU AI Act · GDPR Article 22 · NIST AI RMF · ISO/IEC 42001 · CoE AI Treaty · Global Landscape
Domain II · AIGP Body of Knowledge
At a glance: 4 EU AI Act risk tiers · 4 NIST AI RMF functions · ISO/IEC 42001, the AI management system standard · the 1st binding AI treaty (CoE)

The Global AI Regulatory Landscape

Domain II shifts from "what is AI" to "how is AI governed." You must understand not just what specific laws say, but how different types of regulatory instruments work — and how to apply them to real governance decisions.

The Governance Gap: AI capabilities have outpaced the legal frameworks designed to govern them. Every major jurisdiction is now racing to fill this gap — using different approaches, different timelines, and different enforcement mechanisms. The AIGP exam tests your ability to navigate this patchwork.

Types of Regulatory Instruments

Hard Law · Binding
Regulations & Treaties
Legally enforceable. Non-compliance carries penalties. Applies whether or not an organization agrees.
EU AI Act (2024) · Council of Europe AI Treaty (2024) · GDPR Article 22
Standards · Certifiable
ISO/IEC Standards
Voluntary unless referenced by law. Provide technical specifications; organizations can certify against them.
ISO/IEC 42001:2023 (AI Management) · ISO/IEC 23894 (AI Risk) · ISO/IEC 38507 (AI Governance)
Frameworks · Voluntary
Government Frameworks
Not legally required but represent authoritative guidance. Widely adopted. Can become de facto standard.
NIST AI Risk Management Framework · NIST AI RMF Playbook
Soft Law · Principles
Recommendations & Guidelines
Non-binding but politically significant. Shape future hard law. Signal international consensus.
OECD AI Principles (2019) · UNESCO AI Ethics Recommendation (2021) · G7 Hiroshima AI Process

By Region: Key Instruments to Know

🇪🇺
European Union
Strictest
EU AI Act (2024)
GDPR, Art. 22 (2018)
AI Liability Directive (pending)
🇺🇸
United States
Sector-based
NIST AI RMF (2023)
EO 14110 (Biden, revoked 2025)
FDA/FTC AI rules (sector-specific)
🇨🇳
China
Regulator-led
Generative AI Regulation (2023)
Algorithm Recommendation Regulation (2022)
Deep Synthesis Regulation (2022)
🌐
International
Consensus-building
CoE AI Treaty (binding)
OECD AI Principles (soft law)
UNESCO Recommendation (soft law)
📅 Key Regulatory Timeline
Year | Instrument | Significance
2018 | GDPR takes effect | Article 22 — first major legal right regarding automated decisions
2019 | OECD AI Principles | First intergovernmental AI principles; adopted by 42+ countries
2021 | UNESCO Recommendation on AI Ethics | First global normative instrument; adopted by 193 member states
2022 | NIST AI RMF draft / China Algorithm Regs | US voluntary framework; China's sector-specific approach
2023 | NIST AI RMF 1.0 · ISO/IEC 42001 · China GenAI Reg | First AI management system standard; China first to regulate GenAI
2024 | EU AI Act signed · CoE AI Treaty opened for signature | World's first comprehensive AI law; first binding international AI treaty
2026 | EU AI Act fully applicable | High-risk AI requirements enforced; phased compliance from 2025

EU AI Act

Signed June 2024, the EU AI Act is the world's first comprehensive, horizontal AI regulation. It uses a risk-based approach — the higher the risk, the stricter the requirements. AIGP candidates must know the four risk tiers and the specific requirements for high-risk AI.

Risk-Based Logic: The Act categorizes AI systems by the risk they pose to health, safety, and fundamental rights — not by technology type. The same underlying algorithm could be minimal risk in one use case and high risk in another.

The Four Risk Tiers

Unacceptable Risk Prohibited — Banned outright
Obligation: Complete prohibition on placing on market or using
Banned practices include: AI systems that deploy subliminal or manipulative techniques; social scoring by governments; real-time biometric identification in public spaces by law enforcement (with narrow exceptions); emotion recognition in workplace or educational settings; predictive policing based solely on profiling; AI that exploits vulnerabilities of specific groups (age, disability, socioeconomic status).
High Risk Permitted — With strict obligations
Obligation: Conformity assessment, registration in EU database, post-market monitoring
High-risk domains: Critical infrastructure (energy, water, transport); education & vocational training; employment & HR management (CV screening, performance monitoring); essential private/public services (credit scoring, insurance, benefits); law enforcement (risk assessment, polygraphs); border control & migration; administration of justice & democratic processes.
Limited Risk Permitted — Transparency obligations only
Obligation: Disclose AI nature to users; label synthetic content
Applies to: Chatbots (must inform users they are interacting with AI); deepfakes and synthetic media (must be labeled as AI-generated); AI systems that generate text published for informational purposes. No conformity assessment required.
Minimal / No Risk Permitted — No mandatory obligations
Obligation: None mandated (voluntary codes of practice encouraged)
Examples: AI spam filters, AI-enabled video games, AI-powered search engines, recommendation systems (unless deemed limited/high risk in specific contexts). The vast majority of AI products in use today fall in this tier.
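The tiering logic above can be sketched as a simple decision procedure. The following is a minimal, hypothetical study aid — the practice and domain labels are invented shorthand, not the Act's legal definitions, and real classification requires legal analysis of the specific deployment:

```python
# Hypothetical sketch of EU AI Act risk-tier triage. The label sets below
# are simplified study aids, NOT the Act's legal categories.

PROHIBITED_PRACTICES = {
    "subliminal manipulation",
    "government social scoring",
    "realtime public biometric id by law enforcement",
    "workplace emotion recognition",
}
HIGH_RISK_DOMAINS = {
    "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration", "justice",
}
TRANSPARENCY_ONLY = {"chatbot", "deepfake", "synthetic media"}

def classify_use_case(practice: str, domain: str) -> str:
    """Return the risk tier for a (practice, domain) pair, checking the
    strictest tier first — prohibition trumps everything else."""
    if practice in PROHIBITED_PRACTICES:
        return "unacceptable: prohibited outright"
    if domain in HIGH_RISK_DOMAINS:
        return "high: conformity assessment + EU database registration"
    if practice in TRANSPARENCY_ONLY:
        return "limited: disclosure/labeling obligations only"
    return "minimal: no mandatory obligations"
```

Note how the same technology lands in different tiers by context: a chatbot in a retail setting is limited risk, but the same chatbot screening job applicants falls under the high-risk employment domain — exactly the risk-based logic the Act uses.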

High-Risk AI: Mandatory Requirements

Req 1
Risk Management System
Ongoing process to identify, analyze, and mitigate risks throughout the entire lifecycle of the AI system.
Req 2
Data & Data Governance
Training, validation, and testing data must meet quality criteria. Bias assessment required. Data must be relevant, representative, and free from errors.
Req 3
Technical Documentation
Detailed documentation enabling competent authorities to assess compliance. Must be kept up to date throughout the system's lifecycle.
Req 4
Transparency & Instructions
Designed to be sufficiently transparent so deployers can interpret and use outputs appropriately. Must include clear instructions for use.
Req 5
Human Oversight
Must be designed to be effectively overseen by natural persons. Humans must be able to understand, monitor, intervene, override, and stop the AI system.
Req 6
Accuracy, Robustness & Cybersecurity
Achieves appropriate accuracy levels. Resilient against errors, faults, and inconsistencies. Protected against adversarial inputs and cybersecurity threats.
General-Purpose AI (GPAI) — Additional Category
Foundation Models & Large Language Models
All GPAI Models Must
  • Maintain technical documentation
  • Comply with EU copyright law
  • Publish a summary of training data
  • Document known or estimated energy consumption
Systemic Risk GPAI (High-Capability) Also Must
  • Perform model evaluations (red-teaming)
  • Assess & mitigate systemic risks
  • Report serious incidents to Commission
  • Ensure adequate cybersecurity protections
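The two-level GPAI obligation model can be expressed as a small sketch. The obligation strings paraphrase the bullets above, and the `systemic_risk` flag is a deliberate simplification of the Act's high-capability threshold:

```python
# Hypothetical sketch: GPAI obligations accumulate — systemic-risk models
# must meet the base obligations PLUS the additional ones.

BASE_GPAI_OBLIGATIONS = [
    "maintain technical documentation",
    "comply with EU copyright law",
    "publish a summary of training data",
    "document energy consumption",
]
SYSTEMIC_RISK_EXTRAS = [
    "perform model evaluations / red-teaming",
    "assess and mitigate systemic risks",
    "report serious incidents to the Commission",
    "ensure adequate cybersecurity protections",
]

def gpai_obligations(systemic_risk: bool) -> list[str]:
    """All GPAI models carry the base duties; high-capability models add more."""
    obligations = list(BASE_GPAI_OBLIGATIONS)
    if systemic_risk:
        obligations += SYSTEMIC_RISK_EXTRAS
    return obligations
```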
📅 EU AI Act Implementation Timeline
Date | Milestone
June 2024 | EU AI Act formally signed
July 2024 | Act published in the Official Journal
August 2024 | Act enters into force (20 days after publication)
February 2025 | Prohibited practices (Unacceptable Risk) apply
August 2025 | GPAI / General-Purpose AI rules apply; governance provisions apply
August 2026 | High-risk AI requirements fully apply; main body of Act applicable
August 2027 | High-risk AI embedded in regulated products (Annex I) must comply

Privacy Law & AI Governance

AI systems are privacy law's most challenging frontier. They consume vast personal data during training, make automated decisions that affect individuals' lives, and generate outputs that can reveal sensitive information. Every AI governance professional must understand the privacy law obligations that attach to AI.

Key Exam Principle: Privacy law was not written for AI — but it applies to AI. Understanding how existing provisions (especially GDPR Article 22, DPIAs, and data minimization) apply to AI systems is a core Domain II competency.

GDPR Articles Most Relevant to AI

Art22
Automated Individual Decision-Making & Profiling
Individuals have the right not to be subject to decisions based SOLELY on automated processing (including profiling) that produce legal or similarly significant effects.
⚑ Most tested GDPR provision in the AIGP exam. Know the 3 exceptions: contract necessity, explicit consent, and legal authorization.
Art5
Principles of Data Processing
Data minimization (only collect what's necessary), purpose limitation (don't use data beyond stated purpose), and accuracy obligations all constrain how AI training data can be collected and used.
Data collected for one purpose cannot simply be repurposed to train an AI model without justification.
Art13/14
Transparency & Information Obligations
Controllers must provide "meaningful information about the logic involved" in automated decision-making, including the significance and envisaged consequences for the data subject (Arts. 13(2)(f) and 14(2)(g); see also Art. 15(1)(h)).
This creates an explainability obligation for AI systems that process personal data.
Art17
Right to Erasure ("Right to be Forgotten")
Individuals can request deletion of their personal data. For AI systems, this creates complex challenges — a model trained on someone's data cannot easily "forget" that data without retraining.
Machine unlearning is an active area of research specifically to address this challenge.
Art35
Data Protection Impact Assessment (DPIA)
Required when processing is likely to result in a HIGH risk to individuals' rights and freedoms. Systematic automated processing, large-scale profiling, and processing of special category data all typically require a DPIA.
Most AI deployments involving personal data will require a DPIA. It must be completed BEFORE the processing begins.

GDPR Article 22 — Deep Dive

When does Article 22 apply, and when can organizations proceed anyway?
The Right (default protection)
  • Decision is solely automated (no meaningful human involvement)
  • Decision produces legal effects (e.g., loan denial, visa refusal, firing)
  • OR has similarly significant effects (e.g., insurance pricing, credit scoring)
  • Right to human review, to express one's point of view, and to contest the decision
The 3 Exceptions (when automated decisions are permitted)
  • Contract necessity: decision is necessary for entering into / performing a contract with the individual
  • Legal authorization: permitted by EU or member state law (with safeguards)
  • Explicit consent: individual has given explicit (not just implied) consent
  • For special category data: only consent or substantial public interest exceptions apply
🇺🇸 CCPA / CPRA (California)

The California Consumer Privacy Act (CCPA), strengthened by the CPRA, gives California residents rights over their personal data with AI implications.

Key AI-relevant rights: Right to opt out of sale/sharing of personal information; right to correct inaccurate personal information; right to limit use of sensitive personal information; right to know what information is collected and how it's used in automated decisions.

Unlike GDPR Art 22, CCPA/CPRA does not explicitly prohibit solely automated decisions — but transparency and opt-out rights constrain AI-based profiling significantly.

📋 DPIA vs AI Impact Assessment

DPIA (GDPR Art 35): Required for high-risk personal data processing. Focuses on privacy and data protection risks specifically.

AI Impact Assessment: Broader governance tool that covers harms beyond privacy — bias, fairness, safety, societal impact. Required for high-risk AI under the EU AI Act (as part of the conformity assessment), and encouraged by frameworks like NIST AI RMF and ISO/IEC 42001.

For AI systems processing personal data, both assessments may be required. They can often be combined into a single document.
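The "which assessments do I need?" step can be sketched as a tiny screening function. The function name, flags, and the two assessment labels are invented for illustration; real DPIA screening follows supervisory-authority criteria and the EU AI Act's own conformity procedures:

```python
# Hypothetical screening sketch: a system can require a DPIA, an AI impact
# assessment, both, or neither — and the two can be combined into one document.

def assessments_needed(processes_personal_data: bool,
                       high_privacy_risk: bool,
                       high_risk_under_ai_act: bool) -> set[str]:
    needed = set()
    # DPIA trigger: personal data + likely high risk to rights and freedoms
    if processes_personal_data and high_privacy_risk:
        needed.add("DPIA (GDPR Art 35, before processing starts)")
    # AI impact assessment: high-risk classification under the EU AI Act
    if high_risk_under_ai_act:
        needed.add("AI impact assessment (EU AI Act conformity work)")
    return needed
```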

🌐 Cross-Border Privacy Considerations for AI
Scenario | Privacy Law Trigger | AI-Specific Issue
AI trained on EU residents' data | GDPR applies regardless of where training occurs | Legal basis for data collection; purpose limitation for training
AI model transferred outside EU | GDPR Chapter V data transfer restrictions | Model may embed personal data from training set
AI makes employment decisions for US workers | CCPA/CPRA (CA), various state laws, EEOC guidance | Automated scoring must not discriminate on protected characteristics
AI used in credit/insurance decisions | Fair Credit Reporting Act (US), GDPR Art 22 (EU) | Adverse action notices; right to explanation of AI decision
Generative AI outputs personal data | GDPR (output may constitute processing) | Hallucinated personal data may create liability

Standards, Frameworks & International Instruments

Beyond hard law, AI governance relies on voluntary frameworks and technical standards that organizations can adopt to demonstrate responsible AI practices. The AIGP exam focuses especially on NIST AI RMF, ISO/IEC 42001, and the international consensus instruments.

Standards ≠ Laws: ISO/IEC standards and government frameworks like NIST AI RMF are voluntary — but they carry significant weight. Regulators, procurement authorities, and auditors use them as benchmarks. Alignment with these frameworks is increasingly a practical requirement even without a legal mandate.

NIST AI Risk Management Framework (AI RMF 1.0, 2023)

Function 1 — Foundation
GOVERN
Establishes the organizational culture, accountability structures, policies, and processes needed for responsible AI risk management across all other functions.
Define AI risk strategy & tolerances · Establish roles & responsibilities · Create organizational policies · Foster a risk-aware culture
Function 2 — Context
MAP
Categorizes the context in which an AI system is developed or deployed; identifies and classifies potential AI risks and their scope of impact.
Define intended purpose & context · Identify affected communities · Categorize AI risks · Assess beneficial & harmful impacts
Function 3 — Analysis
MEASURE
Analyzes, assesses, benchmarks, and monitors AI risks using quantitative and qualitative metrics to evaluate trustworthiness and performance.
Test AI system performance · Measure bias & fairness metrics · Evaluate robustness & explainability · Benchmark against standards
Function 4 — Response
MANAGE
Prioritizes and addresses identified AI risks based on assessed likelihood and impact. Implements treatment plans, monitors outcomes, and communicates residual risk.
Prioritize risks for treatment · Implement mitigation controls · Monitor residual risk · Document & communicate outcomes
🔑 NIST AI RMF Key Concepts
Concept | Definition
Trustworthy AI | AI that is valid & reliable; safe; secure & resilient; accountable & transparent; explainable & interpretable; privacy-enhanced; and fair with harmful bias managed — the 7 characteristics the RMF is designed to support
Current Profile | An organization's current state of AI risk management practices
Target Profile | The desired future state of AI risk management; the gap between current and target drives the roadmap
AI Actors | Organizations or individuals involved in AI — including developers, operators, deployers, users, evaluators, and affected parties
GOVERN underpins all | Unlike a sequential lifecycle, GOVERN is foundational — it must be established before MAP, MEASURE, and MANAGE can function effectively
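The Current Profile vs Target Profile comparison can be sketched as a simple gap analysis. The 0-3 maturity scores and the rule that GOVERN gaps are prioritized first are illustrative assumptions layered on top of the RMF, not part of the framework itself:

```python
# Hypothetical NIST AI RMF profile gap analysis. Maturity scores (0-3)
# per function are invented example data.

current = {"GOVERN": 2, "MAP": 1, "MEASURE": 1, "MANAGE": 0}
target  = {"GOVERN": 3, "MAP": 3, "MEASURE": 2, "MANAGE": 2}

def profile_gaps(current: dict[str, int], target: dict[str, int]) -> dict[str, int]:
    """Gap per function; the gaps drive the improvement roadmap."""
    return {fn: target[fn] - current[fn] for fn in target}

# Order the roadmap: GOVERN first (it underpins the other functions),
# then remaining functions by largest gap.
roadmap = sorted(profile_gaps(current, target).items(),
                 key=lambda kv: (kv[0] != "GOVERN", -kv[1]))
```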

ISO/IEC Standards for AI

★ First of its kind
ISO/IEC 42001:2023
Published December 2023
AI Management System (AIMS)
The first international standard specifically for AI governance. Provides requirements for establishing, implementing, maintaining, and continually improving an AI management system. Organizations can certify against it. Think of it as ISO 9001 (quality management) but for AI.
ISO/IEC 23894:2023
Published February 2023
AI Risk Management Guidance
Guidance on how organizations can manage risks specific to AI, integrating with existing enterprise risk management frameworks like ISO 31000.
ISO/IEC 38507:2022
Published May 2022
AI Governance Implications
Guidance for governing bodies (boards, executives) on the governance implications of their organizations' use of AI. Addresses oversight, accountability, and strategic alignment.
ISO/IEC TR 24027:2021
Published November 2021
Bias in AI Systems
Technical report examining bias in AI systems and AI-aided decision-making — sources of bias, types of bias, and mitigation approaches during development and deployment.
ISO/IEC 42005
In development
AI System Impact Assessment
Under development. Will provide guidance on conducting impact assessments for AI systems — analogous to DPIAs but broader in scope, covering societal and ethical impacts.
ISO/IEC TR 24029
Multipart series
Robustness of Neural Networks
Technical guidance on assessing the robustness of neural networks, including formal methods and statistical approaches for evaluating model reliability.

International & Multilateral Instruments

Binding
Council of Europe Framework Convention on AI (2024)
Council of Europe · 46 member states + open to all countries
The first legally binding international treaty specifically addressing AI. Opened for signature in September 2024. Requires signatories to ensure AI systems respect human rights, democracy, and the rule of law. Applies to AI activities of public authorities and of private actors acting on their behalf; each party decides how to address risks from other private actors. Notably, non-CoE countries (including the US, Canada, Japan, Israel) were involved in drafting and can sign.
Soft Law
UNESCO Recommendation on the Ethics of AI (2021)
UNESCO · Adopted by all 193 member states
The first global normative instrument on AI ethics, adopted unanimously. Non-binding but represents global consensus. Covers 10 core principles including proportionality, safety, fairness, sustainability, privacy, human oversight, transparency, and accountability. Calls on member states to conduct Ethical Impact Assessments (EIAs) of AI systems.
Soft Law
OECD AI Principles (2019, updated 2024)
OECD · 42+ countries adopted
First intergovernmental AI policy standard. Five principles: Inclusive Growth & Sustainable Development; Human-Centered Values & Fairness; Transparency & Explainability; Robustness, Security & Safety; Accountability. Non-binding but highly influential — formed the basis of the G20 AI Principles and influenced the EU AI Act.
Hard Law
China: GenAI, Algorithm & Deep Synthesis Regulations (2022–2023)
Cyberspace Administration of China (CAC)
China took a sector-specific regulatory approach rather than a horizontal law. The Generative AI Regulation (2023) was the world's first regulation specifically targeting generative AI — requiring security assessments, content moderation, and labeling of AI-generated content. The Algorithm Recommendation Regulation (2022) governs recommendation algorithms. The Deep Synthesis Regulation (2022) governs deepfakes and synthetic media.
Voluntary
G7 Hiroshima AI Process & Code of Conduct (2023)
G7 Countries
Voluntary code of conduct for advanced AI developers, agreed by G7 leaders at the Hiroshima Summit. 11 guiding actions covering transparency, safety evaluations, incident reporting, and cybersecurity. Represents international consensus among major AI-producing democracies without creating binding obligations.

Practice Quiz — Domain II

Test your knowledge of laws, standards, and frameworks. Select the best answer for each question.

Question 1 of 10
Under the EU AI Act, which of the following AI applications is classified as PROHIBITED (unacceptable risk)?
A. An AI system used by a hospital to prioritize patient triage in an emergency department
B. An AI system that deploys subliminal techniques to manipulate people's behavior against their will, causing psychological harm
C. An AI chatbot deployed for customer service that must disclose it is an AI to users
D. An AI-powered spam filter used by an email service provider
Answer: B. Subliminal manipulation techniques that exploit psychological vulnerabilities are explicitly prohibited under the EU AI Act as unacceptable risk. Triage AI is high-risk (not prohibited). Chatbots with disclosure obligations are limited risk. Spam filters are minimal/no risk.
Question 2 of 10
Under GDPR Article 22, individuals have the right not to be subject to certain automated decisions that produce legal or similarly significant effects. When does this right apply?
A. Whenever any personal data is processed by an AI system, regardless of the decision outcome
B. When the decision is based SOLELY on automated processing and produces legal effects or similarly significant effects on the individual
C. Only when the AI system is classified as high-risk under the EU AI Act
D. Only when the data controller is located within the European Union
Answer: B. Article 22 applies specifically to decisions based SOLELY on automated processing (no meaningful human review) that produce legal effects (e.g., loan denial) or similarly significant effects (e.g., insurance pricing). It is not triggered by all AI use of personal data, nor is it limited to EU-based controllers — GDPR has extraterritorial reach.
Question 3 of 10
The NIST AI Risk Management Framework (AI RMF) consists of four core functions. Which function provides the organizational FOUNDATION — establishing accountability, culture, and policies that enable all other functions?
A. MAP — categorizes context and identifies AI risks
B. MEASURE — analyzes and quantifies AI risks
C. GOVERN — establishes organizational culture, accountability, and AI risk policies
D. MANAGE — prioritizes and addresses identified AI risks
Answer: C. GOVERN is the foundational function of the NIST AI RMF. It establishes the organizational context — culture, accountability structures, risk tolerances, roles, and policies — without which MAP, MEASURE, and MANAGE cannot function effectively. The other three functions apply these foundations to specific AI systems.
Question 4 of 10
ISO/IEC 42001:2023 is particularly significant in the AI governance landscape because it is:
A. The first legally binding international treaty specifically addressing artificial intelligence
B. A US government voluntary framework for managing AI-related risks
C. The first international standard for AI Management Systems, enabling organizations to certify their AI governance
D. The EU regulation establishing risk tiers and requirements for high-risk AI systems
Answer: C. ISO/IEC 42001:2023, published in December 2023, is the first international standard specifically for AI Management Systems (AIMS). It provides requirements that organizations can certify against — analogous to ISO 9001 for quality management. The binding treaty is the CoE Framework Convention; the US voluntary framework is NIST AI RMF; the EU risk-tier regulation is the EU AI Act.
Question 5 of 10
Under the EU AI Act, what obligation applies to AI systems classified as LIMITED RISK (such as customer-facing chatbots)?
A. Mandatory conformity assessment by a notified body before deployment
B. Registration in the EU AI Act public database maintained by the European Commission
C. Transparency disclosure — users must be informed they are interacting with an AI system
D. A complete prohibition on commercial deployment until a safety review is completed
Answer: C. Limited risk AI systems (chatbots, emotion recognition AI in specific contexts, AI-generated content) face transparency obligations only — they must inform users that they are interacting with AI, or label AI-generated content. Conformity assessments and database registration are high-risk requirements. Prohibition applies only to unacceptable risk.
Question 6 of 10
The Council of Europe's Framework Convention on Artificial Intelligence (2024) is historically notable because it is:
A. A voluntary code of conduct that signatory countries are encouraged but not required to follow
B. The first legally binding international treaty specifically addressing the impact of AI on human rights, democracy, and the rule of law
C. Limited to the 46 member states of the Council of Europe and not open to countries outside Europe
D. An update to the EU AI Act extending its requirements to non-EU countries
Answer: B. The CoE Framework Convention on AI (2024) is the first legally BINDING international treaty specifically on AI. It is legally enforceable for signatories. Crucially, it is open to countries beyond CoE members — including the US, Canada, Japan, and Israel, which participated in drafting. It is separate from the EU AI Act.
Question 7 of 10
A company uses a fully automated AI system to screen job applicants and make hiring decisions for EU-based roles. No human reviews the AI's decisions before offers are sent. Which GDPR provision is most directly implicated?
A. Article 5 — Principles of data processing (data minimization)
B. Article 17 — Right to erasure
C. Article 22 — Automated individual decision-making and profiling
D. Article 83 — General conditions for imposing fines
Answer: C. Article 22 is directly triggered: the decision is SOLELY automated (no human review) and has significant effects on the individual (employment outcome). Under Article 22, candidates have the right to human review, to express their point of view, and to contest the decision. The company needs a legal basis — likely explicit consent or contract necessity — to proceed lawfully.
Question 8 of 10
In the NIST AI RMF, the MAP function is primarily responsible for which activity?
A. Implementing mitigation controls and monitoring residual AI risks
B. Establishing organizational governance policies and risk tolerances
C. Categorizing the AI system's context, identifying potential risks, and assessing the scope of impacts on stakeholders
D. Quantitatively benchmarking AI system performance and measuring bias metrics
Answer: C. MAP focuses on context-setting and risk identification — categorizing the AI system's intended use, identifying the range of potential harms, and assessing stakeholder impacts. MANAGE implements mitigations (A). GOVERN establishes policies (B). MEASURE quantifies and benchmarks (D).
Question 9 of 10
China became the first country in the world to specifically regulate which type of AI system with dedicated legislation?
A. Autonomous vehicles and self-driving transportation systems
B. Facial recognition and biometric identification systems
C. Generative AI systems (LLMs, image generation, synthetic media)
D. Medical diagnosis and clinical decision support AI
Answer: C. China's Provisional Measures for the Management of Generative Artificial Intelligence Services (2023) was the world's first regulation specifically targeting generative AI. It requires security assessments, content moderation to prevent "false information," and labeling of AI-generated content — preceding similar measures in the EU AI Act's GPAI provisions.
Question 10 of 10
Under GDPR Article 35, when is an organization required to conduct a Data Protection Impact Assessment (DPIA) before deploying an AI system?
A. Whenever any personal data is processed by an AI system, regardless of risk level
B. When the processing is likely to result in a HIGH risk to the rights and freedoms of natural persons (e.g., systematic profiling, large-scale sensitive data processing)
C. Only when the AI system is registered in the EU AI Act's high-risk AI database
D. Only when the organization employs more than 250 people and processes data at large scale
Answer: B. DPIAs are required under Article 35 when processing is likely to result in HIGH risk to individuals' rights and freedoms. Triggers include: systematic automated processing/profiling, large-scale processing of special category data, and systematic monitoring of publicly accessible areas. It is not required for all AI use of personal data — the risk threshold must be met. DPIAs must be completed BEFORE the processing starts.

Memory Hooks

Mnemonics and mental models to lock in the Domain II concepts most likely to appear on exam day.

🏗️
EU AI Act Risk Tiers: U-H-L-M
The four risk levels from top to bottom: Unacceptable (banned), High (conformity assessment), Limited (label & disclose), Minimal (no obligation). Each step down means fewer obligations.
Mnemonic: "Uncle Harry Loves Minimalism"
🔄
NIST AI RMF: G→M→M→M
GOVERN first (set culture & policies) → MAP (find risks) → MEASURE (quantify risks) → MANAGE (fix risks). GOVERN is the foundation — without it, the three M's have no direction.
Mnemonic: "Good Managers Measure Methodically"
⚖️
GDPR Art 22: The "Solely + Significant" Rule
Article 22 triggers when TWO conditions meet: the decision is SOLELY automated (no meaningful human in the loop) AND it has SIGNIFICANT effects (legal or similarly impactful). Both required. Three exceptions: Contract, Consent, or Law.
Mnemonic: "Art 22 = Automated × 2-significant" (both bars must be met)
🌐
Binding vs Voluntary: Who Binds?
BINDS: EU AI Act (regulation), CoE AI Treaty (treaty), GDPR (regulation). GUIDES: NIST AI RMF (framework), OECD Principles (recommendations), ISO/IEC 42001 (voluntary unless law references it). The EU binds; NIST guides.
Mnemonic: "EU and CoE BIND. NIST and OECD GUIDE."
🔢
ISO/IEC 42001: "42 for AI"
ISO/IEC 42001:2023 — the number 42 anchors the entire AI standardization family (42xxx series). It's the FIRST international standard for AI Management Systems (AIMS). Organizations can certify against it. Published December 2023.
Mnemonic: "42 = The Answer to AI Governance" (like the Hitchhiker's Guide)
🏛️
High-Risk AI Requirements: RADAR-T
Six mandatory requirements for EU AI Act high-risk systems: Risk management, Accuracy & robustness, Data governance, Accountability (documentation), Resilience (cybersecurity), Transparency + Human oversight.
Mnemonic: "RADAR-T — Stay on the radar for high-risk AI"
📊 Quick Comparison: NIST AI RMF vs ISO/IEC 42001
Dimension | NIST AI RMF (2023) | ISO/IEC 42001 (2023)
Origin | US National Institute of Standards & Technology | ISO/IEC joint technical committee (JTC 1/SC 42)
Nature | Voluntary framework — guidance only | Voluntary standard — certifiable
Structure | 4 functions: GOVERN, MAP, MEASURE, MANAGE | Management system standard (like ISO 9001, ISO 27001)
Certification | No certification; organizations self-align | Organizations can be certified by third-party auditors
Scope | Risk management lens across the AI lifecycle | Comprehensive AI governance management system
Best for | US-focused organizations; risk assessment starting point | Global organizations; demonstrating AI governance maturity

Flashcards & Study Advisor


Flashcards — Domain II Key Concepts

EU AI Act

What are the 4 EU AI Act risk tiers, and what is the key obligation for each?

Answer

Unacceptable = Banned. High = Conformity assessment required. Limited = Transparency/disclosure only. Minimal = No mandatory obligations.

EU AI Act

Name three examples of PROHIBITED (unacceptable risk) AI practices under the EU AI Act.

Answer

1) AI using subliminal manipulation techniques. 2) Government social scoring systems. 3) Real-time biometric identification in public spaces by law enforcement (with narrow exceptions).

GDPR

GDPR Article 22: What triggers the right, and what are the 3 exceptions that permit automated decisions?

Answer

Triggered by: SOLELY automated + LEGAL/SIGNIFICANT effects. Three exceptions: (1) Contract necessity, (2) Legal authorization, (3) Explicit consent.

NIST AI RMF

Name the 4 NIST AI RMF core functions in order and describe the role of GOVERN.

Answer

GOVERN → MAP → MEASURE → MANAGE. GOVERN is foundational — it establishes organizational culture, accountability, risk tolerances, and policies that enable all other functions.

ISO Standards

What is ISO/IEC 42001:2023 and why is it historically significant?

Answer

ISO/IEC 42001:2023 is the first international standard for AI Management Systems (AIMS). Published December 2023. Organizations can certify against it — like ISO 9001 for quality management, but for AI governance.

International Law

What makes the Council of Europe AI Framework Convention (2024) historically unique?

Answer

It is the first legally BINDING international treaty specifically addressing AI. Open to all countries (not just CoE members). Covers human rights, democracy, and rule of law in the context of AI systems.

GDPR

When is a Data Protection Impact Assessment (DPIA) required under GDPR Article 35?

Answer

When processing is likely to result in HIGH risk to individuals' rights and freedoms. Triggers include: systematic automated profiling, large-scale sensitive data processing, systematic monitoring of public spaces. Must be done before processing begins.

Global Landscape

Which country was first to specifically regulate generative AI with dedicated legislation, and when?

Answer

China — Interim Measures for the Management of Generative AI Services, effective August 2023. Required security assessments, content moderation, and labeling of AI-generated content before similar EU provisions were in force.

Master the Full AIGP Deck on FlashGenius

All 4 domains. Hundreds of flashcards. Spaced repetition to make it stick.

Unlock Full Flashcard Deck on FlashGenius →

Study Advisor

EU AI Act
GDPR & Privacy
NIST AI RMF
ISO Standards
Global Landscape

EU AI Act — Exam Focus Points

  • The four risk tiers are always tested. Know the tier name, the defining characteristic, and the primary obligation for each.
  • Unacceptable Risk = Banned. Memorize the prohibited practices — subliminal manipulation, social scoring by governments, and biometric surveillance in public top the list.
  • High-risk AI requires a conformity assessment and registration in the EU AI database. Know the 7 mandatory requirements (Articles 9–15): risk management, data governance, technical documentation, record-keeping (logging), transparency, human oversight, and accuracy/robustness/cybersecurity.
  • Limited risk = transparency only. Chatbots must disclose; deepfakes must be labeled. No conformity assessment.
  • The GPAI provisions (foundation models) are a new addition — standard GPAI vs. systemic-risk GPAI have different obligations.
  • Know the phased implementation timeline: prohibited practices apply from February 2025, GPAI obligations from August 2025, and most high-risk requirements from August 2026.
  • Remember: the Act applies to providers (who develop AI and place it on the market) AND deployers (who use AI systems under their authority).
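For readers thinking about compliance tooling, the tier logic in the bullets above can be sketched as a toy triage function. This is a study aid, not the Act's actual legal test: the boolean flags and their names are illustrative assumptions standing in for assessments a lawyer would make.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice - banned from the EU market"
    HIGH = "conformity assessment + EU database registration"
    LIMITED = "transparency/disclosure obligations only"
    MINIMAL = "no mandatory obligations"

def triage(prohibited_practice: bool,
           annex_iii_use_case: bool,
           interacts_with_humans: bool) -> RiskTier:
    """Toy tier triage. Flags are hypothetical inputs, not statutory terms.

    Checks run top-down because the tiers are exclusive in severity order.
    """
    if prohibited_practice:        # e.g. social scoring, subliminal manipulation
        return RiskTier.UNACCEPTABLE
    if annex_iii_use_case:         # e.g. hiring, credit scoring, law enforcement
        return RiskTier.HIGH
    if interacts_with_humans:      # e.g. chatbots, deepfake generators
        return RiskTier.LIMITED
    return RiskTier.MINIMAL        # e.g. spam filters, game AI
```

Note the ordering: a system matching a prohibited practice is banned regardless of whether it would also fit a high-risk use case.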

GDPR & Privacy Law — Exam Focus Points

  • Article 22 is the #1 tested GDPR provision. Know the trigger (solely automated + legal/significant effects) and all three exceptions (contract, consent, law).
  • For special category data under Art 22, the exceptions narrow further — only consent and substantial public interest apply.
  • Art 22 grants three rights: human review, express point of view, and contest the decision.
  • DPIAs (Art 35) are required for high-risk processing — not all AI use of personal data. Key triggers: systematic automated profiling, large-scale sensitive data, public monitoring.
  • The right to erasure (Art 17) creates model-level challenges — "machine unlearning" is the active research area trying to solve this.
  • GDPR applies extraterritorially: processing the personal data of individuals in the EU is covered wherever the controller or processor sits, when goods or services are offered to them or their behavior is monitored.
  • CCPA/CPRA is a US comparator — no equivalent of Art 22's prohibition on solely automated decisions, but strong transparency and opt-out rights constrain AI profiling.
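The Article 22 structure in the first bullets can be written out as two predicates — a minimal sketch for study purposes only (function and parameter names are illustrative, and real applicability always requires legal analysis):

```python
def article_22_applies(solely_automated: bool,
                       legal_or_similarly_significant_effect: bool) -> bool:
    # Art. 22(1) trigger: BOTH conditions must hold.
    return solely_automated and legal_or_similarly_significant_effect

def automated_decision_permitted(contract_necessity: bool,
                                 authorized_by_law: bool,
                                 explicit_consent: bool) -> bool:
    # Art. 22(2) exceptions: ANY one of the three suffices.
    return contract_necessity or authorized_by_law or explicit_consent
```

The contrast — a conjunctive trigger versus disjunctive exceptions — is exactly the distinction the exam tests.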

NIST AI RMF — Exam Focus Points

  • Know all four functions: GOVERN, MAP, MEASURE, MANAGE — and what each does, not just the names.
  • GOVERN is foundational — it underpins all other functions. Without organizational culture, policies, and accountability, the other three cannot work.
  • MAP identifies context and risks. MEASURE quantifies them. MANAGE addresses them. These three are iterative, not strictly sequential.
  • The RMF uses Current Profile vs. Target Profile — the gap between them drives the AI risk roadmap.
  • The framework defines 7 characteristics of trustworthy AI: valid & reliable; safe; secure & resilient; accountable & transparent; explainable & interpretable; privacy-enhanced; and fair (with harmful bias managed).
  • NIST AI RMF is voluntary — not a law. But it is widely adopted and referenced by US government procurement requirements.
  • Compare to ISO/IEC 42001: both address AI governance but NIST is risk-focused and US-centric; ISO is a certifiable management system standard with global reach.
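The Current Profile vs. Target Profile gap mentioned above can be pictured with a small sketch. The 0–4 maturity scores and the use of function names as dictionary keys are my own illustrative convention, not part of the RMF itself:

```python
# Hypothetical maturity scores (0-4) for each NIST AI RMF core function.
current_profile = {"GOVERN": 1, "MAP": 2, "MEASURE": 1, "MANAGE": 2}
target_profile  = {"GOVERN": 3, "MAP": 3, "MEASURE": 3, "MANAGE": 3}

def profile_gap(current: dict, target: dict) -> dict:
    """Per-function gap, largest first: the ordering drives the risk roadmap."""
    gaps = {fn: target[fn] - current[fn] for fn in target}
    return dict(sorted(gaps.items(), key=lambda kv: kv[1], reverse=True))
```

With these sample numbers, GOVERN and MEASURE (gap 2) would head the roadmap — consistent with GOVERN being the foundational function.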

ISO/IEC Standards — Exam Focus Points

  • ISO/IEC 42001:2023 is the most important — first international AI management system standard. Know that it is certifiable (third-party audit) and published December 2023.
  • ISO/IEC standards are voluntary unless referenced by law. The EU AI Act may reference them as presumption of conformity for high-risk AI requirements.
  • ISO/IEC 23894:2023 covers AI risk management guidance — integrates with ISO 31000 enterprise risk management.
  • ISO/IEC 38507:2022 addresses governance implications — aimed at boards and executives, not technical teams.
  • ISO/IEC 42001 anchors the AI management system standard; companion AI standards carry other numbers, such as 23894 (risk management) and 38507 (governance implications), plus technical reports on specific aspects like bias and robustness.
  • Know the analogy: ISO/IEC 42001 is to AI governance what ISO 9001 is to quality management and ISO 27001 is to information security.

Global Landscape — Exam Focus Points

  • The Council of Europe Framework Convention (2024) = first binding international AI treaty. Not the same as the EU AI Act — it's a treaty open to all countries globally.
  • The UNESCO Recommendation (2021) = first global normative AI instrument. Non-binding. Adopted by all 193 UNESCO member states.
  • OECD AI Principles (2019) = first intergovernmental AI policy standard. 5 principles. Influential on EU AI Act drafting. Updated 2024.
  • China = first country to regulate generative AI (2023) with dedicated legislation. Sector-by-sector approach (GenAI, algorithms, deepfakes — separate regulations).
  • US approach = sector-specific and voluntary at federal level. Biden EO 14110 was revoked in Jan 2025. NIST AI RMF remains the key federal guidance. FDA, FTC have sector-specific AI rules.
  • The key distinction: hard law vs soft law vs standards vs frameworks. Know which category each instrument falls into and the enforcement implications.
  • Don't confuse the EU AI Act (an EU regulation binding on those operating in the EU or placing AI on the EU market) with the CoE AI Treaty (an international treaty binding on its signatories worldwide).