The Global AI Regulatory Landscape
Domain II shifts from "what is AI" to "how is AI governed." You must understand not just what specific laws say, but how different types of regulatory instruments work — and how to apply them to real governance decisions.
The Governance Gap: AI capabilities have outpaced the legal frameworks designed to govern them. Every major jurisdiction is now racing to fill this gap — using different approaches, different timelines, and different enforcement mechanisms. The AIGP exam tests your ability to navigate this patchwork.
Types of Regulatory Instruments
Key Instruments to Know: A Timeline
| Year | Instrument | Significance |
|---|---|---|
| 2018 | GDPR takes effect | Article 22 — first major legal right regarding automated decisions |
| 2019 | OECD AI Principles | First intergovernmental AI principles; adopted by 42+ countries |
| 2021 | UNESCO Recommendation on AI Ethics | First global normative instrument; adopted by 193 member states |
| 2022 | NIST AI RMF draft / China Algorithm Regs | US voluntary framework; China's sector-specific approach |
| 2023 | NIST AI RMF 1.0 · ISO/IEC 42001 · China GenAI Reg | First AI management system standard; China first to regulate GenAI |
| 2024 | EU AI Act signed · CoE AI Treaty opened for signature | World's first comprehensive AI law; first binding international AI treaty |
| 2026 | EU AI Act fully applicable | High-risk AI requirements enforced; phased compliance from 2025 |
EU AI Act
Signed June 2024, the EU AI Act is the world's first comprehensive, horizontal AI regulation. It uses a risk-based approach — the higher the risk, the stricter the requirements. AIGP candidates must know the four risk tiers and the specific requirements for high-risk AI.
Risk-Based Logic: The Act categorizes AI systems by the risk they pose to health, safety, and fundamental rights — not by technology type. The same underlying algorithm could be minimal risk in one use case and high risk in another.
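The use-case-driven logic above can be sketched in a few lines of Python. This is a study aid only: the tier names come from the Act, but the use-case mapping below is a hypothetical simplification (real classification follows the Act's annexes and exemptions).

```python
# Illustrative sketch: the EU AI Act classifies by use-case risk, not by
# technology. The mapping below is hypothetical and heavily simplified.

def classify_use_case(use_case: str) -> str:
    """Map a deployment context to an EU AI Act risk tier (simplified)."""
    tiers = {
        "government social scoring": "unacceptable",  # banned outright
        "cv screening for hiring": "high",            # employment is an Annex III area
        "customer service chatbot": "limited",        # transparency duties only
        "spam filtering": "minimal",                  # no mandatory obligations
    }
    return tiers.get(use_case, "unclassified")

# The same underlying model could be minimal risk in a photo app and
# high risk in a hiring tool: the use case, not the algorithm, drives the tier.
print(classify_use_case("cv screening for hiring"))  # high
print(classify_use_case("spam filtering"))           # minimal
```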
The Four Risk Tiers
- Unacceptable Risk: banned outright (e.g., government social scoring, subliminal manipulation)
- High Risk: permitted only with a conformity assessment and mandatory requirements
- Limited Risk: transparency/disclosure obligations only (e.g., chatbots, deepfakes)
- Minimal Risk: no mandatory obligations
High-Risk AI: Mandatory Requirements
- Implement a risk management system
- Apply data and data governance controls (training, validation, and test data quality)
- Maintain technical documentation and automatic event logging
- Provide transparency and instructions for use to deployers
- Ensure effective human oversight
- Meet accuracy, robustness, and cybersecurity standards

General-Purpose AI (GPAI) Model Obligations
All GPAI providers must:
- Maintain technical documentation (including training details and energy consumption)
- Comply with EU copyright law
- Publish a summary of training data

GPAI models posing systemic risk must additionally:
- Perform model evaluations (including red-teaming)
- Assess & mitigate systemic risks
- Report serious incidents to the Commission (AI Office)
- Ensure adequate cybersecurity protections
| Date | Milestone |
|---|---|
| July 2024 | EU AI Act published in the Official Journal (signed June 2024) |
| August 2024 | Act enters into force (20 days after publication) |
| February 2025 | Prohibited practices (Unacceptable Risk) apply |
| August 2025 | GPAI / General-Purpose AI rules apply; governance provisions apply |
| August 2026 | High-risk AI requirements fully apply; main body of Act applicable |
| August 2027 | High-risk AI already in service (regulated products) must comply |
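The phased timeline above lends itself to a simple lookup. The sketch below is a study aid, not compliance tooling: it uses the headline applicability dates and ignores transitional carve-outs for systems already on the market.

```python
from datetime import date

# Headline EU AI Act applicability dates (from the timeline above).
# Simplified: transitional rules for systems already in service are omitted.
MILESTONES = [
    (date(2025, 2, 2), "prohibitions on unacceptable-risk practices"),
    (date(2025, 8, 2), "GPAI obligations and governance provisions"),
    (date(2026, 8, 2), "high-risk AI requirements (main body of the Act)"),
    (date(2027, 8, 2), "high-risk AI embedded in regulated products"),
]

def obligations_in_force(today: date) -> list[str]:
    """List which phases of the EU AI Act apply on a given date."""
    return [label for start, label in MILESTONES if today >= start]

print(obligations_in_force(date(2025, 9, 1)))
```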
Privacy Law & AI Governance
AI systems are privacy law's most challenging frontier. They consume vast personal data during training, make automated decisions that affect individuals' lives, and generate outputs that can reveal sensitive information. Every AI governance professional must understand the privacy law obligations that attach to AI.
Key Exam Principle: Privacy law was not written for AI — but it applies to AI. Understanding how existing provisions (especially GDPR Article 22, DPIAs, and data minimization) apply to AI systems is a core Domain II competency.
GDPR Articles Most Relevant to AI
- Article 22: rights regarding solely automated decision-making
- Article 35: Data Protection Impact Assessments for high-risk processing
- Article 17: right to erasure, which poses model-level challenges for trained AI
- Chapter V: restrictions on transferring personal data outside the EU
GDPR Article 22 — Deep Dive
When Article 22 applies (the trigger):
- Decision is solely automated (no meaningful human involvement)
- Decision produces legal effects (e.g., loan denial, visa refusal, firing)
- OR has similarly significant effects (e.g., insurance pricing, credit scoring)

Rights the individual retains:
- Right to human review, to express one's point of view, and to contest the decision

The three exceptions that permit solely automated decisions:
- Contract necessity: decision is necessary for entering into / performing a contract with the individual
- Legal authorization: permitted by EU or member state law (with safeguards)
- Explicit consent: individual has given explicit (not just implied) consent

Narrower rule for special category data:
- Only the explicit consent or substantial public interest exceptions apply
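The Article 22 analysis above reduces to two tests: does the provision apply, and if so, does an exception permit the decision? The sketch below encodes that logic as a study aid; the parameter names are illustrative, and it is not legal advice (the substantial public interest route is not modeled).

```python
# Illustrative decision helper for the GDPR Article 22 analysis.
# Parameter names are hypothetical; this is a study aid, not legal advice.

def article_22_applies(solely_automated: bool, legal_effect: bool,
                       similarly_significant: bool) -> bool:
    """Art 22(1) trigger: solely automated AND (legal OR similarly significant effect)."""
    return solely_automated and (legal_effect or similarly_significant)

def decision_permitted(contract_necessity: bool, legal_authorization: bool,
                       explicit_consent: bool, special_category_data: bool) -> bool:
    """Art 22(2)-(4): one of three exceptions; narrower for special category data."""
    if special_category_data:
        # Only explicit consent (substantial public interest not modeled here)
        return explicit_consent
    return contract_necessity or legal_authorization or explicit_consent

# A fully automated loan denial triggers Art 22 ...
assert article_22_applies(True, True, False)
# ... while meaningful human review takes the decision outside Art 22's scope.
assert not article_22_applies(False, True, False)
```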
The California Consumer Privacy Act (CCPA), strengthened by the CPRA, gives California residents rights over their personal data, several of which have direct AI implications.
Key AI-relevant rights: Right to opt out of sale/sharing of personal information; right to correct inaccurate personal information; right to limit use of sensitive personal information; right to know what information is collected and how it's used in automated decisions.
Unlike GDPR Art 22, CCPA/CPRA does not explicitly prohibit solely automated decisions — but transparency and opt-out rights constrain AI-based profiling significantly.
DPIA (GDPR Art 35): Required for high-risk personal data processing. Focuses on privacy and data protection risks specifically.
AI Impact Assessment: Broader governance tool that covers harms beyond privacy — bias, fairness, safety, societal impact. Required for high-risk AI under the EU AI Act (as part of the conformity assessment), and encouraged by frameworks like NIST AI RMF and ISO/IEC 42001.
For AI systems processing personal data, both assessments may be required. They can often be combined into a single document.
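The decision described above, DPIA, AI impact assessment, or both, can be expressed as a small triage function. This is a simplified sketch with illustrative trigger names, not a substitute for the actual GDPR Art 35 and EU AI Act criteria.

```python
# Simplified triage for the assessment logic above: a system may need a GDPR
# DPIA, an AI impact assessment, or both. Trigger names are illustrative.

def assessments_needed(processes_personal_data: bool,
                       high_privacy_risk: bool,
                       high_risk_under_ai_act: bool) -> set[str]:
    needed = set()
    if processes_personal_data and high_privacy_risk:
        needed.add("DPIA (GDPR Art 35)")    # must be done before processing begins
    if high_risk_under_ai_act:
        needed.add("AI impact assessment")  # part of the conformity assessment
    return needed

# A high-risk hiring tool that profiles candidates typically needs both,
# often combined into a single document.
print(assessments_needed(True, True, True))
```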
| Scenario | Privacy Law Trigger | AI-Specific Issue |
|---|---|---|
| AI trained on EU residents' data | GDPR applies regardless of where training occurs | Legal basis for data collection; purpose limitation for training |
| AI model transferred outside EU | GDPR Chapter V data transfer restrictions | Model may embed personal data from training set |
| AI makes employment decisions for US workers | CCPA/CPRA (CA), various state laws, EEOC guidance | Automated scoring must not discriminate on protected characteristics |
| AI used in credit/insurance decisions | Fair Credit Reporting Act (US), GDPR Art 22 (EU) | Adverse action notices; right to explanation of AI decision |
| Generative AI outputs personal data | GDPR (output may constitute processing) | Hallucinated personal data may create liability |
Standards, Frameworks & International Instruments
Beyond hard law, AI governance relies on voluntary frameworks and technical standards that organizations can adopt to demonstrate responsible AI practices. The AIGP exam focuses especially on NIST AI RMF, ISO/IEC 42001, and the international consensus instruments.
Standards ≠ Laws: ISO/IEC standards and government frameworks like NIST AI RMF are voluntary — but they carry significant weight. Regulators, procurement authorities, and auditors use them as benchmarks. Alignment with these frameworks is increasingly a practical requirement even without a legal mandate.
NIST AI Risk Management Framework (AI RMF 1.0, 2023)
| Concept | Definition |
|---|---|
| Trustworthy AI | AI that is valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful bias managed — the 7 characteristics the RMF is designed to support |
| Current Profile | An organization's current state of AI risk management practices |
| Target Profile | The desired future state of AI risk management; gap between current and target drives the roadmap |
| AI Actors | Organizations or individuals involved in AI — including developers, operators, deployers, users, evaluators, and affected parties |
| GOVERN underpins all | Unlike a sequential lifecycle, GOVERN is foundational — it must be established before MAP, MEASURE, and MANAGE can function effectively |
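The Current vs. Target Profile concept in the table above amounts to a gap analysis per function. The sketch below assumes a hypothetical 0-5 maturity scale, which is not part of the RMF itself, to show how the gap drives the roadmap.

```python
# Sketch of a Current vs Target Profile gap analysis for the NIST AI RMF.
# The 0-5 maturity scores are a hypothetical scale, not defined by the RMF.

FUNCTIONS = ["GOVERN", "MAP", "MEASURE", "MANAGE"]

def profile_gaps(current: dict[str, int], target: dict[str, int]) -> dict[str, int]:
    """Per-function gap between target and current maturity; drives the roadmap."""
    return {f: max(target[f] - current[f], 0) for f in FUNCTIONS}

current = {"GOVERN": 1, "MAP": 2, "MEASURE": 1, "MANAGE": 2}
target  = {"GOVERN": 4, "MAP": 3, "MEASURE": 3, "MANAGE": 3}
gaps = profile_gaps(current, target)
print(gaps)  # GOVERN shows the largest gap, consistent with it being foundational
```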
ISO/IEC Standards for AI
International & Multilateral Instruments
Memory Hooks
Mnemonics and mental models to lock in the Domain II concepts most likely to appear on exam day.
| Dimension | NIST AI RMF (2023) | ISO/IEC 42001 (2023) |
|---|---|---|
| Origin | US National Institute of Standards & Technology | International Organization for Standardization |
| Nature | Voluntary framework — guidance only | Voluntary standard — certifiable |
| Structure | 4 functions: GOVERN, MAP, MEASURE, MANAGE | Management system standard (like ISO 9001, ISO 27001) |
| Certification | No certification; organizations self-align | Organizations can be certified by third-party auditors |
| Scope | Risk management lens across AI lifecycle | Comprehensive AI governance management system |
| Best for | US-focused organizations; risk assessment starting point | Global organizations; demonstrating AI governance maturity |
Flashcards & Study Advisor
Flashcards — Domain II Key Concepts
What are the 4 EU AI Act risk tiers, and what is the key obligation for each?
Unacceptable = Banned. High = Conformity assessment required. Limited = Transparency/disclosure only. Minimal = No mandatory obligations.
Name three examples of PROHIBITED (unacceptable risk) AI practices under the EU AI Act.
1) AI using subliminal manipulation techniques. 2) Government social scoring systems. 3) Real-time biometric identification in public spaces by law enforcement (with narrow exceptions).
GDPR Article 22: What triggers the right, and what are the 3 exceptions that permit automated decisions?
Triggered by: SOLELY automated + LEGAL/SIGNIFICANT effects. Three exceptions: (1) Contract necessity, (2) Legal authorization, (3) Explicit consent.
Name the 4 NIST AI RMF core functions in order and describe the role of GOVERN.
GOVERN → MAP → MEASURE → MANAGE. GOVERN is foundational — it establishes organizational culture, accountability, risk tolerances, and policies that enable all other functions.
What is ISO/IEC 42001:2023 and why is it historically significant?
ISO/IEC 42001:2023 is the first international standard for AI Management Systems (AIMS). Published December 2023. Organizations can certify against it — like ISO 9001 for quality management, but for AI governance.
What makes the Council of Europe AI Framework Convention (2024) historically unique?
It is the first legally BINDING international treaty specifically addressing AI. Open to all countries (not just CoE members). Covers human rights, democracy, and rule of law in the context of AI systems.
When is a Data Protection Impact Assessment (DPIA) required under GDPR Article 35?
When processing is likely to result in HIGH risk to individuals' rights and freedoms. Triggers include: systematic automated profiling, large-scale sensitive data processing, systematic monitoring of public spaces. Must be done before processing begins.
Which country was first to specifically regulate generative AI with dedicated legislation, and when?
China — Provisional Measures for the Management of Generative AI Services, effective August 2023. Required security assessments, content moderation, and labeling of AI-generated content before similar EU provisions were in force.
Study Advisor
EU AI Act — Exam Focus Points
- The four risk tiers are always tested. Know the tier name, the defining characteristic, and the primary obligation for each.
- Unacceptable Risk = Banned. Memorize the prohibited practices — subliminal manipulation, social scoring by governments, and biometric surveillance in public top the list.
- High-risk AI requires a conformity assessment and registration in the EU AI database. Know the 6 mandatory requirements: risk management, data governance, documentation, transparency, human oversight, accuracy/cybersecurity.
- Limited risk = transparency only. Chatbots must disclose; deepfakes must be labeled. No conformity assessment.
- The GPAI provisions (foundation models) are a new addition — standard GPAI vs. systemic-risk GPAI have different obligations.
- Know the phased implementation timeline: prohibited practices apply Feb 2025; full high-risk requirements apply Aug 2026.
- Remember: the Act applies to providers (who develop AI) AND deployers (who use it in their products/services).
GDPR & Privacy Law — Exam Focus Points
- Article 22 is the #1 tested GDPR provision. Know the trigger (solely automated + legal/significant effects) and all three exceptions (contract, consent, law).
- For special category data under Art 22, the exceptions narrow further — only consent and substantial public interest apply.
- Art 22 grants three rights: human review, express point of view, and contest the decision.
- DPIAs (Art 35) are required for high-risk processing — not all AI use of personal data. Key triggers: systematic automated profiling, large-scale sensitive data, public monitoring.
- The right to erasure (Art 17) creates model-level challenges — "machine unlearning" is the active research area trying to solve this.
- GDPR applies extraterritorially — processing EU residents' data anywhere in the world is covered.
- CCPA/CPRA is a US comparator — no equivalent of Art 22's prohibition on solely automated decisions, but strong transparency and opt-out rights constrain AI profiling.
NIST AI RMF — Exam Focus Points
- Know all four functions: GOVERN, MAP, MEASURE, MANAGE — and what each does, not just the names.
- GOVERN is foundational — it underpins all other functions. Without organizational culture, policies, and accountability, the other three cannot work.
- MAP identifies context and risks. MEASURE quantifies them. MANAGE addresses them. These three are iterative, not strictly sequential.
- The RMF uses Current Profile vs. Target Profile — the gap between them drives the AI risk roadmap.
- The framework defines 7 characteristics of trustworthy AI: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; fair with harmful bias managed.
- NIST AI RMF is voluntary — not a law. But it is widely adopted and referenced by US government procurement requirements.
- Compare to ISO/IEC 42001: both address AI governance but NIST is risk-focused and US-centric; ISO is a certifiable management system standard with global reach.
ISO/IEC Standards — Exam Focus Points
- ISO/IEC 42001:2023 is the most important — first international AI management system standard. Know that it is certifiable (third-party audit) and published December 2023.
- ISO/IEC standards are voluntary unless referenced by law. The EU AI Act may reference them as presumption of conformity for high-risk AI requirements.
- ISO/IEC 23894:2023 covers AI risk management guidance — integrates with ISO 31000 enterprise risk management.
- ISO/IEC 38507:2022 addresses governance implications — aimed at boards and executives, not technical teams.
- ISO/IEC standards in the 42xxx series are all AI-related. The number 42001 anchors the AI management system; other numbers address specific aspects (risk, bias, robustness).
- Know the analogy: ISO/IEC 42001 is to AI governance what ISO 9001 is to quality management and ISO 27001 is to information security.
Global Landscape — Exam Focus Points
- The Council of Europe Framework Convention (2024) = first binding international AI treaty. Not the same as the EU AI Act — it's a treaty open to all countries globally.
- The UNESCO Recommendation (2021) = first global normative AI instrument. Non-binding. Adopted by all 193 UNESCO member states.
- OECD AI Principles (2019) = first intergovernmental AI policy standard. 5 principles. Influential on EU AI Act drafting. Updated 2024.
- China = first country to regulate generative AI (2023) with dedicated legislation. Sector-by-sector approach (GenAI, algorithms, deepfakes — separate regulations).
- US approach = sector-specific and voluntary at federal level. Biden EO 14110 was revoked in Jan 2025. NIST AI RMF remains the key federal guidance. FDA, FTC have sector-specific AI rules.
- The key distinction: hard law vs soft law vs standards vs frameworks. Know which category each instrument falls into and the enforcement implications.
- Don't confuse the EU AI Act (an EU regulation, binding on those operating in or placing AI on the EU market) with the CoE AI Treaty (an international treaty, binding on its signatories worldwide).