GCP-PCA Practice Questions: Managing and provisioning a cloud solution infrastructure Domain
Test your GCP-PCA knowledge with 10 practice questions from the Managing and provisioning a cloud solution infrastructure domain. Includes detailed explanations and answers.
GCP-PCA Practice Questions
Master the Managing and provisioning a cloud solution infrastructure Domain
Test your knowledge in the Managing and provisioning a cloud solution infrastructure domain with these 10 practice questions. Each question is designed to help you prepare for the GCP-PCA certification exam with detailed explanations to reinforce your learning.
Question 1
A global retail company is migrating its on-premises order processing system to Google Cloud. The system consists of a stateless API layer and a stateful order-processing worker that runs long-running jobs (up to 45 minutes). The company has the following requirements: - Must support traffic spikes during seasonal sales without manual intervention. - Minimize operational overhead for infrastructure management. - Ensure that failed worker jobs are retried without duplicate processing. - Keep costs low during off-peak hours. Which architecture best meets these requirements?
Show Answer & Explanation
Correct Answer: A
Option A is best because it balances operational simplicity, scalability, and cost while handling long-running jobs and retries correctly.
- Cloud Run for the API layer provides automatic scaling to handle traffic spikes and scales to zero during off-peak hours, minimizing cost and operational overhead.
- Pub/Sub is appropriate for decoupling the API from the worker and handling spikes with durable, at-least-once delivery.
- Cloud Run jobs are designed for long-running, containerized batch workloads; task timeouts are configurable well beyond the 45-minute requirement (up to 24 hours).
- Pub/Sub delivery is at-least-once, so pairing it (via pull subscribers or a push-triggered service that starts the job) with idempotent worker logic ensures retries do not cause duplicate processing (see the sketch at the end of this explanation).
Why the others are suboptimal:
B: GKE with Jobs can technically handle long-running tasks and autoscaling, but it significantly increases operational overhead (cluster management, upgrades, capacity planning) compared to fully managed services. The custom controller watching Cloud SQL adds complexity and potential reliability issues. This violates the requirement to minimize operational overhead.
C: Managed instance groups with Cloud Tasks can work, but Cloud Tasks is optimized for short-lived HTTP tasks (dispatch deadlines top out at 30 minutes), not 45-minute jobs. You would need to manage your own worker pool and lifecycle on Compute Engine, increasing operational burden. Also, scaling to zero is not straightforward, so off-peak costs are higher.
D: App Engine standard for the API is reasonable, but Cloud Functions are not suitable for 45-minute processing; they have execution time limits and are better for short-lived functions. Chaining multiple functions to simulate long-running jobs increases complexity, error handling difficulty, and operational risk.
Therefore, A best aligns with managed services, cost efficiency, and operational simplicity while meeting the long-running and retry requirements.
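The key to "retries without duplicate processing" is idempotent worker logic. Below is a minimal sketch, assuming a Pub/Sub subscription named order-jobs-sub, an order_id message attribute, and a placeholder dedup store; in production the processed-ID check would live in a transactional database rather than a process-local set.

```python
# Hedged sketch of an idempotent Pub/Sub worker (names are illustrative).
from google.cloud import pubsub_v1

PROJECT_ID = "retail-prod"            # assumption
SUBSCRIPTION = "order-jobs-sub"       # assumption

processed_ids = set()                 # stand-in for a durable dedup store (e.g., Cloud SQL)

def process_order(order_id: str, payload: bytes) -> None:
    """Placeholder for the long-running order-processing logic."""
    print(f"processing {order_id}: {len(payload)} bytes")

def handle(message):
    order_id = message.attributes.get("order_id", message.message_id)
    if order_id in processed_ids:
        message.ack()                 # duplicate delivery: acknowledge and skip
        return
    try:
        process_order(order_id, message.data)
        processed_ids.add(order_id)   # record success before acking
        message.ack()
    except Exception:
        message.nack()                # Pub/Sub redelivers, enabling a safe retry

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION)
streaming_pull = subscriber.subscribe(sub_path, callback=handle)
streaming_pull.result()               # block the worker on the streaming pull
```

Because Pub/Sub delivery is at-least-once, the dedup check (not the queue itself) is what prevents a retried message from creating a second order.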
Question 2
A global retail company is migrating its on-premises e-commerce platform to Google Cloud. The application is a monolithic Java app that must be available 24/7 with an RTO of 15 minutes and an RPO of 5 minutes. The company expects highly variable traffic, with large spikes during seasonal sales. They have a small operations team and want to minimize day-to-day infrastructure management while keeping costs predictable. The security team requires that customer payment data be stored in a managed database with automatic encryption at rest and in transit. Which deployment approach should you recommend for the application tier and database tier?
Show Answer & Explanation
Correct Answer: C
Option C best aligns with the requirements for minimal infrastructure management, high availability, and security by design.
Analysis:
- The company wants to minimize day-to-day infrastructure management and has a small operations team. Fully managed services are preferred.
- RTO of 15 minutes and RPO of 5 minutes require a managed database with HA and point-in-time recovery.
- Highly variable traffic and seasonal spikes require automatic, fine-grained scaling.
- Security team requires managed database with encryption at rest and in transit.
Why C is best:
- Cloud Run (fully managed) abstracts away VM and cluster management, fitting the small ops team and operational simplicity requirement.
- Cloud Run scales automatically based on traffic, including to zero when idle, which helps with cost efficiency and handling spikes.
- Cloud SQL for PostgreSQL with HA and point-in-time recovery supports the RPO/RTO requirements and provides managed encryption at rest and in transit.
- This combination uses managed services for both tiers, aligning with the Well-Architected principles of operational excellence and reliability.
Why not A:
- Managed instance groups with autoscaling and a global HTTP(S) load balancer are technically valid and can meet availability and scaling needs.
- However, they still require more infrastructure management (OS patching, VM sizing, instance templates) than Cloud Run.
- This is suboptimal given the small operations team and desire to minimize day-to-day management.
Why not B:
- GKE with cluster autoscaling is powerful and flexible but introduces significant operational overhead: cluster upgrades, node pool management, capacity planning, and Kubernetes expertise.
- Self-managed PostgreSQL on Compute Engine requires managing backups, patching, HA, and failover logic, which conflicts with the requirement for a managed database and small ops team.
- Security and reliability are more complex to implement correctly compared to Cloud SQL.
Why not D:
- App Engine standard can be a good fit for managed application hosting, but using Memorystore for Redis as the primary data store for customer payment data is inappropriate.
- Redis is an in-memory cache, not a durable system of record, and is not designed as a primary transactional database for sensitive payment data.
- This fails the reliability and compliance expectations for payment data, even with scheduled exports to Cloud Storage.
Therefore, option C provides the best balance of managed services, scalability, security, and operational simplicity.
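To illustrate the "encryption in transit" point for the database tier, here is a hedged sketch that connects to Cloud SQL for PostgreSQL with the Cloud SQL Python Connector, which establishes a TLS-encrypted connection automatically; the instance connection name, database, and credentials are placeholders.

```python
# Minimal sketch, assuming a Cloud SQL instance "retail-prod:europe-west1:orders-db"
# and an "orders" database; real credentials would come from Secret Manager or IAM auth.
from google.cloud.sql.connector import Connector

connector = Connector()

conn = connector.connect(
    "retail-prod:europe-west1:orders-db",  # instance connection name (assumption)
    "pg8000",                              # PostgreSQL driver
    user="app-user",                       # placeholder
    password="change-me",                  # placeholder
    db="orders",
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()
connector.close()
```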
Question 3
A SaaS company provides an internal analytics platform to multiple enterprise customers. Each customer has strict isolation requirements and wants assurance that their data and workloads are logically separated from other customers. The platform runs a multi-tenant web UI, per-tenant data processing jobs, and per-tenant data stores. Requirements: - Strong isolation between tenants at the infrastructure and network level - Centralized operations team must manage all environments with minimal duplication - Ability to onboard new tenants quickly with a standardized baseline - Support for per-tenant customizations (e.g., different data retention policies) - Minimize operational overhead and reduce risk of misconfiguration across many tenants How should you design the Google Cloud resource hierarchy and provisioning approach?
Show Answer & Explanation
Correct Answer: B
Option B provides strong tenant isolation, centralized management, and scalable provisioning with minimal duplication.
Analysis:
- One project per tenant: This gives a clear isolation boundary for quotas, IAM, logging, and billing. It reduces the risk of cross-tenant access due to misconfiguration within a single project.
- Shared folder: Placing tenant projects under a shared folder allows applying organization policies (e.g., allowed services, CMEK requirements) consistently across all tenants while still allowing per-project customization.
- Shared VPC host project + per-tenant service projects: This pattern centralizes network management (subnets, firewalls) while still giving each tenant its own service project. It supports strong network isolation via per-tenant subnets, firewall rules, and potentially per-tenant service perimeters.
- Centralized operations: A centralized CI/CD pipeline using Terraform modules can create new tenant projects, attach them to the shared VPC, and provision baseline resources (service accounts, IAM, storage, processing jobs) quickly and consistently. Per-tenant variables can handle customizations like data retention.
- Strong isolation: Per-project IAM and network segmentation via shared VPC provide strong logical isolation between tenants.
Why others are suboptimal:
- A: A single project for all tenants increases blast radius; a misconfigured IAM policy or firewall rule could expose multiple tenants’ data. While labels help with logical grouping, they do not enforce isolation. This design does not meet the requirement for strong isolation at the infrastructure and network level.
- C: Separate folders per tenant with multiple projects and per-tenant VPCs can provide isolation but significantly increases management overhead (many VPCs, VPNs, and policies). Manual provisioning for each tenant is error-prone and does not scale, conflicting with the requirement to onboard tenants quickly and minimize misconfiguration risk.
- D: Using a single GKE cluster with namespaces for tenant isolation relies heavily on Kubernetes RBAC and network policies. While this can work, it is a weaker isolation boundary than separate projects and VPC segmentation, and a misconfiguration in the cluster could affect multiple tenants. Also, a single shared VPC and cluster increase blast radius. This approach does not leverage the stronger isolation primitives available at the project and network level.
Therefore, B offers a well-architected multi-tenant model with strong isolation, centralized control, and scalable, automated provisioning aligned with the company’s requirements.
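The answer recommends Terraform modules for tenant onboarding; as a rough, non-authoritative illustration of what that per-tenant step does, the sketch below creates a tenant project under the shared folder with the Resource Manager client. The folder ID, project ID, and labels are assumptions.

```python
# Hypothetical tenant-onboarding step: create a project under the shared tenants folder.
from google.cloud import resourcemanager_v3

client = resourcemanager_v3.ProjectsClient()

tenant_project = resourcemanager_v3.Project(
    project_id="tenant-acme-prod",        # assumption: generated per tenant
    parent="folders/123456789012",        # assumption: the shared "tenants" folder
    display_name="Tenant: Acme",
    labels={"tenant": "acme", "env": "prod"},
)

operation = client.create_project(project=tenant_project)
created = operation.result()              # wait for the long-running operation
print(f"created {created.name}")
```

In the recommended design this call, plus the Shared VPC attachment, IAM baseline, and data stores, would be expressed as Terraform resources driven by per-tenant variables rather than imperative code.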
Question 4
A financial services company is designing a new risk analytics platform on Google Cloud. The platform will run batch jobs that can be preempted and restarted without data loss. Jobs are CPU-intensive and run for several hours, but they are not latency-sensitive. The company wants to minimize compute costs while ensuring that infrastructure provisioning is repeatable and auditable. Security policy requires that service-to-service access be tightly controlled and that no long-lived credentials be embedded in images or code. What should you do?
Show Answer & Explanation
Correct Answer: C
Option C best meets the cost, resilience, security, and provisioning requirements.
Reasoning:
- Workload characteristics: Batch, CPU-intensive, preemptible-tolerant, not latency-sensitive. Preemptible VMs are ideal to significantly reduce compute costs while accepting interruptions.
- Managed instance groups: Provide autoscaling and uniform management of preemptible VMs, simplifying operations.
- Provisioning: Terraform supports repeatable, auditable infrastructure provisioning and version control.
- Security: Attaching service accounts directly to instances avoids long-lived credentials in code or images. Least-privilege IAM roles enforce tight access control.
Why not A:
- GKE Autopilot simplifies operations, but unless Spot Pods are explicitly configured it runs on on-demand pricing, and it still adds Kubernetes-level configuration; it does not exploit preemptible (Spot) capacity as directly as MIGs for cost optimization in long-running batch workloads.
- While Workload Identity is good for security, the scenario emphasizes cost minimization for long-running CPU-intensive jobs where preemptible MIGs are more cost-effective.
Why not B:
- On-demand VMs do not minimize costs as effectively as preemptible VMs for interruptible batch workloads.
- Storing JSON keys in Secret Manager still involves managing long-lived credentials, which the policy aims to avoid. Attaching service accounts to resources is preferred.
- Deployment Manager is less flexible and less commonly used than Terraform for modern IaC workflows.
Why not D:
- Cloud Run jobs can run long batch tasks, but for multi-hour CPU-intensive workloads they are typically more expensive and more constrained (machine shapes, quotas) than preemptible VMs.
- Using user-managed service account keys stored in Cloud Storage introduces long-lived credentials and key management overhead, violating the requirement to avoid embedded long-lived credentials.
- gcloud scripts for deployment are less auditable and repeatable than Terraform-based IaC.
Therefore, C is the best architectural choice for cost-optimized, secure, and repeatable provisioning of batch workloads.
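A minimal sketch of the "no long-lived credentials" pattern on the worker VMs: Application Default Credentials obtain a short-lived token for the attached service account from the metadata server, so no JSON key is baked into the image. The bucket and prefix are illustrative.

```python
# Keyless access from a Compute Engine worker via Application Default Credentials.
import google.auth
from google.cloud import storage

credentials, project_id = google.auth.default()   # metadata-server credentials on GCE
client = storage.Client(project=project_id, credentials=credentials)

bucket = client.bucket("risk-batch-inputs")        # illustrative bucket name
for blob in bucket.list_blobs(prefix="jobs/2024-06-01/"):
    print(blob.name)                               # worker enumerates its input shards
```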
Question 5
A global retail company is migrating its on-premises order processing system to Google Cloud. The system consists of a stateless API layer and a stateful order-processing worker that runs long-running tasks (up to 45 minutes). The company has these requirements: - Must support sudden traffic spikes during flash sales without manual intervention. - Minimize operational overhead for infrastructure management. - Ensure that a failed worker does not lose in-progress work; tasks must be retried safely. - Keep costs predictable and avoid overprovisioning idle capacity. - Production deployments must be rolled back quickly if issues are detected. You are designing the target architecture on Google Cloud. What should you do?
Show Answer & Explanation
Correct Answer: A
Option A best satisfies the requirements with minimal operational overhead and strong reliability:
- Cloud Run for stateless API: Fully managed, automatic scaling, no server management, good for sudden traffic spikes and cost efficiency (pay-per-use).
- Pub/Sub for task queuing: Durable, at-least-once delivery, decouples producers and consumers, supports retries and backoff.
- Cloud Run jobs for long-running workers: Designed for containerized batch/long-running tasks, with task timeouts configurable up to 24 hours, comfortably covering 45-minute tasks. Executions can be started by a lightweight trigger or scheduler that drains Pub/Sub (see the sketch at the end of this explanation), and because messages stay in the subscription until acknowledged, a failed execution can be retried without losing tasks.
- Cloud Build + Cloud Deploy: Provide CI/CD, progressive rollouts, and fast rollbacks with low operational overhead.
This combination aligns with managed services, operational simplicity, and resilience. It also keeps costs predictable by scaling to zero when idle and scaling up automatically during flash sales.
Why the other options are suboptimal:
- B (GKE Autopilot + RabbitMQ): Technically valid but higher operational complexity. Managing RabbitMQ (even on GKE) adds operational burden (upgrades, tuning, HA). GKE Autopilot reduces some infra management but still requires cluster and workload configuration. For a simple queue/worker pattern, Pub/Sub + Cloud Run is more aligned with managed, low-ops design.
- C (Managed instance groups + Cloud Tasks): Cloud Tasks is not ideal for long-running (45-minute) tasks; it is optimized for short-lived HTTP tasks with a 30-minute dispatch deadline, which makes long-running processing more complex to manage. MIGs require more capacity planning and OS-level management than Cloud Run. This increases operational overhead and may lead to overprovisioning.
- D (App Engine standard + flexible): Mixing App Engine standard and flexible increases complexity (two runtimes, different scaling and deployment models). App Engine flexible is less responsive to sudden spikes and has longer startup times, which is problematic for flash sales. Cloud Run is generally preferred for new container-based workloads due to faster scaling and simpler ops.
Therefore, option A provides the best balance of scalability, reliability, cost control, and operational simplicity.
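As a hedged illustration of how a queued batch could be handed to the worker tier, the sketch below starts a Cloud Run job execution with the google-cloud-run client; the project, region, and job name are assumptions, and in practice a scheduler or a small push-triggered service would make this call after reading from Pub/Sub.

```python
# Hypothetical trigger that starts one execution of the order-worker Cloud Run job.
from google.cloud import run_v2

client = run_v2.JobsClient()
job_name = "projects/retail-prod/locations/us-central1/jobs/order-worker"  # assumption

operation = client.run_job(request=run_v2.RunJobRequest(name=job_name))
execution = operation.result()        # wait on the long-running operation
print(f"started {execution.name}")
```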
Question 6
A media streaming company is re-architecting its recommendation service on Google Cloud. The service: - Receives user activity events in near real time. - Computes personalized recommendations that must be available to the frontend API with p95 latency under 50 ms. - Must handle unpredictable traffic spikes during popular live events. - Needs to be highly available across zones within a region. - Operations team wants a deployment model that supports blue/green releases and quick rollback with minimal manual intervention. You are designing the infrastructure and deployment strategy for the recommendation service backend. What should you do?
Show Answer & Explanation
Correct Answer: C
Option C provides a managed, highly available, and operationally simple solution that supports low latency, autoscaling, and blue/green deployments.
Reasoning:
- Latency and availability:
- Cloud Run (fully managed) runs in a region and automatically spreads across zones, providing zonal redundancy.
- With appropriate minimum instances, it can meet p95 latency under 50 ms by avoiding cold starts for baseline traffic.
- Traffic spikes:
- Cloud Run autoscaling, driven by request concurrency and CPU utilization, can handle unpredictable spikes without manual intervention.
- Deployment strategy:
- Cloud Run revisions natively support blue/green and canary deployments via traffic splitting, with quick rollback by adjusting traffic percentages.
- Operational overhead:
- Fully managed platform: no cluster or VM management, aligning with the operations team’s desire for minimal manual intervention.
Why not A:
- GKE requires cluster management, node scaling, and upgrades, increasing operational complexity compared to Cloud Run.
- Blue/green via separate Services and manual traffic switching is more complex and error-prone than Cloud Run’s built-in revision traffic splitting.
- While it can meet latency and availability requirements, it does not minimize operational overhead as well as Cloud Run.
Why not B:
- Compute Engine MIGs require OS patching, capacity planning, and instance template management.
- Blue/green using instance templates and URL maps is more manual and complex than Cloud Run’s revision-based deployment.
- While technically viable, it is less aligned with the goal of minimizing manual intervention and operational complexity.
Why not D:
- Cloud Functions are event-driven and not ideal for low-latency, synchronous recommendation APIs that must respond within 50 ms.
- Managing blue/green via multiple functions and IAM conditions is cumbersome and not a standard pattern for traffic shifting.
- Cold starts and function-level granularity make it harder to guarantee consistent low latency under unpredictable spikes compared to Cloud Run.
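A hedged sketch of the revision-based traffic splitting that makes blue/green and rollback simple on Cloud Run, using the Cloud Run Admin API client; the service and revision names are assumptions, and the same shift is available from the console or gcloud run services update-traffic.

```python
# Shift 10% of traffic to a new revision; rollback is the same call with 100% on "blue".
from google.cloud import run_v2

client = run_v2.ServicesClient()
name = "projects/media-prod/locations/us-central1/services/recsvc"   # assumption

service = client.get_service(name=name)
service.traffic = [
    run_v2.TrafficTarget(
        type_=run_v2.TrafficTargetAllocationType.TRAFFIC_TARGET_ALLOCATION_TYPE_REVISION,
        revision="recsvc-00042-blue",     # current stable revision (assumption)
        percent=90,
    ),
    run_v2.TrafficTarget(
        type_=run_v2.TrafficTargetAllocationType.TRAFFIC_TARGET_ALLOCATION_TYPE_REVISION,
        revision="recsvc-00043-green",    # canary revision (assumption)
        percent=10,
    ),
]
client.update_service(service=service).result()   # apply the new split
```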
Question 7
A financial services company is designing a new analytics platform on Google Cloud. They must ingest transaction data from multiple regions into a central data lake and run batch analytics jobs nightly. Regulatory requirements mandate that raw transaction data must not leave the originating region, but aggregated, anonymized results can be stored centrally. The platform must be cost-efficient and easy to operate long term. Which architecture should you recommend for data storage and processing?
Show Answer & Explanation
Correct Answer: B
Option B best satisfies the regulatory, cost, and operational requirements.
Key constraints:
- Raw transaction data must not leave the originating region (data residency/compliance).
- Aggregated, anonymized results can be centralized.
- Need cost efficiency and operational simplicity.
Why B is best:
- Regional Cloud Storage buckets ensure raw data stays in the originating region, satisfying data residency.
- Running Dataflow jobs in each region processes data locally, avoiding cross-region movement of raw data.
- Dataflow is managed, autoscaling, and well-suited for nightly batch processing, reducing operational burden.
- Writing anonymized, aggregated results to a central BigQuery dataset in a multi-region location is compliant (only aggregated data is centralized) and simplifies analytics.
- This architecture aligns with the Well-Architected principles of security/compliance and operational excellence.
Why not A:
- A single multi-region Cloud Storage bucket would store raw data across multiple regions, which may violate the requirement that raw data must not leave the originating region.
- Even if the multi-region location includes those regions, you lose strict control over where raw data resides.
- Centralized processing of raw data also conflicts with the intent of regional isolation.
Why not C:
- Using persistent disks and custom scripts on Compute Engine significantly increases operational overhead (instance management, scaling, patching, failure handling).
- Cloud SQL is not ideal for large-scale analytics results storage; BigQuery is more appropriate for analytical workloads.
- This design is less cost-efficient and less scalable for large nightly batch analytics.
Why not D:
- Bigtable is a low-latency NoSQL database, not a natural fit for a raw data lake and batch analytics pattern.
- Managing Dataproc clusters in each region adds operational complexity compared to Dataflow.
- Exporting multiple regional BigQuery datasets to a central Cloud Storage bucket complicates the architecture and does not directly provide a central analytical store.
Therefore, option B provides a compliant, scalable, and operationally simple architecture.
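A simplified Apache Beam sketch of the per-region processing pattern: the Dataflow job runs in the region where the raw data lives and writes only aggregated, anonymized rows to the central BigQuery dataset. The project, bucket, table, and CSV layout (merchant ID in column 0, amount in column 2, no header) are assumptions.

```python
# One regional nightly batch job; an identical job runs in each originating region.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    flags=[],
    runner="DataflowRunner",
    project="analytics-prod",                       # assumption
    region="europe-west1",                          # job stays where the raw data lives
    temp_location="gs://raw-tx-europe-west1/tmp",   # regional bucket (assumption)
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadRaw" >> beam.io.ReadFromText("gs://raw-tx-europe-west1/transactions/*.csv")
        | "ToKV" >> beam.Map(lambda row: (row.split(",")[0], float(row.split(",")[2])))
        | "SumPerMerchant" >> beam.CombinePerKey(sum)      # aggregation drops row-level detail
        | "ToDict" >> beam.Map(lambda kv: {"merchant_id": kv[0], "total": kv[1]})
        | "WriteAgg" >> beam.io.WriteToBigQuery(
            "analytics-prod:central_agg.daily_totals",      # central dataset (assumption)
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```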
Question 8
A healthcare provider is deploying a new patient portal on Google Cloud. The application is a containerized web app with a REST API and a background job processor. Requirements: - Must comply with HIPAA and store PHI only in approved services - All data at rest must be encrypted with customer-managed keys (CMEK) - Zero-downtime deployments with the ability to quickly roll back - Operations team has limited Kubernetes expertise and wants to avoid managing clusters - Need to minimize cost while supporting moderate, predictable traffic Which architecture and provisioning approach should you recommend?
Show Answer & Explanation
Correct Answer: A
Option A best satisfies compliance, operational simplicity, deployment safety, and cost-efficiency.
Analysis:
- HIPAA and PHI: Cloud Run, Cloud Functions, and Cloud SQL are HIPAA-eligible services when used under a BAA. Cloud SQL supports CMEK for data at rest, satisfying the encryption requirement.
- CMEK: Cloud SQL with CMEK allows customer-managed encryption keys. Terraform can provision KMS keys, key rings, and attach them to Cloud SQL instances, ensuring consistent, auditable configuration.
- Zero-downtime deployments: Cloud Run supports revisions and traffic splitting, enabling blue/green or canary deployments with quick rollback by shifting traffic back to the previous revision.
- Operational simplicity: Cloud Run and Cloud Functions are fully managed; no cluster management is required, aligning with the operations team’s limited Kubernetes expertise. Background jobs can run as Cloud Functions triggered by events or as Cloud Run jobs if needed; the option describes Cloud Functions, which is appropriate for background processing.
- Cost: For moderate, predictable traffic, Cloud Run and Cloud Functions can be cost-effective, especially when compared to running always-on clusters or Spanner.
Why others are suboptimal:
- B: GKE Autopilot reduces some operational burden but still requires Kubernetes knowledge (deployments, services, ingress, etc.). Cloud Spanner is a global, horizontally scalable database that is significantly more expensive and complex than Cloud SQL and is likely unnecessary for a patient portal with moderate traffic. Deployment Manager is less flexible and less widely used than Terraform and does not inherently improve compliance over it.
- C: Compute Engine managed instance groups and standalone instances require more operational management (OS patching, capacity planning). Storing PHI in Cloud Storage as the primary data store for a transactional patient portal is not ideal; Cloud Storage is object storage, not a relational database. While CMEK is supported, this design complicates application logic and consistency. Startup scripts and snapshots are coarse-grained deployment and rollback mechanisms and do not provide fine-grained, zero-downtime deployment control.
- D: Firestore with CMEK can store PHI, but using Firestore as the primary store for a portal that likely has strong relational and transactional requirements may not be optimal. Provisioning resources manually via the console undermines auditability, repeatability, and compliance controls. IaC is important for regulated environments to ensure consistent, reviewable infrastructure changes. Running the background jobs as Cloud Run jobs may be viable, but the lack of IaC is a major compliance and operational drawback.
Therefore, A provides a compliant, managed, and cost-effective architecture with strong deployment and provisioning practices.
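For the CMEK requirement, here is an illustrative sketch that creates a key ring and symmetric key with the Cloud KMS client; the resulting key resource name is what a CMEK-enabled Cloud SQL instance references at creation time. Project, location, and key names are assumptions, and the recommended Terraform pipeline would declare the equivalent resources instead of calling the API imperatively.

```python
# Bootstrap a CMEK key for the patient portal's Cloud SQL instance (names are illustrative).
from google.cloud import kms

client = kms.KeyManagementServiceClient()
parent = "projects/healthcare-prod/locations/us-central1"   # assumption

key_ring = client.create_key_ring(
    request={"parent": parent, "key_ring_id": "patient-portal", "key_ring": {}}
)
crypto_key = client.create_crypto_key(
    request={
        "parent": key_ring.name,
        "crypto_key_id": "cloudsql-cmek",
        "crypto_key": {
            "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
            "version_template": {
                "algorithm": kms.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
            },
        },
    }
)
print(crypto_key.name)   # reference this key when provisioning the CMEK-protected instance
```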
Question 9
A healthcare SaaS provider is designing a new multi-tenant platform on Google Cloud to host electronic medical records (EMR) for hospitals in a single country. Requirements include: - Strict data residency: all patient data must remain in a single region. - Each hospital must be logically isolated from others, with separate encryption keys and access controls. - The platform must support zero-downtime deployments and automatic rollback. - The operations team wants a consistent way to provision and update infrastructure across tenants with minimal manual steps. - The application consists of a stateless web/API tier and a relational database per tenant. What should you do to design the infrastructure provisioning and deployment approach?
Show Answer & Explanation
Correct Answer: B
Option B best addresses data residency, tenant isolation, and operational consistency.
Reasoning:
- Data residency:
- All resources (Cloud Run and Cloud SQL) are deployed in a single region, satisfying the residency requirement.
- Tenant isolation and security:
- A separate project per tenant provides strong isolation boundaries for IAM, networking, logging, and quotas.
- A separate Cloud SQL instance per tenant allows distinct encryption keys (via CMEK if required) and independent access controls and maintenance windows.
- Zero-downtime deployments and rollback:
- Cloud Deploy supports progressive delivery strategies (e.g., canary) and automated rollbacks, enabling zero-downtime deployments for the web/API tier.
- Operational consistency:
- Terraform can declaratively provision and update projects, Cloud Run services, and Cloud SQL instances, enabling repeatable, auditable infrastructure changes across tenants.
- Managed services:
- Cloud Run and Cloud SQL minimize infrastructure management overhead compared to managing clusters or VMs.
Why not A:
- A single Cloud SQL instance with multiple databases per tenant weakens isolation; noisy neighbor issues and shared maintenance events can affect all tenants.
- A single project and cluster for all tenants complicate per-tenant access control and encryption key separation.
- Using kubectl scripts for infrastructure changes is less consistent and less auditable than using an IaC tool like Terraform.
Why not C:
- Multiple regional GKE clusters contradict the requirement that all data must remain in a single region.
- Per-tenant GKE clusters add significant operational overhead (cluster lifecycle, upgrades) compared to using Cloud Run.
- Deployment Manager is less flexible and less widely adopted than Terraform for complex, multi-project provisioning.
Why not D:
- Single project and single Cloud SQL instance with row-level security provide weaker isolation than per-tenant projects and instances.
- A multi-tenant database increases blast radius for performance and operational issues.
- While Terraform and GKE rolling updates help, this design does not meet the requirement for separate encryption keys and strong isolation per hospital as well as option B.
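One low-effort way to keep per-tenant customizations standardized, sketched under assumed tenant names and retention values: render a Terraform variable file per hospital from a central registry, so one reviewed module provisions every tenant's project, Cloud Run service, and Cloud SQL instance consistently.

```python
# Illustrative generator for per-tenant Terraform variable files (all values are assumptions).
import json
from pathlib import Path

REGION = "europe-west3"   # single data-residency region

TENANTS = {
    "hospital-north": {"retention_days": 365},
    "hospital-south": {"retention_days": 1825},
}

for tenant, overrides in TENANTS.items():
    tfvars = {
        "tenant_id": tenant,
        "region": REGION,
        "cloud_sql_tier": "db-custom-2-7680",   # baseline size, overridable per tenant
        **overrides,
    }
    path = Path(f"envs/{tenant}.tfvars.json")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(tfvars, indent=2))
    print(f"wrote {path}")
```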
Question 10
A global retail company is migrating its on-premises e-commerce platform to Google Cloud. The application is a monolithic Java app that currently runs on VMs and connects to a PostgreSQL database. The company wants to minimize operational overhead and improve availability, but they are not ready to refactor into microservices yet. Requirements: - Must support traffic from customers in North America and Europe - RPO of 5 minutes and RTO of 30 minutes for the database - Minimize manual infrastructure management - Ability to perform blue/green deployments with minimal downtime - Data residency: customer data must remain in the EU for EU customers What should you design as the target architecture for the application and database?
Show Answer & Explanation
Correct Answer: B
Option B best balances operational simplicity, availability, compliance, and deployment flexibility.
Analysis:
- The company wants to minimize operational overhead and is not ready to refactor into microservices, but the app is a monolith that can still be containerized. Cloud Run (fully managed) significantly reduces infrastructure management compared to managing VMs or clusters, while still supporting containerized monoliths.
- Global traffic from North America and Europe is handled by deploying Cloud Run services in us-central1 and europe-west1 behind a global external HTTP(S) load balancer.
- Data residency: EU customer data must remain in the EU. Using two separate regional Cloud SQL instances (one in us-central1, one in europe-west1) and routing EU users to the EU instance satisfies this. Application-level routing or a routing layer can ensure that EU users only hit the EU database.
- RPO/RTO: Regional Cloud SQL with automated backups and point-in-time recovery can meet RPO 5 minutes and RTO 30 minutes when combined with appropriate backup and failover strategies. Separate regional instances also avoid cross-region replication latency for compliance-sensitive data.
- Blue/green: Cloud Run supports revisions and traffic splitting, enabling controlled blue/green or canary deployments with minimal downtime and no need to manage underlying infrastructure.
Why others are suboptimal:
- A: Uses regional managed instance groups and a single regional Cloud SQL instance with cross-region read replicas. This does not meet the data residency requirement: with a single primary in europe-west1 and cross-region read replicas in us-central1, EU customer data is replicated outside the EU. A single primary region for all customers also adds latency for some users, and operational overhead is higher than Cloud Run (VM patching, instance templates, autoscaling tuning).
- C: GKE Autopilot reduces some operational overhead, but still introduces cluster-level complexity (Kubernetes objects, upgrades, networking). Cloud Spanner is highly available and global, but is a significant cost and complexity increase compared to Cloud SQL, and may be overkill for a monolithic app migration. Also, using a global Spanner instance with separate databases for EU and non-EU customers can satisfy residency, but it’s a heavier architectural shift than needed and may not align with the cost minimization and simplicity goals.
- D: Unmanaged instance groups increase operational burden (they do not support autoscaling and require manual instance management). Cloud SQL does not support a single multi-region instance spanning regions, so this option is architecturally incorrect. DNS-based cutover for blue/green is slower and less precise than load balancer or platform-level traffic splitting and can cause longer propagation delays, impacting RTO and deployment control.
Therefore, B is the best fit for minimizing operational overhead, meeting data residency, supporting global traffic, and enabling safe blue/green deployments.
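A small sketch of the application-level routing mentioned above, so EU customers are only ever served from the EU Cloud SQL instance; the instance connection names and the deliberately partial country list are assumptions.

```python
# Route each customer's database access to the instance in their residency region.
DB_BY_REGION = {
    "eu": "retail-prod:europe-west1:orders-eu",   # assumption
    "us": "retail-prod:us-central1:orders-us",    # assumption
}
EU_COUNTRIES = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL"}   # partial list for illustration

def instance_for(customer_country: str) -> str:
    region = "eu" if customer_country.upper() in EU_COUNTRIES else "us"
    return DB_BY_REGION[region]

print(instance_for("FR"))   # -> retail-prod:europe-west1:orders-eu
print(instance_for("CA"))   # -> retail-prod:us-central1:orders-us
```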
Ready to Accelerate Your GCP-PCA Preparation?
Join thousands of professionals who are advancing their careers through expert certification preparation with FlashGenius.
- ✅ Unlimited practice questions across all GCP-PCA domains
- ✅ Full-length exam simulations with real-time scoring
- ✅ AI-powered performance tracking and weak area identification
- ✅ Personalized study plans with adaptive learning
- ✅ Mobile-friendly platform for studying anywhere, anytime
- ✅ Expert explanations and study resources
About GCP-PCA Certification
The GCP-PCA certification validates your expertise in managing and provisioning a cloud solution infrastructure and other critical domains. Our comprehensive practice questions are carefully crafted to mirror the actual exam experience and help you identify knowledge gaps before test day.
Want a structured prep flow? Use Domain Practice first, then switch to Mixed Practice and Exam Simulation for full PCA readiness.
Explore FlashGenius PCA Prep →