

GCP-PCA Practice Questions

Master the Managing and provisioning a cloud solution infrastructure Domain

Test your knowledge in the Managing and provisioning a cloud solution infrastructure domain with these 10 practice questions. Each question is designed to help you prepare for the GCP-PCA certification exam with detailed explanations to reinforce your learning.

Question 1

A global retail company is migrating its on-premises order processing system to Google Cloud. The system consists of a stateless API layer and a stateful order-processing worker that runs long-running jobs (up to 45 minutes). The company has the following requirements:
- Must support traffic spikes during seasonal sales without manual intervention.
- Minimize operational overhead for infrastructure management.
- Ensure that failed worker jobs are retried without duplicate processing.
- Keep costs low during off-peak hours.
Which architecture best meets these requirements?

A) Deploy the API layer on Cloud Run with automatic scaling. Use Pub/Sub for job queuing and Cloud Run jobs for the worker, triggered by Pub/Sub push subscriptions. Implement idempotency in the worker to handle retries.

B) Deploy the API layer on a regional GKE cluster with cluster autoscaling. Use a GKE Job for each order-processing task, triggered via a custom controller that watches a Cloud SQL table for new jobs.

C) Deploy the API layer on a managed instance group with autoscaling. Use Cloud Tasks to enqueue jobs and process them with a pool of Compute Engine instances in another managed instance group.

D) Deploy the API layer on App Engine standard. Use Cloud Functions triggered by Pub/Sub messages to process each order, chaining multiple functions for long-running tasks.


Correct Answer: A

Explanation:

Option A is best because it balances operational simplicity, scalability, and cost while handling long-running jobs and retries correctly.

- Cloud Run for the API layer provides automatic scaling to handle traffic spikes and scales to zero during off-peak hours, minimizing cost and operational overhead.
- Pub/Sub is appropriate for decoupling the API from the worker and handling spikes with durable, at-least-once delivery.
- Cloud Run jobs are designed for long-running, containerized batch workloads; task timeouts are configurable well beyond the 45-minute requirement (currently up to 24 hours per task).
- Using Pub/Sub with pull or push to trigger Cloud Run jobs, combined with idempotent worker logic, ensures retries without duplicate processing (see the sketch after this list).
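
To make the idempotency point concrete, here is a minimal sketch of one common pattern: record each order ID with a create-if-absent write so that Pub/Sub's at-least-once redeliveries become no-ops. The collection name, message shape, and process_order function are hypothetical, and Firestore is just one possible deduplication store:

```python
# Hedged sketch of idempotent processing under Pub/Sub's at-least-once
# delivery. Assumes a Firestore database in the project; the collection
# name, message shape, and process_order() are hypothetical.
import json

from google.api_core.exceptions import AlreadyExists
from google.cloud import firestore

db = firestore.Client()

def process_order(order: dict) -> None:
    """Placeholder for the real long-running (up to 45 min) business logic."""
    ...

def handle_message(data: bytes, order_id: str) -> None:
    marker = db.collection("processed_orders").document(order_id)
    try:
        # create() fails with AlreadyExists if this order was seen before,
        # turning a redelivered message into a harmless no-op.
        marker.create({"status": "started"})
    except AlreadyExists:
        return  # duplicate delivery: ack without reprocessing
    process_order(json.loads(data))
    marker.update({"status": "done"})
    # Production code would also reclaim markers stuck in "started"
    # (e.g., a lease with a timestamp) so a crashed run can be retried.
```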

Why the others are suboptimal:

B: GKE with Jobs can technically handle long-running tasks and autoscaling, but it significantly increases operational overhead (cluster management, upgrades, capacity planning) compared to fully managed services. The custom controller watching Cloud SQL adds complexity and potential reliability issues. This violates the requirement to minimize operational overhead.

C: Managed instance groups with Cloud Tasks can work, but Cloud Tasks is optimized for short-lived HTTP tasks (dispatch deadlines for HTTP targets top out well short of 45 minutes), not 45-minute jobs. You would need to manage your own worker pool and lifecycle on Compute Engine, increasing operational burden. Also, scaling to zero is not straightforward, so off-peak costs are higher.

D: App Engine standard for the API is reasonable, but Cloud Functions are not suitable for 45-minute processing; event-driven functions are limited to a few minutes of execution time and are better for short-lived work. Chaining multiple functions to simulate long-running jobs increases complexity, error-handling difficulty, and operational risk.

Therefore, A best aligns with managed services, cost efficiency, and operational simplicity while meeting the long-running and retry requirements.

Question 2

A global retail company is migrating its on-premises e-commerce platform to Google Cloud. The application is a monolithic Java app that must be available 24/7 with an RTO of 15 minutes and an RPO of 5 minutes. The company expects highly variable traffic, with large spikes during seasonal sales. They have a small operations team and want to minimize day-to-day infrastructure management while keeping costs predictable. The security team requires that customer payment data be stored in a managed database with automatic encryption at rest and in transit. Which deployment approach should you recommend for the application tier and database tier?

A) Deploy the Java application on a regional managed instance group of Compute Engine VMs with autoscaling and a global external HTTP(S) load balancer; use a regional Cloud SQL for PostgreSQL instance with high availability and automatic backups.

B) Containerize the Java application and deploy it to a regional GKE cluster with cluster autoscaling and a global external HTTP(S) load balancer; use a self-managed PostgreSQL database on a separate managed instance group with persistent disks and custom backup scripts.

C) Containerize the Java application and deploy it to Cloud Run (fully managed) with automatic scaling and a global external HTTP(S) load balancer; use Cloud SQL for PostgreSQL with high availability and point-in-time recovery enabled.

D) Deploy the Java application on App Engine standard environment with automatic scaling; use a Memorystore for Redis instance as the primary data store for customer payment data and configure scheduled exports to Cloud Storage.


Correct Answer: C

Explanation:

Option C best aligns with the requirements for minimal infrastructure management, high availability, and security by design.

Analysis:
- The company wants to minimize day-to-day infrastructure management and has a small operations team. Fully managed services are preferred.
- RTO of 15 minutes and RPO of 5 minutes require a managed database with HA and point-in-time recovery.
- Highly variable traffic and seasonal spikes require automatic, fine-grained scaling.
- Security team requires managed database with encryption at rest and in transit.

Why C is best:
- Cloud Run (fully managed) abstracts away VM and cluster management, fitting the small ops team and operational simplicity requirement.
- Cloud Run scales automatically based on traffic, including to zero when idle, which helps with cost efficiency and handling spikes.
- Cloud SQL for PostgreSQL with HA and point-in-time recovery supports the RPO/RTO requirements and provides managed encryption at rest and in transit (a connection sketch follows this list).
- This combination uses managed services for both tiers, aligning with the Well-Architected principles of operational excellence and reliability.
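
As a rough illustration of the application tier in option C, here is a minimal sketch of Cloud Run code opening an encrypted connection to Cloud SQL with the Cloud SQL Python Connector (package cloud-sql-python-connector); the instance connection name, credentials, and query are placeholders:

```python
# Hedged sketch: a Cloud Run container connecting to Cloud SQL for
# PostgreSQL via the Cloud SQL Python Connector, which encrypts the
# connection in transit. Instance name and credentials are placeholders;
# IAM database authentication would be preferable to a static password.
import pg8000  # PostgreSQL driver used by the connector

from google.cloud.sql.connector import Connector

connector = Connector()

def get_conn():
    return connector.connect(
        "my-project:europe-west1:orders-db",  # hypothetical instance
        "pg8000",
        user="app-user",
        password="change-me",
        db="orders",
    )

conn = get_conn()
cur = conn.cursor()
cur.execute("SELECT 1")  # simple connectivity check
print(cur.fetchone())
conn.close()
connector.close()
```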

Why not A:
- Managed instance groups with autoscaling and a global HTTP(S) load balancer are technically valid and can meet availability and scaling needs.
- However, they still require more infrastructure management (OS patching, VM sizing, instance templates) than Cloud Run.
- This is suboptimal given the small operations team and desire to minimize day-to-day management.

Why not B:
- GKE with cluster autoscaling is powerful and flexible but introduces significant operational overhead: cluster upgrades, node pool management, capacity planning, and Kubernetes expertise.
- Self-managed PostgreSQL on Compute Engine requires managing backups, patching, HA, and failover logic, which conflicts with the requirement for a managed database and small ops team.
- Security and reliability are more complex to implement correctly compared to Cloud SQL.

Why not D:
- App Engine standard can be a good fit for managed application hosting, but using Memorystore for Redis as the primary data store for customer payment data is inappropriate.
- Redis is an in-memory cache, not a durable system of record, and is not designed as a primary transactional database for sensitive payment data.
- This fails the reliability and compliance expectations for payment data, even with scheduled exports to Cloud Storage.

Therefore, option C provides the best balance of managed services, scalability, security, and operational simplicity.

Question 3

A SaaS company provides an internal analytics platform to multiple enterprise customers. Each customer has strict isolation requirements and wants assurance that their data and workloads are logically separated from other customers. The platform runs a multi-tenant web UI, per-tenant data processing jobs, and per-tenant data stores. Requirements:
- Strong isolation between tenants at the infrastructure and network level
- Centralized operations team must manage all environments with minimal duplication
- Ability to onboard new tenants quickly with a standardized baseline
- Support for per-tenant customizations (e.g., different data retention policies)
- Minimize operational overhead and reduce risk of misconfiguration across many tenants
How should you design the Google Cloud resource hierarchy and provisioning approach?

A) Create a single project for all tenants. Use separate VPC subnets and IAM service accounts per tenant. Tag resources with labels indicating tenant ownership. Provision tenant resources using Terraform with a shared module that creates per-tenant subnets and IAM bindings.

B) Create one project per tenant under a shared folder. Use a shared VPC host project for all tenants and separate service projects per tenant. Use organization policies at the folder level and per-project IAM to enforce isolation. Provision tenant projects and resources using Terraform with a tenant module and a centralized CI/CD pipeline.

C) Create separate folders per tenant, each containing dev, test, and prod projects. Use per-tenant VPCs and Cloud VPN to connect them to a central operations project. Provision resources manually for each tenant to allow maximum customization and avoid complex automation.

D) Create a single folder for all tenants and a single shared VPC. Use Kubernetes namespaces in a central GKE cluster to isolate tenant workloads. Use network policies and RBAC to enforce isolation. Provision namespaces and policies using Helm charts and scripts run by the operations team.


Correct Answer: B

Explanation:

Option B provides strong tenant isolation, centralized management, and scalable provisioning with minimal duplication.

Analysis:
- One project per tenant: This gives a clear isolation boundary for quotas, IAM, logging, and billing. It reduces the risk of cross-tenant access due to misconfiguration within a single project.
- Shared folder: Placing tenant projects under a shared folder allows applying organization policies (e.g., allowed services, CMEK requirements) consistently across all tenants while still allowing per-project customization.
- Shared VPC host project + per-tenant service projects: This pattern centralizes network management (subnets, firewalls) while still giving each tenant its own service project. It supports strong network isolation via per-tenant subnets, firewall rules, and potentially per-tenant service perimeters.
- Centralized operations: A centralized CI/CD pipeline using Terraform modules can create new tenant projects, attach them to the shared VPC, and provision baseline resources (service accounts, IAM, storage, processing jobs) quickly and consistently. Per-tenant variables can handle customizations like data retention (a provisioning sketch follows this list).
- Strong isolation: Per-project IAM and network segmentation via shared VPC provide strong logical isolation between tenants.
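
Option B expresses tenant onboarding in Terraform; to make the per-tenant provisioning step concrete, here is a hedged Python sketch of the equivalent call with the Resource Manager v3 client. The folder number, project naming, and labels are hypothetical:

```python
# Hedged sketch of the "one project per tenant" onboarding step. A real
# pipeline would express this in the Terraform tenant module; the folder
# number, project naming, and labels here are hypothetical.
from google.cloud import resourcemanager_v3

def onboard_tenant(tenant_id: str, folder: str = "folders/123456789012") -> None:
    client = resourcemanager_v3.ProjectsClient()
    project = resourcemanager_v3.Project(
        project_id=f"analytics-{tenant_id}",  # must be globally unique
        display_name=f"Tenant {tenant_id}",
        parent=folder,  # shared folder, so org policies apply automatically
        labels={"tenant": tenant_id},
    )
    client.create_project(project=project).result()  # wait for creation
    # Next steps (not shown): attach the project to the shared VPC as a
    # service project and apply the baseline IAM and storage resources,
    # driven by per-tenant variables such as data retention.

onboard_tenant("acme")
```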

Why others are suboptimal:
- A: A single project for all tenants increases blast radius; a misconfigured IAM policy or firewall rule could expose multiple tenants’ data. While labels help with logical grouping, they do not enforce isolation. This design does not meet the requirement for strong isolation at the infrastructure and network level.
- C: Separate folders per tenant with multiple projects and per-tenant VPCs can provide isolation but significantly increases management overhead (many VPCs, VPNs, and policies). Manual provisioning for each tenant is error-prone and does not scale, conflicting with the requirement to onboard tenants quickly and minimize misconfiguration risk.
- D: Using a single GKE cluster with namespaces for tenant isolation relies heavily on Kubernetes RBAC and network policies. While this can work, it is a weaker isolation boundary than separate projects and VPC segmentation, and a misconfiguration in the cluster could affect multiple tenants. Also, a single shared VPC and cluster increase blast radius. This approach does not leverage the stronger isolation primitives available at the project and network level.

Therefore, B offers a well-architected multi-tenant model with strong isolation, centralized control, and scalable, automated provisioning aligned with the company’s requirements.

Question 4

A financial services company is designing a new risk analytics platform on Google Cloud. The platform will run batch jobs that can be preempted and restarted without data loss. Jobs are CPU-intensive and run for several hours, but they are not latency-sensitive. The company wants to minimize compute costs while ensuring that infrastructure provisioning is repeatable and auditable. Security policy requires that service-to-service access be tightly controlled and that no long-lived credentials be embedded in images or code. What should you do?

A) Use a regional GKE Autopilot cluster and run the batch jobs as Kubernetes Jobs using standard nodes. Use Terraform to provision the cluster and workloads, and use Workload Identity for service-to-service access.

B) Use Compute Engine managed instance groups with regular on-demand VMs and a startup script to pull job definitions. Use Deployment Manager to provision the infrastructure and service accounts with JSON keys stored in Secret Manager.

C) Use Compute Engine managed instance groups with preemptible VMs, orchestrated by a managed workflow or scheduler, and use Terraform to provision the infrastructure. Use service accounts attached to the instances and grant least-privilege IAM roles.

D) Use Cloud Run jobs with maximum CPU allocation and configure them to run in a fully managed environment. Use gcloud scripts to deploy jobs and use user-managed service account keys stored in Cloud Storage for authentication.


Correct Answer: C

Explanation:

Option C best meets the cost, resilience, security, and provisioning requirements.

Reasoning:
- Workload characteristics: Batch, CPU-intensive, preemption-tolerant, not latency-sensitive. Preemptible VMs (and their successor, Spot VMs) are ideal for significantly reducing compute costs while accepting interruptions.
- Managed instance groups: Provide autoscaling and uniform management of preemptible VMs, simplifying operations.
- Provisioning: Terraform supports repeatable, auditable infrastructure provisioning and version control.
- Security: Attaching service accounts directly to instances avoids long-lived credentials in code or images, since workloads obtain short-lived tokens from the metadata server. Least-privilege IAM roles enforce tight access control (see the sketch after this list).
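
As an illustrative sketch of the two key settings in option C (preemptible scheduling for cost, an attached service account instead of embedded keys), here is a hedged snippet using the Compute Engine Python client to define the worker instance template; all names are hypothetical, and in practice this would be declared in Terraform:

```python
# Hedged sketch of the worker instance template behind option C:
# preemptible scheduling for cost, plus an attached service account so the
# workload receives short-lived tokens from the metadata server instead of
# embedded keys. All names are hypothetical.
from google.cloud import compute_v1

template = compute_v1.InstanceTemplate(
    name="risk-batch-worker",
    properties=compute_v1.InstanceProperties(
        machine_type="c2-standard-16",  # CPU-intensive batch work
        scheduling=compute_v1.Scheduling(
            preemptible=True,           # accept interruption for lower cost
            automatic_restart=False,    # required for preemptible VMs
            on_host_maintenance="TERMINATE",
        ),
        service_accounts=[
            compute_v1.ServiceAccount(
                email="batch-worker@my-project.iam.gserviceaccount.com",
                scopes=["https://www.googleapis.com/auth/cloud-platform"],
            )
        ],
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12",
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default"),
        ],
    ),
)

client = compute_v1.InstanceTemplatesClient()
client.insert(project="my-project", instance_template_resource=template).result()
```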

Why not A:
- GKE Autopilot simplifies operations, but with standard nodes (as this option specifies) pod resource requests are billed at on-demand rates; it does not exploit preemptible/Spot pricing as effectively as MIGs for cost optimization in long-running batch workloads.
- While Workload Identity is good for security, the scenario emphasizes cost minimization for long-running CPU-intensive jobs, where preemptible MIGs are more cost-effective.

Why not B:
- On-demand VMs do not minimize costs as effectively as preemptible VMs for interruptible batch workloads.
- Storing JSON keys in Secret Manager still involves managing long-lived credentials, which the policy aims to avoid. Attaching service accounts to resources is preferred.
- Deployment Manager is less flexible and less commonly used than Terraform for modern IaC workflows.

Why not D:
- Cloud Run jobs can handle long-running tasks, but multi-hour CPU-intensive batch work is typically more expensive under Cloud Run's per-resource pricing than on preemptible VMs, with less control over machine shapes.
- Using user-managed service account keys stored in Cloud Storage introduces long-lived credentials and key management overhead, violating the requirement to avoid embedded long-lived credentials.
- gcloud scripts for deployment are less auditable and repeatable than Terraform-based IaC.

Therefore, C is the best architectural choice for cost-optimized, secure, and repeatable provisioning of batch workloads.

Question 5

A global retail company is migrating its on-premises order processing system to Google Cloud. The system consists of a stateless API layer and a stateful order-processing worker that runs long-running tasks (up to 45 minutes). The company has these requirements:
- Must support sudden traffic spikes during flash sales without manual intervention.
- Minimize operational overhead for infrastructure management.
- Ensure that a failed worker does not lose in-progress work; tasks must be retried safely.
- Keep costs predictable and avoid overprovisioning idle capacity.
- Production deployments must be quickly rolled back if issues are detected.
You are designing the target architecture on Google Cloud. What should you do?

A) Deploy the stateless API on Cloud Run with automatic scaling. Use Pub/Sub for task queuing and Cloud Run jobs for the long-running workers, triggered by Pub/Sub push subscriptions. Use Cloud Build for container image builds and Cloud Deploy for progressive rollouts and rollbacks.

B) Deploy the stateless API on a regional GKE Autopilot cluster with a Horizontal Pod Autoscaler. Use a GKE-managed RabbitMQ deployment for task queuing and a separate worker Deployment for long-running tasks. Use rolling updates with a low maxUnavailable setting for safe rollouts.

C) Deploy the stateless API on a regional managed instance group with autoscaling. Use Cloud Tasks for queuing and a separate managed instance group for workers that pull tasks from Cloud Tasks. Use instance templates and rolling updates for deployment management.

D) Deploy the stateless API on App Engine standard. Use Pub/Sub for task queuing and App Engine flexible environment for the long-running workers. Use App Engine traffic splitting for canary deployments and rollbacks.


Correct Answer: A

Explanation:

Option A best satisfies the requirements with minimal operational overhead and strong reliability:

- Cloud Run for stateless API: Fully managed, automatic scaling, no server management, good for sudden traffic spikes and cost efficiency (pay-per-use).
- Pub/Sub for task queuing: Durable, at-least-once delivery, decouples producers and consumers, supports retries and backoff.
- Cloud Run jobs for long-running workers: Designed for containerized batch/long-running tasks (task timeouts are configurable up to 24 hours, comfortably covering 45-minute tasks), integrate well with Pub/Sub via push handlers or orchestration, and simplify worker lifecycle management. If a job fails, it can be retried without losing tasks (see the sketch after this list).
- Cloud Build + Cloud Deploy: Provide CI/CD, progressive rollouts, and fast rollbacks with low operational overhead.
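
To make the Pub/Sub-to-worker handoff concrete, here is a hedged sketch of a push handler that starts one Cloud Run job execution per task with the Cloud Run Admin API client (google-cloud-run; the overrides field assumes a recent client version). The job resource name and ORDER_ID variable are hypothetical:

```python
# Hedged sketch: a push handler that launches one Cloud Run job execution
# per task. The push endpoint returns quickly; the job runs the 45-minute
# work with its own retry policy. Job name and ORDER_ID are hypothetical.
from google.cloud import run_v2

JOB_NAME = "projects/my-project/locations/us-central1/jobs/order-worker"

def launch_worker(order_id: str) -> None:
    client = run_v2.JobsClient()
    request = run_v2.RunJobRequest(
        name=JOB_NAME,
        overrides=run_v2.RunJobRequest.Overrides(
            container_overrides=[
                run_v2.RunJobRequest.Overrides.ContainerOverride(
                    env=[run_v2.EnvVar(name="ORDER_ID", value=order_id)],
                )
            ]
        ),
    )
    client.run_job(request=request)  # returns a long-running operation

launch_worker("order-12345")
```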

This combination aligns with managed services, operational simplicity, and resilience. It also keeps costs predictable by scaling to zero when idle and scaling up automatically during flash sales.

Why the other options are suboptimal:

- B (GKE Autopilot + RabbitMQ): Technically valid but higher operational complexity. Managing RabbitMQ (even on GKE) adds operational burden (upgrades, tuning, HA). GKE Autopilot reduces some infra management but still requires cluster and workload configuration. For a simple queue/worker pattern, Pub/Sub + Cloud Run is more aligned with managed, low-ops design.

- C (Managed instance groups + Cloud Tasks): Cloud Tasks is not ideal for long-running (45-minute) tasks; it is optimized for short-lived HTTP tasks and has execution time limits and semantics that make long-running processing more complex to manage. MIGs require more capacity planning and OS-level management than Cloud Run. This increases operational overhead and may lead to overprovisioning.

- D (App Engine standard + flexible): Mixing App Engine standard and flexible increases complexity (two runtimes, different scaling and deployment models). App Engine flexible is less responsive to sudden spikes and has longer startup times, which is problematic for flash sales. Cloud Run is generally preferred for new container-based workloads due to faster scaling and simpler ops.

Therefore, option A provides the best balance of scalability, reliability, cost control, and operational simplicity.

Question 6

A media streaming company is re-architecting its recommendation service on Google Cloud. The service:
- Receives user activity events in near real time.
- Computes personalized recommendations that must be available to the frontend API with p95 latency under 50 ms.
- Must handle unpredictable traffic spikes during popular live events.
- Needs to be highly available across zones within a region.
- Operations team wants a deployment model that supports blue/green releases and quick rollback with minimal manual intervention.
You are designing the infrastructure and deployment strategy for the recommendation service backend. What should you do?

A) Deploy the recommendation service on a regional GKE cluster with multiple node pools across zones. Use a Horizontal Pod Autoscaler based on CPU and custom metrics. Use a regional internal HTTP(S) Load Balancer for traffic and configure blue/green deployments using separate Kubernetes Services and manual traffic switching.

B) Deploy the recommendation service on Compute Engine managed instance groups in multiple zones behind a regional external HTTP(S) Load Balancer. Use autoscaling based on CPU utilization and implement blue/green deployments by updating instance templates and gradually shifting traffic using load balancer URL maps.

C) Deploy the recommendation service on Cloud Run (fully managed) in a single region with minimum instances configured to handle baseline load. Use Cloud Run revisions for blue/green deployments and traffic splitting, and configure autoscaling based on concurrent requests and custom metrics.

D) Deploy the recommendation service on Cloud Functions triggered by Pub/Sub events. Use multiple functions per recommendation type, and configure additional functions for blue/green deployments. Use IAM conditions to control which function version receives traffic.


Correct Answer: C

Explanation:

Option C provides a managed, highly available, and operationally simple solution that supports low latency, autoscaling, and blue/green deployments.

Reasoning:
- Latency and availability:
  - Cloud Run (fully managed) runs in a region and automatically spreads across zones, providing zonal redundancy.
  - With appropriate minimum instances, it can meet p95 latency under 50 ms by avoiding cold starts for baseline traffic.
- Traffic spikes:
  - Cloud Run autoscaling based on request concurrency handles unpredictable spikes without manual intervention.
- Deployment strategy:
  - Cloud Run revisions natively support blue/green and canary deployments via traffic splitting, with quick rollback by adjusting traffic percentages (see the sketch after this list).
- Operational overhead:
  - Fully managed platform: no cluster or VM management, aligning with the operations team’s desire for minimal manual intervention.
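
A hedged sketch of the revision-based traffic shift described above, using the Cloud Run Admin API Python client (google-cloud-run); the service and revision names are hypothetical, and rolling back is the same update with the percentages swapped:

```python
# Hedged sketch: blue/green traffic shifting between two Cloud Run
# revisions. Service and revision names are hypothetical; rollback is the
# same update with the percentages swapped back.
from google.cloud import run_v2

SERVICE = "projects/my-project/locations/us-central1/services/recommender"
REVISION = run_v2.TrafficTargetAllocationType.TRAFFIC_TARGET_ALLOCATION_TYPE_REVISION

def shift_traffic(blue: str, green: str, green_percent: int) -> None:
    client = run_v2.ServicesClient()
    service = client.get_service(name=SERVICE)
    service.traffic = [
        run_v2.TrafficTarget(type_=REVISION, revision=green, percent=green_percent),
        run_v2.TrafficTarget(type_=REVISION, revision=blue, percent=100 - green_percent),
    ]
    client.update_service(service=service).result()  # wait for rollout

shift_traffic("recommender-00041-blue", "recommender-00042-green", 10)  # 10% canary
```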

Why not A:
- GKE requires cluster management, node scaling, and upgrades, increasing operational complexity compared to Cloud Run.
- Blue/green via separate Services and manual traffic switching is more complex and error-prone than Cloud Run’s built-in revision traffic splitting.
- While it can meet latency and availability requirements, it does not minimize operational overhead as well as Cloud Run.

Why not B:
- Compute Engine MIGs require OS patching, capacity planning, and instance template management.
- Blue/green using instance templates and URL maps is more manual and complex than Cloud Run’s revision-based deployment.
- While technically viable, it is less aligned with the goal of minimizing manual intervention and operational complexity.

Why not D:
- Cloud Functions are event-driven and not ideal for low-latency, synchronous recommendation APIs that must respond within 50 ms.
- Managing blue/green via multiple functions and IAM conditions is cumbersome and not a standard pattern for traffic shifting.
- Cold starts and function-level granularity make it harder to guarantee consistent low latency under unpredictable spikes compared to Cloud Run.

Question 7

A financial services company is designing a new analytics platform on Google Cloud. They must ingest transaction data from multiple regions into a central data lake and run batch analytics jobs nightly. Regulatory requirements mandate that raw transaction data must not leave the originating region, but aggregated, anonymized results can be stored centrally. The platform must be cost-efficient and easy to operate long term. Which architecture should you recommend for data storage and processing?

A) Store raw transaction data in a single multi-region Cloud Storage bucket; run Dataflow jobs in a single region to process all data and write aggregated results to BigQuery in that region.

B) Store raw transaction data in regional Cloud Storage buckets in each originating region; run regional Dataflow jobs in each region to process data locally and write anonymized, aggregated results to a central BigQuery dataset in a multi-region location.

C) Store raw transaction data in regional persistent disks attached to Compute Engine instances in each region; run custom batch processing scripts on those instances and write aggregated results to a Cloud SQL instance in a central region.

D) Store raw transaction data in a global Bigtable instance; run Dataproc clusters in each region to process data and write aggregated results to separate BigQuery datasets in each region, then periodically export them to a central Cloud Storage bucket.


Correct Answer: B

Explanation:

Option B best satisfies the regulatory, cost, and operational requirements.

Key constraints:
- Raw transaction data must not leave the originating region (data residency/compliance).
- Aggregated, anonymized results can be centralized.
- Need cost efficiency and operational simplicity.

Why B is best:
- Regional Cloud Storage buckets ensure raw data stays in the originating region, satisfying data residency.
- Running Dataflow jobs in each region processes data locally, avoiding cross-region movement of raw data.
- Dataflow is managed, autoscaling, and well-suited for nightly batch processing, reducing operational burden (a pipeline sketch follows this list).
- Writing anonymized, aggregated results to a central BigQuery dataset in a multi-region location is compliant (only aggregated data is centralized) and simplifies analytics.
- This architecture aligns with the Well-Architected principles of security/compliance and operational excellence.
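
To illustrate the per-region processing pattern, here is a minimal Apache Beam sketch that pins the Dataflow job to the data's region and writes only anonymized aggregates to the central BigQuery dataset; the bucket, dataset, and CSV layout are hypothetical:

```python
# Hedged sketch of one per-region nightly job: the Dataflow region matches
# the bucket's region, so raw records never leave it, and only the
# anonymized aggregate lands in the central BigQuery dataset. All names
# and the CSV column layout are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",
    region="europe-west1",  # processing stays in the data's region
    temp_location="gs://txn-raw-europe-west1/tmp",
)

def merchant_amount(line: str):
    fields = line.split(",")
    return fields[1], float(fields[3])  # hypothetical columns: merchant, amount

with beam.Pipeline(options=options) as p:
    (
        p
        | beam.io.ReadFromText("gs://txn-raw-europe-west1/2025-01-01/*.csv")
        | beam.Map(merchant_amount)
        | beam.CombinePerKey(sum)  # aggregation discards raw transaction detail
        | beam.Map(lambda kv: {"merchant": kv[0], "total": kv[1], "src_region": "europe-west1"})
        | beam.io.WriteToBigQuery(
            "my-project:central_analytics.daily_totals",  # central multi-region dataset
            schema="merchant:STRING,total:FLOAT,src_region:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```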

Why not A:
- A single multi-region Cloud Storage bucket would store raw data across multiple regions, which may violate the requirement that raw data must not leave the originating region.
- Even if the multi-region location includes those regions, you lose strict control over where raw data resides.
- Centralized processing of raw data also conflicts with the intent of regional isolation.

Why not C:
- Using persistent disks and custom scripts on Compute Engine significantly increases operational overhead (instance management, scaling, patching, failure handling).
- Cloud SQL is not ideal for large-scale analytics results storage; BigQuery is more appropriate for analytical workloads.
- This design is less cost-efficient and less scalable for large nightly batch analytics.

Why not D:
- Bigtable is a low-latency NoSQL database, not a natural fit for a raw data lake and batch analytics pattern.
- Managing Dataproc clusters in each region adds operational complexity compared to Dataflow.
- Exporting multiple regional BigQuery datasets to a central Cloud Storage bucket complicates the architecture and does not directly provide a central analytical store.

Therefore, option B provides a compliant, scalable, and operationally simple architecture.

Question 8

A healthcare provider is deploying a new patient portal on Google Cloud. The application is a containerized web app with a REST API and a background job processor. Requirements:
- Must comply with HIPAA and store PHI only in approved services
- All data at rest must be encrypted with customer-managed keys (CMEK)
- Zero-downtime deployments with the ability to quickly roll back
- Operations team has limited Kubernetes expertise and wants to avoid managing clusters
- Need to minimize cost while supporting moderate, predictable traffic
Which architecture and provisioning approach should you recommend?

A) Deploy the web app and API to Cloud Run (fully managed) and the background jobs to Cloud Functions. Store PHI in Cloud SQL with CMEK. Use Infrastructure as Code (IaC) with Terraform to provision services and keys. Use Cloud Run traffic splitting for zero-downtime deployments.

B) Deploy the web app, API, and background jobs to a GKE Autopilot cluster. Store PHI in Cloud Spanner with CMEK. Use Deployment Manager to provision the cluster, database, and keys. Use rolling updates with maxUnavailable=0 for zero-downtime deployments.

C) Deploy the web app and API on Compute Engine managed instance groups and the background jobs on separate Compute Engine instances. Store PHI in Cloud Storage with CMEK. Use startup scripts for deployments and snapshots for rollback.

D) Deploy the web app, API, and background jobs to Cloud Run jobs and services. Store PHI in Firestore in Native mode with CMEK. Provision all resources manually via the console to reduce complexity and avoid IaC overhead. Use Cloud Run revisions for zero-downtime deployments.


Correct Answer: A

Explanation:

Option A best satisfies compliance, operational simplicity, deployment safety, and cost-efficiency.

Analysis:
- HIPAA and PHI: Cloud Run, Cloud Functions, and Cloud SQL are HIPAA-eligible services when used under a BAA. Cloud SQL supports CMEK for data at rest, satisfying the encryption requirement.
- CMEK: Cloud SQL with CMEK allows customer-managed encryption keys. Terraform can provision KMS key rings and keys and attach them to Cloud SQL instances, ensuring consistent, auditable configuration (the key objects are sketched below).
- Zero-downtime deployments: Cloud Run supports revisions and traffic splitting, enabling blue/green or canary deployments with quick rollback by shifting traffic back to the previous revision.
- Operational simplicity: Cloud Run and Cloud Functions are fully managed; no cluster management is required, aligning with the operations team’s limited Kubernetes expertise. Background jobs can run as Cloud Functions triggered by events or as Cloud Run jobs if needed; the option describes Cloud Functions, which is appropriate for background processing.
- Cost: For moderate, predictable traffic, Cloud Run and Cloud Functions can be cost-effective, especially when compared to running always-on clusters or Spanner.
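
Option A prescribes Terraform for provisioning; purely to show the CMEK objects involved, here is a hedged sketch that creates a key ring and key with the Cloud KMS Python client (google-cloud-kms). Names and location are hypothetical, and the resulting key name is what the Cloud SQL instance's encryption configuration would reference:

```python
# Hedged sketch of the CMEK objects option A relies on: a key ring and an
# ENCRYPT_DECRYPT key. Names and location are hypothetical; a real setup
# would declare these in Terraform next to the Cloud SQL instance, which
# references the key's resource name at creation time.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
parent = "projects/my-project/locations/us-central1"

key_ring = client.create_key_ring(
    request={"parent": parent, "key_ring_id": "patient-portal", "key_ring": {}}
)

crypto_key = client.create_crypto_key(
    request={
        "parent": key_ring.name,
        "crypto_key_id": "cloudsql-phi",
        "crypto_key": {"purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT},
    }
)

# crypto_key.name is the value the Cloud SQL instance's disk encryption
# configuration would point at.
print(crypto_key.name)
```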

Why others are suboptimal:
- B: GKE Autopilot reduces some operational burden but still requires Kubernetes knowledge (deployments, services, ingress, etc.). Cloud Spanner is a global, horizontally scalable database that is significantly more expensive and complex than Cloud SQL and is likely unnecessary for a patient portal with moderate traffic. Deployment Manager is less flexible and less widely used than Terraform and does not inherently improve compliance over it.
- C: Compute Engine managed instance groups and standalone instances require more operational management (OS patching, capacity planning). Storing PHI in Cloud Storage as the primary data store for a transactional patient portal is not ideal; Cloud Storage is object storage, not a relational database. While CMEK is supported, this design complicates application logic and consistency. Startup scripts and snapshots are coarse-grained deployment and rollback mechanisms and do not provide fine-grained, zero-downtime deployment control.
- D: Firestore with CMEK can store PHI, but using Firestore as the primary store for a portal that likely has strong relational and transactional requirements may not be optimal. Provisioning resources manually via the console undermines auditability, repeatability, and compliance controls; IaC is important in regulated environments to ensure consistent, reviewable infrastructure changes. Running the background jobs as Cloud Run jobs may be viable, but the lack of IaC is a major compliance and operational drawback.

Therefore, A provides a compliant, managed, and cost-effective architecture with strong deployment and provisioning practices.

Question 9

A healthcare SaaS provider is designing a new multi-tenant platform on Google Cloud to host electronic medical records (EMR) for hospitals in a single country. Requirements include:
- Strict data residency: all patient data must remain in a single region.
- Each hospital must be logically isolated from others, with separate encryption keys and access controls.
- The platform must support zero-downtime deployments and automatic rollback.
- The operations team wants a consistent way to provision and update infrastructure across tenants with minimal manual steps.
- The application consists of a stateless web/API tier and a relational database per tenant.
What should you do to design the infrastructure provisioning and deployment approach?

A) Use a single regional GKE cluster for all tenants. Deploy the web/API tier as separate Kubernetes Deployments per tenant and use a single regional Cloud SQL instance with a separate database per tenant. Use Helm charts for application deployment and kubectl scripts for infrastructure changes.

B) Use a separate project per tenant in the same region. Deploy the web/API tier on Cloud Run and a separate regional Cloud SQL instance per tenant. Use Terraform to provision projects, Cloud Run services, and Cloud SQL instances, and use Cloud Deploy for application rollouts with canary releases.

C) Use a single project with multiple regional GKE clusters, one per tenant. Deploy the web/API tier and a Cloud SQL instance per tenant in each cluster. Use Deployment Manager to provision clusters and Cloud SQL, and use rolling updates in GKE for zero-downtime deployments.

D) Use a single project and a single regional GKE cluster. Deploy the web/API tier as a multi-tenant application and use a single regional Cloud SQL instance with row-level security per tenant. Use Terraform for infrastructure and GKE rolling updates for deployments.


Correct Answer: B

Explanation:

Option B best addresses data residency, tenant isolation, and operational consistency.

Reasoning:
- Data residency:
  - All resources (Cloud Run and Cloud SQL) are deployed in a single region, satisfying the residency requirement.
- Tenant isolation and security:
  - A separate project per tenant provides strong isolation boundaries for IAM, networking, logging, and quotas.
  - A separate Cloud SQL instance per tenant allows distinct encryption keys (via CMEK if required) and independent access controls and maintenance windows.
- Zero-downtime deployments and rollback:
  - Cloud Deploy supports progressive delivery strategies (e.g., canary) and automated rollbacks, enabling zero-downtime deployments for the web/API tier.
- Operational consistency:
  - Terraform can declaratively provision and update projects, Cloud Run services, and Cloud SQL instances, enabling repeatable, auditable infrastructure changes across tenants (see the sketch after this list).
- Managed services:
  - Cloud Run and Cloud SQL minimize infrastructure management overhead compared to managing clusters or VMs.
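
To make the per-tenant provisioning loop concrete, here is a minimal sketch of a pipeline step that applies the same Terraform tenant module once per hospital with a per-tenant var-file; the directory layout, file names, and state handling are hypothetical, and a managed pipeline such as Cloud Build would normally drive this:

```python
# Hedged sketch of a CI step that applies the same Terraform tenant module
# once per hospital, with a per-tenant var-file carrying customizations
# (project ID, CMEK key, retention). Paths, tenant names, and the local
# -state flag are illustrative; remote state per tenant is more typical.
import pathlib
import subprocess

TENANT_DIR = pathlib.Path("tenants")  # hospital-a.tfvars, hospital-b.tfvars, ...

for var_file in sorted(TENANT_DIR.glob("*.tfvars")):
    tenant = var_file.stem
    print(f"Provisioning tenant: {tenant}")
    subprocess.run(
        [
            "terraform", "apply",
            f"-var-file={var_file}",
            f"-state=state/{tenant}.tfstate",  # keep per-tenant blast radius small
            "-auto-approve",
        ],
        check=True,  # fail the pipeline if any tenant apply fails
    )
```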

Why not A:
- A single Cloud SQL instance with multiple databases per tenant weakens isolation; noisy neighbor issues and shared maintenance events can affect all tenants.
- A single project and cluster for all tenants complicate per-tenant access control and encryption key separation.
- Using kubectl scripts for infrastructure changes is less consistent and less auditable than using an IaC tool like Terraform.

Why not C:
- Per-tenant regional GKE clusters do nothing to strengthen single-region residency and add significant operational overhead (cluster lifecycle, upgrades) compared to Cloud Run.
- Deployment Manager is less flexible and less widely adopted than Terraform for complex, multi-project provisioning.

Why not D:
- Single project and single Cloud SQL instance with row-level security provide weaker isolation than per-tenant projects and instances.
- A multi-tenant database increases blast radius for performance and operational issues.
- While Terraform and GKE rolling updates help, this design does not meet the requirement for separate encryption keys and strong isolation per hospital as well as option B.

Question 10

A global retail company is migrating its on-premises e-commerce platform to Google Cloud. The application is a monolithic Java app that currently runs on VMs and connects to a PostgreSQL database. The company wants to minimize operational overhead and improve availability, but they are not ready to refactor into microservices yet. Requirements:
- Must support traffic from customers in North America and Europe
- RPO of 5 minutes and RTO of 30 minutes for the database
- Minimize manual infrastructure management
- Ability to perform blue/green deployments with minimal downtime
- Data residency: customer data must remain in the EU for EU customers
What should you design as the target architecture for the application and database?

A) Deploy the Java application on a regional managed instance group with autoscaling in us-central1 and europe-west1 behind a global external HTTP(S) load balancer. Use a single regional Cloud SQL for PostgreSQL instance in europe-west1 with cross-region read replicas in us-central1. Implement blue/green deployments using instance templates and load balancer traffic shifting.

B) Deploy the Java application to Cloud Run (fully managed) in us-central1 and europe-west1 behind a global external HTTP(S) load balancer. Use two separate regional Cloud SQL for PostgreSQL instances, one in us-central1 and one in europe-west1, with application-level routing to ensure EU customers use the EU database. Implement blue/green deployments using Cloud Run revisions and traffic splitting.

C) Deploy the Java application on GKE Autopilot clusters in us-central1 and europe-west1 behind a global external HTTP(S) load balancer. Use a multi-region Cloud Spanner instance for customer data, with separate databases for EU and non-EU customers. Implement blue/green deployments using separate Kubernetes deployments and load balancer routing.

D) Deploy the Java application on Compute Engine unmanaged instance groups in us-central1 and europe-west1 behind a global external HTTP(S) load balancer. Use a single multi-region Cloud SQL for PostgreSQL instance spanning us-central1 and europe-west1. Implement blue/green deployments using startup scripts and DNS-based cutover.


Correct Answer: B

Explanation:

Option B best balances operational simplicity, availability, compliance, and deployment flexibility.

Analysis:
- The company wants to minimize operational overhead and is not ready to refactor into microservices, but the app is a monolith that can still be containerized. Cloud Run (fully managed) significantly reduces infrastructure management compared to managing VMs or clusters, while still supporting containerized monoliths.
- Global traffic from North America and Europe is handled by deploying Cloud Run services in us-central1 and europe-west1 behind a global external HTTP(S) load balancer.
- Data residency: EU customer data must remain in the EU. Using two separate regional Cloud SQL instances (one in us-central1, one in europe-west1) and routing EU users to the EU instance satisfies this. Application-level routing or a routing layer can ensure that EU users only hit the EU database (see the routing sketch after this list).
- RPO/RTO: Regional Cloud SQL with automated backups and point-in-time recovery can meet RPO 5 minutes and RTO 30 minutes when combined with appropriate backup and failover strategies. Separate regional instances also avoid cross-region replication latency for compliance-sensitive data.
- Blue/green: Cloud Run supports revisions and traffic splitting, enabling controlled blue/green or canary deployments with minimal downtime and no need to manage underlying infrastructure.
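
The application-level routing in option B can be as simple as mapping a customer's country to the regional Cloud SQL instance. A minimal sketch, with hypothetical instance connection names and a deliberately abbreviated EU country set:

```python
# Hedged sketch of application-level residency routing for option B: EU
# customers resolve to the europe-west1 Cloud SQL instance, everyone else
# to us-central1. Instance connection names are hypothetical and the EU
# set is deliberately abbreviated.
EU_COUNTRIES = {"DE", "FR", "IE", "NL", "ES", "IT"}  # incomplete, for illustration

DB_BY_REGION = {
    "eu": "my-project:europe-west1:shop-db-eu",
    "us": "my-project:us-central1:shop-db-us",
}

def db_instance_for(country_code: str) -> str:
    """Pick the Cloud SQL instance that satisfies residency for this customer."""
    region = "eu" if country_code.upper() in EU_COUNTRIES else "us"
    return DB_BY_REGION[region]

assert db_instance_for("de") == "my-project:europe-west1:shop-db-eu"
assert db_instance_for("US") == "my-project:us-central1:shop-db-us"
```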

Why others are suboptimal:
- A: Uses regional managed instance groups and a single regional Cloud SQL instance with cross-region read replicas. This fails the data residency requirement: the read replica in us-central1 holds a full copy of the database, including EU customers' data, outside the EU. Serving all writes from a single primary region also adds latency for North American users, and MIGs carry more operational overhead than Cloud Run (VM patching, instance templates, autoscaling tuning).
- C: GKE Autopilot reduces some operational overhead, but still introduces cluster-level complexity (Kubernetes objects, upgrades, networking). Cloud Spanner is highly available and global, but is a significant cost and complexity increase compared to Cloud SQL, and may be overkill for a monolithic app migration. Also, using a global Spanner instance with separate databases for EU and non-EU customers can satisfy residency, but it’s a heavier architectural shift than needed and may not align with the cost minimization and simplicity goals.
- D: Unmanaged instance groups increase operational burden (no autoscaling group management, more manual operations). Cloud SQL does not support a single multi-region instance spanning regions; this option is architecturally incorrect. DNS-based cutover for blue/green is slower and less precise than load balancer or platform-level traffic splitting and can cause longer propagation delays, impacting RTO and deployment control.

Therefore, B is the best fit for minimizing operational overhead, meeting data residency, supporting global traffic, and enabling safe blue/green deployments.

