GCP-PCA Practice Questions: Designing and planning a cloud solution architecture Domain
GCP-PCA Practice Questions
Master the Designing and planning a cloud solution architecture Domain
Test your knowledge in the Designing and planning a cloud solution architecture domain with these 10 practice questions. Each question is designed to help you prepare for the GCP-PCA certification exam with detailed explanations to reinforce your learning.
Question 1
A global retail company is building a new order-processing platform on Google Cloud. Orders are submitted from web and mobile apps worldwide and must be acknowledged to users within 200 ms. The downstream processing (fraud checks, inventory allocation, shipping label generation) can take several seconds and must be resilient to spikes during flash sales. The platform must:
- Support millions of orders per hour during peak events
- Ensure no order is lost once acknowledged to the user
- Minimize operational overhead for the small SRE team
- Allow independent deployment and scaling of fraud, inventory, and shipping components
Which architecture best meets these requirements?
Correct Answer: B
Option B best satisfies the latency, durability, scalability, and operational requirements while enabling loosely coupled components.
Analysis:
- Acknowledgement within 200 ms: The frontend must quickly persist the order and return. Writing to a strongly consistent, highly available database and publishing to Pub/Sub is typically fast enough.
- No order loss: Requires durable storage and reliable messaging. Cloud Spanner provides strong consistency and high availability; Pub/Sub provides at-least-once delivery and durable message storage.
- Handling spikes: Pub/Sub decouples ingestion from processing, smoothing spikes. Cloud Run scales automatically based on request and subscription load.
- Independent deployment and scaling: Separate Cloud Run services per domain (fraud, inventory, shipping) allow independent scaling and release cycles.
- Operational overhead: Cloud Run, Pub/Sub, and Spanner are fully managed and reduce cluster and VM management.
Why other options are suboptimal:
- A: Synchronous processing of fraud, inventory, and shipping in the same request conflicts with the 200 ms requirement and makes the system fragile during spikes. A monolith tightly couples components and increases operational risk. Cloud SQL may struggle at millions of orders per hour without complex sharding and tuning.
- C: GKE can meet the scale requirements but increases operational overhead for a small SRE team (cluster management, upgrades, capacity planning). Internal HTTP calls between services create tight coupling and can propagate failures; there is no durable message queue to absorb spikes. Bigtable handles high throughput well but is not a natural fit for transactional order processing or the strong consistency semantics it requires.
- D: Cloud Storage as the primary order store is not ideal for transactional workloads and querying. A single function doing all downstream processing is tightly coupled and may hit execution time limits under complex workflows. Firestore is fine for some use cases but the overall pattern lacks robust decoupling and fine-grained scaling of separate components.
Therefore, B is the best architectural choice given the constraints.
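To make the option B write path concrete, here is a minimal Python sketch of the intake service: persist the order durably, publish an event for the asynchronous fraud, inventory, and shipping services, then acknowledge. It assumes hypothetical resource names (a Spanner instance `orders-instance`, database `orders-db`, table `Orders`, and a Pub/Sub topic `orders-submitted`) and omits error handling.

```python
# Minimal sketch of the option B intake path (hypothetical resource names):
# persist the order durably, publish an event for asynchronous processing,
# then acknowledge to the user.
import json
import uuid

from google.cloud import pubsub_v1, spanner

spanner_client = spanner.Client()
database = spanner_client.instance("orders-instance").database("orders-db")
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "orders-submitted")


def accept_order(order: dict) -> str:
    """Durably record the order, enqueue downstream work, then return."""
    order_id = str(uuid.uuid4())

    def write_order(transaction):
        transaction.insert(
            table="Orders",
            columns=("OrderId", "Status", "Payload"),
            values=[(order_id, "ACCEPTED", json.dumps(order))],
        )

    database.run_in_transaction(write_order)  # strong, highly available write
    publisher.publish(topic_path, json.dumps({"order_id": order_id}).encode())
    return order_id  # safe to acknowledge: the order is durable before we return
```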
Question 2
A global media company is building a new video-on-demand platform on Google Cloud. The platform will serve users in North America, Europe, and Asia-Pacific. Requirements:
• Users must experience initial video start times under 2 seconds in all regions.
• The catalog and user profiles are read-heavy, with occasional writes (e.g., new subscriptions, profile updates).
• The business expects traffic spikes during major events and wants to minimize operational overhead.
• Data residency: User profile data for EU residents must remain in the EU region.
• The team has limited SRE capacity and wants to avoid managing complex distributed databases.
You are designing the data layer and content delivery architecture. What should you do?
Correct Answer: C
Option C best balances latency, compliance, and operational simplicity:
• Latency and performance:
– Firestore regional instances in europe-west, us-central, and asia-southeast keep user profile data close to users, reducing read/write latency.
– Cloud Spanner for catalog data provides globally distributed, strongly consistent reads and writes with horizontal scalability. The catalog is read-heavy and global, making Spanner appropriate.
– Multi-region GCS with Cloud CDN ensures low-latency content delivery and automatic caching close to users.
• Compliance:
– EU user profiles are stored in europe-west, satisfying data residency. Application-level routing ensures EU users are directed to the EU Firestore instance.
– Non-EU users can be served from US or APAC regions without violating EU residency.
• Operational overhead:
– Firestore is fully managed, auto-scales, and avoids complex cluster management.
– Spanner is managed and designed for global workloads, reducing the need to manage sharding or complex replication.
– Cloud CDN + multi-region GCS is low-ops for content delivery.
Why others are suboptimal:
A) Single multi-region Spanner for all user profiles and catalog data + regional GCS in us-central1:
– Latency: A single regional GCS bucket in us-central1 increases latency for EU and APAC users, especially for large video assets.
– Compliance: Multi-region Spanner spanning multiple continents may not guarantee that EU user data remains strictly in the EU region; data may be stored and processed outside the EU.
– Operationally simpler than some alternatives, but it fails the EU data residency requirement and cannot deliver optimal content latency.
B) Regional Cloud SQL instances with async replication + dual-region GCS per continent:
– Latency: A Cloud SQL instance in each region can keep read/write latency low for nearby users.
– Compliance: You could keep EU data in an EU instance, but cross-region replication and routing logic become complex.
– Operational overhead: Managing multiple Cloud SQL instances, replication, failover, and schema changes across regions is higher ops burden than Firestore/Spanner. Async replication also introduces potential consistency issues for global catalog data.
D) Single regional Firestore and GCS in europe-west:
– Compliance: EU residency is satisfied, but this over-constrains all data to the EU region.
– Latency: Users in North America and APAC will experience higher latency for both metadata and video content, likely violating the <2s start time requirement under load.
– This design prioritizes compliance at the expense of user experience and global performance.
Therefore, C is the best trade-off across latency, compliance, and operational simplicity.
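A minimal sketch of the application-level routing described above, assuming named regional Firestore databases (a capability of recent google-cloud-firestore client versions) with hypothetical database IDs; the residency value is assumed to be captured at signup.

```python
# Application-level routing of profile data (hypothetical database IDs):
# EU residents are always pinned to the EU database; other users go to
# the closest regional database.
from google.cloud import firestore

PROFILE_DBS = {
    "eu": firestore.Client(project="my-project", database="profiles-eu"),
    "us": firestore.Client(project="my-project", database="profiles-us"),
    "apac": firestore.Client(project="my-project", database="profiles-apac"),
}


def profile_db_for(residency: str) -> firestore.Client:
    # Residency is a hard constraint for EU users; a latency choice otherwise.
    if residency == "eu":
        return PROFILE_DBS["eu"]
    return PROFILE_DBS.get(residency, PROFILE_DBS["us"])


def update_profile(user_id: str, residency: str, changes: dict) -> None:
    db = profile_db_for(residency)
    db.collection("profiles").document(user_id).set(changes, merge=True)
```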
Question 3
A financial services company is designing a new internal reporting portal on Google Cloud. The portal will:
- Be accessed only by employees through the corporate VPN
- Display aggregated financial reports generated daily from BigQuery
- Require strict access control: only specific teams can see certain reports
- Need to log all access to reports for audit purposes
- Minimize the need for custom authentication and authorization code
Which architecture best satisfies the security and operational requirements?
Correct Answer: B
Option B leverages Google Cloud’s managed identity and access capabilities to minimize custom security code while meeting strict access control and audit requirements.
Key points:
- Internal-only access: Cloud Run with ingress restricted to internal traffic (VPC or internal load balancing) ensures the service is not publicly accessible.
- Authentication and authorization: Identity-Aware Proxy (IAP) provides centralized authentication and fine-grained access control policies without custom auth code. It integrates with corporate identity providers (IdPs) via OAuth/OpenID Connect.
- Least custom code: Using IAP offloads most authN/authZ logic to a managed service.
- Auditing: IAP access logs and Cloud Audit Logs provide detailed records of who accessed which resources and when, suitable for financial audits.
- Operational simplicity: Cloud Run is serverless and fully managed; no server or cluster management is required.
Why other options are suboptimal:
- A: Internal TCP/UDP Load Balancer does not provide HTTP-level identity features. Implementing LDAP-based auth and detailed authorization in the app increases complexity and maintenance. VPC Flow Logs capture network-level, not user-level, access, which is insufficient for audit of “who saw which report”.
- C: Public HTTP(S) Load Balancer exposes the portal to the internet, conflicting with the internal-only requirement. While OAuth 2.0 and logging can work, this requires more custom auth code and careful configuration to avoid exposure.
- D: App Engine with public access and IP-based restriction via Cloud Armor is weaker than identity-based access control. IP-based controls are brittle (VPN changes, IP spoofing concerns) and do not provide user-level authorization. Custom RBAC in the app increases complexity, and public exposure is not ideal for sensitive financial data.
Thus, B best aligns with security-by-design, minimal custom auth code, and strong auditing.
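For illustration, a service behind IAP can also verify the signed assertion header itself as defense in depth. The sketch below follows the documented IAP verification pattern; the audience string is deployment-specific and shown as a hypothetical value.

```python
# Defense-in-depth sketch: verify the signed header that IAP adds to each
# request before rendering a report. The expected audience string is
# deployment-specific (hypothetical value shown).
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

IAP_AUDIENCE = "/projects/123456789/global/backendServices/987654321"  # hypothetical


def verify_iap_request(headers: dict) -> str:
    """Return the authenticated user's email, or raise if the JWT is invalid."""
    iap_jwt = headers["x-goog-iap-jwt-assertion"]
    claims = id_token.verify_token(
        iap_jwt,
        google_requests.Request(),
        audience=IAP_AUDIENCE,
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )
    return claims["email"]
```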
Question 4
A healthcare analytics company is migrating a sensitive patient-reporting application to Google Cloud. The application:
- Serves clinicians in a single country with strict data residency laws
- Requires 99.9% availability
- Must store all patient-identifiable data at rest encrypted with customer-managed keys
- Must support complex SQL queries and transactional updates
- Needs to be fully managed to minimize operational burden
The company also wants the ability to perform read-only analytics queries without impacting transactional performance. Which database architecture best meets these requirements?
Correct Answer: A
Option A aligns best with the requirements around data residency, availability, encryption, transactional support, and operational simplicity.
Key requirements mapping:
- Single-country data residency: A regional deployment in a compliant region is appropriate. Multi-region services that span countries may violate residency constraints.
- 99.9% availability: Cloud SQL with high availability in a region can meet this SLA when configured correctly.
- Customer-managed encryption keys (CMEK): Cloud SQL supports CMEK for data at rest.
- Complex SQL and transactions: Cloud SQL (PostgreSQL) is a relational database with full SQL and ACID transactions.
- Managed service: Cloud SQL is fully managed.
- Analytics without impacting OLTP: Read replicas in the same region can offload read-heavy analytics queries while keeping data in-country.
Why other options are suboptimal:
- B: Cloud Spanner is powerful and supports CMEK and SQL, but a multi-region instance spans multiple geographic locations, which may conflict with strict single-country residency laws. Also, Spanner is often more expensive and complex than needed for a single-country, 99.9% SLA workload.
- C: Firestore is document-oriented and not ideal for complex SQL queries and traditional transactional reporting. While exporting to BigQuery for analytics is valid, the primary transactional requirement is for SQL and transactional updates, which Firestore does not natively provide in the same way as a relational database.
- D: Cloud SQL for MySQL with nightly exports to BigQuery provides analytics but introduces high data latency (up to 24 hours) for analytics, which may not be acceptable for clinicians needing up-to-date reports. Also, relying on nightly batch exports is operationally more complex than using read replicas for near-real-time analytics.
Thus, A provides a balanced, compliant, and operationally simple architecture that meets all stated constraints.
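A small sketch of the read/write split described above, using SQLAlchemy against hypothetical primary and read-replica endpoints: transactional writes go to the Cloud SQL primary, analytics reads go to the in-region replica.

```python
# Read/write split for option A (hypothetical endpoints and schema):
# ACID writes go to the Cloud SQL primary, analytics reads to the replica.
from sqlalchemy import create_engine, text

primary = create_engine("postgresql+psycopg2://app:secret@10.0.0.10/reports")
replica = create_engine("postgresql+psycopg2://app:secret@10.0.0.11/reports")


def record_result(patient_id: str, payload: str) -> None:
    with primary.begin() as conn:  # transactional write on the primary
        conn.execute(
            text("INSERT INTO results (patient_id, payload) VALUES (:p, :d)"),
            {"p": patient_id, "d": payload},
        )


def daily_summary() -> list:
    with replica.connect() as conn:  # read-only analytics on the replica
        rows = conn.execute(
            text("SELECT patient_id, COUNT(*) AS n FROM results GROUP BY patient_id")
        )
        return [dict(r) for r in rows.mappings()]
```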
Question 5
A global retail company is modernizing its on-premises e-commerce platform to Google Cloud. The platform is a monolithic Java application with the following requirements:
- Must support seasonal traffic spikes up to 10x normal load for short periods.
- Checkout and payment flows must maintain sub-200 ms latency for users in North America and Europe.
- The company has a small operations team and wants to minimize day-2 operational overhead.
- The application writes to a relational database and uses local disk for caching product catalog data.
- The business wants to gradually refactor the monolith into microservices over the next 2–3 years.
You need to design the initial target architecture on Google Cloud that best balances performance, cost, and long-term maintainability while enabling gradual modernization. What should you do?
Correct Answer: C
Option C best aligns with the requirements and constraints:
- Performance & global reach: Cloud Run (fully managed) can be fronted by a global external HTTP(S) load balancer, providing low-latency access for users in North America and Europe. It supports rapid scale-out to handle 10x traffic spikes.
- Operational simplicity: Cloud Run is fully managed, removing the need to manage servers, clusters, or OS patching. This fits the small operations team and minimizes day-2 operations.
- Gradual modernization: Containerizing the monolith is a natural first step that later allows splitting into multiple services, each deployable as separate Cloud Run services. This supports the 2–3 year refactoring plan.
- Stateful needs: Cloud SQL satisfies the relational database requirement. Memorystore for Redis provides a managed, low-latency cache without the operational burden of managing Redis on VMs.
- Cost efficiency: Cloud Run’s scale-to-zero and per-request billing help control costs outside peak seasons, while minimum instances can be tuned for latency and warm capacity.
Why the other options are suboptimal:
- Option A (Compute Engine MIG + Cloud SQL + local SSD cache):
- Technically viable but increases operational overhead: OS patching, capacity planning, instance templates, and autoscaling tuning.
- Local SSD caching is tied to individual VMs, making cache warm-up and consistency more complex and less flexible when scaling.
- Less aligned with the goal of minimizing operations and enabling gradual microservices adoption.
- Option B (GKE + Cloud SQL + Redis on VMs):
- GKE reduces some operational burden but still requires cluster management, node pool sizing, upgrades, and Kubernetes expertise.
- Running Redis on Compute Engine VMs adds more operational overhead (patching, failover, scaling) compared to a managed cache.
- This is more complex than necessary for a team explicitly wanting minimal day-2 operations.
- Option D (Immediate refactor to microservices + Cloud Run + Firestore):
- Requires a large up-front refactor before migration, increasing risk, timeline, and complexity; this contradicts the requirement to gradually refactor over 2–3 years.
- Replacing a relational database with Firestore may require significant data model changes and application rewrites, which is not aligned with a phased approach.
- While Cloud Run and Cloud CDN are good managed choices, the forced refactor and data store change make this option risky and misaligned with constraints.
Therefore, option C provides the best balance of performance, cost, operational simplicity, and a clear path to gradual modernization.
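To illustrate how Memorystore replaces the local-disk catalog cache, here is a cache-aside sketch with hypothetical hosts, table, and TTL; redis-py and SQLAlchemy stand in for whatever client libraries the containerized monolith already uses.

```python
# Cache-aside catalog lookup for option C (hypothetical hosts and schema):
# check Memorystore for Redis first, fall back to Cloud SQL on a miss.
import json

import redis
from sqlalchemy import create_engine, text

cache = redis.Redis(host="10.0.0.3", port=6379)
db = create_engine("postgresql+psycopg2://app:secret@10.0.0.10/catalog")


def get_product(product_id: str) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)

    with db.connect() as conn:
        row = conn.execute(
            text("SELECT id, name, CAST(price AS FLOAT) AS price "
                 "FROM products WHERE id = :id"),
            {"id": product_id},
        ).mappings().first()

    product = dict(row) if row else {}
    cache.setex(key, 300, json.dumps(product))  # 5-minute TTL
    return product
```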
Question 6
A global retail company is modernizing its legacy on-premises order management system. The new system will be deployed on Google Cloud and must:
• Serve web and mobile clients globally with p95 latency under 200 ms for read operations.
• Support peak traffic spikes of 20x during seasonal sales with minimal manual intervention.
• Ensure that order creation and payment processing remain strongly consistent and ACID-compliant.
• Minimize operational overhead for infrastructure management.
• Keep costs predictable and avoid overprovisioning for rare peak events.
Additional constraints:
• The company’s risk team requires that payment data be stored in a single geographic region for regulatory reasons.
• Business stakeholders want to roll out new features weekly with minimal downtime.
You are designing the core data and application architecture. What should you do?
Correct Answer: D
Option D best balances latency, consistency, regulatory, and operational requirements.
Analysis of requirements:
- Global low-latency reads (<200 ms p95): This suggests regional deployments close to users and a globally distributed data store for read-heavy data (orders, catalog, etc.).
- Strong consistency and ACID for order creation and payments: Both operations must not exhibit eventual consistency; they need transactional guarantees.
- Regulatory constraint: Payment data must reside in a single geographic region.
- Operational simplicity and elastic scaling: Prefer fully managed, serverless, and autoscaling services.
- Frequent releases with minimal downtime: Stateless services and managed platforms simplify blue/green or rolling deployments.
Why D is best:
- Cloud Run (fully managed) in multiple regions behind a global HTTP(S) Load Balancer provides:
- Low-latency access for users globally.
- Automatic scaling to handle 20x traffic spikes with minimal ops overhead.
- Simple deployment model for weekly releases.
- Dual-database strategy:
- Cloud Spanner multi-region for order data:
- Strong consistency and ACID transactions across regions.
- Global low-latency reads and writes with high availability.
- Ideal for order state, inventory, and other globally accessed transactional data.
- Single-region Cloud SQL for payment data:
- Satisfies regulatory requirement to keep payment data in one region.
- Provides ACID transactions for payment records.
- The application can orchestrate transactions so that payment operations are localized to that region while orders are globally consistent via Spanner.
- This design:
- Meets latency targets for most user interactions via Spanner and regional Cloud Run.
- Respects regulatory constraints for payment data.
- Minimizes operational overhead by using fully managed services.
Why not A:
- Monolithic app on Compute Engine:
- Higher operational overhead (instance management, patching, capacity planning).
- Less agile for weekly feature rollouts.
- Single-region Cloud SQL for all data:
- Global users will experience higher latency due to cross-continent round trips.
- May struggle with 20x spikes without careful capacity planning and scaling.
- While technically possible, this design does not optimize for global latency or operational simplicity.
Why not B:
- Cloud Run multi-region is good for stateless services and autoscaling, but using a single-region Cloud SQL for both orders and payments:
- Violates the latency requirement for global users (all reads/writes go to one region).
- Puts more pressure on a single regional database during 20x spikes.
- It meets regulatory needs but not the global latency and scalability goals for the core order workload.
Why not C:
- Cloud Spanner multi-region for all data (including payments):
- Technically strong for global consistency and latency, but conflicts with the explicit requirement that payment data be stored in a single geographic region.
- May increase cost and regulatory complexity for payment data.
- GKE Autopilot is managed but still more operationally complex than Cloud Run (cluster concepts, upgrades, etc.).
- While this design is robust and performant, it fails the regulatory constraint and is not the simplest operationally.
Therefore, D is the only option that satisfies global latency, strong consistency, regulatory constraints, and operational simplicity together.
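A simplified sketch of the dual-store write path in option D: the payment record stays in the single-region Cloud SQL instance while the order lands in multi-region Spanner. All names are hypothetical, and a real implementation would add idempotency keys and compensation logic rather than relying on two independent transactions.

```python
# Dual-store write for option D (hypothetical names): payment data stays in
# a regulated single-region Cloud SQL instance; order state goes to
# multi-region Spanner for global, strongly consistent reads.
import uuid

from google.cloud import spanner
from sqlalchemy import create_engine, text

payments_db = create_engine("postgresql+psycopg2://app:secret@10.0.0.20/payments")
orders_db = spanner.Client().instance("orders-instance").database("orders-db")


def place_order(customer_id: str, amount: float, card_token: str) -> str:
    order_id = str(uuid.uuid4())

    # 1. Payment record is written only to the single-region store.
    with payments_db.begin() as conn:
        conn.execute(
            text(
                "INSERT INTO payments (order_id, customer_id, amount, card_token) "
                "VALUES (:o, :c, :a, :t)"
            ),
            {"o": order_id, "c": customer_id, "a": amount, "t": card_token},
        )

    # 2. Order state is written to the globally replicated store.
    def write_order(transaction):
        transaction.insert(
            table="Orders",
            columns=("OrderId", "CustomerId", "Status"),
            values=[(order_id, customer_id, "PAID")],
        )

    orders_db.run_in_transaction(write_order)
    return order_id
```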
Question 7
A healthcare analytics startup is designing a new platform on Google Cloud to process and analyze sensitive patient data from multiple hospitals. Requirements:
• All data is classified as PHI and must comply with HIPAA.
• Raw data arrives as batch files (CSV, JSON) several times per day from each hospital.
• The platform must support both scheduled batch analytics and near-real-time dashboards (latency < 1 minute) for aggregated, de-identified metrics.
• The team wants to minimize the risk of data exfiltration and enforce least privilege.
• They prefer managed services and want to avoid managing long-running servers.
You need to design the ingestion and analytics architecture. Which approach best meets these requirements?
Correct Answer: B
Option B best addresses compliance, security, and operational needs:
• Compliance and security:
– Regional Cloud Storage with CMEK supports HIPAA compliance and gives the company direct control over encryption keys.
– VPC Service Controls reduce data exfiltration risk by creating a security perimeter around Cloud Storage, BigQuery, and Dataflow.
– BigQuery row-level security and authorized views allow fine-grained access control and de-identification for analytics consumers.
• Architecture and latency:
– Dataflow batch jobs can be scheduled frequently (e.g., every few minutes) to approach near-real-time for aggregated metrics while processing batch files.
– Separating raw and de-identified datasets enforces least privilege: only a small set of users/services access raw PHI, while most users access only de-identified data.
• Operational simplicity:
– Dataflow is fully managed and serverless for data processing; no need to manage clusters.
– Cloud Scheduler + Dataflow templates provide repeatable, low-ops batch pipelines.
Why others are suboptimal:
A) Cloud Functions loading directly into BigQuery raw tables:
– Cloud Functions can handle ingestion, but:
– Dashboards directly on raw tables containing PHI increase risk and complicate least-privilege enforcement.
– No explicit de-identification layer or separate datasets for PHI vs non-PHI.
– Lacks VPC Service Controls, which are highly recommended for PHI and HIPAA workloads.
C) Multi-region bucket, Cloud Run jobs, single BigQuery dataset:
– Multi-region storage may not align with hospital or regulatory requirements for data locality.
– Default encryption is less controlled than CMEK for PHI.
– Single dataset without clear separation of raw vs de-identified data makes least privilege and de-identification harder.
– Relying only on project-level IAM is coarse-grained for sensitive healthcare data.
D) Dataproc + Cloud SQL:
– Persistent Dataproc cluster contradicts the desire to avoid managing long-running servers.
– Cloud SQL is not ideal for large-scale analytics; BigQuery is more appropriate for analytical workloads.
– Managing database users and scaling Cloud SQL for analytics is more operationally complex.
Therefore, B provides a secure, compliant, and low-ops architecture with clear separation of PHI and de-identified analytics.
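The following Apache Beam sketch shows the shape of the option B pipeline: read raw batch files, pseudonymize identifiers, and write to a separate de-identified BigQuery dataset. Bucket, dataset, and field names are hypothetical, and a production pipeline would more likely use Cloud DLP transforms than this simple hash.

```python
# Shape of the option B Dataflow pipeline (hypothetical names): read raw
# batch files, pseudonymize identifiers, write to a de-identified dataset.
# A production pipeline would typically use Cloud DLP instead of this hash.
import csv
import hashlib
import io

import apache_beam as beam


def deidentify(line: str) -> dict:
    patient_id, hospital, metric, value = next(csv.reader(io.StringIO(line)))
    return {
        "patient_hash": hashlib.sha256(patient_id.encode()).hexdigest(),
        "hospital": hospital,
        "metric": metric,
        "value": float(value),
    }


def run():
    with beam.Pipeline() as p:
        (
            p
            | "Read raw files" >> beam.io.ReadFromText(
                "gs://raw-phi-bucket/*.csv", skip_header_lines=1)
            | "De-identify" >> beam.Map(deidentify)
            | "Write de-identified" >> beam.io.WriteToBigQuery(
                "my-project:deidentified.metrics",
                schema="patient_hash:STRING,hospital:STRING,metric:STRING,value:FLOAT",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )
```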
Question 8
A financial services company is modernizing a monolithic on-premises application that processes loan applications. They want to move to Google Cloud and gradually decompose the monolith into microservices. Requirements:
• Regulatory requirement: All customer PII must remain in a specific country (single GCP region).
• The application must support synchronous APIs for partners with a 300 ms p95 latency SLO within the region.
• The company wants to minimize operational overhead while enabling blue/green deployments and canary releases.
• The team has limited Kubernetes expertise but strong CI/CD practices.
• They expect unpredictable spikes in traffic during marketing campaigns.
You need to propose a target architecture for the application layer that meets these requirements and supports gradual modernization. What should you recommend?
Correct Answer: C
Option C aligns best with the requirements and constraints:
• Regulatory / residency:
– Cloud Run services can be deployed in a single specified region, ensuring PII remains in-country.
• Latency:
– Cloud Run in-region with a regional external HTTP(S) load balancer can meet a 300 ms p95 SLO for synchronous APIs, assuming reasonable application performance.
• Operational overhead:
– Cloud Run (fully managed) abstracts away cluster management, node scaling, and patching, which is ideal for a team with limited Kubernetes expertise.
– Autoscaling based on request load handles unpredictable spikes.
• Modernization and deployments:
– Containerizing the monolith allows an initial lift-and-shift with minimal code changes.
– You can gradually extract microservices into separate Cloud Run services.
– Cloud Run supports traffic splitting, enabling blue/green and canary releases with minimal operational complexity.
Why others are suboptimal:
A) Compute Engine managed instance group:
– Meets residency and can meet latency, but:
– Higher operational overhead: managing OS patching, capacity planning, autoscaling tuning, and deployment orchestration.
– Blue/green and canary are possible but more complex to implement and manage compared to Cloud Run’s built-in traffic splitting.
– Less aligned with the goal of minimizing operational overhead and enabling easy microservice decomposition.
B) GKE Standard cluster:
– Supports microservices and can meet latency and residency.
– However, the team has limited Kubernetes expertise, and GKE Standard requires managing node pools, upgrades, and cluster configuration.
– Operational overhead is significantly higher than Cloud Run, especially during the early modernization phase.
D) Full refactor before migration + GKE Autopilot:
– GKE Autopilot reduces some operational burden, but still requires Kubernetes knowledge and cluster-level considerations.
– Refactoring into microservices before migration is high risk and delays value; it conflicts with the requirement to gradually decompose the monolith.
– Service mesh and canary deployments add complexity that the current team may not be ready to manage.
Thus, C provides a low-ops, region-bound, scalable platform that supports gradual modernization and advanced deployment strategies.
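As an example of the gradual decomposition, the containerized monolith can strangle out one capability at a time by calling a newly extracted Cloud Run service with a service-to-service ID token. The service name, URL, and endpoint below are hypothetical.

```python
# Strangler-style call from the containerized monolith to an extracted
# "credit-check" Cloud Run service, authenticated with an ID token.
# The service URL and endpoint path are hypothetical.
import requests
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

CREDIT_CHECK_URL = "https://credit-check-abc123-ew.a.run.app"  # hypothetical


def check_credit(application: dict) -> dict:
    token = id_token.fetch_id_token(google_requests.Request(), CREDIT_CHECK_URL)
    resp = requests.post(
        f"{CREDIT_CHECK_URL}/v1/credit-checks",
        json=application,
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()
```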
Question 9
A healthcare provider is migrating its patient portal to Google Cloud. The portal exposes REST APIs to mobile and web clients and must integrate with on-premises electronic health record (EHR) systems. Requirements are:
- Compliance: All patient data must comply with HIPAA and be protected with strong access controls.
- Connectivity: The portal must securely access on-premises EHR systems with low operational overhead.
- Availability: The API layer must be highly available across zones within a single region.
- Security: The company wants centralized API security policies (rate limiting, authentication, threat protection) and detailed audit logs.
- Future-proofing: They plan to expose some APIs to third-party partners in the future.
You need to design the architecture for the API layer and connectivity to on-premises systems. What should you do?
Correct Answer: C
Option C best addresses compliance, security, availability, and future extensibility:
- Compliance & security: Apigee X provides enterprise-grade API management with strong security features (OAuth2, JWT validation, threat protection, quotas, rate limiting) and detailed audit logging, which is important for HIPAA compliance. VPC Service Controls add an additional layer of defense-in-depth around data access.
- Centralized API security policies: Apigee is designed for centralized API governance, making it easier to apply consistent policies across internal and future external APIs.
- Availability: Cloud Run is a regional service that automatically distributes instances across multiple zones within the region, providing high availability. Apigee X is also designed for high availability.
- Connectivity: Dedicated Interconnect offers more reliable, lower-latency connectivity than VPN, which is appropriate for critical healthcare integrations with EHR systems, especially at scale.
- Future-proofing: Apigee is well-suited for exposing APIs to third-party partners with fine-grained control, monetization options, and developer portal support.
Why the other options are suboptimal:
- Option A (GKE + internal LB + VPN + in-app security):
- GKE adds operational complexity (cluster management, upgrades) compared to Cloud Run.
- Managing authentication, rate limiting, and threat protection in application code increases development and maintenance burden and is error-prone.
- Internal load balancer is not ideal for external mobile/web clients; you would need additional components to expose APIs externally.
- Option B (Cloud Run + external LB + Cloud Armor + Cloud Endpoints):
- Cloud Run plus Cloud Endpoints can handle authentication and quotas, and Cloud Armor adds some security, but this combination is less feature-rich than Apigee for enterprise API management (e.g., advanced policies, partner onboarding, monetization, complex traffic management).
- Cloud VPN may be sufficient initially but is less reliable and scalable than Dedicated Interconnect for mission-critical healthcare integrations.
- While technically viable, it is less aligned with the long-term need for partner APIs and strong governance.
- Option D (Compute Engine MIG + LB + Cloud Armor + IAP):
- Compute Engine requires more operational work (OS patching, capacity management) than Cloud Run.
- IAP is well-suited for securing access to internal web apps and some APIs but is not a full-featured API management platform. It lacks advanced API lifecycle management and partner onboarding capabilities.
- This design does not provide centralized, rich API management features needed for future third-party exposure.
Option C combines fully managed compute (Cloud Run), enterprise API management (Apigee X), strong network connectivity (Dedicated Interconnect), and additional perimeter security (VPC Service Controls), making it the most appropriate architecture for a regulated healthcare API platform.
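For the on-premises side of this design, the Cloud Run service can reach the EHR system over private IP space via the Interconnect. A minimal sketch follows, assuming a Serverless VPC Access connector and hybrid DNS are configured; the endpoint and payload shape are hypothetical.

```python
# Reaching the on-prem EHR over Dedicated Interconnect from Cloud Run
# (assumes a Serverless VPC Access connector and hybrid DNS; the endpoint
# is hypothetical and resolves to an on-prem address).
import requests

EHR_BASE_URL = "https://ehr.internal.example/fhir"  # hypothetical on-prem endpoint


def fetch_patient_record(patient_id: str) -> dict:
    resp = requests.get(f"{EHR_BASE_URL}/Patient/{patient_id}", timeout=3)
    resp.raise_for_status()
    return resp.json()
```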
Question 10
A media streaming company is designing a new recommendation engine on Google Cloud. The system will:
- Ingest real-time user activity events (views, likes, skips) from a global user base
- Require near-real-time (under 5 seconds) updates to recommendations shown to users
- Store several terabytes of historical interaction data for model training
- Minimize infrastructure management and support automatic scaling
- Keep costs low while allowing future expansion of machine learning capabilities
Which architecture best balances performance, cost, and operational simplicity?
Correct Answer: C
Option C provides a well-balanced architecture for low-latency recommendations, scalable ingestion, and cost-effective analytics and training.
Reasoning:
- Real-time ingestion and processing: Pub/Sub + Dataflow streaming is a managed, scalable pattern for real-time event processing with minimal ops overhead.
- Near-real-time recommendations (<5 seconds): Cloud Bigtable is optimized for low-latency, high-throughput key-value access, ideal for serving per-user or per-item feature vectors. Dataflow can update Bigtable within seconds.
- Historical data for training: BigQuery is well-suited for large-scale analytical queries and ML feature engineering. Dataflow can aggregate and write historical data to BigQuery.
- Operational simplicity: Dataflow is fully managed; Cloud Run is serverless; Bigtable and BigQuery are managed services. No cluster management is required.
- Cost and future ML: BigQuery integrates with BigQuery ML and Vertex AI; training jobs can read from BigQuery. Bigtable is used only for serving-time features, which is cost-efficient for low-latency access.
Why other options are suboptimal:
- A: GKE introduces cluster management overhead, which conflicts with the requirement to minimize infrastructure management. While Bigtable and Cloud Storage are appropriate stores, managing a custom streaming app on GKE is more complex than using Dataflow.
- B: Using BigQuery for real-time feature serving is problematic: per-request queries from Cloud Run to BigQuery can introduce higher and more variable latency and higher per-query costs. BigQuery is not optimized as a low-latency online serving store.
- D: Direct streaming into BigQuery and serving from materialized tables is simpler but still suffers from latency and cost issues for per-request queries. Scheduled queries add additional delay, making it harder to guarantee under-5-second freshness for recommendations.
Therefore, C best meets the low-latency, scalability, cost, and operational simplicity requirements.
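A serving-versus-training sketch for option C, with hypothetical table and column names: the recommendation API reads precomputed per-user features from Bigtable at low latency, while training jobs query the full interaction history in BigQuery.

```python
# Serving path reads precomputed features from Bigtable; training reads
# history from BigQuery. Instance, table, family, and column names are
# hypothetical.
import json

from google.cloud import bigquery, bigtable

bt_table = (
    bigtable.Client(project="my-project")
    .instance("reco-instance")
    .table("user_features")
)
bq = bigquery.Client(project="my-project")


def get_recommendations(user_id: str) -> list:
    row = bt_table.read_row(user_id.encode())
    if row is None:
        return []
    cell = row.cells["profile"][b"top_items"][0]  # newest version of the column
    return json.loads(cell.value.decode())


def training_history(days: int = 90):
    query = f"""
        SELECT user_id, item_id, event_type, event_time
        FROM `my-project.analytics.interactions`
        WHERE event_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL {days} DAY)
    """
    return bq.query(query).result()
```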
Ready to Accelerate Your GCP-PCA Preparation?
Join thousands of professionals who are advancing their careers through expert certification preparation with FlashGenius.
- ✅ Unlimited practice questions across all GCP-PCA domains
- ✅ Full-length exam simulations with real-time scoring
- ✅ AI-powered performance tracking and weak area identification
- ✅ Personalized study plans with adaptive learning
- ✅ Mobile-friendly platform for studying anywhere, anytime
- ✅ Expert explanations and study resources
About GCP-PCA Certification
The GCP-PCA certification validates your expertise in designing and planning a cloud solution architecture and other critical domains. Our comprehensive practice questions are carefully crafted to mirror the actual exam experience and help you identify knowledge gaps before test day.
Want a structured prep flow? Use Domain Practice first, then switch to Mixed Practice and Exam Simulation for full PCA readiness.
Explore FlashGenius PCA Prep →