Free SnowPro Core Quick Start Practice Test — 10 Questions Across All 5 Snowflake Domains
Last updated: May 2026 · Aligned with the current Snowflake SnowPro Core COF-C03 exam
This free SnowPro Core quick start practice test draws 10 questions from across all 5 Snowflake SnowPro Core (COF-C03) domains, two per domain. Use it as a fast readiness check before diving into per-domain study.
10 Free SnowPro Core Mixed-Domain Practice Questions
Sample Question 1 — Account Management and Data Governance
A healthcare company stores patient data in Snowflake. Compliance requires that:
- All users can query every row in the PATIENTS table.
- The SSN column must appear masked for most users.
- Only users with the PRIVILEGED_HEALTHCARE role should see the real SSN values.
Which Snowflake feature is the most appropriate to meet this requirement?
- A. Attach a column-level masking policy to the SSN column that returns unmasked values only when CURRENT_ROLE() = 'PRIVILEGED_HEALTHCARE' (Correct answer)
- B. Create a row access policy on the PATIENTS table to filter out rows where SSN is not allowed to be seen
- C. Use a secure view that filters out the SSN column completely for all users
- D. Apply a tag to the SSN column and rely on the tag alone to hide the values
Correct answer: A
Explanation: Correct answer (A): Column-level masking policies are designed to transform returned values based on session context such as CURRENT_ROLE(). Attaching a masking policy that checks for PRIVILEGED_HEALTHCARE to the SSN column keeps every row visible, while the SSN is dynamically masked for unauthorized roles and returned unmasked only to privileged users.
Why the other options are wrong:
- Option B: Row access policies control which rows are visible, not how column values are masked. Using a row access policy here would incorrectly restrict rows instead of just masking SSNs.
- Option C: A secure view could hide the SSN column entirely, but the requirement states that some users must see the real SSN while others see masks. A single secure view alone cannot provide dynamic masking by role without additional logic, and would typically hide the column rather than mask it.
- Option D: Tags provide classification and metadata but do not themselves enforce security. A tag on its own will not cause Snowflake to mask the SSN values without a masking policy or other enforcement mechanism using that tag.
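For reference, here is a minimal sketch of the approach in option A. The policy name, the masked output format, and the unqualified table name are illustrative; only the SSN column, the PATIENTS table, and the role come from the scenario.

```sql
-- Return the real value only to the privileged role; everyone else sees a fixed mask.
CREATE OR REPLACE MASKING POLICY ssn_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() = 'PRIVILEGED_HEALTHCARE' THEN val
    ELSE '***-**-****'
  END;

-- Attach the policy to the SSN column; all rows in PATIENTS remain queryable.
ALTER TABLE patients MODIFY COLUMN ssn SET MASKING POLICY ssn_mask;
```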
Sample Question 2 — Account Management and Data Governance
A data engineer accidentally deleted several rows from a critical Snowflake table 12 hours ago. The table has a Time Travel retention period of 2 days, and the account uses the standard Snowflake Fail-safe configuration.
The engineer wants to restore the table to its state just before the accidental delete, without involving Snowflake Support.
Which capability should the engineer use?
- A. Query the table using Time Travel to an "as of" timestamp and restore the data from that snapshot (Correct answer)
- B. Request Snowflake Support to restore the table from Fail-safe because Fail-safe is for user-driven recovery
- C. Create a zero-copy clone of the database, which automatically includes the state before deletion
- D. Use a row access policy to filter out all rows modified in the last 12 hours
Correct answer: A
Explanation: Correct answer (A): Time Travel is designed for user-accessible recovery of historical data within the configured retention period. With a 2-day retention and the delete occurring 12 hours ago, the engineer can query the table as of a timestamp just before the delete and either copy the data back or restore the table directly using Time Travel features, without involving Snowflake Support.
Why the other options are wrong:
- Option B: Fail-safe is a Snowflake-managed retention meant for disaster recovery and is not directly user-accessible via SQL. It is not the primary mechanism for routine user-driven recovery within the Time Travel window.
- Option C: Zero-copy cloning creates a snapshot at the moment of cloning, not retroactively at a prior point in time. Cloning now would include the state after the delete, so it would not help recover the lost rows.
- Option D: Row access policies control row visibility based on predicates but do not restore deleted data. Once rows are deleted, they must be recovered using Time Travel, not filtered back in with a policy.
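For reference, a minimal sketch of the Time Travel recovery described in option A. The table name and timestamp are illustrative; Time Travel also accepts an OFFSET in seconds or a statement ID.

```sql
-- Check the table as it looked just before the accidental delete.
SELECT COUNT(*)
FROM orders AT (TIMESTAMP => '2026-05-20 08:00:00'::TIMESTAMP_LTZ);

-- One common pattern: clone that historical state, verify it, then copy the missing rows back.
CREATE OR REPLACE TABLE orders_restored CLONE orders
  AT (TIMESTAMP => '2026-05-20 08:00:00'::TIMESTAMP_LTZ);
```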
Sample Question 3 — Data Collaboration
A finance team in Account A shares a set of tables with a marketing team in Account B using a Snowflake secure data share. After a month, the marketing team asks whether the shared data is driving up their Snowflake storage bill. As the Snowflake administrator, how should you respond?
- A. Explain that Account B is now paying for both storage and compute for the shared data because it is copied into their account when the share is created.
- B. Explain that Account A continues to pay for the storage of the shared data and Account B only pays for the compute used to query the shared data. (Correct answer)
- C. Explain that storage for shared data is split 50/50 between Account A and Account B, while each account pays its own compute.
- D. Explain that Account A pays for compute and Account B pays for storage for all queries on the shared database.
Correct answer: B
Explanation: Correct answer (B): In Snowflake secure data sharing, the underlying data is not copied to the consumer account. A single copy of the data remains in the provider account, and the provider (Account A) is billed for that storage. The consumer (Account B) creates a read-only database pointing to this data and is only billed for the compute used by its own virtual warehouses when querying the shared data.
Why the other options are wrong:
- Option A: Incorrect because Snowflake sharing is metadata-based; the data is not copied into the consumer account, so it does not create additional storage charges for Account B.
- Option C: Incorrect because Snowflake does not split storage billing between accounts. Storage is billed once to the provider that owns the underlying tables.
- Option D: Incorrect because each account pays for its own compute. The provider is responsible for storage costs, not compute on behalf of the consumer.
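To make the billing point concrete, here is a sketch of what the consumer side looks like; the share, database, and warehouse names are hypothetical.

```sql
-- Account B creates a read-only database over the provider's share. No data is copied,
-- so this adds no storage charges to Account B.
CREATE DATABASE shared_finance FROM SHARE account_a_locator.sales_share;

-- Account B pays only for its own warehouse's compute when querying the shared data.
USE WAREHOUSE marketing_wh;
SELECT COUNT(*) FROM shared_finance.public.transactions;
```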
Sample Question 4 — Data Collaboration
A retailer uses Snowflake and wants to provide near real-time access to detailed sales data to a single logistics partner that also has its own Snowflake account in the same cloud region. The retailer wants a private, one-to-one arrangement and does not want the data to be discoverable by other customers. They also want to avoid building ETL pipelines or exporting files. Which Snowflake feature best meets these requirements?
- A. Publish the sales data as a public listing on Snowflake Marketplace so the logistics partner can subscribe to it.
- B. Export the sales data to cloud object storage and have the logistics partner load it into their own Snowflake account.
- C. Create a direct secure data share from the retailer’s account to the logistics partner’s Snowflake account. (Correct answer)
- D. Create a reader account for the logistics partner and share the sales data with that reader account.
Correct answer: C
Explanation: Correct answer (C): A direct secure data share between two full Snowflake accounts in the same region provides near real-time access without copying or exporting data. It supports a private, one-to-one relationship, where the provider exposes specific tables or secure views to a known consumer without making the data discoverable to others, and avoids ETL or file movement by using metadata pointers to the provider’s data.
Why the other options are wrong:
- Option A: Incorrect because Snowflake Marketplace is designed for discoverable listings to a broader or curated audience. Publishing a public listing conflicts with the requirement for a private, one-to-one arrangement.
- Option B: Incorrect because exporting to cloud storage and re-loading introduces file movement and ETL-like processes, which the scenario explicitly wants to avoid and which reduce near real-time access.
- Option D: Incorrect because reader accounts are intended for consumers who do not have their own Snowflake account. Here, the logistics partner already has a Snowflake account, so a direct secure share is the appropriate mechanism.
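A sketch of the provider-side setup for option C; the database, schema, table, share, and consumer account identifiers are all hypothetical.

```sql
-- In the retailer's account: create a private share and grant access to the sales objects.
CREATE SHARE logistics_share;
GRANT USAGE ON DATABASE sales_db TO SHARE logistics_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE logistics_share;
GRANT SELECT ON TABLE sales_db.public.daily_sales TO SHARE logistics_share;

-- Add only the logistics partner's account; the data stays in the retailer's account
-- and is not discoverable by anyone else.
ALTER SHARE logistics_share ADD ACCOUNTS = partner_org.partner_account;
```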
Sample Question 5 — Data Loading, Unloading, and Connectivity
A data engineer needs to load a one-time CSV file that is currently stored on their laptop into a Snowflake table. The company does not use any cloud object storage directly, and the engineer wants a simple, supported Snowflake-native approach.
Which approach should the engineer use?
- A. Create an external stage that points to an S3 bucket and use COPY INTO from that stage
- B. Use SnowSQL to PUT the CSV file into an internal stage, then run COPY INTO the target table from that stage (Correct answer)
- C. Configure Snowpipe on an external stage and wait for the file to be automatically ingested
- D. Use the Snowflake Python connector to stream the local file directly into the table with INSERT statements
Correct answer: B
Explanation: Correct answer (B): The file is on the engineer’s laptop and the organization does not use cloud object storage directly. Snowflake supports using SnowSQL’s PUT command to upload local files into an internal stage, and then using COPY INTO <table> FROM @internal_stage to bulk load the data. This is the intended pattern for loading local files via Snowflake-managed storage.
Why the other options are wrong:
- Option A: An external stage points to external cloud storage (such as S3), but the scenario explicitly states the company does not use cloud object storage directly. Also, PUT/GET only work with internal stages, not external ones, so this would not help load a laptop-based file without first moving it to cloud storage outside of Snowflake.
- Option C: Snowpipe continuously loads from stages, typically external ones backed by cloud object storage and event notifications. There is no existing external cloud storage or event setup here, and Snowpipe does not solve the problem of moving a local laptop file into a stage.
- Option D: The Python connector is for programmatic access to Snowflake, typically for running queries. While it could be used with row-by-row INSERTs, that would be inefficient and does not leverage Snowflake’s native staged-file bulk load pattern. The blueprint emphasizes COPY INTO from stages as the primary bulk load mechanism.
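A minimal SnowSQL sketch of option B; the local file path and table name are illustrative, and the table stage (@%) is used as the internal stage.

```sql
-- From SnowSQL on the laptop: upload the local CSV into the table's internal stage.
PUT file:///Users/engineer/one_time_load.csv @%sales AUTO_COMPRESS = TRUE;

-- Then bulk load it into the target table from that stage.
COPY INTO sales
  FROM @%sales
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"');
```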
Sample Question 6 — Data Loading, Unloading, and Connectivity
A company wants to connect a popular BI dashboard tool to Snowflake so analysts can build interactive reports directly on Snowflake data. The BI tool expects a standard SQL database driver and will primarily issue SELECT queries.
Which Snowflake connectivity option is the most appropriate for this use case?
- A. Snowflake Python Connector
- B. Snowflake JDBC or ODBC driver (Correct answer)
- C. SnowSQL command-line client
- D. Snowpark libraries
Correct answer: B
Explanation: Correct answer (B): BI and reporting tools usually connect to Snowflake using standard database drivers. Snowflake provides JDBC and ODBC drivers specifically for this purpose, and the blueprint states these are the standard connectivity options used by BI tools for interactive querying.
Why the other options are wrong:
- Option A: The Python connector is designed for Python applications and scripts, not as a generic SQL driver for BI tools. It is better suited for programmatic workloads and pipelines than for direct BI connectivity.
- Option C: SnowSQL is a command-line client, ideal for scripting and administrative tasks, not for interactive BI tools that require a long-running driver integrated into the application.
- Option D: Snowpark libraries provide APIs for languages such as Python, Java, or Scala to work with data in Snowflake, but they are designed for developer workloads and not as generic JDBC/ODBC-style drivers for BI reporting tools.
Sample Question 7 — Performance Optimization, Querying, and Transformation
A finance team runs a single, very heavy month-end reconciliation query on a large fact table. The query currently runs on an X-Small virtual warehouse and takes 45 minutes to complete. There is almost no concurrency on this workload, and they only care about reducing the runtime of this one query.
What is the most appropriate Snowflake-native change to try first?
- A. Increase the size of the existing virtual warehouse from X-Small to Large before running the query (Correct answer)
- B. Convert the warehouse to a multi-cluster warehouse with multiple X-Small clusters
- C. Enable search optimization service on the fact table
- D. Create a materialized view that simply selects all columns from the fact table
Correct answer: A
Explanation: Correct answer (A): Scaling up the warehouse (X-Small to Large) increases compute resources for the individual query and can reduce runtime for a CPU- or I/O-bound query when concurrency is low. Since only one heavy query runs and concurrency is not a concern, a larger single warehouse is the most appropriate first change.
Why the other options are wrong:
- Option B: A multi-cluster warehouse improves concurrency by running multiple queries on separate clusters, but each query still runs on a single cluster. Converting to multi-cluster does not directly speed up this single query and adds unnecessary cost.
- Option C: Search optimization service is ideal for highly selective filters and point lookups. A month-end reconciliation query typically scans large portions of the table, so search optimization is unlikely to be the main performance driver here.
- Option D: A materialized view that simply mirrors the base table without selective filters or aggregations usually does not reduce the amount of data scanned for a full-table style reconciliation and will add maintenance cost for refresh.
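A sketch of the scale-up in option A, assuming a warehouse named RECON_WH.

```sql
-- Resize before the month-end run; a larger warehouse consumes more credits per hour
-- while running, but only for as long as it is actually in use.
ALTER WAREHOUSE recon_wh SET WAREHOUSE_SIZE = 'LARGE';

-- Optionally scale back down once the reconciliation finishes.
ALTER WAREHOUSE recon_wh SET WAREHOUSE_SIZE = 'XSMALL';
```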
Sample Question 8 — Performance Optimization, Querying, and Transformation
An analyst runs a query against a large table and notes it takes 30 seconds. They immediately rerun the exact same query and it returns in less than a second.
Later that day, new rows are loaded into the same table. The analyst again runs the exact same query text with the same session settings. This time, the query takes around 10 seconds and scans data, but still seems faster than the first run.
Which explanation best describes Snowflake's behavior?
- A. The result cache is reused even after the underlying table data has changed, so the query never needs to scan data again
- B. The result cache is invalidated by data changes, but the warehouse/local cache can still speed up access to recently read data pages (Correct answer)
- C. Only the metadata cache is used in all cases, and it always returns the full result set without scanning data
- D. All types of cache are cleared automatically whenever data is loaded into the table
Correct answer: B
Explanation: Correct answer (B): When the query is first repeated with no data changes, it can be served entirely from the result cache. After new rows are loaded, the result cache is invalidated and the query must be re-executed. However, the warehouse/local cache can still speed up access to recently scanned data pages, explaining why the third run is faster than the first but still has to scan data.
Why the other options are wrong:
- Option A: This ignores that the result cache is invalidated when underlying data changes. Snowflake does not return stale results from the result cache after base-table modifications.
- Option C: The metadata cache holds partition-level metadata to support pruning, not full result sets. It cannot by itself return the entire result set without scanning data when the result cache is invalid.
- Option D: Data loads do not clear all caches. In particular, metadata cache and warehouse/local cache are not globally flushed on each load; only the result cache is invalidated when underlying data changes.
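One way to observe this behavior, as a sketch: the filter on query text is illustrative, and a run served from the result cache should show zero bytes scanned.

```sql
-- Compare recent runs of the same query: a result-cache hit scans no data.
SELECT query_text,
       start_time,
       total_elapsed_time,
       bytes_scanned
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
WHERE query_text ILIKE '%large_table%'
ORDER BY start_time DESC;

-- For testing, result cache reuse can be disabled at the session level.
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
```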
Sample Question 9 — Snowflake AI Data Cloud Features & Architecture
A finance manager asks why storing more data in Snowflake will not automatically increase compute costs. As the Snowflake architect, how should you describe the relationship between storage and compute in Snowflake?
- A. Storage and compute are separate layers; storage usage and compute usage are billed independently and can scale independently. (Correct answer)
- B. Storage and compute must always scale together, so more stored data always requires larger virtual warehouses.
- C. Compute charges in Snowflake are directly tied to the number of micro-partitions stored, regardless of query activity.
- D. The Cloud Services layer runs all queries, so there is no separate compute charge for virtual warehouses.
Correct answer: A
Explanation: Correct answer (A): Snowflake’s architecture separates storage from compute. Data is stored once in centralized storage, and virtual warehouses (compute) are billed only when they run queries. You can store more data without automatically increasing compute costs unless workloads actually query that data. This independence is a core architectural feature.
Why the other options are wrong:
- Option B: Snowflake does not require storage and compute to scale together. Larger data volumes may need more compute for certain workloads, but that is a design choice, not a constraint.
- Option C: Compute charges are based on virtual warehouse usage, not directly on how many micro-partitions exist. Idle data does not incur compute charges.
- Option D: The Cloud Services layer coordinates and optimizes queries, but virtual warehouses provide the actual compute for query execution and are billed separately.
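A sketch of how the two meters can be inspected independently; these ACCOUNT_USAGE views exist in standard accounts, though latency and the columns you care about can vary.

```sql
-- Daily storage consumption: billed on bytes stored, regardless of query activity.
SELECT usage_date, storage_bytes, stage_bytes, failsafe_bytes
FROM snowflake.account_usage.storage_usage
ORDER BY usage_date DESC
LIMIT 7;

-- Warehouse credit consumption: billed only while virtual warehouses are running.
SELECT start_time, warehouse_name, credits_used
FROM snowflake.account_usage.warehouse_metering_history
ORDER BY start_time DESC
LIMIT 7;
```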
Sample Question 10 — Snowflake AI Data Cloud Features & Architecture
A company uses a BI dashboard connected to Snowflake. At the top of every hour, about 300 users refresh the dashboard simultaneously. The queries are not complex, but users frequently report that dashboards are "waiting" because queries are queued. The dashboard currently uses a single MEDIUM virtual warehouse. What is the most appropriate Snowflake-native change to address this problem?
- A. Increase the warehouse size from MEDIUM to 2X-LARGE to speed up each query.
- B. Convert the warehouse to a multi-cluster MEDIUM warehouse to add clusters during peak concurrency. (Correct answer)
- C. Create a second database and move half of the tables there to reduce queuing.
- D. Replicate the Snowflake account to another region and point half of the users to each region.
Correct answer: B
Explanation: Correct answer (B): The issue is high concurrency (many simultaneous dashboard refreshes), not heavy per-query compute. A multi-cluster MEDIUM warehouse adds more clusters of the same size to handle concurrent queries without queuing. Scaling out addresses concurrency, while keeping per-query performance roughly the same.
Why the other options are wrong:
- Option A: Increasing size (scaling up) improves per-query throughput but does not directly address queuing when many users run queries at once. Concurrency is better handled by scaling out via multi-cluster warehouses.
- Option C: Moving tables to another database does not change the fact that all queries are still competing for the same single warehouse’s resources.
- Option D: Cross-region replication is for disaster recovery and distribution, not a primary solution to intra-region query concurrency. It also adds complexity and does not leverage Snowflake’s built-in multi-cluster warehouse capability.
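A sketch of option B, assuming the dashboard warehouse is named DASHBOARD_WH; multi-cluster warehouses require Enterprise Edition or higher.

```sql
-- Keep the size at MEDIUM and let Snowflake add clusters only during the hourly spike.
ALTER WAREHOUSE dashboard_wh SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'STANDARD';  -- auto-scale mode: new clusters start as queries queue
```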
The 5 SnowPro Core COF-C03 Exam Domains
The 10 questions above cover two from each of the following domains:
- Account Management and Data Governance
- Data Collaboration
- Data Loading, Unloading, and Connectivity
- Performance Optimization, Querying, and Transformation
- Snowflake AI Data Cloud Features & Architecture