Free SnowPro Core Account Management and Data Governance Practice Test 2026 — Snowflake COF-C03 Questions

Last updated: May 2026 · Aligned with the current Snowflake SnowPro Core COF-C03 exam · This domain accounts for 20% of the exam

This free SnowPro Core Account Management and Data Governance practice test covers Snowflake account administration and data governance — RBAC, roles, resource monitors, dynamic data masking, row access policies, and tagging. Each question includes a detailed explanation with real Snowflake AI Data Cloud context — perfect for COF-C03 exam prep.


10 Free SnowPro Core Account Management and Data Governance Practice Questions with Answers

Each question below includes 4 answer options, the correct answer, and a detailed explanation. These are real questions from the FlashGenius SnowPro Core question bank for the Account Management and Data Governance domain (20% of the exam).

Sample Question 1 — Account Management and Data Governance

A healthcare company stores patient data in Snowflake. Compliance requires that:

  - All users can query every row in the PATIENTS table.
  - The SSN column must appear masked for most users.
  - Only users with the PRIVILEGED_HEALTHCARE role should see the real SSN values.

Which Snowflake feature is the most appropriate to meet this requirement?

  1. A. Attach a column-level masking policy to the SSN column that returns unmasked values only when CURRENT_ROLE() = 'PRIVILEGED_HEALTHCARE' (Correct answer)
  2. B. Create a row access policy on the PATIENTS table to filter out rows where SSN is not allowed to be seen
  3. C. Use a secure view that filters out the SSN column completely for all users
  4. D. Apply a tag to the SSN column and rely on the tag alone to hide the values

Correct answer: A

Explanation: Correct answer (A): Column-level masking policies are designed to transform returned values based on session context such as CURRENT_ROLE(). By attaching a masking policy to the SSN column that checks for PRIVILEGED_HEALTHCARE, all rows remain visible, but the SSN is dynamically masked for unauthorized roles and unmasked only for privileged users.

Why the other options are wrong:

  - Option B: Row access policies control which rows are visible, not how column values are masked. Using a row access policy here would incorrectly restrict rows instead of just masking SSNs.
  - Option C: A secure view could hide the SSN column entirely, but the requirement states that some users must see the real SSN while others see masks. A single secure view alone cannot provide dynamic masking by role without additional logic, and would typically hide the column rather than mask it.
  - Option D: Tags provide classification and metadata but do not themselves enforce security. A tag on its own will not cause Snowflake to mask the SSN values without a masking policy or other enforcement mechanism using that tag.
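As a sketch, the approach in option A could look like this in Snowflake SQL (the policy name and mask format are illustrative, not part of the question):

```sql
-- Masking policy: reveal SSNs only to the privileged role
CREATE MASKING POLICY ssn_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() = 'PRIVILEGED_HEALTHCARE' THEN val
    ELSE '***-**-****'
  END;

-- Attach the policy to the SSN column; all rows stay queryable
ALTER TABLE patients MODIFY COLUMN ssn SET MASKING POLICY ssn_mask;
```

Because the policy is evaluated at query time, no data is rewritten: the same stored value is returned masked or unmasked depending on the session's current role.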

Sample Question 2 — Account Management and Data Governance

A data engineer accidentally deleted several rows from a critical Snowflake table 12 hours ago. The table has a Time Travel retention period of 2 days, and the account uses the standard Snowflake Fail-safe configuration. The engineer wants to restore the table to its state just before the accidental delete, without involving Snowflake Support. Which capability should the engineer use?

  1. A. Query the table using Time Travel to an "as of" timestamp and restore the data from that snapshot (Correct answer)
  2. B. Request Snowflake Support to restore the table from Fail-safe because Fail-safe is for user-driven recovery
  3. C. Create a zero-copy clone of the database, which automatically includes the state before deletion
  4. D. Use a row access policy to filter out all rows modified in the last 12 hours

Correct answer: A

Explanation: Correct answer (A): Time Travel is designed for user-accessible recovery of historical data within the configured retention period. With a 2-day retention and the delete occurring 12 hours ago, the engineer can query the table as of a timestamp just before the delete and either copy the data back or restore the table directly using Time Travel features, without involving Snowflake Support.

Why the other options are wrong:

  - Option B: Fail-safe is a Snowflake-managed retention meant for disaster recovery and is not directly user-accessible via SQL. It is not the primary mechanism for routine user-driven recovery within the Time Travel window.
  - Option C: Zero-copy cloning creates a snapshot at the moment of cloning, not retroactively at a prior point in time. Cloning now would include the state after the delete, so it would not help recover the lost rows.
  - Option D: Row access policies control row visibility based on predicates but do not restore deleted data. Once rows are deleted, they must be recovered using Time Travel, not filtered back in with a policy.
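A hedged sketch of the Time Travel recovery (the table name, timestamp, and offset are illustrative):

```sql
-- Query the table as it existed just before the accidental delete
SELECT *
FROM critical_table
  AT(TIMESTAMP => '2026-05-01 08:00:00'::TIMESTAMP_LTZ);

-- Alternatively, materialize that historical state with a zero-copy clone
-- taken AT a past offset (here ~13 hours ago, safely before the delete),
-- then copy the missing rows back into the live table
CREATE TABLE critical_table_restored CLONE critical_table
  AT(OFFSET => -60*60*13);
```

Note the contrast with option C: cloning *with* an AT clause reaches back in time, whereas a plain clone snapshots the current (post-delete) state.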

Sample Question 3 — Account Management and Data Governance

A centralized data platform team manages a Snowflake account. They want to onboard a new group of marketing analysts who:

  - Should only be able to run SELECT queries on tables and views in the ANALYTICS.MARKETING schema.
  - Must not be able to create or modify objects.
  - Should not be granted powerful system roles like ACCOUNTADMIN.

Which RBAC approach best meets these requirements following least-privilege principles?

  1. A. Create a custom role MARKETING_READ_ONLY, grant USAGE on the ANALYTICS database and MARKETING schema, grant SELECT on required tables/views, and assign this role to the analysts (Correct answer)
  2. B. Grant SYSADMIN directly to all marketing analysts and rely on training them not to modify objects
  3. C. Grant USAGE on the ANALYTICS database and MARKETING schema to PUBLIC so the analysts inherit access automatically
  4. D. Grant OWNERSHIP on the ANALYTICS.MARKETING schema to the MARKETING department manager and let the manager share their user credentials

Correct answer: A

Explanation: Correct answer (A): Creating a dedicated custom role with just the necessary USAGE and SELECT privileges aligns with Snowflake's RBAC model and least-privilege principles. Granting this role to marketing analysts lets them query the required data without the ability to create or modify objects or access other areas of the account.

Why the other options are wrong:

  - Option B: SYSADMIN is a powerful system role that can create and manage many objects across the account. Relying on training instead of proper RBAC configuration violates least-privilege and introduces significant risk.
  - Option C: Granting USAGE broadly to PUBLIC would expose the schema to every user in the account, not just marketing analysts, which is the opposite of least-privilege and may violate governance requirements.
  - Option D: Sharing user credentials is a serious security anti-pattern and undermines auditing and accountability. OWNERSHIP also grants far more control than needed and is not appropriate for read-only analysts.
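The grant sequence for option A might look like this (the user name is illustrative):

```sql
CREATE ROLE marketing_read_only;

-- USAGE on the containers is required before any object access works
GRANT USAGE ON DATABASE analytics TO ROLE marketing_read_only;
GRANT USAGE ON SCHEMA analytics.marketing TO ROLE marketing_read_only;

-- Read-only access to existing objects in the schema
GRANT SELECT ON ALL TABLES IN SCHEMA analytics.marketing TO ROLE marketing_read_only;
GRANT SELECT ON ALL VIEWS  IN SCHEMA analytics.marketing TO ROLE marketing_read_only;

-- Assign the role to each analyst
GRANT ROLE marketing_read_only TO USER analyst_user;
```

Because the role holds no CREATE or OWNERSHIP privileges, analysts cannot add or modify objects even by accident.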

Sample Question 4 — Account Management and Data Governance

A data engineering team has a role DATA_SCIENTIST that must be able to query all current and future tables in the PROD.ANALYTICS schema. They ran GRANT SELECT ON ALL TABLES IN SCHEMA PROD.ANALYTICS TO ROLE DATA_SCIENTIST once. Existing tables are accessible, but new tables created later are not visible to users with the DATA_SCIENTIST role. What is the best way to ensure DATA_SCIENTIST automatically gains SELECT on tables created in PROD.ANALYTICS in the future?

  1. A. Grant SELECT on FUTURE TABLES in the PROD.ANALYTICS schema to the DATA_SCIENTIST role (Correct answer)
  2. B. Re-run GRANT SELECT ON ALL TABLES IN SCHEMA every time a new table is created
  3. C. Grant OWNERSHIP on the PROD.ANALYTICS schema to the DATA_SCIENTIST role
  4. D. Grant USAGE on the PROD database and schema to the DATA_SCIENTIST role

Correct answer: A

Explanation: Correct answer (A): Future grants are specifically designed so that newly created objects automatically receive specified privileges for a target role. Granting SELECT on FUTURE TABLES in the schema ensures that any new table in PROD.ANALYTICS will immediately be queryable by the DATA_SCIENTIST role without manual re-grants.

Why the other options are wrong:

  - Option B: Re-running GRANT SELECT ON ALL TABLES each time works but is error-prone and does not scale. Snowflake provides future grants precisely to avoid this manual maintenance.
  - Option C: OWNERSHIP on the schema is overly broad, giving full control over all objects in the schema and violating least-privilege. It is unnecessary for simply querying tables.
  - Option D: USAGE on the database and schema is necessary but not sufficient; it does not grant SELECT on tables. The issue here is missing SELECT on new tables, not missing USAGE.
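The future grant in option A is a single statement; it is often paired with an ALL TABLES grant so that both existing and future objects are covered:

```sql
-- Covers tables that already exist
GRANT SELECT ON ALL TABLES IN SCHEMA prod.analytics TO ROLE data_scientist;

-- Covers every table created in the schema from now on
GRANT SELECT ON FUTURE TABLES IN SCHEMA prod.analytics TO ROLE data_scientist;
```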

Sample Question 5 — Account Management and Data Governance

An organization wants to ensure that finance users can access Snowflake only when they are connected from the corporate office network. Other departments should continue to access Snowflake from any location. What is the best way to implement this requirement?

  1. A. Define a network policy that allows only the corporate office IP range and bind it to the finance users (Correct answer)
  2. B. Grant USAGE on the FINANCE database only to roles used from the corporate office
  3. C. Enable MFA for finance users so that only they must provide a second factor when logging in remotely
  4. D. Create a row access policy on all finance tables that filters queries coming from non-corporate IPs

Correct answer: A

Explanation: Correct answer (A): Network policies in Snowflake restrict allowed IP address ranges and can be bound at the user level. Defining a policy that allows only the corporate office IP range and applying it to finance users ensures they can log in only from that network, while other users remain unaffected.

Why the other options are wrong:

  - Option B: Database USAGE privileges control which data objects can be accessed, not from where the login originates. This does nothing to restrict login locations.
  - Option C: MFA strengthens authentication but does not restrict access by IP range. Finance users could still log in from any network, just with an additional factor.
  - Option D: Row access policies filter rows returned by queries, based on predicates and session context. They do not block login attempts from specific IP addresses.
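A minimal sketch of option A, assuming an illustrative corporate CIDR block and user name:

```sql
-- Allow logins only from the corporate office range
-- (203.0.113.0/24 is a documentation-reserved range used here as a placeholder)
CREATE NETWORK POLICY corp_office_only
  ALLOWED_IP_LIST = ('203.0.113.0/24');

-- Bind the policy to an individual finance user; users without a
-- policy attached remain unaffected
ALTER USER finance_user SET NETWORK_POLICY = corp_office_only;
```

User-level binding is what keeps the restriction scoped to finance: an account-level network policy would apply to everyone.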

Sample Question 6 — Account Management and Data Governance

A data governance team wants a scalable way to protect all personally identifiable information (PII) columns across multiple Snowflake databases. Requirements:

  - Sensitive columns (e.g., EMAIL, PHONE, SSN) must be identified centrally.
  - Analysts should see masked values for all PII by default.
  - Privileged roles (e.g., PRIV_PII_ACCESS) should see real values.
  - The mechanism should work consistently even as new PII columns are added in different schemas.

Which approach best satisfies these requirements?

  1. A. Apply a PII classification tag to sensitive columns and attach masking policies to those columns that use session context (such as CURRENT_ROLE()) and can reference tags for governance logic (Correct answer)
  2. B. Create a single secure view per database that excludes PII columns and require all analysts to use those views
  3. C. Define a row access policy at the database level that hides rows with PII and grant bypass privileges to PRIV_PII_ACCESS
  4. D. Rely only on tags to hide PII columns from query results without additional policies

Correct answer: A

Explanation: Correct answer (A): Tagging PII columns provides centralized classification, and column-level masking policies can be attached to those columns to enforce dynamic masking based on session context (such as the current role). As new columns are tagged as PII, applying the masking policy to them ensures consistent protection, while privileged roles can be allowed to see unmasked values.

Why the other options are wrong:

  - Option B: Secure views can hide or transform columns, but managing a separate view for every table and updating them for every new PII column is not scalable. Additionally, this approach does not inherently provide dynamic unmasking for privileged roles without complex view logic.
  - Option C: Row access policies filter rows, not individual column values, and do not inherently target "PII" columns across schemas. Hiding entire rows with any PII is overly restrictive and not aligned with the requirement to mask only specific columns.
  - Option D: Tags alone provide metadata and classification but do not enforce security. Without masking policies or other enforcement, PII would still appear in clear text in query results.
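One way to sketch this is with tag-based masking, which lets a masking policy ride along with a tag so newly tagged columns are protected automatically (this pairing requires an edition that supports tag-based masking, such as Enterprise; all object names here are illustrative):

```sql
-- Central PII classification tag
CREATE TAG pii_class ALLOWED_VALUES 'EMAIL', 'PHONE', 'SSN';

-- One masking policy for string PII, driven by the current role
CREATE MASKING POLICY pii_string_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() = 'PRIV_PII_ACCESS' THEN val
    ELSE '*** MASKED ***'
  END;

-- Attach the policy to the tag itself: any column carrying the tag
-- inherits the masking behavior
ALTER TAG pii_class SET MASKING POLICY pii_string_mask;

-- Classifying a new column is now also what protects it
ALTER TABLE customers MODIFY COLUMN email SET TAG pii_class = 'EMAIL';
```

The key property is that governance work collapses to one step: tag the column, and enforcement follows from the tag.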

Sample Question 7 — Account Management and Data Governance

In a Snowflake account, a data engineer created several production tables while their current role was ENG_DEV_PERSONAL. The intended design is that a shared role DATA_ENGINEER_OWNERSHIP should own all production tables so that multiple engineers can manage grants and schema changes. Currently:

  - ENG_DEV_PERSONAL is the owner of the tables.
  - DATA_ENGINEER_OWNERSHIP has SELECT and MODIFY but cannot grant privileges to other roles.

What is the best way to align the implementation with the intended ownership model?

  1. A. Transfer OWNERSHIP of the production tables from ENG_DEV_PERSONAL to the DATA_ENGINEER_OWNERSHIP role (Correct answer)
  2. B. Grant the DATA_ENGINEER_OWNERSHIP role to all engineers and keep ENG_DEV_PERSONAL as table owner
  3. C. Grant ACCOUNTADMIN to the lead engineer so they can manage all grants regardless of table ownership
  4. D. Grant FUTURE OWNERSHIP on all tables in the schema to DATA_ENGINEER_OWNERSHIP so ownership changes automatically

Correct answer: A

Explanation: Correct answer (A): In Snowflake, the role that creates an object becomes its owner and controls grants. To align with the design where DATA_ENGINEER_OWNERSHIP owns production tables, ownership must be transferred from ENG_DEV_PERSONAL to DATA_ENGINEER_OWNERSHIP using GRANT OWNERSHIP. After transfer, the shared role can manage grants and schema changes as intended.

Why the other options are wrong:

  - Option B: Granting DATA_ENGINEER_OWNERSHIP to more users affects which privileges they inherit but does not change who owns the tables. ENG_DEV_PERSONAL would still control grants and DDL, contradicting the intended design.
  - Option C: ACCOUNTADMIN is an extremely powerful role and should be tightly controlled. Using it to bypass a proper ownership model violates least-privilege and is unnecessary when ownership can be transferred.
  - Option D: Snowflake does not support a "FUTURE OWNERSHIP" mechanism. Future grants can apply privileges like SELECT on future tables, but they do not change or retroactively control object ownership.
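A sketch of the ownership transfer (the database/schema names are illustrative, since the question does not specify them):

```sql
-- Transfer ownership of one table, preserving existing grants
GRANT OWNERSHIP ON TABLE prod.analytics.orders
  TO ROLE data_engineer_ownership COPY CURRENT GRANTS;

-- Or transfer every table in the schema at once
GRANT OWNERSHIP ON ALL TABLES IN SCHEMA prod.analytics
  TO ROLE data_engineer_ownership COPY CURRENT GRANTS;
```

COPY CURRENT GRANTS matters in practice: without it, outstanding privileges on the tables are revoked as part of the transfer.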

Sample Question 8 — Account Management and Data Governance

A company uses Snowflake database replication to maintain a read-only disaster recovery (DR) environment in a secondary account. They configured strict network policies (corporate IP allowlist) and MFA in the primary account. After enabling database replication to the DR account, they notice that DR users can connect from any IP address if network policies are not configured there. What should the Snowflake administrator do to enforce the same network access restrictions in the DR account?

  1. A. Configure equivalent network policies directly in the DR account because account-level settings are not replicated (Correct answer)
  2. B. Rely on database replication to propagate network policies, which will eventually synchronize to the DR account
  3. C. Create row access policies in the primary account to block all queries from non-corporate IPs and expect them to apply in DR
  4. D. Enable Fail-safe on the replicated databases so network policies are enforced automatically across accounts

Correct answer: A

Explanation: Correct answer (A): Snowflake's database replication replicates databases and certain governance configurations, but account-level settings like network policies are not replicated. To enforce the same IP restrictions in the DR account, equivalent network policies must be created and configured separately in that account.

Why the other options are wrong:

  - Option B: Database replication does not copy account-level settings such as network policies. Waiting will not propagate those settings, leaving the DR account less secure.
  - Option C: Row access policies only control which rows are visible to queries and do not govern who can log in from which IP. They are not a substitute for network policies and also apply at the database object level, not the account level.
  - Option D: Fail-safe is about data retention for disaster recovery and does not enforce security settings or network access controls across accounts.
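Concretely, the fix is to recreate the policy while connected to the DR account. A sketch, assuming the restriction should apply to all DR users and using an illustrative placeholder CIDR range (account-level binding requires a sufficiently privileged role such as SECURITYADMIN or ACCOUNTADMIN):

```sql
-- Run these statements IN the DR account; nothing here is replicated
CREATE NETWORK POLICY dr_corp_allowlist
  ALLOWED_IP_LIST = ('203.0.113.0/24');

-- Bind at the account level so every DR login is restricted
ALTER ACCOUNT SET NETWORK_POLICY = dr_corp_allowlist;
```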

Sample Question 9 — Account Management and Data Governance

A Snowflake provider shares customer transaction data with an external partner using a secure share. The provider wants to:

  - Allow the partner to see only aggregated data (e.g., by region and month).
  - Prevent the partner from accessing the underlying detailed transaction table or seeing the provider's query logic.

Which approach best meets these requirements?

  1. A. Create a secure view that aggregates the transaction data and include only this secure view in the share (Correct answer)
  2. B. Share the base transaction table directly and instruct the partner to run only approved aggregation queries
  3. C. Apply a row access policy to the base table that removes detailed columns from the result set
  4. D. Tag the base table as SENSITIVE so Snowflake automatically hides implementation details from the partner

Correct answer: A

Explanation: Correct answer (A): Secure views hide underlying base objects and query text from consumers and are required for scenarios where implementation details must be protected. By creating a secure view that performs the necessary aggregations and sharing only that view, the provider exposes only aggregated results while preventing access to the detailed table and query logic.

Why the other options are wrong:

  - Option B: Sharing the base table directly gives the partner full access to detailed data and does not prevent them from running non-approved queries, which violates the requirement.
  - Option C: Row access policies filter rows, not columns or query logic. They cannot remove columns from result sets; column-level control would require views or masking, not row access alone.
  - Option D: Tags provide classification metadata but do not enforce access control or hide SQL logic by themselves. Tagging a table as SENSITIVE would not prevent the partner from seeing its structure or data if shared directly.
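A sketch of option A, with illustrative database, schema, and column names (the share still needs consumer accounts added before the partner can use it):

```sql
-- Secure view: consumers see results but not the view definition
-- or the underlying detailed table
CREATE SECURE VIEW sales.shared.regional_monthly_summary AS
  SELECT region,
         DATE_TRUNC('month', txn_date) AS txn_month,
         SUM(amount)                   AS total_amount
  FROM sales.private.transactions
  GROUP BY region, DATE_TRUNC('month', txn_date);

-- Expose only the secure view through the share
CREATE SHARE partner_share;
GRANT USAGE  ON DATABASE sales          TO SHARE partner_share;
GRANT USAGE  ON SCHEMA   sales.shared   TO SHARE partner_share;
GRANT SELECT ON VIEW     sales.shared.regional_monthly_summary
  TO SHARE partner_share;
```

Because only the view is granted to the share, the partner never gains access to sales.private.transactions, and the SECURE keyword keeps the view's SQL text hidden from them.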

Sample Question 10 — Account Management and Data Governance

A data engineer reports that when using the ANALYST role they receive an error: "Insufficient privileges to operate on schema 'RAW'" when running: SELECT * FROM SALES_DB.RAW.ORDERS; The ANALYST role already has USAGE on the SALES_DB database and SELECT on the SALES_DB.RAW.ORDERS table. What should the Snowflake administrator do to resolve this while following least-privilege principles?

  1. A. Grant USAGE on the SALES_DB.RAW schema to the ANALYST role. (Correct answer)
  2. B. Grant OWNERSHIP on the SALES_DB.RAW.ORDERS table to the ANALYST role.
  3. C. Grant USAGE on all databases in the account to the ANALYST role.
  4. D. Grant SELECT on the SALES_DB database to the ANALYST role.

Correct answer: A

Explanation: Correct answer (A): To query a table, the active role must have USAGE on the database, USAGE on the schema, and SELECT on the table. The error mentions the schema 'RAW', and the role already has USAGE on the database and SELECT on the table, so the missing privilege is USAGE on the schema. Granting USAGE on the SALES_DB.RAW schema to the ANALYST role satisfies the requirement with minimal additional access.

Why the other options are wrong:

  - Option B: OWNERSHIP is far more powerful than needed and violates least-privilege principles. The engineer only needs to read from the table, not alter, drop, or transfer ownership.
  - Option C: Granting USAGE on all databases is overly broad and unrelated to the specific schema error. It increases risk and does not follow least-privilege practices.
  - Option D: SELECT on the database is not a valid privilege and would still not address the missing schema-level USAGE. The error clearly indicates a schema privilege issue, not a database-level read requirement.
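The fix is a single statement that completes the database → schema → table privilege chain:

```sql
-- The role already has USAGE on SALES_DB and SELECT on the table;
-- this supplies the missing middle link
GRANT USAGE ON SCHEMA sales_db.raw TO ROLE analyst;
```

This three-level chain (USAGE on database, USAGE on schema, SELECT on object) is worth memorizing: "Insufficient privileges to operate on schema" almost always points at the missing schema-level USAGE.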

How to Study SnowPro Core Account Management and Data Governance

Combine these SnowPro Core Account Management and Data Governance practice questions with the free Snowflake University SnowPro Core learning path and hands-on practice in a Snowflake 30-day trial account. The COF-C03 exam rewards applied knowledge of the Snowflake AI Data Cloud, so always tie concepts back to real worksheets, warehouses, and roles you've built.
