
TERRAFORM-004 Practice Questions

Master the HCP Terraform Domain

Test your knowledge in the HCP Terraform domain with these 10 practice questions. Each question is designed to help you prepare for the TERRAFORM-004 certification exam with detailed explanations to reinforce your learning.

Question 1

You create an HCP Terraform workspace that uses a remote backend and remote operations. You want to provide cloud credentials without committing them to Git. Which approach best follows HCP Terraform best practices for managing these sensitive values?

A) Define the credentials as environment variables in the HCP Terraform workspace and mark them as sensitive.

B) Store the credentials in a `terraform.tfvars` file and commit it to the VCS repository used by the workspace.

C) Hard-code the credentials directly in the provider block and rely on `terraform fmt` to hide them.

D) Set the credentials as input variable defaults in `variables.tf` so they are available to all workspaces.


Correct Answer: A

Explanation:

In HCP Terraform, the recommended way to provide sensitive credentials is via workspace environment variables (e.g., `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`) or sensitive Terraform variables, marked as sensitive in the UI. This keeps secrets out of version control. B) commits secrets to VCS, which is insecure. C) hard-codes credentials in configuration, which is also insecure and not masked by `terraform fmt`. D) putting secrets as variable defaults in configuration still exposes them in VCS and is not a best practice.
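
As a minimal sketch of what this looks like in configuration (the provider, variable name, and default are illustrative), the provider block carries no credentials at all; the AWS provider reads `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` from the run environment, which HCP Terraform populates from the workspace's sensitive environment variables:

```hcl
# No credentials appear in the configuration or in VCS. The AWS provider
# reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment of
# the remote run, injected from sensitive workspace environment variables.
provider "aws" {
  region = var.region
}

variable "region" {
  type        = string
  description = "AWS region for this workspace"
  default     = "us-east-1" # illustrative default
}
```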

Question 2

You manage an HCP Terraform workspace that uses VCS integration. A pull request is opened with Terraform changes, and HCP Terraform automatically runs a plan. The plan fails because a required variable `db_password` is not set. You want to fix this without modifying the Terraform configuration or the VCS repo. What should you do?

A) Set `db_password` as a workspace variable in HCP Terraform and mark it as sensitive.

B) Create a `terraform.tfvars` file with `db_password` and commit it to the repository.

C) Run `terraform apply -var db_password=...` locally and push the updated state to HCP Terraform.

D) Add a default value for `db_password` directly in the `variable` block during the next commit.


Correct Answer: A

Explanation:

The correct way to provide a required variable for an HCP Terraform workspace, without changing the configuration, is to set it as a workspace variable (and mark it sensitive for secrets) in the HCP Terraform UI or via API (A). B: Committing secrets in `terraform.tfvars` is insecure and modifies the repo. C: Local applies with different backends do not automatically update HCP Terraform state and can cause drift. D: Adding a default in the configuration requires modifying the repo, which the scenario explicitly wants to avoid.
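
A minimal sketch of the corresponding declaration (only the name `db_password` comes from the scenario; the rest is illustrative): the configuration declares the variable with no default, and the value lives only in the workspace as a sensitive Terraform variable:

```hcl
variable "db_password" {
  type        = string
  sensitive   = true
  description = "Set as a sensitive workspace variable in HCP Terraform"
  # Deliberately no default: the value never appears in the repository
  # or in plan output.
}
```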

Question 3

You created an HCP Terraform workspace using the CLI-driven workflow (`terraform login` and `terraform init` with the `cloud` block). Later, your team connects the same Git repo to a new VCS-driven workspace in HCP Terraform for the same configuration. Now both workspaces manage identical resources. What is the safest way to avoid conflicting changes to the same infrastructure?

A) Continue using both workspaces but ensure only one is used at a time.

B) Migrate state from the CLI-driven workspace to the VCS-driven workspace, then stop using the original workspace.

C) Disable state locking in both workspaces so they can coordinate changes automatically.

D) Change the resource names in the configuration so each workspace manages different resources.


Correct Answer: B

Explanation:

Two HCP Terraform workspaces managing the same resources will cause conflicts and drift. The correct approach is to consolidate management into a single workspace by migrating state (using HCP Terraform’s state migration tools) from the old CLI-driven workspace to the new VCS-driven one, then decommissioning the original workspace (B). Option A is risky; human error can easily lead to concurrent or conflicting changes. Option C is incorrect and dangerous; disabling locking increases the risk of corruption and is not how coordination works. Option D changes resource identities and would cause Terraform to destroy and recreate resources rather than safely consolidating state.

Question 4

Your team is migrating from local Terraform runs to HCP Terraform. You push your configuration to a GitHub repo and connect it to a new HCP Terraform workspace using VCS-driven runs. After pushing a change, you notice `terraform plan` and `terraform apply` no longer run on your laptop. Instead, runs appear in the HCP Terraform UI. What is the main reason this behavior is preferred in this setup?

A) It guarantees that state is always stored locally on each developer’s machine for faster access.

B) It centralizes execution, using remote operations so plans and applies run in HCP Terraform with shared state.

C) It disables locking, allowing multiple developers to apply changes to the same workspace simultaneously.

D) It forces all Terraform commands to be run only through the GitHub Actions CI pipeline.


Correct Answer: B

Explanation:

With a VCS-connected HCP Terraform workspace, runs are typically remote operations: HCP Terraform executes `plan` and `apply` in its own environment, using a shared remote state. This centralizes execution, improves collaboration, and ensures consistent runs. A) is incorrect because state is not stored locally; it is stored remotely in HCP Terraform. C) is incorrect because HCP Terraform supports state locking to prevent concurrent conflicting changes. D) is incorrect because HCP Terraform runs are triggered directly by VCS events; GitHub Actions may be used additionally but is not required or enforced by HCP Terraform.

Question 5

You have a Terraform configuration that currently uses a local backend. You want to migrate to HCP Terraform so that state is stored remotely and runs are executed in HCP Terraform. You have already created an HCP Terraform workspace and connected it to your VCS repo. What is the most appropriate next step to safely migrate your existing local state?

A) Delete the local `.tfstate` file and let HCP Terraform create a new empty state during the next run.

B) Run `terraform state pull` and manually upload the JSON output into HCP Terraform using the UI.

C) Update the backend configuration to use the `remote` backend for HCP Terraform and run `terraform init` with the workspace’s environment variables set, then follow the prompts to migrate state.

D) Run `terraform apply -replace=*` in HCP Terraform to recreate all resources from scratch using the new backend.


Correct Answer: C

Explanation:

To migrate from a local backend to HCP Terraform, you configure the `remote` backend (or `cloud` block) to point to your HCP Terraform workspace, then run `terraform init`. Terraform will detect the existing local state and prompt you to migrate it to the remote backend (C). Deleting the local state (A) loses the mapping between configuration and real resources, causing Terraform to try to recreate everything. Manually uploading JSON from `terraform state pull` (B) is not the documented migration workflow. `terraform apply -replace=*` (D) is not even valid syntax (`-replace` takes a specific resource address, not a wildcard), and forcing recreation of resources is risky and unnecessary for a backend migration.
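
On Terraform 1.1 and later, the `cloud` block is the usual way to point a configuration at HCP Terraform. A minimal sketch with placeholder organization and workspace names; running `terraform init` after adding it prompts to migrate the existing local state:

```hcl
terraform {
  cloud {
    organization = "example-org" # placeholder organization name

    workspaces {
      name = "app-network-prod" # placeholder workspace name
    }
  }
}

# After adding this block, `terraform init` detects the existing local
# state and offers to copy it into the HCP Terraform workspace.
```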

Question 6

You have multiple HCP Terraform workspaces for different environments (dev, staging, prod). Several AWS credentials and common variables (like `region` and `owner_tag`) must be identical across all workspaces. How should you manage this in HCP Terraform?

A) Define all shared variables in each workspace separately and mark them as sensitive where needed.

B) Store shared variables in a variable set and attach that variable set to each workspace that needs them.

C) Commit a `terraform.tfvars` file with all shared variables into the Git repo for each workspace.

D) Use `locals` blocks in each configuration to hard-code the shared values instead of variables.


Correct Answer: B

Explanation:

HCP Terraform variable sets are designed to define variables once and reuse them across multiple workspaces. Option B is correct: create a variable set with shared variables (including sensitive ones like credentials) and attach it to the relevant workspaces. A is possible but duplicates configuration and increases risk of drift. C is discouraged because committing credentials or shared environment data into version control is insecure and inflexible. D hard-codes values in configuration, making them harder to manage and change across environments and not leveraging HCP Terraform’s variable management.
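
Variable sets are created and attached in the HCP Terraform UI or API, but they can also be managed as code with HashiCorp's `tfe` provider. The sketch below assumes that provider's `tfe_variable_set`, `tfe_variable`, and `tfe_workspace_variable_set` resources, with placeholder names and values throughout:

```hcl
# Sketch: defining a shared variable set and attaching it to a workspace
# with the tfe provider. Names and values are placeholders.
resource "tfe_variable_set" "common" {
  name         = "common-aws-settings" # placeholder
  organization = "example-org"         # placeholder
}

resource "tfe_variable" "region" {
  key             = "region"
  value           = "us-east-1"
  category        = "terraform"
  variable_set_id = tfe_variable_set.common.id
}

resource "tfe_workspace_variable_set" "dev" {
  variable_set_id = tfe_variable_set.common.id
  workspace_id    = tfe_workspace.dev.id # assumes a tfe_workspace.dev resource exists
}
```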

Question 7

You manage an HCP Terraform workspace that provisions a production VPC. You want to prevent accidental `terraform destroy` from the HCP Terraform UI, but still allow normal `plan` and `apply` operations for incremental changes. Which configuration is most appropriate?

A) Set `prevent_destroy = true` in a `lifecycle` block for critical resources and keep using HCP Terraform as usual

B) Disable the workspace in HCP Terraform so no one can run `destroy`, then re-enable it only when changes are needed

C) Change the workspace to use a local backend so `destroy` cannot be run from HCP Terraform

D) Configure the workspace to run only speculative plans and never applies


Correct Answer: A

Explanation:

Using `lifecycle { prevent_destroy = true }` on critical resources prevents them from being destroyed by any Terraform run (including `terraform destroy`) while still allowing normal updates. This is a configuration-level safeguard appropriate for production. B: Disabling the workspace blocks all runs, not just destroy, and is not a practical workflow. C: Moving to a local backend removes the benefits of HCP Terraform and does not inherently prevent destroy; it just moves it elsewhere. D: Speculative plans only show changes and never apply them, which would block all updates, not just destroy.
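
A short sketch of that lifecycle safeguard (the resource type and CIDR are illustrative); any run whose plan would destroy this resource fails with an error instead:

```hcl
resource "aws_vpc" "prod" {
  cidr_block = "10.0.0.0/16" # illustrative CIDR

  lifecycle {
    prevent_destroy = true # plans that would destroy this VPC error out
  }
}
```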

Question 8

Your HCP Terraform workspace is configured with VCS-driven runs and remote execution. A developer wants to test a small change quickly without waiting for a pull request review, but you must still keep the workspace state consistent and avoid local state files. What is the best approach?

A) Run `terraform plan` and `terraform apply` locally with a local backend, then later import the resources into HCP Terraform.

B) Disable remote execution in the workspace, run Terraform locally, then re-enable it after testing.

C) Use the HCP Terraform UI or API to start a speculative plan from the feature branch without applying it.

D) Run `terraform apply` locally with `-target` to limit changes and rely on HCP Terraform to reconcile later.


Correct Answer: C

Explanation:

Speculative plans in HCP Terraform allow you to test changes (often from a feature branch) without affecting state or applying infrastructure changes (C). They run in the same remote environment and keep state consistent. A and D: Local applies with a different backend create separate state and can cause drift; they are not appropriate when you want to keep state centralized in HCP Terraform. B: Toggling remote execution and running locally undermines the benefits of remote operations and can introduce inconsistencies.

Question 9

Your team wants to use HCP Terraform to manage AWS infrastructure. They want all plans and applies to run in HCP Terraform, and they don’t want to share local state files. You already have a working local configuration and state. What is the most appropriate first step to move this project to HCP Terraform?

A) Create a new HCP Terraform workspace, configure the CLI-driven workflow, then run `terraform init` with the HCP Terraform backend block added.

B) Create a new HCP Terraform workspace, upload the local `terraform.tfstate` file manually, and continue running `terraform apply` locally.

C) Create a new HCP Terraform workspace connected to your VCS repo and delete the existing local `.terraform` directory before pushing any code.

D) Create a new HCP Terraform workspace, configure a remote backend in a new directory, and re-write all resources so they can be imported later.


Correct Answer: A

Explanation:

To move an existing local project to HCP Terraform using remote operations, you configure the HCP Terraform backend in your configuration and re-run `terraform init`, which prompts you to migrate state to the remote backend. Option A correctly describes the sequence: (1) creating a workspace, (2) using the CLI-driven workflow, and (3) adding the backend block and reinitializing. B is incorrect because continuing to run `terraform apply` locally with a local backend does not use HCP Terraform remote operations. C is incorrect because VCS-driven runs are not required, and deleting `.terraform` is not the key step. D is incorrect because you do not need to rewrite resources or re-import everything; state migration is handled by `terraform init` with the new backend.
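
For configurations that have not adopted the `cloud` block (or that target older Terraform versions), the `remote` backend achieves the same thing; a minimal sketch with placeholder names:

```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io" # default HCP Terraform hostname
    organization = "example-org"      # placeholder organization name

    workspaces {
      name = "my-project" # placeholder workspace name
    }
  }
}
```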

Question 10

Your team is migrating from local Terraform runs to HCP Terraform. You push your configuration to a GitHub repo and connect it to a new HCP Terraform workspace using VCS-driven runs. After pushing a change, you notice the plan is executed in HCP Terraform, not on your laptop. You still want to review the plan before it is applied. How should you manage this workflow?

A) Disable VCS integration and run `terraform plan` and `terraform apply` locally against the HCP Terraform workspace backend.

B) Keep VCS-driven runs enabled and configure the workspace to require manual confirmation before applying plans.

C) Run `terraform plan` locally and then upload the plan file to HCP Terraform for automatic apply.

D) Use `terraform apply -auto-approve` locally to bypass HCP Terraform and keep state only in the remote backend.


Correct Answer: B

Explanation:

In a VCS-driven workspace, HCP Terraform performs remote operations automatically when changes are pushed. You can still require human review by configuring the workspace to use a manual apply workflow, where plans are generated automatically but must be explicitly confirmed in the UI or via API before apply. A: Disabling VCS integration removes the main benefit of remote, VCS-triggered runs. B: Correct; this is the standard pattern for reviewing plans in HCP Terraform. C: HCP Terraform does not accept uploaded local plan files for apply; it generates its own plans. D: Running local applies with `-auto-approve` would bypass HCP Terraform’s remote runs and is not aligned with using VCS-driven workspaces.
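
The manual-confirmation requirement is a workspace setting rather than Terraform configuration, but it can also be captured as code with HashiCorp's `tfe` provider by leaving auto-apply off. A sketch, assuming that provider's `tfe_workspace` resource and placeholder names:

```hcl
resource "tfe_workspace" "prod" {
  name         = "prod-network" # placeholder workspace name
  organization = "example-org"  # placeholder organization name
  auto_apply   = false          # plans wait for explicit confirmation before apply
}
```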

Ready to Accelerate Your TERRAFORM-004 Preparation?

Join thousands of professionals who are advancing their careers through expert certification preparation with FlashGenius.

  • ✅ Unlimited practice questions across all TERRAFORM-004 domains
  • ✅ Full-length exam simulations with real-time scoring
  • ✅ AI-powered performance tracking and weak area identification
  • ✅ Personalized study plans with adaptive learning
  • ✅ Mobile-friendly platform for studying anywhere, anytime
  • ✅ Expert explanations and study resources

About TERRAFORM-004 Certification

The TERRAFORM-004 certification validates your expertise in HCP Terraform and other critical domains. Our comprehensive practice questions are carefully crafted to mirror the actual exam experience and help you identify knowledge gaps before test day.

Related Terraform Associate (004) practice sets:

  • HCP Terraform Practice Questions (004): Master HCP Terraform concepts tested on the 004 exam—workspaces, remote operations, state storage, VCS-driven runs, and team-ready workflows. Read the HCP Terraform Set →
  • Terraform Fundamentals Practice Questions (004): Strengthen your core Terraform skills—providers, resources vs data sources, dependency lock file, variables/outputs, and everyday CLI flow. Read the Fundamentals Set →
  • Infrastructure as Code (IaC) Concepts Practice Questions (004): Nail the IaC concepts that show up on exam day—declarative vs imperative, idempotency, drift, and when Terraform is the right tool. Read the IaC Concepts Set →
  • Terraform Associate (004) – Ultimate Cheat Sheet: A concise, exam-focused cheat sheet covering Terraform CLI workflows, HCL essentials, state management, modules, refactoring (`moved` / `removed`), and HCP Terraform concepts — perfect for last-minute revision before the 004 exam. Open the Terraform 004 Cheat Sheet →