The Curious Case of Terraform Workspaces

Introduction
A master programmer was asked: “What makes code elegant?” He replied: “When you remove what is not essential, what remains is truth.”
If you’ve spent any time in the Terraform community, you’ve likely encountered the heated debate around workspaces. Many developers read HashiCorp’s documentation and immediately conclude that Terraform workspaces should be avoided. But is this really what the documentation says? Or have we collectively misunderstood the nuanced guidance HashiCorp provides?
I used to be in the “workspaces are evil” camp myself. I’d built wrappers, used off-the-shelf abstraction layers, anything to avoid workspaces. Then I started working with larger teams and more complex deployments, and I began to see a different perspective. What if the guidance isn’t “never use workspaces,” but something more nuanced?
Sometimes the simplest solution is right in front of you.
Based on my experience and interpretation of the documentation, let’s dive deep into what Terraform workspaces actually are, what the documentation really says, and how to use them effectively as part of a broader infrastructure-as-code strategy that includes proper system decomposition and environment management.
Key Concepts
The ancient Unix wisdom teaches: “Understand your abstractions, or they will abstract you.”
Before we dive into the workspace debate, let’s establish a crucial concept that’s central to understanding the proper use of workspaces: composition layers.
Composition layers are logical groupings of related infrastructure components that naturally belong together and share similar lifecycle characteristics. They represent the fundamental architectural boundaries in your infrastructure-as-code approach.
Examples of composition layers include:
- Network layer: VPCs, subnets, routing tables, NAT gateways, security groups
- Storage layer: RDS databases, S3 buckets, EFS file systems, backup policies
- Compute layer: EC2 instances, Auto Scaling groups, Load Balancers
- Application layer: Specific applications or services with their dependencies
The key principle is that components within a composition layer typically:
- Change at similar frequencies and for similar reasons
- Have similar operational requirements and ownership
- Share blast radius considerations (if one breaks, what else might be affected?)
- Benefit from being deployed and managed together
For example, your VPC structure rarely changes, but your application deployments happen frequently. These belong in separate composition layers because they have different lifecycle needs.
This architectural pattern directly addresses many of the concerns raised about Terraform workspaces, as we’ll see throughout this post.
The Great Misunderstanding
A novice asked the master: “I have read the first page of the manual. Am I now wise?” The master replied: “A man who stops reading after the first page will spend his life debugging the last page.”
The infamous documentation section that causes so much confusion states:
“Important: Workspaces are not appropriate for system decomposition or deployments requiring separate credentials and access controls. Refer to Use Cases in the Terraform CLI documentation for details and recommended alternatives.”
Many readers stop here and declare workspaces off-limits. But this interpretation might be missing some context. One way to read this is as guidance to not use workspaces as your only strategy for managing complex multi-environment deployments.
The community’s concerns about workspaces are valid. System decomposition, credential isolation, and blast radius control are critical challenges. The question is whether workspaces can be part of the solution when these other concerns are properly addressed.
But here’s what the docs actually recommend workspaces for:
“Workspaces are convenient because they let you create different sets of infrastructure with the same working copy of your configuration and the same plugin and module caches.”
So which is it? Are workspaces problematic or convenient?
What Terraform Workspaces Actually Are
A student asked: “What is a workspace?” The master replied: “The same house with different keys to different rooms.”
At their core, Terraform workspaces are simply a way to manage multiple state files within a single configuration.
According to the official documentation:
“The persistent data stored in the backend belongs to a workspace. The backend initially has only one workspace containing one Terraform state associated with that configuration. Some backends support multiple named workspaces, allowing multiple states to be associated with a single configuration.”
The important distinction is that workspaces let you switch between different states within the same configuration. You’re not duplicating your Terraform configuration files. Instead, you get isolated state contexts that all use the same infrastructure code, allowing you to deploy the same configuration to different environments or test different variations safely.
For example, within your network composition layer, you might have:
- network-prod workspace (production VPC and subnets)
- network-dev workspace (smaller dev environment VPC)
- network-feature-zones workspace (testing additional availability zones)
All use the same Terraform configuration, but maintain completely separate state files.
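For context on where those separate states actually live: with the S3 backend, the default workspace's state sits at the configured key, while named workspaces are stored under an env:/ prefix (the bucket and workspace names below are illustrative):

# State layout in the S3 backend (illustrative)
s3://mycompany-terraform-network-state/terraform.tfstate                   # default workspace
s3://mycompany-terraform-network-state/env:/network-dev/terraform.tfstate  # "network-dev" workspace
s3://mycompany-terraform-network-state/env:/network-prod/terraform.tfstate # "network-prod" workspace

This env:/ layout is what makes prefix-based access policies possible, as we'll see later.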
They’re a built-in feature that allows you to:
- Switch between different states using terraform workspace select
- Maintain separate state files for the same composition layer configuration
- Reference the current workspace name in your configuration using ${terraform.workspace}
- Deploy multiple instances of the same composition layer safely
That’s it. No magic, no complex orchestration, no additional code to maintain. Just state file management with a convenient logical interface within composition layer boundaries. So why do so many developers avoid this simple, built-in feature?
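To make that concrete, here is a quick sketch of the workspace lifecycle using the built-in CLI subcommands (the workspace names are illustrative):

# List workspaces; the current one is marked with an asterisk
terraform workspace list

# Create and switch to a new workspace (e.g., for a feature branch)
terraform workspace new network-feature-zones

# Switch back to an existing workspace
terraform workspace select network-dev

# Print the current workspace name
terraform workspace show

# Delete a workspace once its infrastructure has been destroyed
terraform workspace delete network-feature-zones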
The Statements That Scare People Away
An infrastructure architect reflected: “Tools are not good or bad. They are appropriate or inappropriate. Wisdom lies in knowing the difference.”
Now that we understand that workspaces actually manage multiple states within a composition layer, let’s examine what the documentation says about their appropriate usage.
Here are the statements that cause the most concern:
“Workspaces are not appropriate for system decomposition or deployments requiring separate credentials and access controls.”
“CLI workspaces use the same backend, so they are not a suitable isolation mechanism for this scenario.”
“Workspaces alone are not a suitable tool for system decomposition because each subsystem should have its own separate configuration and backend.”
“Workspaces can be helpful for specific use cases, but they are not required to use the Terraform CLI. We recommend using alternative approaches for complex deployments requiring separate credentials and access controls.”
“Instead of creating CLI workspaces, you can use one or more re-usable modules to represent the common elements and then represent each instance as a separate configuration that instantiates those common elements in the context of a different backend.”
These are all valid concerns. System decomposition, credential isolation, team boundaries, deployment complexity, and code reusability are critical challenges in infrastructure management. The crucial point is that these are concerns that workspaces are NOT meant to solve!
There are other mechanisms to address these challenges. These include separate backends, different IAM roles, composition layer boundaries, reusable modules, remote state data sources, and orchestration tools. The question then becomes: what does the documentation actually recommend as the complete solution?
What The Documentation Actually Recommends
A student complained: “These tools are so complex!” The Unix master replied: “Complexity is what happens when you refuse to understand simplicity.”
Here’s what many people miss: the documentation doesn’t just tell you what workspaces aren’t good for, it also describes what you should do instead. When you read these statements together, they outline a complete architectural approach.
The documentation’s recommended approach includes:
- System decomposition FIRST: Break infrastructure into logical composition layers with separate configurations
- Separate backends: Use different backends for proper team and component isolation
- Reusable modules: Create common patterns to avoid code duplication
- Component communication: Use “paired resources and data sources” for composition layers to communicate
- Alternative deployment strategies: Consider separate configurations over workspace-only approaches
The critical insight is in this statement:
“Instead of creating CLI workspaces, you can use one or more re-usable modules to represent the common elements and then represent each instance as a separate configuration that instantiates those common elements in the context of a different backend.”
This isn’t saying “never use workspaces.” It’s describing the complete architecture that addresses all the concerns raised. The question becomes: where do workspaces fit within this recommended architecture?
The answer: Workspaces work safely within each properly isolated composition layer for managing different environments or feature branches of that same layer, once you have the proper foundation in place.
This interpretation suggests the documentation is actually describing a comprehensive approach where workspaces have their proper place as part of the solution, not as the entire solution.
Now let’s walk through how to implement this complete approach:
Building The Recommended Architecture
Step 1: System Decomposition FIRST
The wise system builder said: “First, architect your components. Then, let the tools serve the architecture. Never let tools dictate the architecture.”
The documentation emphasizes that workspaces are not appropriate for system decomposition, so this is where we address that concern before workspaces even come into play.
The first step is breaking your infrastructure into logical composition layers that make sense for your context. This could be based on your organization’s responsibility model, blast radius control, lifecycle velocities, or whatever boundaries work best for your team.
Note: Consider separating storage and stateful resources from compute and networking. Persistent resources like RDS databases, S3 buckets, and EFS file systems often have different lifecycle needs. They tend to change less frequently, require careful migration strategies, and often need to persist even when applications are rebuilt. You may find that mixing them with faster-changing compute resources creates operational complexity.
Option 1: Co-located Structure
infrastructure/
├── network/ # Could be its own repo
│ ├── main.tf
│ ├── variables.tf
│ ├── outputs.tf
│ ├── dev.tfvars
│ ├── staging.tfvars
│ └── prod.tfvars
├── storage/ # Could be its own repo
│ ├── main.tf
│ ├── variables.tf
│ ├── outputs.tf
│ ├── dev.tfvars
│ ├── staging.tfvars
│ └── prod.tfvars
└── workload-a/ # Could be its own repo
├── main.tf
├── variables.tf
├── outputs.tf
├── dev.tfvars
├── staging.tfvars
└── prod.tfvars
Option 2: Centralized Config Structure
infrastructure/
├── composition-layers/
│ ├── network/ # Could be its own repo
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ └── outputs.tf
│ ├── storage/ # Could be its own repo
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ └── outputs.tf
│ └── workload-a/ # Could be its own repo
│ ├── main.tf
│ ├── variables.tf
│ └── outputs.tf
└── config/
├── network/
│ ├── dev.tfvars
│ ├── staging.tfvars
│ └── prod.tfvars
├── storage/
│ ├── dev.tfvars
│ ├── staging.tfvars
│ └── prod.tfvars
└── workload-a/
├── dev.tfvars
├── staging.tfvars
└── prod.tfvars
Option 3: Local Config Structure
infrastructure/
├── network/ # Could be its own repo
│ ├── terraform/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ └── outputs.tf
│ └── config/
│ ├── dev.tfvars
│ ├── staging.tfvars
│ └── prod.tfvars
├── storage/ # Could be its own repo
│ ├── terraform/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ └── outputs.tf
│ └── config/
│ ├── dev.tfvars
│ ├── staging.tfvars
│ └── prod.tfvars
└── workload-a/ # Could be its own repo
├── terraform/
│ ├── main.tf
│ ├── variables.tf
│ └── outputs.tf
└── config/
├── dev.tfvars
├── staging.tfvars
└── prod.tfvars
This decomposition directly addresses the workspace concerns we discussed earlier. By properly separating composition layers, you get:
- System decomposition solved: Each composition layer has clear boundaries and responsibilities
- Credential isolation ready: Different teams can manage different composition layers with separate access controls
- Blast radius control: Changes to workload-a don’t risk breaking the network or corrupting databases
- Independent deployment cycles: Each composition layer can be deployed separately based on its change frequency
- State isolation: Each composition layer has its own state file, preventing accidental dependencies
- Operational safety: You can rebuild applications without touching persistent data stores
- Team boundaries: The network team manages network/, the storage team manages storage/
Note: Infrastructure shifts and ebbs like the Sahara. You will be buried in sand if you stand still. What works for a 5-person startup won’t work for a 500-person enterprise. Be prepared to evolve your structure as your team and infrastructure mature.
This decomposition addresses the fundamental workspace concerns by establishing proper architectural boundaries. With these foundations in place, we can now add secure credential isolation.
Step 2: Separate Backends for Team Isolation
A security engineer asked: “How do I know my access controls work?” The master replied: “When teams work in one garden - each tends their own plot, but all may harvest what grows well.”
The documentation states that “CLI workspaces use the same backend, so they are not a suitable isolation mechanism” for team separation. This is where our composition layer approach directly addresses the concern.
The advantage of our system decomposition is that since we’ve properly separated our infrastructure into distinct composition layers, we naturally gain the capability for credential isolation, regardless of whether we use workspaces or not. Each composition layer can have its own dedicated backend with different IAM roles and access controls.
Network Composition Layer Backend:
# network/backend.tf
terraform {
  backend "s3" {
    bucket       = "mycompany-terraform-network-state" # Dedicated bucket for network composition layer
    key          = "terraform.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true # State locking via S3 object locking

    assume_role {
      role_arn = "arn:aws:iam::123456789012:role/NetworkTeamRole" # Only network team has access
    }
  }
}
Application Composition Layer Backend:
# workload-a/backend.tf
terraform {
  backend "s3" {
    bucket       = "mycompany-terraform-workload-a-state" # Dedicated bucket for workload-a composition layer
    key          = "terraform.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true # State locking via S3 object locking

    assume_role {
      role_arn = "arn:aws:iam::123456789012:role/AppTeamRole" # Only app team has access
    }
  }
}
Now each team has proper credential isolation with dedicated backends and credentials. The network team can’t access the application team’s state, and vice versa. This is where workspaces become safe and appropriate. They work within each properly isolated composition layer.
Advanced Security: Composition Layer + Environment Isolation
For maximum security (perfect for highly regulated environments), you can use dedicated backends per composition layer AND environment-specific access policies:
Network Composition Layer Bucket Policy (on mycompany-terraform-network-state):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "NetworkDevTeamListAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/NetworkDevRole"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mycompany-terraform-network-state",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "terraform.tfstate",
            "env:/network-dev/*",
            "env:/network-feature-*/*"
          ]
        }
      }
    },
    {
      "Sid": "NetworkDevTeamObjectAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/NetworkDevRole"
      },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::mycompany-terraform-network-state/terraform.tfstate",
        "arn:aws:s3:::mycompany-terraform-network-state/env:/network-dev/*",
        "arn:aws:s3:::mycompany-terraform-network-state/env:/network-feature-*/*"
      ]
    },
    {
      "Sid": "NetworkProdTeamListAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/NetworkProdRole"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mycompany-terraform-network-state",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "env:/network-prod/*"
          ]
        }
      }
    },
    {
      "Sid": "NetworkProdTeamObjectAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/NetworkProdRole"
      },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::mycompany-terraform-network-state/env:/network-prod/*"
      ]
    }
  ]
}
Application Composition Layer Bucket Policy (on mycompany-terraform-workload-a-state):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AppTeamAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/AppTeamRole"
      },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::mycompany-terraform-workload-a-state",
        "arn:aws:s3:::mycompany-terraform-workload-a-state/*"
      ]
    }
  ]
}
This gives you multiple security layers:
- Backend-level isolation - Network team can’t even see app team’s backend
- Environment-level restrictions - Dev team can’t touch prod state files
- Composition layer boundaries - Complete separation between infrastructure composition layers
- Workspace safety - Feature branches are isolated within appropriate environments
Note: This example uses the S3 backend with IAM role-based isolation within a single AWS account, which is common for self-managed Terraform. For even stronger isolation, many organizations prefer separate AWS accounts for different environments or teams, providing complete billing, resource, and access boundaries. If you’re using HCP Terraform with the remote backend, workspace configuration works differently. In that case, you would use workspaces.prefix to map CLI workspaces to remote workspaces. For example, prefix = "network-" maps terraform workspace select prod to the network-prod remote workspace.
With secure backend isolation established, we’ve addressed the credential separation concern. Now we can ensure our composition layers don’t become repetitive code maintenance burdens.
Step 3: Reusable Modules
The DRY principle master said: “Write once, use many times. But abstract wrongly, debug infinite times.”
The documentation recommends using “one or more re-usable modules to represent the common elements” as part of the complete solution. This addresses the code duplication concern while working alongside our composition layer architecture.
This approach isn’t replacing workspaces; it’s providing the reusable foundation that makes workspaces safe and effective within each composition layer.
Modules provide reusable infrastructure patterns within our composition layer architecture. They work alongside the foundation we’ve built, not competing with it, but making it more maintainable. Here’s how:
VPC Module Example:
# modules/vpc/main.tf
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = var.enable_dns_hostnames
  enable_dns_support   = var.enable_dns_support

  tags = merge(var.tags, {
    Name = "${var.name_prefix}-vpc"
  })
}

variable "name_prefix" {
  description = "Prefix for resource names"
  type        = string
}
This module encapsulates VPC creation with configurable parameters. It’s reusable across different composition layers and environments without duplicating code.
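The module also references vpc_cidr, enable_dns_hostnames, enable_dns_support, and tags without showing their declarations; a minimal sketch of what those might look like (the defaults here are illustrative assumptions):

# modules/vpc/variables.tf
variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
}

variable "enable_dns_hostnames" {
  description = "Enable DNS hostnames in the VPC"
  type        = bool
  default     = true
}

variable "enable_dns_support" {
  description = "Enable DNS resolution in the VPC"
  type        = bool
  default     = true
}

variable "tags" {
  description = "Common tags applied to all resources"
  type        = map(string)
  default     = {}
}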
Using the Module:
# network/main.tf
module "vpc" {
  source = "../modules/vpc"

  vpc_cidr             = var.vpc_cidr
  name_prefix          = var.environment
  enable_dns_hostnames = var.enable_dns_hostnames
  tags                 = var.tags
}
The network composition layer uses this module, passing in environment-specific values. The same module can be used across different composition layers or environments with different configurations.
With reusable modules in place, we’ve eliminated code duplication while maintaining our composition layer boundaries. Now we can address how these separate layers communicate with each other.
Step 4: Composition Layer Communication
The distributed systems sage declared: “Loose coupling with high cohesion: this is the way of maintainable systems.”
The documentation provides specific guidance on how separate composition layers should communicate: “When multiple configurations represent distinct system components rather than multiple deployments, you can pass data from one component to another using paired resources types and data sources.”
This directly addresses our need to connect our distinct composition layers. Here’s how the recommended “paired resources and data sources” pattern works:
Approach 1: Paired Resource Outputs and Remote State Data Sources
Network component exposes data via resource outputs:
# network/outputs.tf
output "vpc_id" {
  value = module.vpc.vpc_id
}

output "private_subnet_ids" {
  value = module.vpc.private_subnet_ids
}
Application component consumes data via the terraform_remote_state data source:
# workload-a/main.tf
data "terraform_remote_state" "network" {
  backend = "s3"

  # Select which network workspace's state to read (defaults to "default")
  workspace = "network-${var.environment}"

  config = {
    bucket = "mycompany-terraform-network-state"
    key    = "terraform.tfstate"
    region = "us-east-1"
    assume_role = {
      role_arn = "arn:aws:iam::123456789012:role/NetworkTeamRole"
    }
  }
}

resource "aws_instance" "app" {
  ami               = data.aws_ami.amazon_linux.id
  instance_type     = var.instance_type
  subnet_id         = data.terraform_remote_state.network.outputs.private_subnet_ids[0]
  availability_zone = data.aws_availability_zones.available.names[0]
}
Approach 2: Paired Resource Tags and Named Data Sources
# workload-a/main.tf
# Instead of remote state, use data sources to discover resources
data "aws_vpc" "main" {
  filter {
    name   = "tag:Name"
    values = ["${var.environment}-vpc"]
  }
}

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.main.id]
  }

  filter {
    name   = "tag:Type"
    values = ["private"]
  }
}

resource "aws_instance" "app" {
  ami               = data.aws_ami.amazon_linux.id
  instance_type     = var.instance_type
  subnet_id         = data.aws_subnets.private.ids[0]
  availability_zone = data.aws_availability_zones.available.names[0]
}
Both approaches implement the “paired resources and data sources” pattern the documentation describes:
- Approach 1: Explicit pairing through outputs and remote state creates clear dependencies
- Approach 2: Implicit pairing through resource tags and named data sources provides looser coupling
Both maintain composition layer boundaries while enabling the necessary communication between distinct system components.
With composition layer communication established, we have the complete foundation in place. Now we can finally show where workspaces fit within this properly architected system.
Step 5: Where Workspaces Finally Fit
A configuration management wizard observed: “The same code in different environments should behave predictably different.”
Now that we have the complete foundation in place, we can finally show workspaces in their proper context. Remember the documentation’s guidance: workspaces are for creating “a parallel, distinct copy of a set of infrastructure in order to test a set of changes before modifying the main production infrastructure.”
Within each properly isolated composition layer, workspaces provide exactly that. They offer a safe way to test variations. To handle environment-specific configuration differences, tfvars files provide the solution. This combination gives you a clean separation between your infrastructure code and environment-specific configuration.
Why tfvars + workspaces work so well together:
- Same code, different configs: Workspaces let you deploy identical Terraform code with different configurations
- Environment isolation: Each workspace gets its own state, but can use different tfvars for sizing, features, and settings
- Configuration outside code: tfvars keep environment-specific details out of your .tf files, making them truly reusable
- Safe testing: Feature workspaces can use dev.tfvars for smaller, cheaper resources while prod uses prod.tfvars
- Clean workflows: Switch workspace, apply with appropriate tfvars for simple and predictable operations
Here’s how this elegant combination works in practice:
Environment-Specific tfvars:
# workload-a/config/dev.tfvars
environment = "dev"
instance_type = "t3.micro"
min_capacity = 1
max_capacity = 2
enable_monitoring = false
# workload-a/config/prod.tfvars
environment = "prod"
instance_type = "t3.large"
min_capacity = 3
max_capacity = 10
enable_monitoring = true
Clean Resource Configuration:
# workload-a/main.tf
resource "aws_autoscaling_group" "app" {
  name             = "app-asg-${terraform.workspace}"
  min_size         = var.min_capacity
  max_size         = var.max_capacity
  desired_capacity = var.min_capacity

  # Use network composition layer's outputs
  vpc_zone_identifier = data.terraform_remote_state.network.outputs.private_subnet_ids
}
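One optional safety net, offered here as an assumption rather than part of the core pattern: a lifecycle precondition can refuse an apply when the selected workspace doesn’t match the environment in your tfvars, catching a prod.tfvars apply in the wrong workspace before anything changes. This sketch uses the built-in terraform_data resource (Terraform 1.4+):

# workload-a/guards.tf (hypothetical file)
resource "terraform_data" "workspace_guard" {
  lifecycle {
    precondition {
      # prod.tfvars may only be applied in the dedicated prod workspace
      condition     = var.environment != "prod" || terraform.workspace == "workload-a-prod"
      error_message = "Refusing to apply environment '${var.environment}' in workspace '${terraform.workspace}'."
    }
  }
}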
The Complete Workflow:
cd workload-a/
# 1. Create feature branch environment for development
terraform workspace new workload-a-feature-user-dashboard
terraform apply -var-file=config/dev.tfvars # Test with smaller, cheaper resources
# 2. Test in shared dev environment
terraform workspace select workload-a-dev
terraform apply -var-file=config/dev.tfvars # Shared dev environment
# 3. Deploy to production after testing
terraform workspace select workload-a-prod
terraform apply -var-file=config/prod.tfvars # Full production resources
# 4. Clean up feature environment
terraform workspace select workload-a-feature-user-dashboard
terraform destroy -var-file=config/dev.tfvars
terraform workspace select workload-a-dev # Switch to any other workspace
terraform workspace delete workload-a-feature-user-dashboard
This demonstrates the workspace documentation’s guidance. It creates “parallel, distinct copies” to test changes safely before affecting production infrastructure. The tfvars and workspace combination provides the perfect interface with the same infrastructure code, different configurations, and isolated state.
Notice how we never modified the Terraform code itself. We just switched workspaces and applied different configurations. This is the elegant simplicity the documentation describes.
This workflow is both scalable and portable. The same commands work whether you’re running them locally on your laptop, in a GitHub Actions pipeline, GitLab CI, Jenkins, or any other CI/CD system. No special orchestration tools or complex deployment scripts required. Just standard Terraform commands that work everywhere.
Why This Works
The Unix philosophy teaches: “Do one thing, do it well, and compose with others.”
This approach addresses every concern the documentation raises because it implements the complete architecture HashiCorp describes:
- System Decomposition FIRST: Each composition layer has its own configuration and backend
- Separate Backends: Dedicated backends with composition layer-specific credentials
- Reusable Modules: VPC module demonstrates DRY infrastructure patterns
- Composition Layer Communication: Remote state enables data sharing between composition layers
- Team Boundaries: Different teams manage different composition layers with appropriate access controls
- Blast Radius Control: Changes are isolated within composition layer boundaries
- Simple Deployment Interface: Workspaces + tfvars provide elegant environment management without complex orchestration
- Workspaces Within Layers: Safe testing environments within proper boundaries
Workspaces work safely within each properly isolated composition layer for testing variations and managing parallel environments, exactly as the documentation intended.
The beauty of this approach is that workspaces encapsulate all the complexity of environment management within a simple, powerful interface without compromising any of the established architectural foundations. You get the benefits of proper system decomposition, security isolation, and team boundaries, while workspaces provide an elegant way to manage multiple environments within each composition layer.
Key Takeaways
An infrastructure architect reflected: “Tools are not good or bad. They are appropriate or inappropriate. Wisdom lies in knowing the difference.”
- Read the documentation completely - The warning is about using workspaces ALONE, not avoiding them entirely
- System decomposition first - Break infrastructure into logical composition layers with separate backends
- Use reusable modules - DRY your infrastructure patterns
- Enable data sharing - Composition layers communicate via terraform_remote_state
- Separate credentials - Different teams, different access controls
- Workspaces within layers - Use workspaces for multiple environments and feature testing within each composition layer
- tfvars for configuration - Keep environment-specific config out of your Terraform code
Looking back at workspace debates I’ve witnessed and participated in, I’ve come to believe the challenge wasn’t the tool itself. It was attempting to use workspaces as the primary solution for complex scenarios instead of as one part of a comprehensive infrastructure strategy.
Interestingly, many teams may already have solid foundations. They have composition layer decomposition, separate backends, reusable modules, and proper credential isolation. They just need the workspaces and tfvars combination to complete the pattern.
The “curious case” of Terraform workspaces isn’t that they’re bad. It’s that many of us interpreted the documentation in a way that led to avoiding workspaces entirely. When used properly as part of the complete pattern HashiCorp actually describes, they’re not just useful, they’re elegant.
Sometimes the simplest solution really is right in front of you. It just requires building the proper foundation first.