
Managing Environments with Pipelines: Strategies, Code Samples, and Best Practices

This article explains how to use continuous delivery pipelines to manage multiple infrastructure environments, compares three stack organization strategies, provides Terraform examples for staging and production, and discusses benefits, challenges, and best practices for versioned stack definitions, artifact repositories, and automated testing in DevOps workflows.


When defining infrastructure for software deployment, tools such as Terraform, AWS CloudFormation, Azure Resource Manager, Google Cloud Deployment Manager, and OpenStack Heat allow you to capture environment creation, modification, and recreation in a transparent, repeatable, and testable way.

However, after using these tools for a while you may encounter pitfalls: large infrastructures can become fragile, and a single mistake (e.g., an accidental change to /etc/hosts on every server) can break access to all environments.

Before applying changes, you need a safe way to test them. This article outlines three common approaches: (1) put all environments in a single stack, (2) define each environment in its own stack, and (3) create a single stack definition that is promoted through a pipeline.

A stack (or stack instance) is a group of infrastructure resources managed as one unit, analogous to an AWS CloudFormation stack or a Terraform state file. A stack definition is the file or set of files used by a tool to create the stack.

Below is a simple Terraform example that defines separate resources for a staging and a production environment:

# STAGING ENVIRONMENT
resource "aws_vpc" "staging_vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "staging_subnet" {
  vpc_id     = aws_vpc.staging_vpc.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "staging_access" {
  name   = "staging_access"
  vpc_id = aws_vpc.staging_vpc.id
}

resource "aws_instance" "staging_server" {
  instance_type          = "t2.micro"
  ami                    = "ami-ac772edf"
  vpc_security_group_ids = [aws_security_group.staging_access.id]
  subnet_id              = aws_subnet.staging_subnet.id
}

# PRODUCTION ENVIRONMENT
resource "aws_vpc" "production_vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "production_subnet" {
  vpc_id     = aws_vpc.production_vpc.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "production_access" {
  name   = "production_access"
  vpc_id = aws_vpc.production_vpc.id
}

resource "aws_instance" "production_server" {
  instance_type          = "t2.micro"
  ami                    = "ami-ac772edf"
  vpc_security_group_ids = [aws_security_group.production_access.id]
  subnet_id              = aws_subnet.production_subnet.id
}

Using a single stack for all environments is the simplest but also the most error‑prone, because changes to staging can unintentionally affect production.

Defining each environment in its own stack isolates changes, but maintaining many duplicate files can become burdensome as the number of environments grows.
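A typical layout for the per-environment approach keeps a near-identical copy of the definition for each environment (directory and file names here are illustrative, not prescribed by any tool):

```
our-project/
├── staging/
│   └── main.tf        # staging copy of the stack definition
└── production/
    └── main.tf        # near-identical production copy
```

Every change must then be applied to each copy by hand, which is where the duplication burden comes from.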

The pipeline‑driven approach keeps a single stack definition that is parameterized per environment and versioned as an immutable artifact. A typical workflow is:

1. Commit changes to the source repository.

2. The CD server detects the commit, tags the definition with a version, and stores it in an artifact repository.

3. The CD server applies the versioned definition to the staging environment and runs automated tests.

4. If tests pass, the same version is promoted to the production environment.
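One way to parameterize a single definition is with Terraform input variables. The following sketch (resource names are illustrative) collapses the duplicated staging and production resources above into one set, selected per environment:

```
variable "environment" {
  description = "Name of the environment this stack instance manages"
}

resource "aws_vpc" "vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "subnet" {
  vpc_id     = aws_vpc.vpc.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "access" {
  name   = "${var.environment}_access"
  vpc_id = aws_vpc.vpc.id
}

resource "aws_instance" "server" {
  instance_type          = "t2.micro"
  ami                    = "ami-ac772edf"
  vpc_security_group_ids = [aws_security_group.access.id]
  subnet_id              = aws_subnet.subnet.id
}
```

The pipeline applies the same definition once per environment, passing a different value (for example `terraform apply -var 'environment=staging'`) and keeping a separate state file for each stack instance.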

Artifacts are often stored in an S3 bucket. Example commands to publish a version and then promote it to the staging environment:

aws s3 sync ./our-project/ s3://our-project-repository/1.0.123/
aws s3 sync --delete \
  s3://our-project-repository/1.0.123/ \
  s3://our-project-repository/staging/
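Promoting the same version to production after staging tests pass follows the same pattern (a sketch, assuming the same bucket layout as the commands above):

```
aws s3 sync --delete \
  s3://our-project-repository/1.0.123/ \
  s3://our-project-repository/production/
```

Because the promotion copies an already-published, immutable version, exactly the same definition that passed staging tests reaches production.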

This method brings several advantages: developers can spin up sandbox instances without affecting others, blue‑green deployments become straightforward, and testers can create and destroy environments on demand. All changes flow through the pipeline, ensuring consistency, auditability, and reduced risk of manual errors.

Developers still need a local workflow for rapid iteration: pull the latest definition, run the tool locally to create a test stack, verify results, then push the changes back to the shared repository for the pipeline to handle.
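A minimal local iteration loop, assuming Git and Terraform (the commands are illustrative, not a prescribed workflow):

```
git pull              # get the latest stack definition
terraform init        # fetch providers and modules
terraform plan        # preview changes against a personal test stack
terraform apply       # create or update the test stack
terraform destroy     # tear the test stack down when finished
git push              # hand the changes to the pipeline
```

The key discipline is that the local stack is disposable: anything intended for shared environments still flows only through the pipeline.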

In summary, using pipelines to manage infrastructure provides a reliable, repeatable process that scales with team size and complexity, while requiring disciplined automation, secret management, and comprehensive testing to realize its full benefits.

Tags: CI/CD, DevOps, environment-management, Terraform, Infrastructure as Code
Written by DevOps