Engineering Management · Mar 24, 2026 · 17 min read

Deployment strategies: Types, trade-offs, and how to choose

Roger Winter

Content Marketing Manager

A deployment strategy is the method a team uses to move new code into a production environment. It determines how traffic shifts between versions, how much risk each release represents, and how quickly the team can roll back when something breaks. The choice isn’t academic: a mismatch between strategy and system can mean downtime, failed rollouts, or hours of manual recovery.

Deploying means putting code into an environment. Releasing means exposing that code to users. Some strategies treat these as the same event. A rolling update replaces instances, and users hit the new version as each one comes online. Others deliberately separate them. Feature flags let a team deploy on Tuesday and release on Thursday, after internal validation. That separation shapes every section that follows.

What are the different types of deployment strategies?

Every strategy makes a trade-off between two things: how much risk it puts on production, and how much effort it takes to set up. A big bang deployment is dead simple but takes the whole system down. A canary deployment barely touches production traffic, but it needs traffic splitting, metric collection, and automated analysis to work. Neither axis is better. The right combination depends on what’s being deployed and what breaks if it goes wrong.


Big bang deployment

A big bang deployment stops all running instances of the current version before starting instances of the new version. There is no traffic splitting, no coexistence of old and new code, and no gradual rollout. In Kubernetes, this is strategy.type: Recreate.

How it works: The deployment controller terminates every pod running the old version, then starts pods with the new version. Traffic returns only after the new pods pass their readiness checks.
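A minimal Deployment using this strategy might look like the following (the name, image, and replica count are placeholders):

```yaml
# Recreate strategy: every old pod is terminated before any new
# pod starts, so there is a window with zero serving capacity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:2.0.0
```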


The trade-off is straightforward: simplicity for downtime. There’s no backward compatibility requirement, but users are locked out for the duration, which can range from seconds to minutes depending on application startup time.

Use big bang when downtime is acceptable: dev and staging environments, scheduled maintenance windows, or situations where a breaking change makes it impossible to run two versions at the same time.

Rolling deployment

A rolling deployment updates instances in batches while the application stays up. In Kubernetes, this is the default strategy (strategy.type: RollingUpdate), controlled by maxUnavailable and maxSurge.

How it works: The deployment controller scales up a new ReplicaSet in batches while scaling down the old one, checking readiness probes at each step.
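The batch size is controlled by two fields on the Deployment spec (a fragment; the values are illustrative):

```yaml
# With 10 replicas, these settings mean at most 1 pod is below the
# desired count and at most 1 extra pod exists during the rollout,
# so the update proceeds roughly one pod at a time.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1   # how far below desired replica count is allowed
    maxSurge: 1         # how many extra pods may exist during the update
```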


Rolling deployments are resource-efficient because they don’t require a duplicate environment. The cost is version coexistence: old and new versions serve traffic simultaneously, so the application must be backward-compatible. Rollback isn’t instant either. Reverting means rolling in the opposite direction.

Use rolling deployments for backward-compatible changes where infrastructure cost matters. For most teams without traffic-splitting infrastructure, it’s a reasonable starting point.

CircleCI’s Kubernetes orb supports rolling deployments with the get_rollout_status command, which monitors rollout progress and fails the job if the rollout stalls.

Blue-green deployment

A blue-green deployment maintains two identical production environments. One (blue) serves all traffic. The other (green) sits idle. To deploy, the team pushes the new version to green, validates it, and switches traffic. If something goes wrong, traffic flips back to blue in under a minute.

How it works: The new version deploys to the idle environment. The team runs smoke tests or integration tests against it. Once validated, the traffic switch happens: a load balancer rule change, a Kubernetes service selector update, or a DNS weight shift. If something breaks after the switch, the same mechanism points traffic back to blue.
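In Kubernetes, the switch can be as small as a Service selector change (names and labels here are illustrative):

```yaml
# Both blue and green Deployments run simultaneously, labeled
# version: blue and version: green. The Service's selector decides
# which one receives traffic; editing one line performs the switch.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
    version: blue   # change to "green" to cut over, back to "blue" to roll back
  ports:
  - port: 80
    targetPort: 8080
```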


The most visible cost is double the compute. The less obvious cost is the database. Both environments share the same database, so schema changes must work with both application versions simultaneously. This is where most blue-green failures happen, and where the expand-migrate-contract pattern matters.

In the expand phase, add new columns or tables alongside the existing ones without removing anything. In the migrate phase, backfill existing data into the new structures. In the contract phase, once all traffic runs on the new code, remove the old columns. Each phase is a separate deploy, and the constraint throughout is N-1 compatibility: the new application version must work with the previous schema, and vice versa.

Use blue-green when rollback speed is a hard requirement and the team can handle the infrastructure cost.

CircleCI workflows support blue-green deployments through manual approval gates. A pipeline can deploy to the idle environment, run validation jobs, and hold at an approval step until an engineer confirms the switch.

Canary deployment

A canary deployment sends a small percentage of production traffic, typically 1–5%, to the new version while the rest continues hitting the current version. If the canary’s metrics look healthy, traffic gradually increases. If metrics degrade, all traffic returns to the current version.

How it works: A traffic management layer (an ingress controller, service mesh, or Argo Rollouts) routes a defined percentage of requests to the new version. Monitoring systems collect metrics from both canary and baseline. After a defined observation window, traffic either increases or rolls back.
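With Argo Rollouts, the traffic percentages and observation windows are declared directly in the Rollout resource (a minimal sketch; the weights, durations, and image are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: web
spec:
  replicas: 5
  strategy:
    canary:
      steps:
      - setWeight: 5            # send 5% of traffic to the new version
      - pause: {duration: 10m}  # observation window before widening
      - setWeight: 25
      - pause: {duration: 10m}
      - setWeight: 50
      - pause: {}               # wait indefinitely for promotion
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:2.0.0
```

Pausing with no duration holds the rollout until it is promoted manually or by an automated analysis run.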


Canary costs less than blue-green because it only needs enough instances to handle its traffic share. The complexity cost is higher: it requires traffic-splitting infrastructure, real-time metric comparison, and automated analysis. Without those, it’s just a rolling deployment with extra steps. Kayenta (built by Google and Netflix) and Argo Rollouts both automate the promote-or-rollback decision based on statistical analysis of canary metrics.

CircleCI uses Argo Rollouts internally for canary deployments across the majority of its own production services. The CircleCI release agent integrates with Argo Rollouts, giving teams the ability to promote, roll back, or cancel canary rollouts directly from the CircleCI deploys UI.

A/B testing deployment

An A/B testing deployment routes traffic to different application versions based on user segments to measure business metrics like conversion rate, revenue per session, or engagement. Unlike canary (random split for safety), A/B testing targets specific user groups to answer a business question.

How it works: The team defines a segmentation rule that includes users from a specific region, users on mobile, or a percentage assigned by a consistent hashing function. Both versions run simultaneously while an analytics system collects the target metrics. When the experiment reaches statistical significance, the winning version rolls out to all users.
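As one concrete sketch, an Istio VirtualService can route a user segment by request attributes (this assumes an Istio mesh with v1 and v2 subsets already defined in a DestinationRule; hostnames and subset names are placeholders):

```yaml
# Mobile users (matched on the User-Agent header) see variant B;
# everyone else continues to see variant A.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web
spec:
  hosts:
  - web
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Mobile.*"
    route:
    - destination:
        host: web
        subset: v2        # variant B for the mobile segment
  - route:
    - destination:
        host: web
        subset: v1        # variant A for everyone else
```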


A/B testing is an experimentation technique, not a risk mitigation strategy. It assumes both versions are stable and production-ready. The risk profile is higher than canary because there’s no automated rollback triggered by error rates or latency.

Use A/B testing when the goal is optimization, not safety. It requires a segmentation layer, an analytics pipeline, and enough traffic volume to reach statistical significance.

Shadow deployment

A shadow deployment mirrors real production traffic to a new version running alongside the current one. The current version handles all user-facing responses. The shadow version processes the same requests, but its responses are never served to users. They’re logged and compared against production output as validation.

How it works: A traffic mirroring layer duplicates incoming requests and sends copies to the shadow environment. Responses are compared against the production version’s responses, either in real time or in batch.
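With Istio, request mirroring is a routing rule (a sketch assuming a DestinationRule that defines stable and shadow subsets; names are placeholders):

```yaml
# All user-facing responses come from the stable subset. A copy of
# every request also goes to the shadow subset, whose responses are
# discarded by the mesh and only observed via logs and metrics.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web
spec:
  hosts:
  - web
  http:
  - route:
    - destination:
        host: web
        subset: stable
    mirror:
      host: web
      subset: shadow
    mirrorPercentage:
      value: 100.0   # mirror all traffic; lower this to sample
```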


Shadow deployments are the safest way to test against real traffic because users are never affected. That makes them a strong fit for ML model swaps, algorithm rewrites, and database migration validation. The primary limitation is stateful operations: if the shadow version writes to a database or processes payments, those operations execute twice. Mirroring a service with write-side effects requires filtering those paths, which reduces test fidelity.

Feature flag deployment

A feature flag deployment decouples deploying code from releasing it to users. New code ships to production behind a flag that defaults to “off.” The team turns the flag on for specific users, percentages, or segments when ready. If something goes wrong, flipping the flag off hides the change instantly.

How it works: A feature flag platform (LaunchDarkly, Unleash, Flagsmith, or a homegrown service) stores flag configurations and serves them to the application via an SDK. When an engineer toggles a flag, the change propagates within seconds. No new deployment, no pod restart, no load balancer change.
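For the homegrown case, the stored configuration might look something like this (a purely illustrative schema; commercial platforms manage equivalent settings through their own UIs and SDKs):

```yaml
# Hypothetical flag configuration served to application SDKs.
flags:
  new-checkout-flow:
    enabled: true
    rollout:
      percentage: 10           # 10% of users, bucketed by a hash of user ID
      segments:
        - internal-employees   # always on for dogfooding
    expires: 2026-06-01        # review date, to keep flag debt in check
```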


Feature flags enable trunk-based development, internal dogfooding, and instant rollback at the application level. They pair naturally with any other deployment strategy on this list. The trade-off is flag debt: flags that stay in the codebase after their feature has launched create clutter and increase test surface. The fix is discipline. Set an expiration date for every flag, and treat removal as part of the feature’s definition of done.

Immutable deployment

An immutable deployment replaces infrastructure entirely rather than modifying it in place. Every change produces a new artifact (container image, VM image, AMI) deployed to fresh instances. No instance is ever patched or reconfigured after it starts running.

How it works: The CI pipeline builds the application, packages it into an immutable artifact, and pushes it to a registry. New instances are created from that artifact. Old instances are terminated.
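In a CircleCI pipeline, the immutable-artifact step might look like this (the registry URL is a placeholder; $CIRCLE_SHA1 is an environment variable CircleCI sets to the commit hash):

```yaml
# Tagging the image with the commit SHA makes each artifact unique
# and traceable; it is never rebuilt, patched, or retagged.
version: 2.1
jobs:
  build-image:
    docker:
      - image: cimg/base:current
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Build and push immutable image
          command: |
            docker build -t registry.example.com/web:$CIRCLE_SHA1 .
            docker push registry.example.com/web:$CIRCLE_SHA1
```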


Immutability solves configuration drift: the gradual divergence between what an instance should be running and what it’s actually running. Containers enforce this by design, but the principle extends to Packer-built AMIs and Nix-based configurations. AWS includes immutable infrastructure in its Well-Architected Framework. The cost is a stricter build process where every change requires a full build-test-deploy cycle.

Immutability isn’t a traffic-routing choice. It’s a decision about how artifacts are built, and it pairs with any other strategy on this list.

GitOps-based deployment

A GitOps deployment uses a Git repository as the single source of truth for what should be running in each environment. An agent running inside the cluster (ArgoCD, Flux) continuously compares live state to the declared state in Git and reconciles any drift.

How it works: The team stores Kubernetes manifests, Helm charts, or Kustomize overlays in a Git repository. When an engineer merges a pull request, the agent detects the new commit and applies it. If someone manually edits a resource in the cluster, the agent reverts it to match Git.
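A minimal ArgoCD Application resource captures the pattern (repo URL, path, and namespace are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git
    targetRevision: main
    path: environments/production/web
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual edits made in the cluster
```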


The OpenGitOps project defines four principles: desired state is declarative, versioned and immutable in Git, pulled automatically by agents, and continuously reconciled. The pull-based model means cluster credentials never leave the cluster.

The trade-off is scope. ArgoCD and Flux are purpose-built for Kubernetes. Teams on VMs, serverless, or hybrid environments can adopt the principles but won’t get the same out-of-box reconciliation.

CircleCI and ArgoCD are a common pairing: CircleCI runs build, test, and image-push stages, and ArgoCD deploys when the image tag updates in the GitOps repository.

How to choose the right deployment strategy


Comparing strategies side by side gives a broad overview, but the final choice depends on factors specific to the project and its requirements. Five crucial factors help match a project to the right deployment strategy.

  1. Downtime tolerance. Can the system go offline during a deploy, even briefly during a maintenance window? If yes, big bang is still an option. If not, eliminate it. And if the system can't tolerate two versions serving traffic at once during a rollout, rolling and canary drop out too: blue-green's atomic switch, optionally paired with feature flags, is the natural fit.
  2. Infrastructure budget. Can the team run a full duplicate production environment? If yes, blue-green works. If not, canary is a better fit because it only needs enough instances to handle its traffic share. Rolling is the cheapest of all since it reuses existing capacity. Budget eliminates blue-green for a lot of teams.
  3. Team maturity and tooling. Does the team have traffic-splitting infrastructure (a service mesh, weighted ingress, or Argo Rollouts)? Can it run automated metric comparison between canary and baseline? If the answer to either is no, start with rolling deployments and feature flags. Grow into canary once the tooling exists. Canary without monitoring infrastructure means someone watching dashboards manually for every release, and that doesn’t scale.
  4. Architecture. A monolith deploys as a single unit, which makes blue-green or rolling the simplest fit. Microservices benefit from canary or progressive delivery, where each service rolls out independently on its own schedule. Serverless functions are best controlled through feature flags, since the cloud provider manages the infrastructure and there are no instances to route between.
  5. Compliance and audit requirements. Regulated industries often need a full audit trail of what was deployed, when, and by whom. GitOps provides this by default: every change is a Git commit with an author and a review trail. Some compliance frameworks require full-environment validation before a traffic switch, which favors blue-green. If the team operates under SOC 2, HIPAA, or PCI-DSS, the deployment strategy needs to satisfy the auditor, not just the engineers.

In practice, most teams won’t rely on a single deployment strategy. A canary validates stability, a feature flag controls who sees the change, and GitOps governs how it gets applied. When these techniques combine into an automated, data-driven release process, that combination is called progressive delivery.

What is progressive delivery?

Progressive delivery is a framework, not a standalone strategy. The term, coined by James Governor of RedMonk in 2018, describes releasing software through controlled exposure: deploy to a small audience, observe the results, and widen access based on data. Canary deployments, feature flags, A/B testing, and blue-green deployments all fit under this umbrella. What progressive delivery adds is the expectation that these techniques work together, with automated guardrails governing each stage of the rollout.

In practice, a progressive delivery pipeline might look like this: the CI system builds and tests the artifact. The artifact deploys behind a canary that receives 2% of traffic. An automated analysis run checks error rates and latency against the baseline for ten minutes. If the metrics pass, traffic increases to 25%, then 50%, then 100%, with analysis at each step. If any step fails, traffic returns to the previous version automatically. Feature flags can layer on top, controlling which users see new functionality even after the code has fully rolled out.

For teams stitching together ad hoc monitoring checks and manual approvals, progressive delivery is the name for what they’re trying to build.

How to set up deployment strategies in a CI/CD pipeline

CI/CD (continuous integration and continuous delivery) is the practice of automating how code gets built, tested, and shipped. The CI side merges code changes frequently and runs automated tests against each one. The CD side takes a passing build and moves it toward production. The deployment strategy determines what happens at that last stage: how the new version reaches users, how traffic shifts, and what happens if something goes wrong. That’s where pipeline design matters.

Each deployment strategy implies a different pipeline shape. The pipeline should reflect the strategy’s stages explicitly rather than treating deployment as a single step at the end.

Let’s look at a practical example. Here is a simplified CircleCI config for a canary deployment using Argo Rollouts:

workflows:
  deploy:
    jobs:
      - build
      - test:
          requires: [build]
      - push-image:
          requires: [test]
      - update-manifests:
          requires: [push-image]
      # Argo Rollouts handles canary traffic shifting
      # CircleCI release agent monitors rollout progress

The CI pipeline handles everything up to artifact delivery. Argo Rollouts handles canary traffic shifting, analysis runs, and auto-promotion. CircleCI’s release agent connects the two, giving teams visibility into rollout progress and the ability to promote, cancel, or roll back from the CircleCI deploys dashboard. For teams not using Argo Rollouts, CircleCI’s deploy markers track each deployment and enable rollback pipelines that revert to a previous stable version directly from the web app.

For blue-green, CircleCI workflow approval gates provide a structured hold point. For canary, Argo Rollouts AnalysisRun resources query metric providers and auto-promote if thresholds hold. For feature flag deployments, the promote/rollback decision lives in the flag platform itself.
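An Argo Rollouts AnalysisTemplate encodes the promote-or-rollback threshold declaratively; this sketch assumes a Prometheus instance at the address shown, and the query and threshold are illustrative:

```yaml
# Referenced from a Rollout's canary steps. Each measurement queries
# Prometheus for the 5xx error ratio; three failed measurements abort
# the rollout and shift traffic back to the stable version.
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate-check
spec:
  metrics:
  - name: error-rate
    interval: 1m                     # measure once per minute
    failureLimit: 3                  # tolerate up to three bad samples
    successCondition: result[0] < 0.01
    provider:
      prometheus:
        address: http://prometheus.monitoring.svc:9090
        query: |
          sum(rate(http_requests_total{app="web",status=~"5.*"}[2m]))
          /
          sum(rate(http_requests_total{app="web"}[2m]))
```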

Always handle database migrations separately. The expand-migrate-contract pattern described in the blue-green section should be built into the pipeline as a first-class concern. Schema migrations run in their own stage before the application deployment begins. The expand step deploys first and must be backward-compatible with the currently running code. The contract step runs in a later pipeline, after confirming the old schema is no longer in use. Violating N-1 compatibility at any stage means a rollback will fail.
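The resulting pipeline shape might look like this (job names and commands are illustrative, not a prescribed CircleCI feature):

```yaml
# Expand runs before the app deploys and must be backward-compatible
# with the code currently in production. Contract is deliberately a
# separate, later pipeline, run only after the old schema is unused.
workflows:
  deploy:
    jobs:
      - build
      - migrate-expand:        # additive schema changes only
          requires: [build]
      - deploy-app:
          requires: [migrate-expand]
  # A later pipeline, triggered after verification:
  #   cleanup:
  #     jobs:
  #       - migrate-contract   # drop columns the old code needed
```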

Deployment strategy best practices

  • Start simple and grow into complexity. Rolling deployments are a reasonable default. Add canary when the team has monitoring and traffic-splitting in place. Don’t jump to progressive delivery on day one.
  • Automate rollback, not just deployment. If rollback is manual, it’s slow. If it’s slow, teams hesitate to deploy. That hesitation leads to larger, riskier batches. The cycle breaks when rollback is automated and fast enough that deploying feels low-stakes.
  • Monitor the canary, not just the deploy. Track error rates, latency percentiles (p50, p95, p99), and business metrics during and after every release. If the only health check is “did the container start,” problems will reach users before anyone notices.
  • Separate deploy from release when possible. Feature flags give the team control over when users see changes, independent of when code reaches production.
  • Treat the deployment pipeline as code. Version it, review it, test it. If the pipeline config lives outside version control, it’s a single point of failure.
  • Plan for the database. The expand-migrate-contract pattern should be part of the team’s workflow from the start, not something introduced after the first failed blue-green switch.

Deployment strategy tools

Most teams don’t need a tool for every category below. The right set depends on which strategy they’ve chosen and what infrastructure they already have.

CI/CD platforms run the build-test-deploy pipeline. CircleCI provides workflow orchestration, approval gates, deploy markers, rollback pipelines, and a release agent for Kubernetes and Argo Rollouts integration. GitHub Actions, GitLab CI, and Jenkins are other common options.

Progressive delivery and canary controllers handle traffic shifting and automated analysis on top of Kubernetes. Argo Rollouts is the most widely adopted. Flagger and Spinnaker offer similar capabilities.

GitOps agents reconcile cluster state to match a Git repository. ArgoCD and Flux are the two dominant options, both CNCF graduated projects.

Feature flag platforms control release timing independently of deployment. LaunchDarkly, Unleash, and Flagsmith all provide SDK-based flag evaluation and user targeting.

Traffic management enables the weighted routing that canary, A/B, and shadow deployments require. Istio, Linkerd, and AWS ALB weighted target groups each support this at different levels of complexity.

Canary analysis automates the promote-or-rollback decision. Kayenta, Datadog, and Prometheus + Grafana can all feed metrics into tools like Argo Rollouts.

The tool matters less than the practice. But the CI/CD platform is the foundation everything else plugs into. CircleCI supports staged workflows, approval gates, deploy tracking, and rollback out of the box, which means teams spend less time wiring together deployment infrastructure and more time shipping.

Conclusion

A deployment strategy is a decision about how much risk a team is willing to tolerate per release and how fast they need to recover when something breaks. There isn’t a single correct answer. A big bang deployment is the right choice for a breaking migration during a maintenance window. A canary with automated analysis is the right choice for a high-traffic production service. Most teams will use more than one.

The right answer depends on the system, the team, and the stakes, and it will change as all three grow. Start with what works today, and build toward what the team will need next.

Get started with CircleCI for free and explore the deployment docs to set up deploy tracking, rollback pipelines, and Kubernetes release management.