Apr 17, 2026 · 12 min read

How to set up rolling deployments with CircleCI

Roger Winter

Content Marketing Manager

A rolling deployment updates running application instances in batches, replacing old instances with new ones while the application keeps serving traffic.

The concept applies to any system that can run multiple instances of an application, but Kubernetes has it built in as the default deployment strategy. Kubernetes terminates an old pod only after its replacement passes the configured readiness check, so no requests land on an unready instance.

In a CI/CD workflow (where code changes are automatically built, tested, and deployed), rolling deployments fit naturally as the final step. CircleCI automates this pipeline: when code is pushed to a repository, CircleCI builds the image, applies the Kubernetes manifests, and monitors the rollout.

This tutorial walks through setting up a rolling deployment on Kubernetes, automated inside a CircleCI pipeline. It covers the Kubernetes manifests, with maxSurge and maxUnavailable configured for zero-downtime deploys; the CircleCI config that builds the image and applies the deployment; and deploy tracking, rollback, and post-deploy monitoring.

Prerequisites

Before starting, you’ll need the following:

  • A CircleCI account (the free tier works).
  • A Kubernetes cluster (this tutorial uses GKE, but EKS, AKS, or a local cluster like minikube or kind all work).
  • kubectl installed and configured to reach the cluster.
  • Docker installed locally for building the sample image.
  • A Docker Hub account for the container registry.
  • The sample repository cloned from GitHub: CIRCLECI-GWP/kubernetes-rolling-deployment-circleci (https://github.com/CIRCLECI-GWP/kubernetes-rolling-deployment-circleci).

Google Cloud’s quickstart guide covers GKE cluster creation. Getting started with CircleCI walks through connecting a repository and running a first pipeline.

How rolling deployments work in Kubernetes

When Kubernetes receives a Deployment update (a new image tag, a changed environment variable, any modification to the pod template), it creates a new ReplicaSet. The Deployment controller then runs both ReplicaSets simultaneously, scaling the new one up and the old one down in increments. Two parameters, maxSurge and maxUnavailable, control the pace. The tutorial configures these in Step 2.

Before terminating any old pod, Kubernetes waits for its replacement to pass a readiness probe. If the probe never passes, the rollout stalls and the old pods keep serving traffic. Both versions run side by side during the rollout, so changes need to be backward-compatible.

Diagram showing Kubernetes rolling update process

Rollback works the same way in reverse: Kubernetes scales the previous ReplicaSet back up while scaling the current one down.

Rolling deployments don’t give teams traffic-splitting control. Every new pod that passes its readiness check starts receiving traffic immediately. If percentage-based traffic control is needed, a canary deployment is the better fit. Review Canary vs blue-green deployment to reduce downtime for a comparison.

Step 1 — Build and containerize the sample application

The sample application is a Node.js HTTP server with no dependencies. It serves three endpoints:

  • GET / returns a styled HTML status page showing the running version, pod hostname, and a timestamp.
  • GET /api returns the same information as JSON.
  • GET /health returns {"status":"ok"} for readiness and liveness probes.

The version comes from the APP_VERSION environment variable, which the Kubernetes manifest sets at deploy time. Hitting GET /api on two pods during a rollout returns different version values, making the update visible.
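One way to watch that skew directly is a simple polling loop (a sketch; substitute the Service's external IP from Step 5 before running it against the cluster):

```shell
# Poll /api once a second during a rollout; the version field flips
# between old and new values as requests land on different pods.
# Replace <EXTERNAL-IP> with the address from `kubectl get svc`.
while true; do
  curl -s http://<EXTERNAL-IP>/api | grep -o '"version":"[^"]*"'
  sleep 1
done
```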

The full source is in the sample repository. The Dockerfile uses node:20-alpine and runs as a non-root user:

FROM node:20-alpine
WORKDIR /app
COPY package.json .
COPY index.js .
EXPOSE 3000
USER node
CMD ["node", "index.js"]

Build and test locally before pushing:

docker build -t rolling-tutorial-app:local ./app
docker run -p 3000:3000 rolling-tutorial-app:local

# In another terminal:
curl localhost:3000/api
# {"version":"1.0.0","hostname":"...","timestamp":"..."}

curl localhost:3000/health
# {"status":"ok"}

Step 2 — Write the Kubernetes manifests

Deployment manifest

The Deployment manifest in k8s/deployment.yml uses envsubst placeholders that the CircleCI pipeline fills in at deploy time. envsubst replaces ${VARIABLE} references in a file with their environment variable values, so the pipeline can inject the image tag and Docker Hub username before applying the manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-tutorial-app
  namespace: default
  annotations:
    circleci.com/project-id: ${CIRCLE_PROJECT_ID}
    circleci.com/operation-timeout: 10m
  labels:
    app: rolling-tutorial-app
    circleci.com/component-name: rolling-tutorial-app
    circleci.com/version: ${IMAGE_TAG}
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rolling-tutorial-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: rolling-tutorial-app
        circleci.com/component-name: rolling-tutorial-app
        circleci.com/version: ${IMAGE_TAG}
    spec:
      containers:
        - name: rolling-tutorial-app
          image: ${DOCKERHUB_USERNAME}/rolling-tutorial-app:${IMAGE_TAG}
          ports:
            - containerPort: 3000
          env:
            - name: APP_VERSION
              value: "${IMAGE_TAG}"
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 10

The key settings to note:

  • maxUnavailable: 0 and maxSurge: 1 — no pods go offline until a replacement is ready, and only one extra pod runs at a time. With three replicas, the full rollout takes three cycles.
  • readinessProbe — controls whether traffic routes to the pod. Kubernetes won’t terminate an old pod until the new one passes this check. If initialDelaySeconds is too short for the app’s startup time, the rollout stalls.
  • livenessProbe — catches pods that are running but stuck. If it fails, Kubernetes restarts the pod.
  • circleci.com/* labels and annotations — CircleCI’s deploy tracking reads these from running pods. They must not go in spec.selector.matchLabels, which Kubernetes treats as immutable. Review the component configuration docs for the full reference.

Service manifest

The Service in k8s/service.yml creates a stable network endpoint that routes traffic to pods matching the app: rolling-tutorial-app selector.

apiVersion: v1
kind: Service
metadata:
  name: rolling-tutorial-app
  namespace: default
spec:
  selector:
    app: rolling-tutorial-app
  ports:
    - port: 80
      targetPort: 3000
  type: LoadBalancer

The LoadBalancer type provisions an external IP so the application is reachable from outside the cluster. During a rolling update, the Service routes to both old and new pods; it doesn’t distinguish between versions.

Step 3 — Create the CircleCI pipeline

Connect the repository

CircleCI Projects page with Set Up Project button highlighted

In CircleCI, click Projects in the left sidebar, find the repository, and click Set Up Project. Select the Fastest option to use the existing .circleci/config.yml. For a full description, go to Getting started with CircleCI.

Create the context

CircleCI Organization Settings showing environment variables added to the rolling-tutorial context

The pipeline needs secrets that shouldn’t live in the repository. CircleCI contexts store environment variables available to jobs at runtime. Create one named rolling-tutorial:

  1. In CircleCI, go to Organization Settings > Contexts
  2. Click Create Context and name it rolling-tutorial
  3. Add three environment variables:
     • DOCKERHUB_USERNAME: Docker Hub username
     • DOCKERHUB_PASSWORD: Docker Hub password or access token
     • KUBECONFIG_DATA: Base64-encoded kubeconfig for the cluster

To generate KUBECONFIG_DATA for GKE:

gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
cat ~/.kube/config | base64 | tr -d '[:space:]'

For EKS, AKS, or other providers, configure kubectl access per the provider’s documentation, then encode the resulting kubeconfig the same way.
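Before pasting the value into CircleCI, it's worth confirming the encoding round-trips cleanly. This quick local check uses a stand-in file rather than the real kubeconfig:

```shell
# Encode a sample file the same way as KUBECONFIG_DATA, strip the
# whitespace, then decode it and compare against the original.
printf 'apiVersion: v1\nkind: Config\n' > /tmp/kubeconfig-sample
encoded=$(base64 < /tmp/kubeconfig-sample | tr -d '[:space:]')
echo "$encoded" | base64 --decode > /tmp/kubeconfig-roundtrip
diff /tmp/kubeconfig-sample /tmp/kubeconfig-roundtrip && echo "round-trip OK"
```

If diff reports any difference, the decoded file is corrupt and the deploy job's "Configure kubeconfig" step will fail the same way.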

Pipeline config

The pipeline has two jobs: build-and-push builds the Docker image and pushes it to Docker Hub, and deploy applies the manifest to the cluster and monitors the rollout.

version: 2.1

jobs:
  build-and-push:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: Build Docker image
          command: |
            docker build -t $DOCKERHUB_USERNAME/rolling-tutorial-app:$CIRCLE_SHA1 ./app
      - run:
          name: Push to Docker Hub
          command: |
            echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
            docker push $DOCKERHUB_USERNAME/rolling-tutorial-app:$CIRCLE_SHA1

  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: Install kubectl
          command: |
            KUBECTL_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
            curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
            chmod +x kubectl
            sudo mv kubectl /usr/local/bin/kubectl
      - run:
          name: Configure kubeconfig
          command: |
            mkdir -p ~/.kube
            echo "$KUBECONFIG_DATA" | tr -d '[:space:]' | base64 --decode > ~/.kube/config
            chmod 600 ~/.kube/config
      - run:
          name: Install envsubst
          command: sudo apt-get update -qq && sudo apt-get install -y gettext-base
      - run:
          name: Apply Kubernetes manifests
          command: |
            export IMAGE_TAG=$CIRCLE_SHA1
            envsubst < k8s/deployment.yml | kubectl apply -f -
            kubectl apply -f k8s/service.yml
      - run:
          name: Monitor rollout
          command: kubectl rollout status deployment/rolling-tutorial-app --timeout=5m

workflows:
  build-deploy:
    jobs:
      - build-and-push:
          context: rolling-tutorial
      - deploy:
          requires:
            - build-and-push
          context: rolling-tutorial
          filters:
            branches:
              only: main

A few things to note:

  • $CIRCLE_SHA1 as the image tag — the full Git commit hash, so every image traces back to the exact commit that produced it.
  • deploy runs only on main — pull request builds run build-and-push without triggering a deploy.
  • kubectl rollout status --timeout=5m — blocks until the rollout completes or times out. If a new pod never passes its readiness probe, the command exits non-zero and the job fails.

The next step adds deploy markers to this config for tracking deployments in CircleCI’s Deploys dashboard.

Step 4 — Add deploy markers for visibility

Deploy markers record metadata about each deployment (what version, when, which pipeline, which commit) and surface it in CircleCI’s Deploys dashboard as a timeline grouped by component and environment.

Enroll the project

Before markers register, the project needs to be connected to the Deploys feature. In CircleCI, open the project and click Deploys in the left sidebar. The setup wizard walks through installing the GitHub App integration and creating the initial environment. This is a one-time step per project.

CircleCI Deploys setup wizard with GitHub App integration and environment creation steps

The wizard includes an AI-assisted option that reads the existing .circleci/config.yml and proposes where to add deploy marker commands. It opens a pull request with the changes. You can optionally run this tool on your own config file. We used it for the example in this tutorial.

Add the marker steps

Add these steps to the deploy job in .circleci/config.yml. The release plan step goes before any deployment work; the release update and release log steps go after the rollout completes.

      # Add before "Install kubectl"
      - run:
          name: Plan deployment
          command: |
            circleci run release plan "${CIRCLE_JOB}" \
              --environment-name="production" \
              --component-name="rolling-tutorial-app" \
              --target-version="$CIRCLE_SHA1"

      # Add after "Monitor rollout"
      - run:
          name: Update deployment status to running
          command: circleci run release update "${CIRCLE_JOB}" --status=RUNNING
      - run:
          name: Log deploy marker
          command: |
            circleci run release log \
              --component-name=rolling-tutorial-app \
              --environment-name=production \
              --target-version=$CIRCLE_SHA1
      - run:
          name: Update deployment status to success
          command: circleci run release update "${CIRCLE_JOB}" --status=SUCCESS
          when: on_success
      - run:
          name: Update deployment status to failed
          command: circleci run release update "${CIRCLE_JOB}" --status=FAILED
          when: on_fail

release plan registers the deployment as “planned” before any work starts. After the rollout, release update --status=RUNNING marks it active, then release log records the event. Finally, release update closes the lifecycle with either SUCCESS or FAILED depending on whether the job passed. The --component-name and --environment-name control how deployments are grouped in the dashboard; --target-version identifies the specific deploy.

The deploy markers documentation covers both approaches in detail: release plan/release update for full lifecycle tracking, and release log for simple event logging.

After a few deploys, the Deploys dashboard shows a timeline for the rolling-tutorial-app component in the production environment. Each entry links back to the pipeline and commit. From the same view, rollbacks can be triggered directly.

CircleCI Deploys dashboard showing rolling-tutorial-app deployment timeline in production

The complete .circleci/config.yml with deploy markers included is in the sample repository.

Step 5 — Trigger a rolling update

Push a change to main to see the rolling update in action. Edit something visible in app/index.js (the title or version string), then commit and push:

git add app/index.js
git commit -m "Update app title to trigger rolling update"
git push origin main

In CircleCI, open the project’s Pipelines view. The build-deploy workflow starts automatically. Both jobs should complete green.

CircleCI Pipelines view showing build-and-push and deploy jobs completed successfully

Once the pipeline finishes, get the external IP of the Service and confirm the new version is live:

kubectl get svc rolling-tutorial-app
# Note the EXTERNAL-IP column

curl http://<EXTERNAL-IP>/api
# {"version":"<commit-sha>","hostname":"rolling-tutorial-app-...","timestamp":"..."}

curl http://<EXTERNAL-IP>/health
# {"status":"ok"}

Opening the external IP in a browser shows a simple status page confirming that the deployment worked:

Browser showing sample app status page confirming successful rolling deployment

To watch a rollout live the next time a change is pushed:

kubectl rollout status deployment/rolling-tutorial-app

This prints each stage of the rolling update as pods are created and terminated, ending with deployment "rolling-tutorial-app" successfully rolled out.
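The two-ReplicaSet mechanism described earlier is also visible directly. While a deploy is in flight, listing the Deployment's ReplicaSets shows the old and new sets scaling in opposite directions (run against the cluster during a rollout):

```shell
# List the Deployment's ReplicaSets; during a rollout the DESIRED and
# READY counts show the new one scaling up as the old one scales down.
kubectl get rs -l app=rolling-tutorial-app

# Show which image each ReplicaSet runs, to tell old from new.
kubectl get rs -l app=rolling-tutorial-app \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.template.spec.containers[0].image
```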

If the rollout stalls, run kubectl describe pod <new-pod-name> and check the Events section for readiness probe failures. If pods are in CrashLoopBackOff, run kubectl logs <pod-name> to see the crash reason. With maxUnavailable: 0, old pods keep serving traffic in both cases.

Verifying and troubleshooting the rollout

A rollout isn’t finished when all pods are running. Comparing error rates and latency percentiles (p50, p95, p99) from the five minutes before and after the deploy catches regressions that functional tests miss. CircleCI’s Deploys dashboard records deploy timing, so correlating a release with a metric change in Prometheus, Grafana, or Datadog is straightforward.

On the Kubernetes side, check for pod restarts after a deploy:

kubectl get pods -l app=rolling-tutorial-app

A rising RESTARTS count means a pod is crash-looping. Run kubectl describe pod <pod-name> to see probe failures or OOM kills in the Events section, or kubectl logs <pod-name> to see the application’s crash output.

The most common rollout issues:

  • Rollout stuck in progress. Usually a misconfigured readiness probe: the path is wrong, the port doesn’t match, or initialDelaySeconds is too short. kubectl describe pod <new-pod-name> shows probe failure events.
  • CrashLoopBackOff. The new image crashes on startup. With maxUnavailable: 0, old pods keep serving traffic while the new pod fails. Check kubectl logs <pod-name> for the crash reason.
  • Mixed versions serving requests. During any rolling update, both old and new pods receive traffic simultaneously. This is expected, but it causes errors if the change isn’t backward-compatible (a renamed API field, a removed response key, an incompatible session format). Rolling deployments require backward-compatible changes. For changes that can’t meet that constraint, blue-green deployment is the better fit.

Step 6 — Set up a rollback pipeline

Rolling back a Kubernetes Deployment re-activates the previous ReplicaSet through the same rolling update process. The rollback pipeline in .circleci/rollback.yml automates this from the CircleCI Deploys dashboard.

version: 2.1

jobs:
  rollback:
    docker:
      - image: cimg/base:stable
    environment:
      COMPONENT_NAME: << pipeline.deploy.component_name >>
      ENVIRONMENT_NAME: << pipeline.deploy.environment_name >>
      TARGET_VERSION: << pipeline.deploy.target_version >>
    steps:
      - run:
          name: Install kubectl
          command: |
            KUBECTL_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
            curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
            chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
      - run:
          name: Configure kubeconfig
          command: |
            mkdir -p ~/.kube
            echo "$KUBECONFIG_DATA" | tr -d '[:space:]' | base64 --decode > ~/.kube/config
            chmod 600 ~/.kube/config
      - run:
          name: Plan rollback release
          command: |
            circleci run release plan "${CIRCLE_JOB}" \
              --component-name=${COMPONENT_NAME} \
              --environment-name=${ENVIRONMENT_NAME} \
              --target-version=${TARGET_VERSION} \
              --rollback
      - run:
          name: Roll back deployment
          command: kubectl rollout undo deployment/rolling-tutorial-app
      - run:
          name: Monitor rollback rollout
          command: kubectl rollout status deployment/rolling-tutorial-app --timeout=5m
      - run:
          name: Update rollback status to SUCCESS
          command: circleci run release update "${CIRCLE_JOB}" --status=SUCCESS
          when: on_success
      - run:
          name: Update rollback status to FAILED
          command: circleci run release update "${CIRCLE_JOB}" --status=FAILED
          when: on_fail

  cancel-rollback:
    docker:
      - image: cimg/base:stable
    steps:
      - run:
          name: Update rollback status to CANCELED
          command: circleci run release update "${CIRCLE_JOB}" --status=CANCELED

workflows:
  rollback:
    jobs:
      - rollback:
          context: rolling-tutorial
      - cancel-rollback:
          context: rolling-tutorial
          requires:
            - rollback:
              - canceled

A few things to note:

  • pipeline.deploy.* parameters — when CircleCI triggers a rollback from the Deploys dashboard, it injects the component name, environment, and target version automatically.
  • release plan --rollback — registers this as a rollback in the dashboard, distinguishing it from a regular deploy.
  • cancel-rollback job — fires only when the rollback job is explicitly canceled, so stale “in progress” entries don’t accumulate in the dashboard.

Connect the rollback pipeline

First, create the pipeline definition: in CircleCI, go to Project Settings > Project Setup > Add Pipeline. Set the config source and checkout source to the project repository, and the config filepath to .circleci/rollback.yml. The pipeline must be created here (not just by adding the file to the repo) so that CircleCI can trigger it and inject the pipeline.deploy.* parameters.

CircleCI Project Settings with Add Pipeline form pointing to .circleci/rollback.yml

Then, go to Project Settings > Deploys and select the new pipeline in the Rollback Pipeline dropdown.

CircleCI Project Settings Deploys tab with Rollback Pipeline dropdown

After this, rollbacks can be triggered directly from the Deploys dashboard. Review the rollback pipeline documentation for the full setup procedures.

Trigger a rollback

To roll back, open the Deploys dashboard, find the current deployment for rolling-tutorial-app, and click Rollback. CircleCI triggers the rollback pipeline, injecting the previous version’s metadata automatically.

CircleCI Deploys dashboard with Rollback button for the rolling-tutorial-app component

Once the pipeline completes, verify the previous version is running again:

curl http://<EXTERNAL-IP>/api
# {"version":"<previous-commit-sha>","hostname":"rolling-tutorial-app-...","timestamp":"..."}

The Deploys dashboard shows the rollback as a separate entry, linked to the pipeline that ran it.

When to use rolling deployments (and when not to)

Rolling deployments work well when the change is backward-compatible and infrastructure cost matters. They don’t require a duplicate environment, they work with standard Kubernetes and no additional tooling, and rollback completes in seconds to minutes.

They’re the wrong choice when:

  • The change includes breaking API or schema changes that can’t coexist with the previous version during a partial rollout.
  • Instant rollback is required (re-running the update in reverse takes time, unlike blue-green where rollback is a traffic switch).
  • Percentage-based traffic control is needed for canary testing. See Progressive delivery with CircleCI and Argo Rollouts.

For a comparison of rolling, blue-green, canary, and other strategies, see Deployment strategies: types, trade-offs, and how to choose.

Conclusion

This tutorial covered a rolling deployment pipeline from image build through rollback. The same CircleCI pipeline structure supports blue-green, canary, and progressive delivery workflows as an application’s deployment needs grow.

To start building: sign up for a free CircleCI account, connect a repository, and use the pipeline config from this tutorial as a starting point. The deployment documentation covers additional strategies and configuration options.