Building CI/CD pipelines using dynamic config

Developer Advocate, CircleCI

Creating robust, manageable, and reusable functionality is a big part of my job as a CI/CD engineer. Recently, I wrote about managing reusable pipeline configuration by adopting and implementing pipeline variables within pipeline configuration files. As I showed in that tutorial, pipeline variables and orbs add some flexibility to this process, but they are still somewhat limited. The nature of pipeline configuration files sometimes restricts developers who want a solution that fits their specific build processes. Those restrictions can lead developers to create workarounds, like executing scripts in pre-commit hooks to generate config before a commit, or using jobs that trigger pipeline runs via the API with pipeline parameters set on the fly. Some of these solutions achieve the desired effect, but they can be inefficient and overly complex, require unfamiliar workarounds, or leave edge cases that are not easily solved.
To address this need, CircleCI has released dynamic configuration. Dynamic config gives you the ability to natively inject dynamism into pipeline configurations: you can use scripts to generate and execute a separate config file at runtime. It is a big step forward in flexibility, letting you customize which sections of the config you want to test and validate. Dynamic config also lets you maintain multiple config.yml files in a single code repository and selectively identify and execute the one you need. This feature offers a wide range of powerful capabilities for specifying and executing a variety of dynamic pipeline workloads.
In this post, I will walk you through how to implement dynamic configuration by creating a config file that is not in the root configuration folder. You’ll build a complete CI/CD pipeline that deploys a Node.js application to DigitalOcean Kubernetes using Terraform Cloud.
Prerequisites
Before you begin, you’ll need a:
- CircleCI account
- Docker Hub account
- DigitalOcean account
- Terraform Cloud account
- Snyk account
- Basic knowledge of CI/CD, Docker, and Kubernetes
Getting started with the example project
I will be using this code repo as the example in this post. You can either fork the project or import it to create your own version and follow along.
Testing the application locally
Before setting up the CI/CD pipeline, verify that the example application works correctly by running the tests locally:
git clone https://github.com/CIRCLECI-GWP/circleci-dynamic-config-project.git
cd circleci-dynamic-config-project
# Install dependencies
npm install
# Run the unit tests
npm test
Your output should be similar to:
> nodejs-circleci@0.0.1 test
> mocha
Node server is running on port: 5000
Welcome to CI/CD Server
GET /
✓ returns status code 200
welcomeMessage
✓ Validate Message
2 passing (26ms)
Project structure
The example project uses this structure:
├── .circleci/
│   └── config.yml                 # Setup workflow configuration
├── app.js                         # Node.js application
├── package.json                   # Node.js dependencies
├── Dockerfile                     # Container configuration
├── scripts/
│   └── generate-pipeline-config   # Dynamic config generator
├── terraform/
│   ├── do_create_k8s/             # Kubernetes cluster creation
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── output.tf
│   └── do_k8s_deploy_app/         # Application deployment
│       ├── main.tf
│       ├── variables.tf
│       ├── output.tf
│       └── deployment.tf
└── test/
    ├── test.js                    # Unit tests
    └── smoke_test                 # End-to-end test script
Connect CircleCI to your repository
After forking or cloning the project to your GitHub account, you need to connect it to CircleCI. Log in to CircleCI and go to the Projects dashboard. Find your circleci-dynamic-config-project repository in the project list and click Set Up next to it.
CircleCI will detect the .circleci/config.yml file in your repository. Click Set Up Project to start the initial pipeline run.
The pipeline will start immediately, but it will fail on the first run because the necessary environment variables haven’t been configured yet. This is completely expected!
Don’t worry about this initial failure; it’s a normal part of the setup process. It actually enables you to access the project settings so you can configure the required environment variables.
Setting up dynamic configuration (optional)
Note: Projects created after December 1st, 2023 have dynamic config enabled by default. Most projects will already have this enabled, so you can skip this step.
For older projects that need dynamic config enabled manually:
- Go to the Projects dashboard in the CircleCI application
- Select your project
- Select Project Settings in the upper-right corner
- On the left-hand panel, select Advanced
- Towards the bottom, toggle the switch for “Enable dynamic config using setup workflows” to the “on” position
Setting up required environment variables
After the initial pipeline failure, you need to configure the required environment variables in CircleCI. They are:
- Docker Hub credentials
- DigitalOcean API token
- Terraform Cloud API token
- Snyk token
Docker Hub credentials
- DOCKER_LOGIN - Your Docker Hub username
- DOCKER_PASSWORD - Your Docker Hub password or access token

To create an access token:
- Go to Docker Hub and click Account Settings → Security → Access Tokens
- Create a new token with Read/Write permissions
- Use your username for DOCKER_LOGIN and the token for DOCKER_PASSWORD
DigitalOcean API token
- DIGITAL_OCEAN_TOKEN - DigitalOcean API token

To create the token:
- Go to the DigitalOcean Control Panel
- Click Generate New Token, give it a name, then click Full Access → Create
- Copy the token value
Terraform Cloud API token
- TERRAFORM_TOKEN - Terraform Cloud API token

To create the token:
- Go to Terraform Cloud
- Click Account Settings → Tokens
- Create an API token, then copy the token value
Snyk token
- SNYK_TOKEN - Snyk API token for security scanning

To find the token:
- Go to Snyk
- Click Account Settings → Auth Token
- Copy the token value
Setting up Terraform Cloud
Before running the pipeline, you need to create the required Terraform Cloud workspaces:
Create an organization:
- Go to Terraform Cloud
- Create a new organization named CircleCI-Author-Program (or update the organization name in your Terraform files)

Create workspaces:
- Create workspace iac-do (for Kubernetes cluster creation)
- Create workspace deploy-iac-do (for application deployment)
- Set both workspaces to use “API-driven workflow”
Adding environment variables to CircleCI
After your first pipeline run fails, you can go to the project settings to configure the environment variables. Go to the Projects dashboard in the CircleCI application and select your project (it should now be visible after the initial run).
Click Project Settings in the upper-right corner, then select Environment Variables from the left panel. Add each of the environment variables listed above using the Add Environment Variable button.
Note: The project must have run at least once (even if it fails) before you can access the Project Settings to add environment variables.
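Before re-running the pipeline, it can be useful to confirm (locally, or in an early pipeline step) that all of the variables above are actually set. Here is a small POSIX shell sketch; the function name is mine, not something provided by CircleCI or the example project:

```shell
# check_required_vars: report which of the variables listed above are unset.
# The function name is illustrative; CircleCI itself has no such built-in.
check_required_vars() {
  missing=""
  for var in DOCKER_LOGIN DOCKER_PASSWORD DIGITAL_OCEAN_TOKEN TERRAFORM_TOKEN SNYK_TOKEN; do
    eval "value=\${$var:-}"   # indirect variable lookup, POSIX-sh compatible
    if [ -z "$value" ]; then
      missing="$missing $var"
    fi
  done
  if [ -n "$missing" ]; then
    echo "Missing:$missing"
    return 1
  fi
  echo "All required environment variables are set"
}
```

Running `check_required_vars` before kicking off a build turns a mid-pipeline credential failure into an immediate, readable error.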
Successful pipeline execution
Once all environment variables are configured, re-run the pipeline:
- Monitor in CircleCI: Go to your CircleCI dashboard to watch the pipeline execute
- Approve Destruction: After the smoke tests pass, approve the destruction of resources to avoid ongoing costs
Expected output
When the pipeline runs successfully (after environment variables are configured), your output should contain:
- Security scans complete without critical issues
- Tests pass with stored artifacts
- Docker image built and pushed to Docker Hub
- Kubernetes cluster created on DigitalOcean
- Application deployed with LoadBalancer service
- Smoke test validates the application is accessible
- Manual approval for cleanup
- Resources destroyed to prevent charges
Understanding the set-up workflow
The set-up workflow is defined in .circleci/config.yml
:
# This file demonstrates how to leverage dynamic configuration to execute a separate config file using scripts.
version: 2.1
setup: true
orbs:
  continuation: circleci/continuation@2.0.0
jobs:
  generate-config:
    executor: continuation/default
    steps:
      - checkout
      - run:
          name: Generate Pipeline generated_config.yml file
          command: |
            # The generate script has 2 arguments: 1) Terraform Version 2) DigitalOcean CLI Version
            ./scripts/generate-pipeline-config "0.14.5" "1.124.0" # Terraform CLI and DigitalOcean CLI versions to install
      - continuation/continue:
          parameters: "{}"
          configuration_path: configs/generated_config.yml
workflows:
  setup-workflow:
    jobs:
      - generate-config
Key elements of this configuration:
- setup: true - Makes this a dynamic config file that uses setup workflows
- continuation orb - Enables orchestration of your primary configurations
- generate-config job - Executes the script that generates the dynamic configuration
- continuation/continue - Continues the pipeline with the generated configuration file
The script generate-pipeline-config takes two arguments:
- Terraform version to install (0.14.5)
- DigitalOcean CLI version to install (1.124.0)
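A rough sketch of how a generator script like this can consume its version arguments and interpolate them into the generated YAML (the function name and the minimal `parameters` block here are illustrative, not the actual script's contents):

```shell
# Hypothetical sketch of the generator's argument handling; the real
# scripts/generate-pipeline-config emits a full pipeline, not just parameters.
generate_pipeline_config() {
  terraform_version="$1"
  doctl_version="$2"
  mkdir -p configs
  # Unquoted EOF so the shell expands the version variables into the YAML.
  # Heredoc body and terminator must start at column 0.
  cat << EOF > configs/generated_config.yml
version: 2.1
parameters:
  terraform-version:
    type: string
    default: "${terraform_version}"
  doctl-version:
    type: string
    default: "${doctl_version}"
EOF
  echo "Generated config for Terraform ${terraform_version}, doctl ${doctl_version}"
}
```

Calling `generate_pipeline_config "0.14.5" "1.124.0"` writes configs/generated_config.yml with both versions baked in, which is the same shape of interpolation the real script performs.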
The dynamic configuration generator
The scripts/generate-pipeline-config script creates a comprehensive CI/CD pipeline. The key sections are:
- Modern container images and security
- Enhanced Terraform Cloud integration
- Kubernetes authentication handling
Modern container images and security
The generated configuration uses updated, secure container images:
cat << EOF > configs/generated_config.yml
version: 2.1
orbs:
  docker: circleci/docker@2.8.2
  node: circleci/node@7.1.0
  snyk: snyk/snyk@2.3.0
  terraform: circleci/terraform@3.6.0
jobs:
  scan_app:
    docker:
      - image: cimg/node:24.0.2 # Updated to Node.js 24
Enhanced Terraform Cloud integration
The script includes sophisticated Terraform Cloud workspace management via API:
# Get workspace ID from workspace name
WORKSPACE_ID=\$(curl -s --header "Authorization: Bearer \$TERRAFORM_TOKEN" \
  https://app.terraform.io/api/v2/organizations/CircleCI-Author-Program/workspaces/\$TF_CLUSTER_WS | \
  jq -r '.data.id')

if [ "\$WORKSPACE_ID" = "null" ] || [ -z "\$WORKSPACE_ID" ]; then
  echo "ERROR: Could not find workspace '\$TF_CLUSTER_WS' in organization 'CircleCI-Author-Program'"
  exit 1
fi

# Update cluster_name variable
VAR_ID=\$(curl -s --header "Authorization: Bearer \$TERRAFORM_TOKEN" \
  https://app.terraform.io/api/v2/workspaces/\$WORKSPACE_ID/vars | \
  jq -r ".data // [] | .[] | select(.attributes.key==\"cluster_name\") | .id")
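You can exercise that jq filter locally against a hand-written sample of the vars API response (the variable IDs and values below are made up for illustration):

```shell
# Made-up sample matching the shape of the Terraform Cloud vars endpoint response
SAMPLE_RESPONSE='{"data":[{"id":"var-abc123","attributes":{"key":"cluster_name","value":"old-cluster"}},{"id":"var-def456","attributes":{"key":"region","value":"nyc1"}}]}'

# The same jq filter the generated script uses: find the variable ID by key
VAR_ID=$(echo "$SAMPLE_RESPONSE" | jq -r '.data // [] | .[] | select(.attributes.key=="cluster_name") | .id')
echo "$VAR_ID"   # prints var-abc123
```

The `// []` fallback keeps the filter from failing when a workspace has no variables yet, which is why the generated script can safely decide between creating and updating.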
Kubernetes authentication handling
The script properly handles Kubernetes authentication for Terraform:
# Extract Kubernetes Cluster Information
export K8S_CLUSTER_ENDPOINT=\$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}')

# Keep the certificate in base64 format as expected by the Kubernetes provider
export K8S_CLUSTER_CA_CERTIFICATE=\$(kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')

# Create service account token for Terraform
if ! kubectl get secret tf-admin-token -n kube-system >/dev/null 2>&1; then
  kubectl create secret generic tf-admin-token \
    --namespace kube-system \
    --type kubernetes.io/service-account-token \
    --dry-run=client -o yaml | \
    kubectl annotate -f - kubernetes.io/service-account.name=tf-admin --local -o yaml | \
    kubectl apply -f -
fi
Pipeline workflow
The generated pipeline executes the following workflow:
- scan_app - Performs security scanning of the application code
- scan_push_docker_image - Builds, scans, and pushes the Docker image to the registry
- run_tests - Executes unit tests and stores results
- create_do_k8s_cluster - Creates a Kubernetes cluster on DigitalOcean using Terraform
- deploy_to_k8s - Configures Kubernetes authentication and prepares deployment variables
- remote_terraform_apply - Deploys the application to Kubernetes using Terraform Cloud
- smoketest_k8s_deployment - Validates the deployment with end-to-end tests
- approve_destroy - Manual approval step for cleanup
- destroy_k8s_cluster - Cleans up all resources
Workflow dependencies
The jobs are orchestrated with proper dependencies:
workflows:
  scan_deploy:
    jobs:
      - scan_app
      - scan_push_docker_image
      - run_tests
      - create_do_k8s_cluster
      - deploy_to_k8s:
          requires:
            - create_do_k8s_cluster
            - scan_push_docker_image
      - remote_terraform_apply:
          requires:
            - deploy_to_k8s
      - smoketest_k8s_deployment:
          requires:
            - remote_terraform_apply
      - approve_destroy:
          type: approval
          requires:
            - smoketest_k8s_deployment
      - destroy_k8s_cluster:
          requires:
            - approve_destroy
Enhanced Terraform configuration
The Terraform configuration has been updated for better security and compatibility using:
- Kubernetes provider configuration
- Secure Docker configuration
Kubernetes provider configuration
In terraform/do_k8s_deploy_app/main.tf:
provider "digitalocean" {
  token = var.do_token
}

provider "kubernetes" {
  host                   = var.k8s_cluster_endpoint
  token                  = var.k8s_cluster_token
  cluster_ca_certificate = base64decode(var.k8s_cluster_ca_certificate)
}
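Note that the CA certificate travels through the pipeline base64-encoded (exactly as kubectl emits it) and is only decoded by Terraform's base64decode(). A quick local sanity check of that round trip, using placeholder data rather than a real certificate:

```shell
# The kubeconfig stores certificate-authority-data base64-encoded;
# Terraform's base64decode() reverses it. Placeholder data only.
RAW="placeholder-ca-certificate"
ENCODED=$(printf '%s' "$RAW" | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"   # prints placeholder-ca-certificate
```

This is why the generator script deliberately does not decode K8S_CLUSTER_CA_CERTIFICATE before handing it to Terraform: decoding it twice would corrupt the value.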
Secure Docker configuration
The Dockerfile has been updated with security best practices:
FROM node:24-alpine
# Create app directory with non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
WORKDIR /usr/src/app
# Copy package files and install dependencies
COPY package*.json ./
RUN npm ci --omit=dev
# Copy application code and change ownership
COPY . .
RUN chown -R nextjs:nodejs /usr/src/app
USER nextjs
EXPOSE 5000
CMD [ "node", "app.js" ]
Key benefits of dynamic configuration
These benefits include:
- Conditional pipeline generation
- Version and environment injection
- Multiple configuration management
Conditional pipeline generation
Dynamic configuration allows you to generate different pipelines based on runtime conditions, branch names, or file changes. This eliminates the need for complex conditional logic within static YAML files.
Version and environment injection
You can dynamically inject version numbers, environment-specific configurations, and tool versions directly into your pipeline configuration, making it truly adaptable to your development workflow.
Multiple configuration management
Maintain multiple pipeline configurations in a single repository and selectively execute the appropriate one based on your needs, which is perfect for microservices or multi-environment deployments.
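One way to sketch that selection step in shell (the per-target file naming here is hypothetical; the example project generates a single configs/generated_config.yml):

```shell
# Hypothetical helper: pick a per-target config file if one exists in
# configs/, otherwise fall back to the default generated config.
select_config() {
  target="$1"
  candidate="configs/${target}_config.yml"
  if [ -f "$candidate" ]; then
    echo "$candidate"
  else
    echo "configs/generated_config.yml"
  fi
}
```

The setup workflow could then pass `$(select_config "$DEPLOY_TARGET")` as the continuation orb's configuration_path.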
Extending dynamic configuration
This pattern opens up powerful possibilities for advanced CI/CD workflows:
Multi-environment deployments - Generate different configurations for development, staging, and production environments with appropriate resource sizing and security controls.
Microservices orchestration - Detect which services have changed and generate pipelines that only build and deploy the affected components.
Feature-driven pipelines - Use feature flags or branch patterns to conditionally include or exclude specific deployment steps, testing phases, or security scans.
Matrix builds - Dynamically generate configurations for testing across multiple language versions, platforms, or dependency combinations.
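For instance, the change-detection idea behind microservices orchestration can start from a tiny filter that maps changed file paths to top-level service directories (a sketch, not part of the example project):

```shell
# Hypothetical sketch: reduce a list of changed file paths to the set of
# top-level directories (services) they touch. In a real pipeline you would
# feed it from: git diff --name-only HEAD~1 HEAD | changed_services
# Note: files at the repository root pass through unchanged.
changed_services() {
  cut -d/ -f1 | sort -u
}
```

The generator script could then emit build-and-deploy jobs only for the directories this filter returns.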
For example, you could enhance the generate-pipeline-config script to adapt based on your branch strategy:
# Example: Generate different configs based on branch
if [ "$CIRCLE_BRANCH" = "main" ]; then
  ENVIRONMENT="production"
  CLUSTER_SIZE="large"
else
  ENVIRONMENT="staging"
  CLUSTER_SIZE="small"
fi
Conclusion
Dynamic config gives developers more flexibility to create tailored CI/CD pipelines that execute their unique software development processes. In this tutorial, you’ve learned how to:
- Set up dynamic configuration using setup workflows
- Generate complex pipeline configurations with shell scripts
- Integrate with modern cloud services (Terraform Cloud, DigitalOcean Kubernetes)
- Implement security best practices throughout the pipeline
- Create a production-ready deployment workflow
This pattern can be accomplished with any language, framework, or stack. While you used Bash for this example, you could use Python, JavaScript, or any other language to generate your dynamic configurations.
The combination of CircleCI’s dynamic configuration with Infrastructure as Code practices provides a powerful foundation for scalable, maintainable CI/CD pipelines.
The complete source code for this example is available on GitHub.