Simultaneous multi-cloud deployment to AWS and GCP with CircleCI
Fullstack Developer and Tech Author
AWS recently experienced a significant outage. The outage took down major services, including parts of McDonald’s mobile ordering system, some Netflix features, and many other applications that relied solely on AWS infrastructure. This event perfectly illustrates why relying on just one cloud platform can be risky.
Beyond outage resilience, multi-cloud strategies address several other real-world challenges. Imagine this scenario: Your startup just landed a major enterprise client who requires their data to stay in specific regions for compliance reasons. Your current single-cloud setup no longer meets those requirements. Your solution is to deploy the same application to AWS (for US operations) and Google Cloud Platform (for EU operations) while maintaining consistency, security, and your peace of mind.
Companies like Spotify, Netflix, and many financial institutions now use multi-cloud strategies. These strategies work not just for redundancy, but for leveraging the best features from each provider while avoiding vendor lock-in.
Instead of managing separate deployment processes for each cloud provider, you can create a unified CircleCI pipeline that deploys your application consistently across multiple clouds with a single configuration.
In this tutorial, you’ll build a real multi-cloud deployment pipeline that deploys a Node.js application to AWS ECS Fargate and Google Cloud Run at the same time. You’ll end up with a working system that demonstrates the power of cloud-agnostic CI/CD.
What you will learn
You’ll discover how to create a unified deployment pipeline that handles multiple cloud providers seamlessly. You’ll build a containerized Node.js application and deploy it to both AWS ECS Fargate and Google Cloud Run using a single CircleCI configuration. You’ll learn to manage cloud-specific authentication, handle environment-specific configurations, implement parallel deployments, and monitor deployments across multiple providers. The tutorial will also cover strategies for handling deployment failures and maintaining consistency between cloud environments.
Prerequisites
To follow along with this tutorial, make sure you have the following tools and accounts set up:
- Node.js version 18 or higher installed on your computer.
- Docker installed locally for building containers.
- GitHub account for hosting your code repository.
- CircleCI account for running CI/CD pipelines.
- AWS account with billing enabled and:
  - An IAM user with programmatic access and policies for ECS, ECR, and CloudWatch.
  - Admin access for one-time ECS task execution role setup.
  - Your AWS Account ID and preferred AWS region.
- Google Cloud Platform account with billing enabled and:
  - GCP project with billing linked.
  - Required APIs enabled: Cloud Run Admin API, Google Container Registry API, and Cloud Build API.
  - Service account with Artifact Registry Create-on-Push Repository Administrator, Artifact Registry Writer, Cloud Run Admin, Service Account User, and Storage Admin roles.
  - The service account JSON key downloaded.
  - Your GCP project ID and preferred GCP region.
- Basic familiarity with Docker, CI/CD concepts, and cloud services.
The multi-cloud advantage
Before you get started with the implementation, take a minute to think about why multi-cloud matters. Traditional single-cloud deployments create vendor lock-in and single points of failure. Multi-cloud strategies provide vendor redundancy, geographic optimization, cost flexibility, and the ability to leverage different provider capabilities.
By deploying to multiple cloud providers, you gain operational resilience and strategic flexibility. You can distribute workloads based on regional requirements, take advantage of competitive pricing across providers, and avoid being locked into a single vendor’s ecosystem. This approach also enables you to meet diverse compliance requirements and optimize for specific use cases without architectural constraints.
Setting up the demo application
Start with a simple but realistic Node.js application that you will deploy across both clouds. This application includes health checks, environment-specific configuration, and logging, all essential for production multi-cloud deployments.
Project structure
Create a new directory for your multi-cloud demo:
mkdir multi-cloud-demo
cd multi-cloud-demo
Creating the application
Your first step is to build a Node.js web application with proper health monitoring. Create a package.json file and add this content:
{
"name": "multi-cloud-demo",
"version": "1.0.0",
"description": "Demo app for multi-cloud deployment",
"main": "server.js",
"scripts": {
"start": "node server.js",
"test": "jest",
"dev": "nodemon server.js"
},
"dependencies": {
"express": "^4.18.2",
"cors": "^2.8.5"
},
"devDependencies": {
"jest": "^29.7.0",
"supertest": "^6.3.3",
"nodemon": "^3.0.1"
}
}
The package.json defines your Node.js project with Express for web serving and CORS for cross-origin requests. The test dependencies include Jest for unit testing and Supertest for HTTP testing, while nodemon enables development auto-reloading.
Create the main application in server.js and paste this code:
const express = require("express");
const cors = require("cors");
const app = express();
const port = process.env.PORT || 3000;
// Middleware
app.use(cors());
app.use(express.json());
// Environment info
const cloudProvider = process.env.CLOUD_PROVIDER || "unknown";
const deploymentTime = process.env.DEPLOYMENT_TIME || new Date().toISOString();
const appVersion = process.env.APP_VERSION || "1.0.0";
// Routes
app.get("/", (req, res) => {
res.json({
message: "Multi-Cloud Demo Application",
cloud: cloudProvider,
version: appVersion,
deployedAt: deploymentTime,
timestamp: new Date().toISOString(),
});
});
app.get("/health", (req, res) => {
res.status(200).json({
status: "healthy",
cloud: cloudProvider,
uptime: process.uptime(),
timestamp: new Date().toISOString(),
});
});
app.get("/info", (req, res) => {
res.json({
environment: {
nodeVersion: process.version,
platform: process.platform,
cloud: cloudProvider,
region: process.env.CLOUD_REGION || "unknown",
},
deployment: {
version: appVersion,
deployedAt: deploymentTime,
},
});
});
// Error handling
app.use((err, req, res, next) => {
console.error("Error:", err.message);
res.status(500).json({
error: "Internal Server Error",
cloud: cloudProvider,
});
});
// 404 handler
app.use((req, res) => {
res.status(404).json({
error: "Not Found",
path: req.path,
cloud: cloudProvider,
});
});
// Only start server if this file is run directly
if (require.main === module) {
app.listen(port, "0.0.0.0", () => {
console.log(`Multi-cloud demo app running on port ${port}`);
console.log(`Cloud Provider: ${cloudProvider}`);
console.log(`Version: ${appVersion}`);
});
}
module.exports = app;
This Express application creates a cloud-aware web server with three key endpoints: a root endpoint that displays deployment information, a health check endpoint for monitoring, and an info endpoint that provides environment details. The app uses environment variables to identify which cloud provider it’s running on and includes proper error handling.
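Before moving on to tests and containers, you can sanity-check the app locally. This is a quick smoke test, assuming you are in the project directory with Node.js 18+ installed:
npm install
npm start
# In a second terminal, hit the endpoints
curl http://localhost:3000/health
curl http://localhost:3000/info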
Create tests in test/app.test.js and use this code:
const request = require("supertest");
const app = require("../server");
describe("Multi-Cloud Demo App", () => {
test("GET / returns app info", async () => {
const response = await request(app).get("/").expect(200);
expect(response.body).toHaveProperty("message");
expect(response.body).toHaveProperty("cloud");
expect(response.body).toHaveProperty("version");
});
test("GET /health returns health status", async () => {
const response = await request(app).get("/health").expect(200);
expect(response.body.status).toBe("healthy");
expect(response.body).toHaveProperty("uptime");
});
test("GET /info returns environment info", async () => {
const response = await request(app).get("/info").expect(200);
expect(response.body).toHaveProperty("environment");
expect(response.body).toHaveProperty("deployment");
});
test("GET /nonexistent returns 404", async () => {
await request(app).get("/nonexistent").expect(404);
});
});
These Jest tests verify that your application endpoints work correctly. They test the main route returns proper JSON structure, the health endpoint responds with a healthy status, the info endpoint provides environment data, and 404 errors are handled gracefully.
Containerizing the application
Create a Dockerfile optimized for both AWS and GCP deployments:
# Use official Node.js runtime as base image
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm install --omit=dev
# Copy application code
COPY . .
# Accept build-time metadata from the CI pipeline and expose it to the app
ARG DEPLOYMENT_TIME
ARG APP_VERSION
ENV DEPLOYMENT_TIME=$DEPLOYMENT_TIME \
    APP_VERSION=$APP_VERSION
# Create non-root user for security
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nodejs -u 1001
# Change ownership of app directory
RUN chown -R nodejs:nodejs /app
USER nodejs
# Expose port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD node -e "const http = require('http'); \
const options = { hostname: 'localhost', port: 3000, path: '/health', timeout: 2000 }; \
const req = http.request(options, (res) => { process.exit(res.statusCode === 200 ? 0 : 1); }); \
req.on('error', () => process.exit(1)); \
req.end();"
# Start the application
CMD ["npm", "start"]
This Dockerfile creates a secure, production-ready container using Node.js 18 Alpine. It uses npm install --omit=dev to install only production dependencies (which is more compatible across different npm versions than npm ci --only=production), runs as a non-root user for security, includes a built-in health check, and follows Docker best practices for layer optimization.
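To verify the container before wiring up any pipeline, you can build and run it locally. A minimal sketch; the local tag and the CLOUD_PROVIDER value are arbitrary choices for this test:
# Build the image locally
docker build -t multi-cloud-demo:local .
# Run it, mapping port 3000 and labeling the environment as "local"
docker run --rm -p 3000:3000 -e CLOUD_PROVIDER=local multi-cloud-demo:local
# In another terminal, confirm the health endpoint responds
curl http://localhost:3000/health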
Add a .dockerignore file:
node_modules
npm-debug.log
.git
.gitignore
README.md
.nyc_output
coverage
.env
This .dockerignore file prevents unnecessary files from being copied into the Docker image, reducing build time and image size while keeping sensitive files like .env out of the container.
Setting up cloud infrastructure
Now you can prepare the infrastructure configurations for both AWS and GCP. You will use infrastructure as code to ensure consistency and repeatability across environments. These configuration files define how your containerized application will run on each cloud platform, including resource allocation, networking, and environment variables.
Organize your cloud-specific configurations in separate directories at the root of your project:
multi-cloud-demo/
├── aws/
│ ├── task-definition.json
│ └── ecs-task-execution-role.yaml
├── gcp/
│ └── service.yaml
├── server.js
├── package.json
└── Dockerfile
This structure keeps cloud configurations organized and makes it easy to maintain platform-specific settings without cluttering the main application code.
AWS ECS setup
AWS ECS (Elastic Container Service) requires a task definition that describes how to run your container. This JSON file specifies the Docker image to use, resource requirements, networking configuration, and environment variables. ECS will use this definition to launch and manage your application containers on Fargate.
Create the aws directory and the task definition file aws/task-definition.json:
{
"family": "multi-cloud-demo",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "256",
"memory": "512",
"executionRoleArn": "arn:aws:iam::{AWS_ACCOUNT_ID}:role/ecsTaskExecutionRole",
"containerDefinitions": [
{
"name": "multi-cloud-demo",
"image": "{ECR_REPOSITORY_URI}:latest",
"portMappings": [
{
"containerPort": 3000,
"protocol": "tcp"
}
],
"environment": [
{
"name": "CLOUD_PROVIDER",
"value": "AWS"
},
{
"name": "CLOUD_REGION",
"value": "{AWS_REGION}"
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/multi-cloud-demo",
"awslogs-region": "{AWS_REGION}",
"awslogs-stream-prefix": "ecs"
}
},
"healthCheck": {
"command": [
"CMD-SHELL",
"wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1"
],
"interval": 30,
"timeout": 5,
"retries": 3,
"startPeriod": 10
}
}
]
}
This ECS task definition configures Fargate to run your container with 256 CPU units and 512MB memory. It includes environment variables to identify the cloud provider, CloudWatch logging configuration, and health checks using wget to monitor application availability.
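The values in curly braces ({AWS_ACCOUNT_ID}, {ECR_REPOSITORY_URI}, {AWS_REGION}) are placeholders that the CircleCI pipeline substitutes later in this tutorial. If you want to preview the rendered file locally, here is a sketch using sed with example values; the pipeline performs the real substitution:
export AWS_ACCOUNT_ID=123456789012   # example value
export AWS_DEFAULT_REGION=us-east-1  # example value
sed -e "s/{AWS_ACCOUNT_ID}/$AWS_ACCOUNT_ID/g" \
    -e "s/{AWS_REGION}/$AWS_DEFAULT_REGION/g" \
    -e "s|{ECR_REPOSITORY_URI}|$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/multi-cloud-demo|g" \
    aws/task-definition.json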
GCP Cloud Run setup
Google Cloud Run uses a different approach than AWS ECS. Instead of task definitions, Cloud Run uses Knative service configurations written in YAML. This service definition specifies how Cloud Run should deploy and manage your containerized application, including auto-scaling rules, resource limits, and traffic management.
Cloud Run automatically handles load balancing, scaling to zero when idle, and scaling up based on incoming requests. The configuration below sets up your application with appropriate resource limits and health checks.
Create the gcp directory and the service definition file gcp/service.yaml:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: multi-cloud-demo
annotations:
run.googleapis.com/ingress: all
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/maxScale: "10"
run.googleapis.com/cpu-throttling: "false"
run.googleapis.com/memory: "512Mi"
run.googleapis.com/cpu: "0.25"
spec:
containerConcurrency: 100
containers:
- image: gcr.io/{GCP_PROJECT_ID}/multi-cloud-demo:latest
ports:
- containerPort: 3000
env:
- name: CLOUD_PROVIDER
value: "GCP"
- name: CLOUD_REGION
value: "{GCP_REGION}"
resources:
limits:
cpu: 250m
memory: 512Mi
livenessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 10
periodSeconds: 30
readinessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 5
periodSeconds: 10
This Cloud Run service configuration uses Knative serving to deploy your container with automatic scaling up to 10 instances. It sets resource limits matching your ECS configuration, includes both liveness and readiness probes for health monitoring, and allows unauthenticated access for public web traffic.
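For reference, a roughly equivalent one-off deployment with the gcloud CLI; the project ID and region below are placeholders, and the CircleCI pipeline you build later handles this step through the Cloud Run orb instead:
gcloud run deploy multi-cloud-demo \
  --image gcr.io/your-project-id/multi-cloud-demo:latest \
  --region us-central1 \
  --platform managed \
  --allow-unauthenticated \
  --port 3000 \
  --set-env-vars CLOUD_PROVIDER=GCP,CLOUD_REGION=us-central1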
AWS prerequisites setup
Before creating the CircleCI pipeline, you need to set up the required IAM role for ECS deployments and ensure your IAM user has the necessary permissions. This section builds on the prerequisite that you already have an IAM user with programmatic access. If you haven’t created this user yet, follow the AWS documentation to create an IAM user with programmatic access and note down the Access Key ID and Secret Access Key.
Creating the ECS task execution role
AWS ECS requires a special IAM role to pull container images from ECR and write logs to CloudWatch. The IAM user you created for programmatic access won’t have permission to create IAM roles (following security best practices), so you’ll need administrator privileges for this one-time setup.
You’ll add a CloudFormation template to the aws folder that creates this role automatically. This approach ensures consistent, repeatable infrastructure setup and follows AWS best practices for IAM resource management.
First, create the CloudFormation template file. In your aws directory, create a new file called ecs-task-execution-role.yaml and add this content:
AWSTemplateFormatVersion: "2010-09-09"
Description: "Creates the required ecsTaskExecutionRole for ECS Fargate tasks in the multi-cloud demo"
Resources:
EcsTaskExecutionRole:
Type: AWS::IAM::Role
Properties:
RoleName: ecsTaskExecutionRole
Description: "Allows ECS tasks to pull container images and publish logs"
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- ecs-tasks.amazonaws.com
Action:
- sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
Tags:
- Key: Project
Value: multi-cloud-demo
- Key: Purpose
Value: tutorial
Outputs:
RoleArn:
Description: "ARN of the created ECS Task Execution Role"
Value: !GetAtt EcsTaskExecutionRole.Arn
Export:
Name: !Sub "${AWS::StackName}-EcsTaskExecutionRoleArn"
RoleName:
Description: "Name of the created ECS Task Execution Role"
Value: !Ref EcsTaskExecutionRole
Export:
Name: !Sub "${AWS::StackName}-EcsTaskExecutionRoleName"
This CloudFormation template creates the IAM role with the exact permissions ECS needs to pull container images from ECR and write logs to CloudWatch. The template includes outputs that export the role ARN and name for potential use in other CloudFormation stacks.
Setting up the role using AWS Console
Log into the AWS Console with your root account or an IAM user with administrator privileges. Navigate to the CloudFormation service and click “Create Stack” then “With new resources”. Upload the aws/ecs-task-execution-role.yaml file from your project directory and use multi-cloud-demo-ecs-role as the stack name.
Click through the creation wizard, making sure to acknowledge that CloudFormation might create IAM resources on the review page. The stack creation process typically takes 1-2 minutes to complete. Once the status shows “CREATE_COMPLETE”, your ECS task execution role is ready.
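If you prefer the command line over the console, the same one-time setup can be done with the AWS CLI; this sketch assumes your local credentials have administrator privileges:
aws cloudformation deploy \
  --template-file aws/ecs-task-execution-role.yaml \
  --stack-name multi-cloud-demo-ecs-role \
  --capabilities CAPABILITY_NAMED_IAM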
Granting PassRole permission to your CI user
After creating the ECS task execution role, you need to allow your programmatic access user to “pass” this role to ECS services. This follows the principle of least privilege while giving your CI pipeline the specific permissions it needs.
Go to the IAM service in the AWS Console and find your CircleCI user in the Users section. Click on your user, then go to the Permissions tab and click “Add permissions”. Select “Create inline policy” and switch to the JSON tab.
Replace the default policy with this PassRole policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::*:role/ecsTaskExecutionRole"
}
]
}
Name the policy ECSTaskExecutionPassRole and create it. This policy allows your CI user to tell ECS which IAM role to use without granting full administrative privileges.
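The same inline policy can be attached from the CLI. In this sketch, circleci-deployer is a placeholder for your CI user's name, and passrole-policy.json is assumed to contain the JSON above:
aws iam put-user-policy \
  --user-name circleci-deployer \
  --policy-name ECSTaskExecutionPassRole \
  --policy-document file://passrole-policy.json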
Adding required AWS managed policies
In addition to the PassRole policy, your CircleCI IAM user needs some AWS managed policies to interact with ECS, ECR, and CloudWatch services. Attach these policies to your IAM user:
- AmazonECS_FullAccess provides full access to Amazon ECS resources.
- AmazonEC2ContainerRegistryFullAccess allows pushing and pulling Docker images from ECR.
- CloudWatchActionsEC2Access enables CloudWatch operations for ECS tasks.
- AmazonCloudWatchEvidentlyFullAccess provides CloudWatch logging capabilities.
To attach these policies:
- Go to the IAM service in the AWS Console.
- Find your CircleCI user and click on it.
- Click the “Add permissions” button.
- Select “Attach policies directly”.
- Search for and select each of the policies listed in the previous instructions.
- Click “Add permissions” to attach them.
These policies provide the permissions needed for the CircleCI pipeline to create ECR repositories, manage ECS clusters and services, handle CloudWatch logging, and interact with other AWS resources automatically.
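If you would rather script this step, here is a hedged CLI sketch that attaches the same managed policies; circleci-deployer is again a placeholder user name:
for policy in AmazonECS_FullAccess \
              AmazonEC2ContainerRegistryFullAccess \
              CloudWatchActionsEC2Access \
              AmazonCloudWatchEvidentlyFullAccess; do
  aws iam attach-user-policy \
    --user-name circleci-deployer \
    --policy-arn "arn:aws:iam::aws:policy/$policy"
done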
Verifying your setup
To confirm everything is configured correctly, check that the CloudFormation stack completed successfully and the role was created. In the AWS Console, go to the CloudFormation service and verify that your multi-cloud-demo-ecs-role stack shows “CREATE_COMPLETE” status.
You can also verify the role exists by going to the IAM service in the AWS Console, clicking on “Roles” in the sidebar, and searching for ecsTaskExecutionRole. You should see the role listed with the attached policy AmazonECSTaskExecutionRolePolicy.
If you encounter any “role not found” errors later in the pipeline, double-check that the CloudFormation stack completed successfully.
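Both checks can also be run from the CLI; these commands only read state, so they are safe to repeat:
# Stack status should be CREATE_COMPLETE
aws cloudformation describe-stacks \
  --stack-name multi-cloud-demo-ecs-role \
  --query 'Stacks[0].StackStatus' --output text
# The role should exist and return its ARN
aws iam get-role --role-name ecsTaskExecutionRole --query 'Role.Arn' --output text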
Creating the CircleCI pipeline
Now you’ll create a unified CircleCI pipeline that deploys to both clouds simultaneously. This configuration will handle authentication, building, and deployment for both providers.
Create .circleci/config.yml:
version: 2.1
# CircleCI Orbs for cloud deployments
orbs:
gcp-gcr: circleci/gcp-gcr@0.16.3
gcp-cloud-run: circleci/gcp-cloud-run@1.0.2
# Reusable commands for AWS and GCP setup
commands:
install-aws-cli:
description: "Install AWS CLI v2 in temporary directory to avoid file conflicts"
steps:
- run:
name: Install AWS CLI
command: |
# Install in temporary directory to avoid conflicts with project files
mkdir -p /tmp/awscli-install
cd /tmp/awscli-install
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip -q awscliv2.zip
sudo ./aws/install
cd -
rm -rf /tmp/awscli-install
echo 'export PATH=/usr/local/bin:$PATH' >> $BASH_ENV
source $BASH_ENV
aws --version
- run:
name: Configure AWS
command: |
source $BASH_ENV
aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
aws configure set default.region $AWS_DEFAULT_REGION
install-gcloud:
description: "Install and configure Google Cloud SDK"
steps:
- run:
name: Install gcloud
command: |
export CLOUDSDK_CORE_DISABLE_PROMPTS=1
export CLOUDSDK_INSTALL_DIR=$HOME
curl -sSL https://sdk.cloud.google.com | bash
source "$HOME/google-cloud-sdk/path.bash.inc"
echo 'source "$HOME/google-cloud-sdk/path.bash.inc"' >> $BASH_ENV
gcloud version
- run:
name: Authenticate GCP
command: |
source $BASH_ENV
echo "$GCLOUD_SERVICE_KEY" > creds.json
gcloud auth activate-service-account --key-file=creds.json
gcloud config set project "$GOOGLE_PROJECT_ID"
# Pipeline parameters for flexible deployment options
parameters:
deploy-aws:
type: boolean
default: true
deploy-gcp:
type: boolean
default: true
jobs:
# Test job: Run unit tests and verify Docker build
test:
docker:
- image: cimg/node:18.20
steps:
- checkout
- setup_remote_docker
- restore_cache:
keys:
- v1-dependencies-{{ checksum "package.json" }}
- v1-dependencies-
- run:
name: Install dependencies
command: npm install
- save_cache:
paths:
- node_modules
key: v1-dependencies-{{ checksum "package.json" }}
- run:
name: Run tests
command: npm test
- run:
name: Test Docker build
command: docker build -t multi-cloud-demo:test .
# AWS: Build and push Docker image to ECR
build-and-push-aws:
docker:
- image: cimg/node:18.20
steps:
- checkout
- setup_remote_docker
- install-aws-cli
- run:
name: Configure ECR authentication
command: |
source $BASH_ENV
aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
- run:
name: Create ECR repository if needed
command: |
source $BASH_ENV
echo "Checking if ECR repository 'multi-cloud-demo' exists..."
if ! aws ecr describe-repositories --repository-names multi-cloud-demo --region $AWS_DEFAULT_REGION > /dev/null 2>&1; then
echo "Creating ECR repository..."
aws ecr create-repository --repository-name multi-cloud-demo --region $AWS_DEFAULT_REGION
echo "ECR repository created successfully!"
else
echo "ECR repository already exists."
fi
- run:
name: Build and push Docker image to ECR
command: |
DEPLOYMENT_TIME=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
APP_VERSION=${CIRCLE_SHA1:0:7}
ECR_REPO="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/multi-cloud-demo"
echo "Building Docker image..."
docker build \
--build-arg DEPLOYMENT_TIME=$DEPLOYMENT_TIME \
--build-arg APP_VERSION=$APP_VERSION \
-t $ECR_REPO:latest \
-t $ECR_REPO:${CIRCLE_SHA1} \
.
echo "Pushing Docker image..."
docker push $ECR_REPO:latest
docker push $ECR_REPO:${CIRCLE_SHA1}
# GCP: Build and push Docker image to GCR
build-and-push-gcp:
docker:
- image: cimg/node:18.20
steps:
- checkout
- setup_remote_docker
- install-gcloud
- run:
name: Configure Docker for GCR
command: |
source $BASH_ENV
gcloud auth configure-docker gcr.io
- gcp-gcr/build-image:
image: multi-cloud-demo
tag: "latest,${CIRCLE_SHA1}"
dockerfile: Dockerfile
registry-url: gcr.io
extra_build_args: |
--build-arg DEPLOYMENT_TIME=$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
--build-arg APP_VERSION=${CIRCLE_SHA1:0:7}
- gcp-gcr/push-image:
image: multi-cloud-demo
tag: "latest,${CIRCLE_SHA1}"
registry-url: gcr.io
# AWS: Deploy to ECS Fargate
deploy-to-aws:
docker:
- image: cimg/node:18.20
steps:
- checkout
- install-aws-cli
- run:
name: Replace placeholders in task definition
command: |
sed -i "s/{AWS_ACCOUNT_ID}/$AWS_ACCOUNT_ID/g" aws/task-definition.json
sed -i "s/{ECR_REPOSITORY_URI}/$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com\/multi-cloud-demo/g" aws/task-definition.json
sed -i "s/{AWS_REGION}/$AWS_DEFAULT_REGION/g" aws/task-definition.json
sed -i "s/{CIRCLE_SHA1}/${CIRCLE_SHA1}/g" aws/task-definition.json
- run:
name: Ensure ECS cluster is active
command: |
echo "Checking ECS cluster status..."
CLUSTER_STATUS=$(aws ecs describe-clusters --clusters multi-cloud-cluster --region $AWS_DEFAULT_REGION --query 'clusters[0].status' --output text 2>/dev/null || echo "NOTFOUND")
echo "Current cluster status: $CLUSTER_STATUS"
if [ "$CLUSTER_STATUS" = "NOTFOUND" ] || [ "$CLUSTER_STATUS" = "None" ]; then
echo "Creating ECS cluster..."
aws ecs create-cluster --cluster-name multi-cloud-cluster --region $AWS_DEFAULT_REGION
elif [ "$CLUSTER_STATUS" = "INACTIVE" ]; then
echo "Cluster is INACTIVE. Recreating..."
aws ecs delete-cluster --cluster multi-cloud-cluster --region $AWS_DEFAULT_REGION
sleep 5
aws ecs create-cluster --cluster-name multi-cloud-cluster --region $AWS_DEFAULT_REGION
elif [ "$CLUSTER_STATUS" = "ACTIVE" ]; then
echo "ECS cluster is already active."
else
echo "Unknown status. Recreating cluster..."
aws ecs delete-cluster --cluster multi-cloud-cluster --region $AWS_DEFAULT_REGION 2>/dev/null || true
sleep 5
aws ecs create-cluster --cluster-name multi-cloud-cluster --region $AWS_DEFAULT_REGION
fi
echo "ECS cluster is ready."
sleep 10
- run:
name: Create CloudWatch log group
command: |
source $BASH_ENV
echo "Setting up CloudWatch logging..."
if ! aws logs describe-log-groups \
--log-group-name-prefix "/ecs/multi-cloud-demo" \
--region $AWS_DEFAULT_REGION \
--query 'logGroups[?logGroupName==`/ecs/multi-cloud-demo`]' \
--output text | grep -q "/ecs/multi-cloud-demo"; then
echo "Creating log group /ecs/multi-cloud-demo..."
aws logs create-log-group \
--log-group-name "/ecs/multi-cloud-demo" \
--region $AWS_DEFAULT_REGION
# Try to set retention policy (optional)
if aws logs put-retention-policy \
--log-group-name "/ecs/multi-cloud-demo" \
--retention-in-days 7 \
--region $AWS_DEFAULT_REGION 2>/dev/null; then
echo "Retention policy set to 7 days."
else
echo "Note: Retention policy not set (insufficient permissions)."
fi
else
echo "CloudWatch log group already exists."
fi
- run:
name: Register task definition
command: |
source $BASH_ENV
echo "Registering ECS task definition..."
aws ecs register-task-definition \
--cli-input-json file://aws/task-definition.json \
--region $AWS_DEFAULT_REGION
- run:
name: Create or update ECS service
command: |
source $BASH_ENV
echo "Managing ECS service..."
SERVICE_STATUS=$(aws ecs describe-services \
--cluster multi-cloud-cluster \
--services multi-cloud-demo-service \
--region $AWS_DEFAULT_REGION \
--query 'services[0].status' \
--output text 2>/dev/null || echo "NOTFOUND")
if [ "$SERVICE_STATUS" = "ACTIVE" ]; then
echo "Updating existing service..."
aws ecs update-service \
--cluster multi-cloud-cluster \
--service multi-cloud-demo-service \
--task-definition multi-cloud-demo \
--region $AWS_DEFAULT_REGION
else
echo "Creating new service..."
# Get default VPC and subnet
VPC_ID=$(aws ec2 describe-vpcs --filters "Name=is-default,Values=true" --query 'Vpcs[0].VpcId' --output text --region $AWS_DEFAULT_REGION)
SUBNET_ID=$(aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPC_ID" --query 'Subnets[0].SubnetId' --output text --region $AWS_DEFAULT_REGION)
# Create or get security group
SG_ID=$(aws ec2 describe-security-groups --filters "Name=group-name,Values=multi-cloud-demo-sg" --query 'SecurityGroups[0].GroupId' --output text --region $AWS_DEFAULT_REGION 2>/dev/null)
if [ "$SG_ID" == "None" ] || [ -z "$SG_ID" ]; then
echo "Creating security group..."
SG_ID=$(aws ec2 create-security-group \
--group-name multi-cloud-demo-sg \
--description "Security group for multi-cloud demo" \
--vpc-id $VPC_ID \
--region $AWS_DEFAULT_REGION \
--query 'GroupId' --output text)
# Allow HTTP traffic on port 3000
aws ec2 authorize-security-group-ingress \
--group-id $SG_ID \
--protocol tcp \
--port 3000 \
--cidr 0.0.0.0/0 \
--region $AWS_DEFAULT_REGION
fi
# Create ECS service
aws ecs create-service \
--cluster multi-cloud-cluster \
--service-name multi-cloud-demo-service \
--task-definition multi-cloud-demo \
--desired-count 1 \
--launch-type FARGATE \
--network-configuration "awsvpcConfiguration={subnets=[$SUBNET_ID],securityGroups=[$SG_ID],assignPublicIp=ENABLED}" \
--region $AWS_DEFAULT_REGION
fi
# GCP: Deploy to Cloud Run
deploy-to-gcp:
docker:
- image: cimg/node:18.20
steps:
- checkout
- install-gcloud
- run:
name: Replace placeholders in service definition
command: |
sed -i "s/{GCP_PROJECT_ID}/$GOOGLE_PROJECT_ID/g" gcp/service.yaml
sed -i "s/{GCP_REGION}/$GOOGLE_COMPUTE_REGION/g" gcp/service.yaml
sed -i "s/:latest/:${CIRCLE_SHA1}/g" gcp/service.yaml
- gcp-cloud-run/deploy:
platform: managed
image: gcr.io/$GOOGLE_PROJECT_ID/multi-cloud-demo:${CIRCLE_SHA1}
service-name: multi-cloud-demo
region: $GOOGLE_COMPUTE_REGION
unauthenticated: true
# Verify both deployments are successful
verify-deployments:
docker:
- image: cimg/node:18.20
steps:
- install-aws-cli
- install-gcloud
- run:
name: Verify AWS ECS deployment
command: |
source $BASH_ENV
echo "Verifying AWS ECS deployment..."
SERVICE_STATUS=$(aws ecs describe-services \
--cluster multi-cloud-cluster \
--services multi-cloud-demo-service \
--region $AWS_DEFAULT_REGION \
--query 'services[0].status' \
--output text)
if [ "$SERVICE_STATUS" = "ACTIVE" ]; then
echo "AWS ECS service is active and running"
echo ""
echo "Access your AWS application:"
echo "1. Go to ECS Console > Clusters > multi-cloud-cluster"
echo "2. Click on the service > Tasks tab"
echo "3. Click on the running task to find the public IP"
echo "4. Access via http://[PUBLIC_IP]:3000"
echo ""
else
echo "AWS ECS service verification failed"
exit 1
fi
- run:
name: Verify GCP Cloud Run deployment
command: |
source $BASH_ENV
echo "Verifying GCP Cloud Run deployment..."
gcloud run services describe multi-cloud-demo \
--region=$GOOGLE_COMPUTE_REGION \
--format="value(status.conditions[0].status)" > /tmp/status.txt
GCP_STATUS=$(cat /tmp/status.txt)
if [ "$GCP_STATUS" = "True" ]; then
SERVICE_URL=$(gcloud run services describe multi-cloud-demo \
--region=$GOOGLE_COMPUTE_REGION \
--format="value(status.url)")
echo "GCP Cloud Run service is ready and running"
echo ""
echo "Access your GCP application:"
echo "Service URL: $SERVICE_URL"
echo ""
else
echo "GCP Cloud Run service verification failed"
exit 1
fi
- run:
name: Deployment summary
command: |
echo "Multi-cloud deployment completed successfully!"
echo ""
echo "Your application is now running on both:"
echo "• AWS ECS Fargate (check console for public IP)"
echo "• GCP Cloud Run (URL shown above)"
echo ""
echo "Both services are running the same containerized Node.js app!"
# Workflow definitions for different deployment scenarios
workflows:
# Test only (no deployment)
test-only:
when:
and:
- not: << pipeline.parameters.deploy-aws >>
- not: << pipeline.parameters.deploy-gcp >>
jobs:
- test
# AWS only deployment
aws-deploy:
when:
and:
- << pipeline.parameters.deploy-aws >>
- not: << pipeline.parameters.deploy-gcp >>
jobs:
- test
- build-and-push-aws:
requires:
- test
filters:
branches:
only: main
- deploy-to-aws:
requires:
- build-and-push-aws
# GCP only deployment
gcp-deploy:
when:
and:
- << pipeline.parameters.deploy-gcp >>
- not: << pipeline.parameters.deploy-aws >>
jobs:
- test
- build-and-push-gcp:
requires:
- test
filters:
branches:
only: main
- deploy-to-gcp:
requires:
- build-and-push-gcp
# Multi-cloud deployment (default)
multi-cloud-deploy:
when:
and:
- << pipeline.parameters.deploy-aws >>
- << pipeline.parameters.deploy-gcp >>
jobs:
- test
- build-and-push-aws:
requires:
- test
filters:
branches:
only: main
- build-and-push-gcp:
requires:
- test
filters:
branches:
only: main
- deploy-to-aws:
requires:
- build-and-push-aws
- deploy-to-gcp:
requires:
- build-and-push-gcp
- verify-deployments:
requires:
- deploy-to-aws
- deploy-to-gcp
This CircleCI configuration creates a production-ready multi-cloud deployment pipeline with enhanced error handling and reliability features. The pipeline includes comprehensive cloud CLI installation commands with improved isolation and authentication handling.
The configuration defines flexible workflow modes controlled by pipeline parameters: test-only runs when both deploy parameters are false, aws-deploy and gcp-deploy handle single-cloud deployments, and multi-cloud-deploy manages full multi-cloud deployment with verification.
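Because deploy-aws and deploy-gcp are pipeline parameters, you can override them per run through the CircleCI v2 API instead of editing the config. A sketch with the project slug and API token as placeholders:
curl -X POST "https://circleci.com/api/v2/project/gh/your-username/multi-cloud-demo/pipeline" \
  -H "Circle-Token: $CIRCLECI_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"branch": "main", "parameters": {"deploy-aws": true, "deploy-gcp": false}}'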
For AWS deployment, the pipeline intelligently handles ECS cluster states, automatically creates ECR repositories and CloudWatch log groups if they don’t exist, and manages ECS services with proper networking configuration including VPC, subnets, and security groups.
For GCP deployment, the pipeline uses CircleCI orbs combined with custom gcloud commands to build, push, and deploy containers to Cloud Run with proper authentication and configuration management.
The verification job provides detailed feedback about deployment status and gives users clear instructions on how to access their deployed applications on both platforms.
Setting up the project in CircleCI
Now that you have your complete application and CircleCI configuration, it’s time to get it running on CircleCI.
Push your code to GitHub
Initialize a Git repository and push all your code to GitHub:
# Initialize git repository
git init
git add .
git commit -m "Initial commit: Multi-cloud deployment demo"
# Add remote and push (replace with your GitHub username and repo name)
git remote add origin https://github.com/your-username/multi-cloud-demo.git
git branch -M main
git push -u origin main
Set up the project in CircleCI
Go to the Projects page on the CircleCI dashboard. Select the associated GitHub account to add the project. Click Set Up Project.
You will be prompted to enter the branch where your configuration file is stored. CircleCI will detect the .circleci/config.yml file and start building the project.
The pipeline will fail because you haven’t yet added the environment variables that hold your cloud credentials.
This is the normal CircleCI workflow:
- Trigger the pipeline to validate your configuration.
- Add the missing pieces.
Prepare your cloud credentials
Before setting up environment variables, you’ll need to gather the necessary credentials from both AWS and GCP. Make sure you’ve completed the AWS prerequisites setup described earlier in the tutorial. These prerequisites include creating the ECS task execution role and granting PassRole permissions to your CircleCI user.
For AWS
- AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY: Create an IAM user with programmatic access and policies for ECS, ECR, and CloudWatch.
- AWS_ACCOUNT_ID: Find this in your AWS account settings, or look it up with the CLI snippet after this list.
- AWS_DEFAULT_REGION: Choose your preferred region (e.g., us-east-1).
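If you already have the AWS CLI configured locally, the account ID can be retrieved directly; this is the snippet referenced above:
aws sts get-caller-identity --query Account --output text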
For GCP
- Enable required APIs: Cloud Run Admin API, Google Container Registry API, and Cloud Build API.
- GOOGLE_PROJECT_ID: Your GCP project ID from the project dashboard.
- GOOGLE_COMPUTE_REGION: Your preferred region (e.g., us-central1).
- GCLOUD_SERVICE_KEY: Create a service account with these roles (see the sketch after this list):
  - Artifact Registry Create-on-Push Repository Administrator
  - Artifact Registry Writer
  - Cloud Run Admin
  - Service Account User
  - Storage Admin
- Download the JSON key file and copy its contents directly as the environment variable value.
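For reference, a sketch of this setup from the gcloud CLI. The service account name circleci-deployer is a placeholder, and only one role binding is shown; repeat it for each role in the list above with the matching role ID:
# Enable the required APIs
gcloud services enable run.googleapis.com containerregistry.googleapis.com cloudbuild.googleapis.com
# Create a service account for CircleCI (the name is an example)
gcloud iam service-accounts create circleci-deployer --display-name "CircleCI deployer"
# Grant a role (repeat for each role listed above)
gcloud projects add-iam-policy-binding "$GOOGLE_PROJECT_ID" \
  --member "serviceAccount:circleci-deployer@$GOOGLE_PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/run.admin"
# Create the JSON key whose contents become GCLOUD_SERVICE_KEY
gcloud iam service-accounts keys create key.json \
  --iam-account "circleci-deployer@$GOOGLE_PROJECT_ID.iam.gserviceaccount.com"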
Add your environment variables
With your credentials ready, add the environment variables to your CircleCI project. Go to your project in CircleCI:
- Click Project Settings.
- Click Environment Variables in the sidebar.
- Click Add Environment Variable for each variable.
AWS environment variables:
AWS_ACCESS_KEY_ID=your-aws-access-key-id
AWS_SECRET_ACCESS_KEY=your-aws-secret-access-key
AWS_DEFAULT_REGION=us-east-1
AWS_ACCOUNT_ID=your-aws-account-id
GCP environment variables:
GOOGLE_PROJECT_ID=your-gcp-project-id
GOOGLE_COMPUTE_REGION=us-central1
GCLOUD_SERVICE_KEY={"type":"service_account","project_id":"your-project",...}
Testing the multi-cloud deployment
After adding all the environment variables, you’re ready to test your multi-cloud deployment pipeline. The easiest way to test your configuration is to make a small change and push it to your repository. Make a small change to your code (like updating a comment in server.js) and commit and push the change:
git add .
git commit -m "Update config - test pipeline with credentials"
git push
Go back to your project’s Pipelines page to watch the new run. This approach triggers all workflows fresh and ensures the environment variables are properly loaded. With your multi-workflow setup, this is more reliable than trying to rerun individual failed workflows.
With everything configured, watch as CircleCI simultaneously deploys to both AWS and GCP.
Monitoring deployment progress
Your CircleCI dashboard will show parallel execution of both cloud deployments. The workflow runs tests first, then builds and pushes container images to both AWS ECR and Google Container Registry simultaneously. Finally, it deploys to both ECS Fargate and Cloud Run in parallel.
Once the pipeline completes successfully, your application will be live on both clouds. The verification job in CircleCI provides deployment status and access information for both platforms.
Verify Google Cloud Platform (GCP) deployment
For GCP, the verification step displays the Cloud Run service URL directly in the pipeline output:
Verifying GCP Cloud Run deployment...
GCP Cloud Run service is ready and running
Access your GCP application:
Service URL: https://multi-cloud-demo-uhjn4wqanq-uc.a.run.app
You can test the GCP deployment by making a request to the health endpoint:
curl https://multi-cloud-demo-uhjn4wqanq-uc.a.run.app/health
{"status":"healthy","cloud":"GCP","uptime":0.349758271,"timestamp":"2025-10-21T19:13:25.875Z"}
Verify Amazon Web Services (AWS) deployment
For AWS, you’ll need to find the public IP through the ECS console since Fargate tasks receive dynamic IP addresses:
Verifying AWS ECS deployment...
AWS ECS service is active and running
Access your AWS application:
1. Go to ECS Console > Clusters > multi-cloud-cluster
2. Click on the service > Tasks tab
3. Click on the running task to find the public IP
4. Access via http://[PUBLIC_IP]:3000
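If you prefer not to click through the console, here is a hedged CLI sketch that looks up the running task’s public IP, assuming the cluster and service names used in this tutorial:
TASK_ARN=$(aws ecs list-tasks --cluster multi-cloud-cluster \
  --service-name multi-cloud-demo-service --query 'taskArns[0]' --output text)
ENI_ID=$(aws ecs describe-tasks --cluster multi-cloud-cluster --tasks "$TASK_ARN" \
  --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value" --output text)
aws ec2 describe-network-interfaces --network-interface-ids "$ENI_ID" \
  --query 'NetworkInterfaces[0].Association.PublicIp' --output text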
Once you have the public IP, test the AWS deployment:
curl http://13.59.19.161:3000/health
{"status":"healthy","cloud":"AWS","uptime":1367.422680464,"timestamp":"2025-10-21T19:14:26.403Z"}
Comparing deployment results
Both curl commands test the deployments by hitting their public endpoints. Notice that the responses are identical except for the cloud field, which demonstrates that the same containerized application is running consistently across both platforms. This consistency is the key benefit of your unified multi-cloud deployment approach.
Conclusion
You’ve successfully built a production-ready multi-cloud deployment pipeline that demonstrates the power of cloud-agnostic CI/CD. Your application now runs on both AWS and GCP, managed by a single CircleCI configuration that handles authentication, building, and deployment consistently across both platforms.
The key principles you’ve learned - unified CI/CD, consistent containerization, and parallel deployments - can scale to support more complex applications and additional cloud providers. With CircleCI managing the complexity, your team can focus on building great applications while maintaining the freedom to leverage the best features from multiple cloud providers.
Note: To avoid ongoing charges, remember to delete the resources you created during this tutorial. You can remove the ECS service, cluster, and ECR repository from the AWS console, and delete the Cloud Run service from the GCP console.
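If you prefer to clean up from the command line, here is a sketch using the resource names from this tutorial; double-check each command against your own account before running it:
# AWS: scale down and delete the service, cluster, ECR repository, and log group
aws ecs update-service --cluster multi-cloud-cluster --service multi-cloud-demo-service --desired-count 0
aws ecs delete-service --cluster multi-cloud-cluster --service multi-cloud-demo-service --force
aws ecs delete-cluster --cluster multi-cloud-cluster
aws ecr delete-repository --repository-name multi-cloud-demo --force
aws logs delete-log-group --log-group-name /ecs/multi-cloud-demo
# GCP: delete the Cloud Run service and its container images
gcloud run services delete multi-cloud-demo --region "$GOOGLE_COMPUTE_REGION" --quiet
gcloud container images delete "gcr.io/$GOOGLE_PROJECT_ID/multi-cloud-demo:latest" --force-delete-tags --quiet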