CircleCI Server v3.x Installation Phase 3
Before you begin with the CircleCI server v3.x execution environment installation phase, ensure you have run through Phase 1 – Prerequisites and Phase 2 – Core services installation.
In the following sections, replace any items or credentials displayed between < > with your details.
Phase 3: Execution environment installation
Output Processor
Output processor is responsible for handling the output from Nomad clients. It is a key service to scale if you find your system slowing down. We recommend increasing the output processor replica set to scale the service up to meet demand.
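If you want to check the current scale from the command line, the following is a rough sketch; it assumes the deployment is named like the service, and note that KOTS-managed workloads may reset manual changes on the next deploy, so prefer adjusting replicas through your KOTS configuration where that option is available:
# Check the current replica count (assumes the deployment shares the service name)
kubectl get deployment output-processor --namespace=<YOUR_CIRCLECI_NAMESPACE>
# Scale up to meet demand; KOTS may revert this on the next deploy
kubectl scale deployment output-processor --replicas=3 --namespace=<YOUR_CIRCLECI_NAMESPACE>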
Access the KOTS Admin Console by running the following command, substituting your namespace: kubectl kots admin-console -n <YOUR_CIRCLECI_NAMESPACE>
Locate and enter the following in Settings:
- Output Processor Load Balancer Hostname - The following command provides the IP address of the service:
kubectl get service output-processor --namespace=<YOUR_CIRCLECI_NAMESPACE>
Save your configuration. You will deploy and validate your setup after you complete the Nomad client setup.
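If you only want the load balancer address itself, a jsonpath query is a convenient sketch; use .ip instead of .hostname if your cloud provider assigns an IP rather than a DNS name:
# Print just the load balancer address of the output-processor service
kubectl get service output-processor --namespace=<YOUR_CIRCLECI_NAMESPACE> \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'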
Nomad Clients
As mentioned in the Overview, Nomad is a workload orchestration tool that CircleCI uses to schedule (through Nomad Server) and run (through Nomad Clients) CircleCI jobs.
Nomad clients are installed outside of the Kubernetes cluster, while their control plane (Nomad Server) is installed within the cluster. Communication between your Nomad Clients and the Nomad control plane is secured with mTLS. The mTLS certificate, private key, and certificate authority will be output after you complete installation of the Nomad Clients.
Once completed, you can update your CircleCI server configuration so your Nomad control plane can communicate with your Nomad Clients.
Cluster Creation with Terraform
CircleCI curates Terraform modules to help install Nomad clients in your chosen cloud provider. You can browse the modules in our public repository, including example Terraform config files (main.tf) for both AWS and GCP. Some information about your cluster and server installation is required to complete your main.tf. How to get this information is described in the following sections.
If you would also like to set up Nomad Autoscaler at this stage, see the Nomad Autoscaler section of this guide, as some of the requirements can be included in this Terraform setup.
AWS
You need some information about your cluster and server installation to complete the required fields for the Terraform configuration file (main.tf). A full example, as well as a full list of variables, can be found here.
- server_endpoint - You need to know the Nomad Server endpoint, which is the external IP address of the nomad-server-external load balancer. You can get this information with the following command:
kubectl get service nomad-server-external --namespace=<YOUR_CIRCLECI_NAMESPACE>
- Subnet ID (subnet), VPC ID (vpcId), and DNS server (dns_server) of your cluster. Run the following command to get the cluster VPC ID (vpcId), CIDR block (serviceIpv4Cidr), and subnets (subnetIds):
aws eks describe-cluster --name=<YOUR_CLUSTER_NAME>
This returns something similar to the following:
{... "resourcesVpcConfig": { "subnetIds": [ "subnet-033a9fb4be69", "subnet-04e89f9eef89", "subnet-02907d9f35dd", "subnet-0fbc63006c5f", "subnet-0d683b6f6ba8", "subnet-079d0ca04301" ], "clusterSecurityGroupId": "sg-022c1b544e574", "vpcId": "vpc-02fdfff4c", "endpointPublicAccess": true, "endpointPrivateAccess": false ... "kubernetesNetworkConfig": { "serviceIpv4Cidr": "10.100.0.0/16" }, ... }
Then, using the VPC ID you just found, run the following command to get the CIDR block for your cluster. For AWS, the DNS server is the third IP in your CIDR block (CidrBlock); for example, if your CIDR block is 10.100.0.0/16, the third IP would be 10.100.0.2.
aws ec2 describe-vpcs --filters Name=vpc-id,Values=<YOUR_VPCID>
This returns something like the following:
{... "CidrBlock": "192.168.0.0/16", "DhcpOptionsId": "dopt-9cff", "State": "available", "VpcId": "vpc-02fdfff4c" ...}
Once you have filled in the appropriate information, you can deploy your Nomad clients by running the following commands from within the directory containing your main.tf file:
terraform init
terraform plan
terraform apply
After Terraform is done spinning up the Nomad client(s), it outputs the certificates and keys needed for configuring the Nomad control plane in CircleCI server. Copy them somewhere safe. The apply process usually only takes a minute.
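If you need these values again later, you can re-print them from the same directory; this is standard Terraform behavior, though the exact output names depend on the module version you pinned:
# Re-print all Terraform outputs (mTLS certificate, private key, and CA)
terraform output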
GCP
You need the IP address of the Nomad control plane (Nomad Server), which was created when you deployed CircleCI Server. You can get the IP address by running the following command:
kubectl get service nomad-server-external --namespace=<YOUR_CIRCLECI_NAMESPACE>
You also need the following information:
- The GCP Project you want to run Nomad clients in.
- The GCP Zone you want to run Nomad clients in.
- The GCP Region you want to run Nomad clients in.
- The GCP Network you want to run Nomad clients in.
- The GCP Subnetwork you want to run Nomad clients in.
You can copy the following example to your local environment and fill in the appropriate information for your specific setup.
variable "project" {
type = string
default = "<your-project>"
}
variable "region" {
type = string
default = "<your-region>"
}
variable "zone" {
type = string
default = "<your-zone>"
}
variable "network" {
type = string
default = "<your-network-name>"
# if you are using a shared vpc, provide the network endpoint rather than the name. eg:
# default = "https://www.googleapis.com/compute/v1/projects/<host-project>/global/networks/<your-network-name>"
}
variable "subnetwork" {
type = string
default = "<your-subnetwork-name>"
# if you are using a shared vpc, provide the network endpoint rather than the name. eg:
# default = "https://www.googleapis.com/compute/v1/projects/<service-project>/regions/<your-region>/subnetworks/<your-subnetwork-name>"
}
variable "server_endpoint" {
type = string
default = "<nomad-server-loadbalancer>:4647"
}
variable "nomad_auto_scaler" {
type = bool
default = false
description = "If true, terraform will create a service account to be used by nomad autoscaler."
}
variable "enable_workload_identity" {
type = bool
default = false
description = "If true, Workload Identity will be used rather than static credentials'"
}
variable "k8s_namespace" {
type = string
default = "circleci-server"
description = "If enable_workload_identity is true, provide application k8s namespace"
}
provider "google-beta" {
project = var.project
region = var.region
zone = var.zone
}
module "nomad" {
source = "git::https://github.com/CircleCI-Public/server-terraform.git//nomad-gcp?ref=3.4.0"
zone = var.zone
region = var.region
network = var.network
subnetwork = var.subnetwork
server_endpoint = var.server_endpoint
machine_type = "n2-standard-8"
nomad_auto_scaler = var.nomad_auto_scaler
enable_workload_identity = var.enable_workload_identity
k8s_namespace = var.k8s_namespace
unsafe_disable_mtls = false
assign_public_ip = true
preemptible = true
target_cpu_utilization = 0.50
}
output "module" {
value = module.nomad
}
Once you have filled in the appropriate information, you can deploy your Nomad clients by running the following commands:
terraform init
terraform plan
terraform apply
After Terraform is done spinning up the Nomad client(s), it outputs the certificates and key needed for configuring the Nomad control plane in CircleCI server. Copy them somewhere safe.
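Because the example configuration above defines an output named module, you can re-print the module's outputs, including the mTLS material, at any time from the same directory:
# Re-print the nomad module outputs
terraform output module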
Nomad Autoscaler
Nomad provides a utility to automatically scale up or down your Nomad clients, provided your clients are managed by a cloud provider’s autoscaling resource. With Nomad Autoscaler, you only need to provide permission for the utility to manage your autoscaling resource and specify where it is located. You can enable this resource via KOTS, which deploys the Nomad Autoscaler service along with your Nomad servers. Below, we go through how to set up Nomad Autoscaler for your provider.
The maximum and minimum Nomad client count overwrite the corresponding values set when you created your autoscaling group or managed instance group. It is recommended that you keep these values and those used in your Terraform the same so that they do not compete.
If you do not require this service, click the Save config button to update your installation and redeploy server.
AWS
- Create an IAM user or role and policy for Nomad Autoscaler. You may take one of the following approaches:
  - Our nomad module creates an IAM user and outputs the keys if you set the variable nomad_auto_scaler = true. You may reference the example in the link for more details. If you have already created the clients, you can update the variable and run terraform apply. The created user's access key and secret will be available in Terraform's output.
  - You may also create a Nomad Autoscaler IAM user manually with the IAM policy below, then generate an access key and secret key for this user (example commands follow this list).
  - You may create a Role for Service Accounts for Nomad Autoscaler and attach the following IAM policy:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": [
            "autoscaling:CreateOrUpdateTags",
            "autoscaling:UpdateAutoScalingGroup",
            "autoscaling:TerminateInstanceInAutoScalingGroup"
          ],
          "Resource": "<<Your Autoscaling Group ARN>>"
        },
        {
          "Sid": "VisualEditor1",
          "Effect": "Allow",
          "Action": [
            "autoscaling:DescribeScalingActivities",
            "autoscaling:DescribeAutoScalingGroups"
          ],
          "Resource": "*"
        }
      ]
    }
- In your KOTS Admin Console, set Nomad Autoscaler to enabled.
- Set Max Node Count* - This overwrites what is currently set as the max for your ASG. It is recommended to keep this value and what was set in your Terraform the same.
- Set Min Node Count* - This overwrites what is currently set as the min for your ASG. It is recommended to keep this value and what was set in your Terraform the same.
- Select cloud provider: AWS EC2.
- Add the region of the autoscaling group.
- You can choose one of the following:
  - Add the Nomad Autoscaler user's access key and secret key.
  - Or, the Nomad Autoscaler role's ARN.
- Add the name of the autoscaling group your Nomad clients were created in.
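If you take the manual IAM user approach, the following sketch shows one way to create the user, attach the policy shown above (saved locally as, say, nomad-autoscaler-policy.json), and generate keys; the user name and file name are placeholders, not values prescribed by CircleCI:
# Hypothetical user name and policy file name; adjust to your conventions
aws iam create-user --user-name nomad-autoscaler
# Attach the IAM policy shown above as an inline policy
aws iam put-user-policy \
  --user-name nomad-autoscaler \
  --policy-name nomad-autoscaler \
  --policy-document file://nomad-autoscaler-policy.json
# Generate the access key and secret key to enter in the KOTS Admin Console
aws iam create-access-key --user-name nomad-autoscaler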
GCP
- Create a service account for Nomad Autoscaler. You may take one of the following approaches:
  - Our nomad module creates a service account and outputs a file with the keys if you set the variables nomad_auto_scaler = true and enable_workload_identity = false. You may reference the examples in the link for more details. If you have already created the clients, simply update the variables and run terraform apply. The created service account's key will be available in a file named nomad-as-key.json. If you are using GKE Workload Identities, set the variables nomad_auto_scaler = true and enable_workload_identity = true.
  - You may also create a Nomad GCP service account manually (example commands follow this list). The service account will need the role compute.admin. It will also need the role iam.workloadIdentityUser if using Workload Identities.
- Set Nomad Autoscaler to enabled.
- Set Maximum Node Count*
- Set Minimum Node Count*
- Select cloud provider: Google Cloud Platform.
- Add your Project ID.
- Add the Managed Instance Group Name.
- Select the instance group type: Zonal or Regional.
- You can choose one of the following:
  - JSON of the GCP service account for Nomad Autoscaler.
  - Or, the Nomad Autoscaler Service Account Email Address if using Workload Identities. Steps to enable Workload Identities on a GCP cluster are here.
- Enable workload identity for the nomad-autoscaler (Kubernetes) service account:
gcloud iam service-accounts add-iam-policy-binding <YOUR_SERVICE_ACCOUNT_EMAIL> \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:<GCP_PROJECT_ID>.svc.id.goog[circleci-server/nomad-autoscaler]"
If you are switching from static JSON credentials to Workload Identity, you should delete the keys from GCP as well as from the CircleCI KOTS Admin Console.
Configure and Deploy
Now that you have successfully deployed your Nomad clients, you can configure CircleCI server and the Nomad control plane. Access the KOTS Admin Console by running the following command, substituting your namespace: kubectl kots admin-console -n <YOUR_CIRCLECI_NAMESPACE>
Enter the following in Settings:
- Nomad Load Balancer (required):
kubectl get service nomad-server-external --namespace=<YOUR_CIRCLECI_NAMESPACE>
- Nomad Server Certificate (required) - Provided in the output from terraform apply.
- Nomad Server Private Key (required) - Provided in the output from terraform apply.
- Nomad Server Certificate Authority (CA) Certificate (required) - Provided in the output from terraform apply.
- Build Agent Image - If you want to use a custom Docker registry to supply the CircleCI Build Agent, contact customer support for assistance.
Click the Save config button to update your installation and redeploy server.
Nomad Clients Validation
CircleCI has created a project called realitycheck which allows you to test your Server installation. We are going to follow the project so we can verify that the system is working as expected. As you continue through the next phase, sections of realitycheck will move from red to green.
To run realitycheck, you need to clone the repository. Depending on your GitHub setup, you can use one of the following commands:
GitHub Cloud
git clone -b server-3.0 https://github.com/circleci/realitycheck.git
GitHub Enterprise
git clone -b server-3.0 https://github.com/circleci/realitycheck.git
git remote set-url origin <YOUR_GH_REPO_URL>
git push
Once you have successfully cloned the repository, you can follow it from within your CircleCI server installation. You need to set the following variables. For full instructions please see the repository readme.
Project environment variables:

Name | Value |
---|---|
CIRCLE_HOSTNAME | <YOUR_CIRCLECI_INSTALLATION_URL> |
CIRCLE_TOKEN | <YOUR_CIRCLECI_API_TOKEN> |

Contexts and their environment variables:

Context Name | Environment Variable Key | Environment Variable Value |
---|---|---|
org-global | CONTEXT_END_TO_END_TEST_VAR | Leave blank |
individual-local | MULTI_CONTEXT_END_TO_END_VAR | Leave blank |
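As one option, assuming your installation exposes the standard CircleCI API v2 and that the project slug below is adjusted to your VCS and organization, you can create the project environment variables from the command line rather than through the UI:
# Hypothetical project slug (gh/<YOUR_ORG>/realitycheck); adjust to your VCS and organization
curl -X POST "<YOUR_CIRCLECI_INSTALLATION_URL>/api/v2/project/gh/<YOUR_ORG>/realitycheck/envvar" \
  --header "Circle-Token: <YOUR_CIRCLECI_API_TOKEN>" \
  --header "Content-Type: application/json" \
  --data '{"name": "CIRCLE_HOSTNAME", "value": "<YOUR_CIRCLECI_INSTALLATION_URL>"}'
curl -X POST "<YOUR_CIRCLECI_INSTALLATION_URL>/api/v2/project/gh/<YOUR_ORG>/realitycheck/envvar" \
  --header "Circle-Token: <YOUR_CIRCLECI_API_TOKEN>" \
  --header "Content-Type: application/json" \
  --data '{"name": "CIRCLE_TOKEN", "value": "<YOUR_CIRCLECI_API_TOKEN>"}'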
Once you have configured the environment variables and contexts, rerun the realitycheck tests. You should see the features and resource jobs complete successfully.
VM service
VM service configures VM and remote docker jobs. You can configure a number of options for VM service, such as scaling rules. VM service is unique to EKS and GKE installations because it specifically relies on features of these cloud providers.
AWS
1. Get the Information Needed to Create Security Groups
The following command returns your VPC ID (vpcId), CIDR Block (serviceIpv4Cidr), Cluster Security Group ID (clusterSecurityGroupId) and Cluster ARN (arn) values, which you need throughout this section:
aws eks describe-cluster --name=<your-cluster-name>
2. Create a security group
Run the following command to create a security group for VM service:
aws ec2 create-security-group --vpc-id "<YOUR_VPCID>" --description "CircleCI VM Service security group" --group-name "circleci-vm-service-sg"
This outputs a GroupID to be used in the next steps:
{ "GroupId": "sg-0cd93e7b30608b4fc" }
3. Apply the security group for Nomad
Use the security group you just created and your CIDR block values to apply the security group to the following:
aws ec2 authorize-security-group-ingress --group-id "<YOUR_GroupId>" --protocol tcp --port 22 --cidr "<YOUR_serviceIpv4Cidr>"
aws ec2 authorize-security-group-ingress --group-id "<YOUR_GroupId>" --protocol tcp --port 2376 --cidr "<YOUR_serviceIpv4Cidr>"
If you created your Nomad clients in a different subnet from CircleCI server, you need to rerun the above two commands with each subnet CIDR.
4. Apply the security group for SSH
Run the following command to apply the security group rules so users can SSH into their jobs:
aws ec2 authorize-security-group-ingress --group-id "<YOUR_GroupId>" --protocol tcp --port 54782 --cidr "0.0.0.0/0"
5. Create user
Create a new user with programmatic access:
aws iam create-user --user-name circleci-vm-service
Optionally, vm-service does support the use of a service account role in place of AWS keys. If you would prefer to use a role, follow these instructions using the policy in step 6 below. Once done, you may skip to step 9 which is enabling vm-service in KOTS.
6. Create policy
Create a policy.json file with the following content. You should fill in the ID of the VM Service security group created in step 2 (VMServiceSecurityGroupId) and VPC ID (vpcID) below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "ec2:RunInstances",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:ec2:*::image/*",
        "arn:aws:ec2:*::snapshot/*",
        "arn:aws:ec2:*:*:key-pair/*",
        "arn:aws:ec2:*:*:launch-template/*",
        "arn:aws:ec2:*:*:network-interface/*",
        "arn:aws:ec2:*:*:placement-group/*",
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:subnet/*",
        "arn:aws:ec2:*:*:security-group/<YOUR_VMServiceSecurityGroupID>"
      ]
    },
    {
      "Action": "ec2:RunInstances",
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/ManagedBy": "circleci-vm-service"
        }
      }
    },
    {
      "Action": [
        "ec2:CreateVolume"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:ec2:*:*:volume/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/ManagedBy": "circleci-vm-service"
        }
      }
    },
    {
      "Action": [
        "ec2:Describe*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateTags"
      ],
      "Resource": "arn:aws:ec2:*:*:*/*",
      "Condition": {
        "StringEquals": {
          "ec2:CreateAction": "CreateVolume"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateTags"
      ],
      "Resource": "arn:aws:ec2:*:*:*/*",
      "Condition": {
        "StringEquals": {
          "ec2:CreateAction": "RunInstances"
        }
      }
    },
    {
      "Action": [
        "ec2:CreateTags",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DeleteVolume"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:*:*:*/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/ManagedBy": "circleci-vm-service"
        }
      }
    },
    {
      "Action": [
        "ec2:RunInstances",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:*:*:subnet/*",
      "Condition": {
        "StringEquals": {
          "ec2:Vpc": "<YOUR_vpcID>"
        }
      }
    }
  ]
}
7. Attach policy to user
Once you have created the policy.json file, attach it as an inline IAM policy to the user you created:
aws iam put-user-policy --user-name circleci-vm-service --policy-name circleci-vm-service --policy-document file://policy.json
8. Create an access key and secret for the user
If you have not already created them, you will need an access key and secret for the circleci-vm-service user. You can create those by running the following command:
aws iam create-access-key --user-name circleci-vm-service
9. Configure server
Configure VM Service through the KOTS Admin Console. Details of the available configuration options can be found in the VM Service guide.
Once you have configured the fields, save your config and deploy your updated application.
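Before redeploying, you can optionally sanity-check what you just created; these read-only commands are a convenience sketch, not a required step:
# Confirm the inline policy is attached to the circleci-vm-service user
aws iam get-user-policy --user-name circleci-vm-service --policy-name circleci-vm-service
# Review the ingress rules on the VM service security group (ports 22, 2376, and 54782)
aws ec2 describe-security-groups --group-ids "<YOUR_GroupId>" --query 'SecurityGroups[0].IpPermissions'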
GCP
You need additional information about your cluster to complete the next section. Run the following command:
gcloud container clusters describe <YOUR_CLUSTER_NAME>
This command returns something like the following, which includes network, region and other details that you need to complete the next section:
addonsConfig:
gcePersistentDiskCsiDriverConfig:
enabled: true
kubernetesDashboard:
disabled: true
networkPolicyConfig:
disabled: true
clusterIpv4Cidr: 10.100.0.0/14
createTime: '2021-08-20T21:46:18+00:00'
currentMasterVersion: 1.20.8-gke.900
currentNodeCount: 3
currentNodeVersion: 1.20.8-gke.900
databaseEncryption:
…
1. Create firewall rules
Run the following commands to create firewall rules for VM service in GKE:
gcloud compute firewall-rules create "circleci-vm-service-internal-nomad-fw" --network "<network>" --action allow --source-ranges "0.0.0.0/0" --rules "TCP:22,TCP:2376"
If you have used auto-mode, you can find the Nomad clients CIDR based on the region by referring to the table here.
gcloud compute firewall-rules create "circleci-vm-service-internal-k8s-fw" --network "<network>" --action allow --source-ranges "<clusterIpv4Cidr>" --rules "TCP:22,TCP:2376"
gcloud compute firewall-rules create "circleci-vm-service-external-fw" --network "<network>" --action allow --rules "TCP:54782"
2. Create user
We recommend you create a unique service account used exclusively by VM Service. The Compute Instance Admin (Beta) role is broad enough to allow VM Service to operate. If you wish to make permissions more granular, you can use the Compute Instance Admin (beta) role documentation as reference.
gcloud iam service-accounts create circleci-server-vm --display-name "circleci-server-vm service account"
If you are deploying CircleCI server in a shared VPC, you should create this user in the project in which you intend to run your VM jobs.
3. Get the service account email address
gcloud iam service-accounts list --filter="displayName:circleci-server-vm service account" --format 'value(email)'
4. Apply role to service account
Apply the Compute Instance Admin (Beta) role to the service account:
gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> --member serviceAccount:<YOUR_SERVICE_ACCOUNT_EMAIL> --role roles/compute.instanceAdmin --condition=None
And
gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> --member serviceAccount:<YOUR_SERVICE_ACCOUNT_EMAIL> --role roles/iam.serviceAccountUser --condition=None
5. Get JSON Key File
If you are using Workload Identities for GKE, this step is not required.
After running the following command, you should have a file named circleci-server-vm-keyfile in your local working directory. You will need this when you configure your server installation.
gcloud iam service-accounts keys create circleci-server-vm-keyfile --iam-account <YOUR_SERVICE_ACCOUNT_EMAIL>
6. Enable Workload Identity for Service Account
This step is required only if you are using Workload Identities for GKE. Steps to enable Workload Identities are here.
gcloud iam service-accounts add-iam-policy-binding <YOUR_SERVICE_ACCOUNT_EMAIL> \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:<GCP_PROJECT_ID>.svc.id.goog[circleci-server/vm-service]"
If you are switching from static JSON credentials to Workload Identity, you should delete the keys from GCP as well as from the CircleCI KOTS Admin Console.
7. Configure Server
Configure VM Service through the KOTS Admin Console. Details of the available configuration options can be found in the VM Service guide.
Once you have configured the fields, save your config and deploy your updated application.
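As an optional sanity check before you run a VM or remote Docker job, you can confirm the firewall rules and the service account's role bindings; this sketch assumes the names used earlier in this section:
# List the firewall rules created for VM service
gcloud compute firewall-rules list --filter="name~circleci-vm-service"
# Confirm the roles granted to the VM service account
gcloud projects get-iam-policy <YOUR_PROJECT_ID> \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:<YOUR_SERVICE_ACCOUNT_EMAIL>" \
  --format="table(bindings.role)"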
VM Service Validation
Once you have configured and deployed CircleCI server, you should validate that VM Service is operational. You can rerun the realitycheck project within your CircleCI installation and you should see the VM Service Jobs complete. At this point, all tests should pass.
Runner
Overview
CircleCI runner does not require any additional server configuration. Server ships ready to work with runner. However, you need to create a runner and configure the runner agent to be aware of your server installation. For complete instructions for setting up runner, see the runner documentation.
Runner requires a namespace per organization. Server can have many organizations. If your company has multiple organizations within your CircleCI installation, you need to set up a runner namespace for each organization within your server installation.
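As an illustration only, and assuming the circleci CLI is installed and authenticated against your server installation, creating a runner resource class and token typically looks like the following; the namespace and resource class names are placeholders, and you should confirm the exact commands supported by your CLI version in the runner documentation:
# Hypothetical namespace and resource class names; adjust to your organization
circleci runner resource-class create <YOUR_NAMESPACE>/<YOUR_RESOURCE_CLASS> "A description of this resource class"
circleci runner token create <YOUR_NAMESPACE>/<YOUR_RESOURCE_CLASS> "<A_TOKEN_NICKNAME>"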