Getting started with Kubernetes: how to set up your first cluster

ContentLab Author, Cloud Engineer

Kubernetes is an excellent tool for deploying and scaling complex distributed systems, but it’s no secret that getting started with it is a challenge. Most Kubernetes tutorials get you started with a tool like Minikube, which doesn’t teach you much about configuring production-ready clusters.
In this article, we’ll take a different approach and show you how to set up a real-world, production-ready Kubernetes cluster using Amazon Elastic Kubernetes Service (Amazon EKS) and Terraform.
Introducing Terraform
HashiCorp’s Terraform is an infrastructure as code (IaC) solution that allows you to declaratively define the desired configuration of your cloud infrastructure. Using the Terraform CLI, you can provision this configuration locally or as part of automated CI/CD pipelines.
Terraform is similar to configuration tools provided by cloud platforms like AWS CloudFormation or Azure Resource Manager, but it has the advantage of being provider-agnostic. If you’re not familiar with Terraform, we recommend that you first go through their getting started with AWS guide to review the most important concepts.
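If you only need a refresher, the day-to-day Terraform workflow boils down to three commands, two of which you’ll run later in this article:
$ terraform init     # download the providers and modules the configuration references
$ terraform plan     # preview the changes Terraform would make
$ terraform apply    # create or update the infrastructure to match the configuration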
Defining infrastructure
Start by building a Terraform configuration that provisions an Amazon EKS cluster and an AWS Virtual Private Cloud (VPC).
Create the main.tf file with this content:
provider "aws" {
region = "eu-west-1"
}
data "aws_availability_zones" "azs" {
state = "available"
}
locals {
cluster_name = "eks-circleci-cluster"
}
In this file, you set up the AWS provider with the region set to eu-west-1. Feel free to change this to any AWS region you prefer. The AWS provider checks various places for valid credentials, such as environment variables and shared credentials files, so be sure to set these before applying.
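For example, one common approach is to export static credentials as environment variables before running Terraform; the placeholder values here are yours to fill in:
$ export AWS_ACCESS_KEY_ID="<your-access-key-id>"
$ export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"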
The data source then fetches the availability zones you can use in this region, which makes it easier to change the region without requiring any other changes. You set the cluster name as a local value because you’ll be using it multiple times. You can change this value if you need to.
Configure the VPC. Add this content to the file:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 6.0"
name = "eks-circleci-vpc"
cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.azs.names, 0, 2)
private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
public_subnets = ["10.0.3.0/24", "10.0.4.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
}
public_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}
private_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
}
By using the AWS VPC module, you greatly simplify VPC creation. Configure the VPC to use the first two AZs in the region where you deploy your Terraform template; two is the minimum number of AZs required by EKS.
To save costs, you configure only a single NAT gateway; the AWS VPC module sets up the route tables so that all outbound traffic from the private subnets flows through it. You also enable DNS hostnames for the VPC, which is a requirement for EKS.
Finally, set the tags required by EKS so that it can discover its subnets and knows where to place public and internal load balancers.
Next, configure the EKS cluster using the AWS EKS module.
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 21.0"
name = local.cluster_name
kubernetes_version = "1.33"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
eks_managed_node_groups = {
initial = {
instance_types = ["t3.large"]
min_size = 1
max_size = 1
desired_size = 1
}
}
tags = {
Environment = "tutorial"
Terraform = "true"
}
}
Configure the cluster to use the VPC you’ve created, and define a single managed node group with one t3.large instance. This will be enough to create a simple test resource in the cluster while minimizing costs.
Finally, add this configuration to the file:
data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_name
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_name
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.cluster.token
}
output "cluster_name" {
description = "Kubernetes Cluster Name"
value = module.eks.cluster_name
}
output "cluster_endpoint" {
description = "Endpoint for EKS control plane"
value = module.eks.cluster_endpoint
}
output "region" {
description = "AWS region"
value = "eu-west-1"
}
Fetch some data from the Amazon EKS cluster and use it to configure the Terraform Kubernetes provider to authenticate with the cluster. The aws_eks_cluster_auth data source generates a short-lived IAM token, the EKS-specific mechanism for granting an AWS identity access to a cluster. You won’t use this provider directly in this tutorial, but it lets you manage Kubernetes resources from the same Terraform configuration later.
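For context, the token this data source produces is the same kind of credential the AWS CLI generates; once the cluster exists, you can inspect one yourself:
$ aws eks get-token --cluster-name eks-circleci-cluster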
Finally, the output blocks display the information required to authenticate with the cluster; more on that after you’ve provisioned it.
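After the apply completes, you can also print any single output on demand. For example:
$ terraform output cluster_name
"eks-circleci-cluster"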
You now have a Terraform configuration completely ready for spinning up an EKS cluster. Next you’ll apply this configuration and create a test resource in your cluster.
Provisioning a Kubernetes cluster
Configure your shell to authenticate the Terraform AWS provider, and make sure your working directory is the directory containing the Terraform file you just created. Then initialize the workspace:
$ terraform init
Initializing modules...
Downloading registry.terraform.io/terraform-aws-modules/eks/aws 21.3.1 for eks...
- eks in .terraform/modules/eks
- eks.eks_managed_node_group in .terraform/modules/eks/modules/eks-managed-node-group
- eks.eks_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
- eks.fargate_profile in .terraform/modules/eks/modules/fargate-profile
Downloading registry.terraform.io/terraform-aws-modules/kms/aws 4.0.0 for eks.kms...
- eks.kms in .terraform/modules/eks.kms
- eks.self_managed_node_group in .terraform/modules/eks/modules/self-managed-node-group
- eks.self_managed_node_group.user_data in .terraform/modules/eks/modules/_user_data
Downloading registry.terraform.io/terraform-aws-modules/vpc/aws 6.4.0 for vpc...
- vpc in .terraform/modules/vpc
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/time versions matching ">= 0.9.0"...
- Finding hashicorp/tls versions matching ">= 4.0.0"...
- Finding hashicorp/null versions matching ">= 3.0.0"...
- Finding hashicorp/cloudinit versions matching ">= 2.0.0"...
- Finding hashicorp/aws versions matching ">= 6.0.0, >= 6.13.0"...
- Finding latest version of hashicorp/kubernetes...
- Installing hashicorp/cloudinit v2.3.7...
- Installed hashicorp/cloudinit v2.3.7 (signed by HashiCorp)
- Installing hashicorp/aws v6.14.1...
- Installed hashicorp/aws v6.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.38.0...
- Installed hashicorp/kubernetes v2.38.0 (signed by HashiCorp)
- Installing hashicorp/time v0.13.1...
- Installed hashicorp/time v0.13.1 (signed by HashiCorp)
- Installing hashicorp/tls v4.1.0...
- Installed hashicorp/tls v4.1.0 (signed by HashiCorp)
- Installing hashicorp/null v3.2.4...
- Installed hashicorp/null v3.2.4 (signed by HashiCorp)
Terraform has been successfully initialized!
[...]
The terraform init command downloads all of the providers and modules referenced in the configuration. You can now apply it to provision the VPC and the cluster.
Note: This will take 10 to 15 minutes to complete.
$ terraform apply
[...]
Plan: 52 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
[...]
Apply complete! Resources: 52 added, 0 changed, 0 destroyed.
Outputs:
cluster_endpoint = "https://XXXXXXXXXXXXX.gr7.eu-west-1.eks.amazonaws.com"
cluster_name = "eks-circleci-cluster"
region = "eu-west-1"
That’s it: you now have an Amazon EKS cluster fully up and running.
Now you’ll deploy a pod and expose it through a load balancer to ensure that your cluster works as expected.
To authenticate with the cluster, you need kubectl installed, along with either a recent version of the AWS CLI or the aws-iam-authenticator. You can configure kubectl to work with your cluster by running:
$ aws eks update-kubeconfig --region eu-west-1 --name eks-circleci-cluster
Added new context arn:aws:eks:eu-west-1:XXXXXXXXXXXX:cluster/eks-circleci-cluster to /home/user/.kube/config
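A quick way to confirm that authentication works is to list the cluster’s nodes; you should see the single t3.large node from the managed node group (names, ages, and versions will differ):
$ kubectl get nodes
NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-X-XXX.eu-west-1.compute.internal   Ready    <none>   XXm   v1.33.X-eks-XXXXXXX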
Once done, create a new deployment and expose it:
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
$ kubectl expose deployment/nginx --port=80 --type=LoadBalancer
service/nginx exposed
$ kubectl get service nginx
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP                                                       PORT(S)        AGE
nginx   LoadBalancer   10.100.XX.XX   aXXXXXXXXXXXXXXXXXXXXXX-XXXXXXXXXX.eu-west-1.elb.amazonaws.com   80:XXXXX/TCP   2m
Get the EXTERNAL-IP value from the final output; this is the DNS name of the AWS load balancer. It may take a few minutes for the DNS to propagate. When it has, opening that address in a browser greets you with the “Welcome to nginx!” page. Success!
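You can run the same check from the command line. And when you’re done experimenting, clean up to avoid ongoing charges; the load balancer was created by Kubernetes rather than Terraform, so delete the service before destroying the stack:
$ curl -s http://aXXXXXXXXXXXXXXXXXXXXXX-XXXXXXXXXX.eu-west-1.elb.amazonaws.com | grep title
<title>Welcome to nginx!</title>
$ kubectl delete service nginx      # removes the AWS load balancer
$ kubectl delete deployment nginx
$ terraform destroy                 # tears down the cluster and the VPC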
An easier way: the CircleCI aws-eks orb
You’ve learned one method for spinning up an Amazon EKS cluster. However, assembling the configuration and keeping it up to date involves quite a bit of manual work.
Fortunately, the CircleCI aws-eks orb can help. CircleCI orbs contain pre-packaged configuration code that makes it easier to integrate with other developer tools. This is just one of the many orbs available in the orb registry.
The aws-eks orb can automatically spin up, test, and tear down an Amazon EKS cluster. You can use it to create powerful workflows where applications are tested in a clean, fully isolated, temporary EKS cluster.
A CircleCI account is the main prerequisite for getting started with the orb. Sign up with CircleCI and connect a Git repository to your account as a CircleCI project.
In this project, go to the Environment Variables tab under Settings and add the variables the project needs to authenticate with AWS. Be sure that the AWS IAM user has the permissions required to create an EKS cluster and its dependencies, and set the region variable to the region where you want to provision the cluster.
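Assuming you authenticate with static credentials, the orb reads the standard AWS environment variables, so the minimal set looks like this:
AWS_ACCESS_KEY_ID=<your access key ID>
AWS_SECRET_ACCESS_KEY=<your secret access key>
AWS_DEFAULT_REGION=<region to provision the cluster in, e.g. eu-west-1>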
Next, create the .circleci/config.yml file in your Git repository and add this content:
version: 2.1

orbs:
  aws-eks: circleci/aws-eks@3.0.1
  kubernetes: circleci/kubernetes@1.3.1

jobs:
  test-cluster:
    executor: aws-eks/python3
    parameters:
      cluster-name:
        description: |
          Name of the EKS cluster
        type: string
    steps:
      - kubernetes/install-kubectl
      - aws-eks/update-kubeconfig-with-authenticator:
          cluster-name: << parameters.cluster-name >>
      - run:
          command: |
            kubectl get services
          name: Test cluster

workflows:
  deployment:
    jobs:
      - aws-eks/create-cluster:
          cluster-name: my-first-cluster
      - test-cluster:
          cluster-name: my-first-cluster
          requires:
            - aws-eks/create-cluster
      - aws-eks/delete-cluster:
          cluster-name: my-first-cluster
          requires:
            - test-cluster
This file configures a workflow with three jobs:
- Uses the create-cluster command from the aws-eks orb to create a cluster and its dependencies using the eksctl utility.
- Runs a simple test to verify the cluster works as expected.
- Destroys the cluster.
Commit this file to your repository, and CircleCI will automatically start the workflow, which takes 15 to 20 minutes to complete. Of course, you can add any steps you need after cluster creation, such as provisioning resources and running tests.
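For example, assuming the repository is already connected to CircleCI:
$ git add .circleci/config.yml
$ git commit -m "Add EKS cluster lifecycle workflow"
$ git push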
Compared with the Terraform method of creating an Amazon EKS cluster that we discussed earlier, the aws-eks orb drastically simplifies and speeds up the process of managing the lifecycle of an EKS cluster. The complexity of the EKS cluster itself, as well as the configuration of its dependencies (such as the VPC), is completely abstracted away. This is a low-maintenance solution that allows you to focus your efforts on building valuable continuous integration workflows with automated tests.
Next steps
You’ve learned that creating your first Kubernetes cluster doesn’t have to be difficult or scary. Terraform provides an easy way to define the cluster infrastructure, and with a few CLI commands you can provision it.
The CircleCI orb simplifies the process even further. You can reach the same end result without writing your own Terraform code or running any commands.
The best way to learn this is to do it yourself. Start by creating your cluster manually, using Terraform code along with its CLI. Then, try CircleCI to see just how easy and fast it is to create a cluster using the aws-eks orb.