Build and test your first Kubernetes operator with Go, Kubebuilder, and CircleCI
Kubernetes operators extend the Kubernetes API with custom logic, automating tasks like provisioning, configuration, and policy enforcement. Instead of managing these tasks manually or with ad hoc scripts, operators codify your workflows into controllers that run natively inside the cluster.
In this tutorial, you’ll build a simple operator using Go and Kubebuilder, a framework that scaffolds much of the boilerplate so you can focus on core logic. The operator will automatically label every new Kubernetes namespace with a default set of labels, such as team and env.
You’ll also integrate CircleCI into your workflow to automate testing and image builds. This ensures your operator is reliable and production-ready from the first commit.
By the end of this tutorial, you’ll have:
- A working operator that listens for new namespaces and applies labels
- A local test setup powered by envtest
- A CI pipeline that runs go vet, tests your controller, and builds multi-architecture Docker images
Why build a namespace labeling operator?
Namespaces in Kubernetes separate environments, teams, or projects within the same cluster. But in shared clusters (especially in large teams or enterprises) it’s easy for namespaces to become inconsistent or lack essential metadata like team ownership, environment, or cost center.
Manually labeling namespaces is error-prone and often forgotten. A namespace labeling operator solves this by watching for new namespaces and applying a predefined set of labels automatically.
This helps:
- Maintain consistent metadata for observability, cost tracking, and access control
- Enforce organizational standards without relying on humans
- Keep growing clusters organized and auditable
Even though it’s a simple use case, this operator shows how automation can improve consistency and reduce operational toil, which is exactly what operators are designed for.
Prerequisites
This tutorial is written for macOS or other Unix-based systems. If you’re using Windows, you may need to adapt some commands or use a Unix-like environment such as WSL (Windows Subsystem for Linux).
Before you begin, make sure you have the following installed:
- Go (1.24 or later) installed on your local machine
- kubectl (for interacting with Kubernetes)
- Docker installed on your local machine
- Docker Hub account
- Minikube (for running a local Kubernetes cluster). Download and install Minikube if you don’t have it set up already. Follow the Minikube installation guide for your operating system.
- Basic knowledge of Go programming language
- Familiarity with Kubernetes concepts
Install Kubebuilder
Kubebuilder is a toolkit for building Kubernetes APIs using CRDs (Custom Resource Definitions). We’ll use it to scaffold our operator project.
Follow the official Kubebuilder installation guide to install Kubebuilder on your machine.
curl -L -o kubebuilder "https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)"
chmod +x kubebuilder && sudo mv kubebuilder /usr/local/bin/
kubebuilder version
You should see output similar to:
Version: cmd.version{KubeBuilderVersion:"4.6.0", KubernetesVendor:"1.33.0", ...}
Scaffold the operator project
Start by creating a new directory and initializing your Go module:
mkdir namespace-auto-labeler && cd namespace-auto-labeler
go mod init github.com/<your-github-username>/namespace-auto-labeler
kubebuilder init --domain=k8s.operators.dev --repo=github.com/<your-github-username>/namespace-auto-labeler
This creates the basic project structure and configuration files. The --domain flag sets the domain for your custom resources, and --repo specifies the Go module path. Replace <your-github-username> with your GitHub handle.
The generated project layout looks like this:
namespace-auto-labeler/
├── config/
│ ├── default/
│ ├── manager/
│ └── rbac/
├── test/
├── cmd/
│ └── main.go
├── Makefile
├── go.mod
Scaffold a controller for namespaces
You’re not creating a custom resource, just a controller that watches the built-in namespace resource.
kubebuilder create api --group core --version v1 --kind Namespace --namespaced=false
When prompted, enter:
- Create Resource? n (Namespaces already exist in Kubernetes).
- Create Controller? y (you want to scaffold the controller logic).
This command sets up the files needed to manage the Namespace resource. The --namespaced=false flag tells Kubebuilder that the resource is cluster-scoped, which is the case for the built-in Namespace resource.
Kubebuilder expects a certain directory structure for its generated Makefile and deployment commands to work correctly. Even though you are not creating a new Custom Resource Definition, some commands rely on the presence of the config/crd/bases directory.
To ensure compatibility with these commands, create the directory and a placeholder file:
mkdir -p config/crd/bases
touch config/crd/bases/.keep
This step helps maintain the expected project structure and avoids errors during future builds or deployments.
Implement the reconciliation logic
Open internal/controller/namespace_controller.go and replace the Reconcile() function with:
func (r *NamespaceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := log.FromContext(ctx)
// Fetch the Namespace object
var ns corev1.Namespace
if err := r.Get(ctx, req.NamespacedName, &ns); err != nil {
if apierrors.IsNotFound(err) {
return ctrl.Result{}, nil
}
return ctrl.Result{}, err
}
// Define required labels
requiredLabels := map[string]string{
"team": "unknown",
"env": "dev",
}
// Check if labels are missing
needsPatch := false
if ns.Labels == nil {
ns.Labels = map[string]string{}
needsPatch = true
}
for key, val := range requiredLabels {
if _, ok := ns.Labels[key]; !ok {
ns.Labels[key] = val
needsPatch = true
}
}
// Patch the Namespace if needed
if needsPatch {
if err := r.Update(ctx, &ns); err != nil {
log.Error(err, "unable to update namespace with default labels")
return ctrl.Result{}, err
}
log.Info("namespace labeled", "name", ns.Name, "labels", ns.Labels)
} else {
log.Info("namespace already has required labels", "name", ns.Name)
}
return ctrl.Result{}, nil
}
Make sure your imports include:
import (
"context"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
)
This logic ensures every namespace gets the team and env labels if they’re missing.
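For reference, the reconciler is wired to the built-in Namespace type in SetupWithManager, which Kubebuilder scaffolds for you in the same file. A minimal sketch of what it should roughly look like (your generated version may differ slightly, for example by including a .Named() call):
// SetupWithManager registers the reconciler with the manager and tells
// controller-runtime to watch the built-in Namespace resource.
func (r *NamespaceReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.Namespace{}).
		Complete(r)
}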
Write a unit test for the namespace controller
Now that your operator logic is in place, let’s write a unit test to verify that it works as expected.
Kubebuilder projects scaffold a basic test suite for you, but to properly test our namespace labeling logic, you’ll need to customize it slightly.
Your controller tests live in two files under internal/controller/:
- internal/controller/suite_test.go: handles the setup and teardown of a local Kubernetes control plane.
- internal/controller/namespace_controller_test.go: placeholder for your test logic.
You’ll update both files.
Update the test suite setup
Open internal/controller/suite_test.go and update its content:
package controller
import (
"context"
"os"
"path/filepath"
"testing"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/rest"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/envtest"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
)
var (
ctx context.Context
cancel context.CancelFunc
testEnv *envtest.Environment
cfg *rest.Config
k8sClient client.Client
k8sManager ctrl.Manager
scheme = runtime.NewScheme()
)
func TestControllers(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Controller Suite")
}
var _ = BeforeSuite(func() {
ctx, cancel = context.WithCancel(context.TODO())
ctrl.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))
By("bootstrapping test environment")
envBinaryPath := os.Getenv("KUBEBUILDER_ASSETS")
Expect(envBinaryPath).NotTo(BeEmpty(), "KUBEBUILDER_ASSETS must be set before running tests")
os.Setenv("KUBEBUILDER_ASSETS", envBinaryPath)
testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{filepath.Join("..", "config", "crd", "bases")},
ErrorIfCRDPathMissing: false,
}
var err error
cfg, err = testEnv.Start()
Expect(err).NotTo(HaveOccurred())
Expect(cfg).NotTo(BeNil())
Expect(clientgoscheme.AddToScheme(scheme)).To(Succeed())
Expect(corev1.AddToScheme(scheme)).To(Succeed())
k8sManager, err = ctrl.NewManager(cfg, ctrl.Options{
Scheme: scheme,
})
Expect(err).NotTo(HaveOccurred())
k8sClient = k8sManager.GetClient()
Expect(k8sClient).NotTo(BeNil())
})
var _ = AfterSuite(func() {
By("tearing down the test environment")
cancel()
err := testEnv.Stop()
Expect(err).NotTo(HaveOccurred())
})
This sets up everything your test needs, including a fake control plane, scheme registration, and lifecycle hooks.
Update the namespace controller test
Next, open internal/controller/namespace_controller_test.go and implement the test logic:
package controller
import (
"time"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("NamespaceReconciler", func() {
BeforeEach(func() {
// Start the controller inside a goroutine
go func() {
defer GinkgoRecover()
err := (&NamespaceReconciler{
Client: k8sManager.GetClient(),
Scheme: k8sManager.GetScheme(),
}).SetupWithManager(k8sManager)
Expect(err).NotTo(HaveOccurred())
Expect(k8sManager.Start(ctx)).To(Succeed())
}()
})
It("should add default labels to namespaces missing them", func() {
ns := &corev1.Namespace{}
ns.Name = "test-ns"
Expect(k8sClient.Create(ctx, ns)).To(Succeed())
// Defer cleanup
DeferCleanup(func() {
_ = k8sClient.Delete(ctx, ns)
})
Eventually(func() map[string]string {
var updated corev1.Namespace
err := k8sClient.Get(ctx, types.NamespacedName{Name: ns.Name}, &updated)
if err != nil {
return nil
}
return updated.Labels
}, 5*time.Second, 500*time.Millisecond).Should(SatisfyAll(
HaveKeyWithValue("team", "unknown"),
HaveKeyWithValue("env", "dev"),
))
})
})
This test creates a new namespace and checks that your controller adds the team and env labels automatically.
Install envtest binaries
Before running any test, ensure the test environment binaries are installed:
make setup-envtest
This command fetches the Kubernetes control-plane binaries (kube-apiserver, etcd, and kubectl) that envtest uses to run a simulated environment.
Set KUBEBUILDER_ASSETS
Before running your tests, you must ensure the KUBEBUILDER_ASSETS environment variable is set. This variable tells the test suite where to find the Kubernetes binaries (etcd, kube-apiserver, and kubectl) used by the envtest environment.
After running make setup-envtest, export the environment variable:
export KUBEBUILDER_ASSETS=$(./bin/setup-envtest use 1.33.0 --os $(go env GOOS) --arch $(go env GOARCH) -p path)
Note: The 1.33.0 in the command refers to the Kubernetes version for the control plane binaries used by envtest.
You can change this to match the version of Kubernetes you use in your development or production clusters.
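If you’re not sure which control-plane versions are available, setup-envtest can list them. This assumes the binary was installed to ./bin by make setup-envtest, as in the export command above:
./bin/setup-envtest list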
If this is not set, tests will fail with an error like this:
[FAILED] KUBEBUILDER_ASSETS must be set before running tests
Run the tests
Once your tests are in place, you can run them locally using the provided Makefile targets.
Run the test suite
To run the entire test suite, simply run:
make test
This will run all tests in your project using the simulated Kubernetes environment. The first run may take longer as dependencies are downloaded and binaries are set up.
Here’s the output:
=== RUN TestControllers
Running Suite: Controller Suite - path/to/namespace-auto-labeler/internal/controller
=====================================================================================================================
Random Seed: 1752238926
Will run 1 of 1 specs
•
Ran 1 of 1 Specs in 14.481 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestControllers (14.49s)
PASS
coverage: 69.6% of statements
ok github.com/0000/namespace-auto-labeler/internal/controller 14.617s coverage: 69.6% of statements
github.com/0000/namespace-auto-labeler/test/utils coverage: 0.0% of statements
Run tests directly
If you’re iterating quickly and already have the envtest setup done, you can run your controller test directly:
go test ./internal/controller -v
Your output shows that your tests passed:
=== RUN TestControllers
Running Suite: Controller Suite - /Users/yemiwebby/tutorial/circleci/namespace-auto-labeler/internal/controller
===============================================================================================================
Random Seed: 1751703309
Will run 1 of 1 specs
•
Ran 1 of 1 Specs in 4.030 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
--- PASS: TestControllers (4.03s)
PASS
ok github.com/0000/namespace-auto-labeler/internal/controller 4.528s
Customize the Makefile
This step is optional. If you want more control, you can add a dedicated target to your Makefile:
.PHONY: test
test: manifests generate fmt vet setup-envtest ## Run controller tests only.
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) \
--os $$(go env GOOS) --arch $$(go env GOARCH) \
--bin-dir $(LOCALBIN) -p path)" \
go test ./internal/controller -v -coverprofile cover.out
This ensures your tests always run in the correct simulated environment and generates a coverage report.
Add RBAC permissions
Kubernetes uses Role-Based Access Control (RBAC) to manage what actions different users and controllers can perform within the cluster. For your operator to watch and update namespaces, it needs explicit permissions. Without these, the controller will not be able to read or modify namespace resources, and your reconciliation logic will fail.
Open config/rbac/role.yaml and ensure it includes permissions to watch and update namespaces:
# permissions to manage namespaces
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list", "watch", "update", "patch"]
Without these permissions, your operator won’t be able to update namespaces.
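Instead of editing role.yaml by hand, you can also rely on the RBAC markers in internal/controller/namespace_controller.go and regenerate the role with make manifests. A sketch of the marker you’d need; your scaffolded file may already contain a similar line for the core group:
// +kubebuilder:rbac:groups="",resources=namespaces,verbs=get;list;watch;update;patch
Place it directly above the Reconcile function, then run make manifests to update config/rbac/role.yaml.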
Register the controller
Once your controller logic and permissions are in place, you need to register your controller with the manager. This step ensures that your NamespaceReconciler is started by the controller manager and begins watching for Namespace events in the cluster.
In cmd/main.go, confirm you have:
if err := (&controller.NamespaceReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "Namespace")
os.Exit(1)
}
This registers your NamespaceReconciler with the controller manager, allowing it to handle namespace events.
Containerize the operator
To deploy your operator to a Kubernetes cluster, you need to package it as a Docker image. This allows Kubernetes to run your operator as a containerized application.
Set your image name
export IMG=docker.io/<your-dockerhub-username>/namespace-auto-labeler:latest
Kubebuilder’s Makefile expects the IMG environment variable to specify the full image name (including registry, repository, and tag) for your operator. By exporting this variable, you ensure that all build and deployment commands use the correct image reference. This makes it easy to push your image to Docker Hub and later pull it from any Kubernetes cluster.
Make sure you are logged in to Docker Hub (docker login), then build and push your operator image:
make docker-buildx
The make docker-buildx command leverages Docker Buildx to build and push your operator image for multiple architectures (such as amd64 and arm64). This is important for compatibility, as Kubernetes clusters may run on different hardware platforms—including x86 servers and ARM-based nodes (like Raspberry Pi or Apple Silicon).
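If your scaffolded Makefile defines a PLATFORMS variable (recent Kubebuilder versions do), you can narrow the set of target architectures instead of building the full default list. A sketch, assuming that variable is present:
make docker-buildx IMG=docker.io/<your-dockerhub-username>/namespace-auto-labeler:latest PLATFORMS=linux/amd64,linux/arm64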
Verify your image
docker manifest inspect docker.io/<your-dockerhub-username>/namespace-auto-labeler:latest
Verifying your image with docker manifest inspect ensures that your image supports both amd64 and arm64 architectures. This step helps catch build or push issues early and guarantees your operator will run reliably on any supported Kubernetes platform.
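If you have jq installed, you can extract just the architectures from the manifest to confirm both variants are present:
docker manifest inspect docker.io/<your-dockerhub-username>/namespace-auto-labeler:latest | jq -r '.manifests[].platform.architecture'
The output should include amd64 and arm64.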
Test your operator on Minikube
Now that your operator image is ready, you can test it in a real Kubernetes environment using Minikube.
Start Minikube
If you haven’t started Minikube yet, run this command from the root of your project directory:
minikube start
This command starts a local Kubernetes cluster using Minikube. It sets up a single-node cluster that you can use for development and testing.
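Before deploying, you can confirm the cluster is up:
minikube status
kubectl get nodes
The node should report a Ready status.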
If you prefer to avoid pushing to Docker Hub while testing locally, build the image directly into Minikube’s Docker daemon. Note that docker-buildx pushes multi-architecture images to a registry, so use the single-architecture docker-build target here:
eval $(minikube docker-env)
make docker-build IMG=namespace-auto-labeler:latest
Because Kubernetes defaults to pulling :latest images, you may also need to use a different tag or set the manager Deployment’s imagePullPolicy to IfNotPresent for this local workflow.
Deploy the operator
To deploy your operator to the cluster, simply run:
make deploy IMG=docker.io/<your-dockerhub-username>/namespace-auto-labeler:latest
This command uses the Makefile generated by Kubebuilder to automate the deployment process. It creates the necessary namespace for your operator, sets up the required RBAC permissions, and launches your controller as a Deployment using the image you specified with the IMG variable. With this single command, your operator will be running in your Kubernetes cluster, ready to watch for new namespaces and automatically apply labels as they are created.
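You can confirm the controller is running before testing it. Kubebuilder’s default kustomize config deploys the manager into a <project>-system namespace with a <project>-controller-manager Deployment, so with the defaults the commands would look like this (adjust the names if you changed them):
kubectl get pods -n namespace-auto-labeler-system
kubectl logs deployment/namespace-auto-labeler-controller-manager -n namespace-auto-labeler-system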
Test namespace labeling
Once your operator is running:
kubectl create namespace test-ns
Give it a few seconds, then check the labels:
kubectl get ns test-ns --show-labels
Your output:
NAME STATUS AGE LABELS
test-ns Active 14s env=dev,kubernetes.io/metadata.name=test-ns,team=unknown
Clean up
To clean up your Minikube cluster and remove the operator, run:
make undeploy
kubectl delete namespace test-ns
Automate with CircleCI
In this section, you’ll set up a CircleCI pipeline that automatically lints your code, runs your tests, and builds your operator image on every commit or pull request.
Create .circleci/config.yml and define your CircleCI configuration:
version: 2.1
executors:
go-executor:
docker:
- image: cimg/go:1.24.4
jobs:
lint-test:
executor: go-executor
steps:
- checkout
- run:
name: Install dependencies
command: go mod tidy
- run:
name: Lint
command: go vet ./...
- run:
name: Install envtest tools and download control-plane binaries
command: |
go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
setup-envtest use 1.33.0 --os linux --arch amd64 --bin-dir ./testbin
- run:
name: Run Integration Tests
command: |
export KUBEBUILDER_ASSETS=$(pwd)/testbin/k8s/1.33.0-linux-amd64
go test ./internal/controller -v
docker-build:
docker:
- image: cimg/base:stable
steps:
- checkout
- setup_remote_docker
- run:
name: Build and Push Docker Image
command: |
echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USER" --password-stdin
make docker-buildx IMG=docker.io/<your-dockerhub-username>/namespace-auto-labeler:latest
workflows:
build-and-deploy:
jobs:
- lint-test
- docker-build:
requires:
- lint-test
The lint-test job handles the basics: it checks out your code, installs dependencies, runs go vet for static analysis, sets up the envtest environment, and runs your integration tests in a simulated Kubernetes control plane. This ensures your controller behaves correctly before it’s ever built into an image.
The docker-build job logs into Docker Hub and builds a multi-architecture image using make docker-buildx. It only runs if the lint-test job passes, ensuring your CI pipeline only builds and pushes tested, verified code.
Don’t forget to replace <your-dockerhub-username> with your Docker Hub username.
Add your Docker Hub credentials as environment variables in the CircleCI project settings.
Save your changes and push your code to GitHub.
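If you haven’t committed yet, a typical sequence looks like this (assuming your default branch is main):
git add .
git commit -m "Add namespace auto-labeler operator and CircleCI config"
git push origin main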
Set up the project in CircleCI
Log in to CircleCI and select your namespace-auto-labeler repository from the project list.
Choose the branch housing your .circleci/config.yml file and click Set Up Project. This will trigger your first build automatically.
The build will fail on the first run, as you still need to add the required environment variables.
Create environment variables in CircleCI
From your CircleCI dashboard, go to your project settings and add the following environment variables:
- DOCKERHUB_USER: Your Docker Hub username
- DOCKERHUB_PASS: Your Docker Hub password or access token
After adding these variables, re-run the pipeline. It should now pass successfully.
With CI in place, every push will trigger linting, tests, and a Docker build. This feedback loop helps you catch regressions early and ensures your operator is always ready for deployment.
Conclusion
You’ve built a practical Kubernetes operator, written and run tests, containerized your code, and automated your workflow with CircleCI. This foundation is production-ready for internal tools and serves as a strong starting point for building more advanced operators. As you continue your operator journey, try exploring advanced reconciliation patterns like these:
- Using finalizers to handle cleanup before deletion
- Managing state and controller logic effectively
- Integrating external APIs or services to extend functionality
- Using admission webhooks for real-time validation or mutation of resources
- Defining and managing custom resources to model complex domain-specific workflows
With these next steps, you’ll be well-equipped to automate and scale your Kubernetes operations even further. You can find the full code in this GitHub repository.