Deploying a Kubernetes Cluster with Kubeadm

by Alex Bartley Nees


This blog post describes my experience deploying a Kubernetes cluster from scratch with kubeadm. I did this partly as a way to learn Kubernetes, and partly because I wanted an easy way to deploy my blog and personal projects. I went with a single-node cluster to keep things simple while I learnt, and because that was sufficient for a blog and a few personal projects.

What is Kubernetes?

Kubernetes is an open-source container orchestrator: put simply, it manages your containers for you. We can easily spin up containers with Docker, but that isn't enough for a production system. What happens if a container suddenly stops working? How do we handle health monitoring? How do we deploy new versions of our application with zero downtime? These are the kinds of problems Kubernetes solves, along with a whole lot more. Kubernetes is usually abbreviated to K8s.

Typically we interact with K8s from the command line with a tool called kubectl. Applications and services are defined as YAML files and are usually applied through CI/CD pipelines; Kubernetes involves writing a lot of YAML! Kubernetes is declarative: we tell it what we want our applications to look like, and it continuously works to match the running environment to our desired state. For example, if we want 5 instances of our application running and Kubernetes only sees 3, it will automatically spin up another 2.
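To make the declarative model concrete, here is a minimal Deployment manifest (a sketch; the names and image are placeholders) that declares a desired state of five replicas:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5                 # desired state: five running instances
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # placeholder image

If a pod crashes, the Deployment controller notices that the actual state (4 replicas) no longer matches the desired state (5) and starts a replacement.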

What is Kubeadm?

Kubeadm is a tool that helps us create a minimum viable Kubernetes cluster that conforms to best practices. It is intentionally minimal, providing only core cluster bootstrapping, so we need to provide and configure networking, a container runtime, and storage ourselves. I decided to use kubeadm because this hands-on approach made it a great learning tool.

Using Ansible

I wanted to deploy my blog and some personal projects to a VPS (virtual private server) running a single-node K8s cluster. Setting up the cluster involves a lot of steps which would be time-consuming to remember and configure manually, so I used Ansible to automate configuring the VPS and bootstrapping the cluster. This documents every step and makes the process repeatable, while being easier to maintain and manage than shell scripts. Ansible uses what it calls "playbooks" - YAML files that declare configurations and define tasks to be executed. This let me break the cluster setup into clear, logical stages: infrastructure preparation, Kubernetes installation, networking setup, and add-on deployments. I could then run the playbooks against my VPS and have everything up and running.
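To give a flavour of what a playbook task looks like, here is a sketch (the module and values are illustrative, not my exact playbook) that opens the Kubernetes API server port, one of the firewall steps covered in the next section:

- name: Open the Kubernetes API server port
  ansible.posix.firewalld:
    port: 6443/tcp
    permanent: true
    immediate: true
    state: enabled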

Prerequisites

The first stage of the process was setting up all the prerequisites needed for the cluster. I decided to use Rocky Linux on my VPS as I'm a big fan of Fedora and wanted enterprise-grade stability. The first step was configuring the firewall to open the required ports for Kubernetes:
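Something like the following, assuming firewalld and the port list from the official kubeadm documentation for a control plane node (8472/udp is Flannel's VXLAN port, needed once we get to networking):

firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
firewall-cmd --permanent --add-port=10257/tcp       # kube-controller-manager
firewall-cmd --permanent --add-port=10259/tcp       # kube-scheduler
firewall-cmd --permanent --add-port=8472/udp        # Flannel VXLAN overlay
firewall-cmd --reload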

The next step was installing Kubernetes components and setting up the container runtime. First, I added the Kubernetes repository to DNF (Rocky Linux’s package manager) to install three essential tools:
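Those three tools are kubelet (the agent that runs on every node), kubeadm (the bootstrapping tool itself), and kubectl (the CLI for talking to the cluster). A sketch of the repository setup and install, following the official docs - the v1.30 in the URLs is just an example version:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet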

For the container runtime, I installed:
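A sketch of one common approach - containerd from Docker's RPM repository, reconfigured to use the systemd cgroup driver that kubeadm expects on systemd-based distros like Rocky Linux:

sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y containerd.io

# Generate a default config and switch the cgroup driver to systemd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl enable --now containerd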

An important step was making sure that specific kernel parameters were configured for networking to work correctly. I needed to enable:
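These are the standard settings from the kubeadm installation docs: the overlay and br_netfilter kernel modules, visibility of bridged traffic to iptables, and IP forwarding:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system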

Without these configurations, pod-to-pod communication and network policies wouldn’t work properly.

Initialising the Control Plane

After installing the prerequisites, the next step was initialising the Kubernetes control plane. The control plane manages the overall state of the cluster, and when we run kubeadm init, it sets up these components:
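Namely the kube-apiserver (the front door to the cluster), etcd (the key-value store holding cluster state), the kube-scheduler (which assigns pods to nodes), and the kube-controller-manager (which runs the control loops). A sketch of the init step, passing Flannel's default pod network CIDR, followed by the kubeconfig setup that kubeadm prints after a successful init:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config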

In a production environment, you would typically have multiple control plane nodes for high availability. However, since this is a single-node cluster for running my blog and personal projects, one control plane node is sufficient.

By default, Kubernetes won’t run regular workloads (like your applications) on control plane nodes. This is a safety feature - in a multi-node cluster, you want to keep your control plane dedicated to managing the cluster rather than running applications.

However, since I’m running a single-node cluster, I needed to change this behavior. If I didn’t, I wouldn’t be able to run my blog or other applications because there would be no worker nodes to run them on! This is done by removing what Kubernetes calls a “taint” from the control plane node:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-

Kubernetes Networking

In a Kubernetes cluster, pods need to communicate with each other. However, Kubernetes doesn't handle pod networking by itself; we need to install a Container Network Interface (CNI) plugin. For this, I decided on Flannel: it's simple, lightweight, and well suited to a small single-node cluster.

To install Flannel on our cluster, we download the latest Flannel manifest from GitHub and apply it with kubectl.
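Something like this, assuming the latest-release manifest URL from the Flannel README:

curl -fsSL -o /tmp/kube-flannel.yml \
  https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
kubectl apply -f /tmp/kube-flannel.yml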

Installing Helm

To make deploying applications to the cluster easier, I installed Helm, a package manager for Kubernetes. Helm uses "charts" - packages of pre-configured Kubernetes resources - and this is what I used to package my blog and personal projects.

The installation process involves:
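One common approach, and the kind of thing an Ansible task would automate here, is the official installer script:

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version   # confirm the install worked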

This means that instead of managing Kubernetes YAML files manually, we can use Helm charts that handle the configuration and make deploying complex applications easier.

Installing NGINX Ingress Controller

To allow external traffic into our cluster, we need an Ingress Controller. I chose the NGINX Ingress Controller, which acts as a reverse proxy and load balancer. This lets us expose applications to the internet and route traffic to the right services. For example, with the ingress controller I can route traffic to my portfolio and blog at alexbartleynees.com, as well as to a personal project on a subdomain.

I installed it using Helm with some specific configurations:
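A sketch of what that install can look like on a single-node cluster with no cloud load balancer in front of it - these particular values (host networking via a DaemonSet) are an assumption, not necessarily the exact flags I used:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.kind=DaemonSet \
  --set controller.hostNetwork=true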

TLS Certificates with cert-manager

To secure traffic to our cluster with HTTPS, we need TLS certificates. I installed cert-manager to automate certificate management. Cert-manager can automatically obtain and renew certificates from Let’s Encrypt.
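A sketch of the install via the Jetstack chart:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

Cert-manager then needs an issuer that tells it how to talk to Let's Encrypt; a typical ClusterIssuer looks like this (the name and email are placeholders):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com          # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx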

I also installed Reflector alongside cert-manager. Certificates live in a single namespace, but I have multiple namespaces in my cluster for my blog and different projects; when cert-manager obtains a certificate, Reflector can automatically copy the resulting secret into the other namespaces where it's needed.
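The usual pattern, per the emberstack Reflector docs, is to annotate the certificate's secret so Reflector knows to mirror it - a sketch, with placeholder namespace names:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: site-tls
  namespace: cert-manager
spec:
  secretName: site-tls
  dnsNames:
    - alexbartleynees.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  secretTemplate:
    annotations:
      reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
      reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "blog,projects"  # placeholders
      reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"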

This means that once configured, cert-manager will automatically handle all our HTTPS certificates, and Reflector ensures they’re available wherever needed in our cluster.

Storage with OpenEBS

To provide persistent storage for our applications, we need a storage solution. I chose OpenEBS because it can provide storage directly from the node’s local disk, which works well for a single-node setup.

I installed OpenEBS using Helm and configured it to use the local-hostpath storage class. This means when an application needs storage, OpenEBS will automatically create it using the node’s local disk space. I set this as the default storage class so any application requesting storage will automatically use it.
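A sketch of that setup - the chart repository and the openebs-hostpath class name follow the OpenEBS docs, and making it the default is a one-line patch:

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace

# Mark the hostpath class as the cluster default
kubectl patch storageclass openebs-hostpath \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'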

I needed this for running an instance of PostgreSQL on my cluster.

Deploying Applications with Helm

After setting up all the cluster components, I used Helm to deploy my applications. I store my Helm charts on Docker Hub as OCI artifacts, which makes it easy to version and distribute them. Each application (like my blog) has its own Helm chart that defines how it should run in Kubernetes. The deployment process:
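A sketch using Helm's OCI registry support; the user, chart name, and paths are placeholders:

# Package the chart and push it to Docker Hub (one-time registry login first)
helm registry login registry-1.docker.io
helm package ./charts/blog
helm push blog-0.1.0.tgz oci://registry-1.docker.io/<dockerhub-user>

# Install or upgrade the release on the cluster straight from the registry
helm upgrade --install blog oci://registry-1.docker.io/<dockerhub-user>/blog --version 0.1.0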

This made it simple to manage deployments of my blog and other projects to the cluster.

Continuous Deployment

To make deploying to my cluster easy, I use GitHub Actions. When I push changes to the main branch, GitHub Actions automatically:
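A trimmed-down sketch of what such a workflow can look like - the secret names, image, and chart path are placeholders, not my exact pipeline:

name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin

      - name: Build and push the image
        run: |
          docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/blog:${{ github.sha }} .
          docker push ${{ secrets.DOCKERHUB_USERNAME }}/blog:${{ github.sha }}

      - name: Deploy with Helm
        run: |
          echo "${{ secrets.KUBECONFIG }}" > kubeconfig
          helm upgrade --install blog ./charts/blog \
            --kubeconfig kubeconfig \
            --set image.tag=${{ github.sha }}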

This means whenever I make changes to my code, they’re automatically deployed to my cluster without any manual steps. The workflow uses secrets stored in GitHub to securely access Docker Hub and my Kubernetes cluster.

Conclusion

Setting up a Kubernetes cluster from scratch with kubeadm was a great learning experience. I now have a better understanding of how Kubernetes works under the hood - from container networking to storage and ingress configuration. While it took more work than using a managed service, the knowledge gained was invaluable. The cluster is now successfully running my blog and several personal projects, complete with ingress routing, automated HTTPS certificates, persistent storage, and continuous deployment.

Using Ansible to automate the setup means I can easily recreate the cluster if needed, and the experience has made me much more confident in working with Kubernetes.

All the Ansible playbooks used to set up this cluster are available in my GitHub repository: repo