I decided to dig into Kubernetes, again

And I don’t know why I waited so long.

Kubernetes has been around for a while, and I have seen it deployed increasingly in corporate environments. I have played with it in my home and data center lab environments over the past few years but never got things where I wanted them, or operating as expected. Earlier this year, I worked with a customer, helping them get some open-source applications installed, operational, and secure in their varied Kubernetes clusters. This required a deep dive: learning and working with Helm, building images based on their gold-image standards, and making everything repeatable across their environments.

I started by doing everything by hand, and that was a disaster. There are so many moving components to get right, and I clearly could not.

In my last round, I had done a k3s setup in both environments. Things worked differently than expected, and I could not get a working ingress-nginx deployment operational in either environment.

My home environment consists of three Raspberry Pi 4 Model B 8GB SBCs using local storage and NFS from my local NAS. It runs k3s.

My previous data center lab consisted of three Ubuntu 22.04 LTS VMs with 10GB of RAM each, utilizing both local storage and NFS from my more enterprise-like storage infrastructure. It had also been built via k3s.

Last week, I decided to dive in again, but differently this time. I tore down the old infrastructure I had built; nothing was running on it anyway, and I wanted to be happier with how things were set up.

I found the kubespray (repo), and as I have experience with Ansible, I decided to give it a go. I did throw more resources at things, but mainly to separate the control-plane/etcd nodes from the worker nodes. This new Kubernetes cluster has three control-plane/etcd nodes (4GB RAM, one vCPU) and four worker nodes (10GB RAM, three vCPUs), utilizing local and NFS storage. I wrote up three posts after building some example configurations.
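Kubespray drives the whole build from an Ansible inventory, so the control-plane/etcd versus worker split described above comes down to which hosts land in which inventory groups. A minimal sketch of such an inventory follows; the hostnames and IP addresses are placeholders, not my actual lab values.

```ini
# inventory/mycluster/inventory.ini -- hypothetical hosts/IPs
[kube_control_plane]
cp1 ansible_host=192.168.1.11
cp2 ansible_host=192.168.1.12
cp3 ansible_host=192.168.1.13

# etcd colocated on the control-plane nodes
[etcd]
cp1
cp2
cp3

[kube_node]
worker1 ansible_host=192.168.1.21
worker2 ansible_host=192.168.1.22
worker3 ansible_host=192.168.1.23
worker4 ansible_host=192.168.1.24

[k8s_cluster:children]
kube_control_plane
kube_node
```

From there, the cluster is built by pointing kubespray's `cluster.yml` playbook at the inventory, e.g. `ansible-playbook -i inventory/mycluster/inventory.ini --become cluster.yml`.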

It works! So far. I’ll break it soon, I bet, as it is just what I do.

The data center lab environment is built using Terraform, and I can repeatedly tear it down and rebuild it as needed. However, this only creates and manages the virtual hardware resources; I have yet to integrate the Ansible work from kubespray to do it all on the fly.
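The tear-down-and-rebuild workflow above is just `terraform destroy` followed by `terraform apply` against VM resource definitions. As a rough illustration only, here is what the worker-node portion might look like using the community dmacvicar/libvirt provider; my lab may well use a different hypervisor and provider, and the names and sizes here simply mirror the cluster layout described earlier.

```hcl
# Hypothetical sketch -- provider choice and names are assumptions.
terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

# Four worker VMs: 10GB RAM and three vCPUs each, matching the
# worker-node sizing described in the post.
resource "libvirt_domain" "worker" {
  count  = 4
  name   = "k8s-worker-${count.index + 1}"
  memory = 10240
  vcpu   = 3
}
```

Because the VMs are declared as `count`-ed resources, `terraform destroy && terraform apply` reproduces the same virtual hardware every time, which is what makes the rebuild repeatable.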

I also wrote an article for the home lab; you can read more here.