Inventory
I bought three Raspberry Pi (RPI) 4 8GB single-board computers (SBCs) to play with Kubernetes (and, later, services for my home).
Over the last few years, I have rebuilt these systems and installed different deployments of Kubernetes.
The most straightforward deployment I have found is via k3s, though its default installs were causing me some failures. Note: this is not the fault of k3s, but rather of what I was trying to build and consume.
I started by doing everything by hand, which was a disaster. Awful would be the word.
For the example below:
- I will be turning off `traefik` as the ingress controller upon installation and will include the commands to install `ingress-nginx` on the cluster once it is operational
- The generated Kubernetes configuration will be readable by anyone logged into the RPI (not all that secure, but this is a home lab, right?)
- I will add an example install for an `nfs-client` provisioner in case you have a NAS at home and would like something more resilient than the micro-SD card, as they notoriously fail under heavy input/output
Host Environment
Many options are available for what to install on an RPI as your operating system environment (OSE).
I am partial to the Debian ecosystem. This includes distributions like Ubuntu. In my lab environment, I use Ubuntu LTS, but for my home, I stick with Debian. Don’t judge me.
To install your choice of OSE, you can use the Raspberry Pi Imager. I won’t waste your time explaining how to get this done; others explain it much better.
If you are going to run a Kubernetes cluster in your home network, I would suggest a couple of things:
- I recommend the use of ethernet instead of WiFi
- More predictable network performance
- Less jitter or packet loss
- If possible, power your nodes with Power over Ethernet (PoE), as it simplifies cabling and reduces mess. This has an added cost ($20-25) but is so worth it.
- Instead of having power and an ethernet cable, you can have just the one ethernet cable
- Use static IP addresses for your nodes
- I feel this is required to be successful
I am using Debian 11 (Bullseye) on my hosts.
Prepare the Hosts
The default username for most installs is `pi`; I’ll use this in my examples.
Whether you have 1 or 7 RPIs, you should endeavor to set them up the same way. Here is what I do.
I will use three RPI nodes for the rest of this article. The network will be `172.30.1.0/24`, and the username will be `pi`, as stated above.
- `pi-1`
  - IP address: `172.30.1.250`
- `pi-2`
  - IP address: `172.30.1.251`
- `pi-3`
  - IP address: `172.30.1.252`
For this example setup, I will be provisioning 3 RPIs using the following information:
- Passwordless `ssh` login
  - I add my SSH public key to `~/.ssh/authorized_keys` so I can log in without a password
  - Example command line: `ssh -i ~/.ssh/private_key pi@172.30.1.250`
- Passwordless `sudo`
  - Run `sudo visudo`, and if you are prompted for a password, then do the following:
    - The original line looks like `%sudo ALL=(ALL:ALL) ALL`
    - The new line looks like `%sudo ALL=(ALL:ALL) NOPASSWD:ALL`
    - Save it
- Verify your static IP addressing
  - Different distributions use different methods; all that matters is that each node has a static IP address
  - You can do this via your DHCP server using the MAC address of each host if that is easier for you
Install k3s
You can choose which node will be your first install. I will select `pi-1`, since it is the first.
Choose an install token you will use to deploy your first and subsequent nodes.
You can generate a random hex string and use that (which is what I have done).
ps ax | shasum | cut -c1-26
For the example installs, I will use `78e994fbec69110bd251f4cf68` as my token. (Don’t worry, this is not the token I used when I did my deployment.)
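If you would rather not lean on `ps` output for entropy, `openssl` (present on most Debian installs) can produce an equivalent 26-character hex token; a small sketch:

```shell
# 13 random bytes hex-encode to exactly 26 characters,
# matching the token length used in this article.
K3S_TOKEN=$(openssl rand -hex 13)
echo "$K3S_TOKEN"
```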
Install your first node
Log into your RPI:
ssh -i ~/.ssh/private_key pi@172.30.1.250
Then run the following command:
curl -sfL https://get.k3s.io | K3S_TOKEN=78e994fbec69110bd251f4cf68 INSTALL_K3S_VERSION=v1.26.7+k3s1 sh -s - server --disable traefik --write-kubeconfig-mode "0644" --cluster-init
This will spit out many lines from the installation, and you should not see any errors. If you do, please stop here, and work on resolving things before you try to install it again.
If you need to do a reset on your host, you can run the following:
sudo k3s-uninstall.sh
And it will reset things and uninstall `k3s` and its components.
The `k3s` install includes the `kubectl` utility, which you can run from any node once installed. It is best to grab the Kubernetes configuration so you can use other tooling or access the cluster remotely from a workstation.
You need the contents of this file: `/etc/rancher/k3s/k3s.yaml`, and a super easy way to copy it to your local machine is to run:
mkdir -p ~/.kube
scp -i ~/.ssh/private_key pi@172.30.1.250:/etc/rancher/k3s/k3s.yaml ~/.kube/config
You will need to update the `server` section of the YAML if you’re going to operate remotely. Just replace the line:
- `https://127.0.0.1:6443`
with
- `https://172.30.1.250:6443`
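A one-line `sed` can make that substitution for you. This sketch demonstrates it on a scratch file; run the same `sed` against `~/.kube/config` for the real thing:

```shell
# Swap the localhost endpoint for the first node's address.
printf 'server: https://127.0.0.1:6443\n' > /tmp/k3s-config-demo
sed -i 's|https://127.0.0.1:6443|https://172.30.1.250:6443|' /tmp/k3s-config-demo
cat /tmp/k3s-config-demo
```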
Either way, run this to see the status of your first node:
kubectl get nodes
Your output will be similar to the following:
NAME   STATUS   ROLES                       AGE    VERSION
pi-1   Ready    control-plane,etcd,master   3d3h   v1.26.7+k3s1
Install subsequent nodes
You have made it past the first hurdle. Congratulations!
I suggest you run the following command and ensure all pods are ready before installing your subsequent nodes.
kubectl get pods -A
This is also as easy as your first node install:
ssh -i ~/.ssh/private_key pi@172.30.1.251
Then run the following command:
curl -sfL https://get.k3s.io | K3S_TOKEN=78e994fbec69110bd251f4cf68 INSTALL_K3S_VERSION=v1.26.7+k3s1 sh -s - server --disable traefik --write-kubeconfig-mode "0644" --server https://172.30.1.250:6443
As before, it will emit status updates and, if successful, will be joined as part of your new Kubernetes cluster.
Repeat the above for the remaining nodes; in my example, just `pi-3` is left.
kubectl get nodes
Your output will be similar to the following:
NAME   STATUS   ROLES                       AGE    VERSION
pi-1   Ready    control-plane,etcd,master   3d3h   v1.26.7+k3s1
pi-2   Ready    control-plane,etcd,master   3d3h   v1.26.7+k3s1
pi-3   Ready    control-plane,etcd,master   3d3h   v1.26.7+k3s1
Install Kubernetes Services
First, let’s get `ingress-nginx` installed. This is used to expose services hosted in Kubernetes to clients outside the cluster.
I am using Helm as the package manager. It’s just easy. Every system is different, so go through the installation documentation to install it for your workstation.
Since I want `ingress-nginx` to be the default ingress controller, create a file called `default-nginx.yaml` with the following contents:
## nginx configuration
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/index.md
commonLabels: {}
controller:
  watchIngressWithoutClass: true
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace --values default-nginx.yaml
This will take a minute or two. Run the following command until ‘READY’ is ‘1/1’:
kubectl get deploy/ingress-nginx-controller -n ingress-nginx
And you should see output similar to the following once it is active:
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx-controller   1/1     1            1           3d3h
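If you would rather not poll by hand, `kubectl wait` can block until the deployment reports available; a sketch, assuming the deployment name shown above:

```shell
kubectl wait --namespace ingress-nginx --for=condition=Available deployment/ingress-nginx-controller --timeout=300s
```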
Next, let’s get `cert-manager` installed so you can create secure endpoints for your services.
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true --version v1.12.3 --set ingressShim.defaultIssuerName=letsencrypt-prod --set ingressShim.defaultIssuerKind=ClusterIssuer --set ingressShim.defaultIssuerGroup=cert-manager.io
This installs quickly.
To check the deployment, you can use `kubectl` until you see something similar:
kubectl get deploy/cert-manager -n cert-manager
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
cert-manager   1/1     1            1           3d4h
Create a config update file, `prod-issuer.yaml`, so that Let’s Encrypt is the default issuer. You need to replace `YOUR-EMAIL-ADDRESS` with your email address.
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    email: YOUR-EMAIL-ADDRESS
    privateKeySecretRef:
      name: letsencrypt-prod
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
      - http01:
          ingress:
            class: nginx
And then apply:
kubectl apply -f prod-issuer.yaml
You can check:
kubectl get clusterissuer
And see output like the following:
NAME               READY   AGE
letsencrypt-prod   True    3d4h
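With both pieces in place, an Ingress can request a certificate automatically via the `cert-manager.io/cluster-issuer` annotation. A sketch, where the hostname `app.example.com` and the `my-app` Service are placeholders for your own:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: my-app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```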
You should now have a working Kubernetes cluster.
What will you deploy?
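As promised, here is an example install for an NFS-backed provisioner, in case you have that NAS. This is a sketch using the `nfs-subdir-external-provisioner` chart; the NAS address `172.30.1.10` and export path `/export/k8s` are placeholders you must replace with your own:

```shell
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm upgrade --install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --namespace nfs-provisioner --create-namespace --set nfs.server=172.30.1.10 --set nfs.path=/export/k8s
```

Each node also needs the NFS client tools (`sudo apt install nfs-common` on Debian) before pods can mount the resulting volumes, and your PersistentVolumeClaims can then target the new StorageClass (`nfs-client` by default).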
Uninstall k3s
Well, you had fun, and now you want to revert. It is straightforward and does not take long.
Use `ssh` to log into each node and run the following:
sudo k3s-uninstall.sh
It may take a few minutes. When done, just issue `sudo reboot`, and the host(s) will reboot, clean of any `k3s` elements.