Kubernetes VMware vSphere Infrastructure – Phase 3

Preamble

What is Kubespray? (repo)

Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks.

I found Kubespray recently and thought: well, I know Ansible, and I am learning Kubernetes, so can this help me on my journey?

The answer? Yes

This is a follow-up post to phase 2, where I shared example Terraform HCL files and scripts for deployment of the infrastructure.

BTW: I am using v2.22.1 from the repository.

Requirements

Based on my example Terraform deployment, I will use the IPv4 side of things for the examples.

You will need to collect the IP addresses of your hosts to fill in the values for your Ansible Playbook inventory.

You also need the username and path to your SSH private key for passwordless access to your hosts.

You must update these items for your configuration, network setup, etc.
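If your deployed VMs accept password logins but do not have your key on them yet, ssh-copy-id is the usual shortcut. The user name, key path, and IP below are placeholders matching the examples later in this post:

ssh-copy-id -i ~/.ssh/id_rsa.pub linux-user@192.168.7.71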

You need to clone the repository and check out the release tag:

git clone https://github.com/kubernetes-sigs/kubespray.git --branch=v2.22.1
cd kubespray
cp -rp inventory/sample inventory/example

WARNING:
You should also read the README.md file closely, particularly the section on installing the Python modules Ansible needs to do its work. The quick-and-dirty command line below will get it done, though I'd highly suggest setting up a Python venv first (see the sketch after this block).

cd kubespray
pip install -r requirements.txt
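If you take the venv route instead, a minimal sketch (assuming python3 with the venv module is available):

cd kubespray
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt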

Within the cloned repo is the directory layout shown below. I will only show examples for the three files listed, as they make up the bulk of the configuration used in this deployment.

inventory/
   example/
      group_vars/
         k8s_cluster/
            addons.yml
      hosts.yaml
      inventory.ini

Download links for my example files appear at the end of this post.

Required changes

From my example Terraform variables, I have the following information.

Control-plane nodes:

  • cp1 – 192.168.7.71
  • cp2 – 192.168.7.72
  • cp3 – 192.168.7.73

Worker nodes:

  • worker1 – 192.168.7.65
  • worker2 – 192.168.7.66
  • worker3 – 192.168.7.67

My example uses:

  • ansible_user: linux-user – replace linux-user with the user of your deployed VM
  • ansible_ssh_private_key_file: ~/.ssh/id_rsa – replace id_rsa with your private key filename

Here are the files relating to the above configuration. I’ll provide links in the download section.

all:
  hosts:
    worker1:
      ansible_host: 192.168.7.65
      ip: 192.168.7.65
      access_ip: 192.168.7.65
      ansible_user: linux-user
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
    worker2:
      ansible_host: 192.168.7.66
      ip: 192.168.7.66
      access_ip: 192.168.7.66
      ansible_user: linux-user
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
    worker3:
      ansible_host: 192.168.7.67
      ip: 192.168.7.67
      access_ip: 192.168.7.67
      ansible_user: linux-user
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
    cp1:
      ansible_host: 192.168.7.71
      ip: 192.168.7.71
      access_ip: 192.168.7.71
      ansible_user: linux-user
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
    cp2:
      ansible_host: 192.168.7.72
      ip: 192.168.7.72
      access_ip: 192.168.7.72
      ansible_user: linux-user
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
    cp3:
      ansible_host: 192.168.7.73
      ip: 192.168.7.73
      access_ip: 192.168.7.73
      ansible_user: linux-user
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
  children:
    kube_control_plane:
      hosts:
        cp1:
        cp2:
        cp3:
    kube_node:
      hosts:
        worker1:
        worker2:
        worker3:
    etcd:
      hosts:
        cp1:
        cp2:
        cp3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
And here is the matching inventory.ini:

# ## Configure the 'ip' variable to bind Kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not an etcd member does not need to set the value or can set the empty string value.
[all]
# worker1 ansible_host=95.54.0.12  # ip=10.3.0.1 etcd_member_name=etcd1
# worker2 ansible_host=95.54.0.13  # ip=10.3.0.2 etcd_member_name=etcd2
# worker3 ansible_host=95.54.0.14  # ip=10.3.0.3 etcd_member_name=etcd3
# cp1 ansible_host=95.54.0.16  # ip=10.3.0.5 etcd_member_name=etcd5
# cp2 ansible_host=95.54.0.17  # ip=10.3.0.6 etcd_member_name=etcd6
# cp3 ansible_host=95.54.0.18  # ip=10.3.0.7 etcd_member_name=etcd7

# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube_control_plane]
cp1
cp2
cp3

[kube_control_plane:vars]
ansible_user=linux-user
ansible_ssh_private_key_file=~/.ssh/id_rsa

[etcd]
cp1
cp2
cp3

[etcd:vars]
ansible_user=linux-user
ansible_ssh_private_key_file=~/.ssh/id_rsa

[kube_node]
worker1
worker2
worker3

[kube_node:vars]
ansible_user=linux-user
ansible_ssh_private_key_file=~/.ssh/id_rsa

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr
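
Before moving on, it is worth confirming that Ansible can actually reach every node with the inventory and key you just configured. A quick smoke test from the kubespray directory:

ansible -i inventory/example/hosts.yaml all -m ping

Every host should answer with "pong"; fix any unreachable hosts before running the playbooks.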

Deployment Configuration

Lastly, there is the actual deployment configuration.

I will include my example configuration below, but I would like you to go through this file and enable the options yourself, as every deployment is different. I have also provided a download link toward the end of this post.

You enable add-ons via inventory/example/group_vars/k8s_cluster/addons.yml.

These are the three add-ons I have enabled:

  • metrics-server
    • metrics_server_enabled: true
      
  • ingress-nginx
    • # Nginx ingress controller deployment
      ingress_nginx_enabled: true
      # ingress_nginx_host_network: false
      ingress_publish_status_address: ""
      ingress_nginx_nodeselector:
        kubernetes.io/os: "linux"
      ingress_nginx_tolerations:
      #   - key: "node-role.kubernetes.io/master"
      #     operator: "Equal"
      #     value: ""
      #     effect: "NoSchedule"
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Equal"
          value: ""
          effect: "NoSchedule"
      ingress_nginx_namespace: "ingress-nginx"
      ingress_nginx_insecure_port: 80
      ingress_nginx_secure_port: 443
      ingress_nginx_configmap:
        map-hash-bucket-size: "128"
        ssl-protocols: "TLSv1.2 TLSv1.3"
      # ingress_nginx_configmap_tcp_services:
      #   9000: "default/example-go:8080"
      ingress_nginx_configmap_udp_services:
        53: "kube-system/coredns:53"
      # ingress_nginx_extra_args:
      #   - --default-ssl-certificate=default/foo-tls
      ingress_nginx_termination_grace_period_seconds: 300
      ingress_nginx_class: nginx
      ingress_nginx_without_class: true
      ingress_nginx_default: true
      
  • cert-manager
    • # Cert manager deployment
      cert_manager_enabled: true
      cert_manager_namespace: "cert-manager"
      # cert_manager_tolerations:
      #   - key: node-role.kubernetes.io/master
      #     effect: NoSchedule
      #   - key: node-role.kubernetes.io/control-plane
      #     effect: NoSchedule
      # cert_manager_affinity:
      #  nodeAffinity:
      #    preferredDuringSchedulingIgnoredDuringExecution:
      #    - weight: 100
      #      preference:
      #        matchExpressions:
      #        - key: node-role.kubernetes.io/control-plane
      #          operator: In
      #          values:
      #          - ""
      # cert_manager_nodeselector:
      #   kubernetes.io/os: "linux"
      
      # cert_manager_trusted_internal_ca: |
      #   -----BEGIN CERTIFICATE-----
      #   [REPLACE with your CA certificate]
      #   -----END CERTIFICATE-----
      # cert_manager_leader_election_namespace: kube-system
      
      cert_manager_dns_policy: "ClusterFirst"
      cert_manager_dns_config:
        nameservers:
          - "1.0.0.1"
          - "1.1.1.1"
      
      # cert_manager_controller_extra_args:
      #   - "--dns01-recursive-nameservers-only=true"
      #   - "--dns01-recursive-nameservers=1.1.1.1:53,8.8.8.8:53"
      cert_manager_controller_extra_args:
        - --default-issuer-name=letsencrypt-prod
        - --default-issuer-kind=ClusterIssuer
        - --default-issuer-group=cert-manager.io
      
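One caveat on the cert-manager flags above: --default-issuer-name only points cert-manager at a ClusterIssuer named letsencrypt-prod; it does not create one. Here is a minimal sketch of such an issuer to apply once the cluster is up (the email address is a placeholder, and the http01 solver assumes the ingress-nginx add-on above):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com   # placeholder – use your own address
    privateKeySecretRef:
      # secret that will hold the ACME account key
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx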

Deploy

Well, it’s time to see if this will all work. I wish you the best of luck!

First, let’s run the reset playbook. Against freshly deployed nodes this doubles as a smoke test of your base configuration (housed in hosts.yaml and inventory.ini), and it clears out any remnants of a previous Kubernetes install:

ansible-playbook -i inventory/example/hosts.yaml --become --become-user=root reset.yml

The playbook pauses for a confirmation prompt before it resets anything (type yes to proceed). The run itself takes a little while; within my lab environment, it was less than 15 minutes.

If you have any issues or failures, you will need to address them before you continue.

If all is clean, then it is time to take it to the next level by actually deploying via:

ansible-playbook -i inventory/example/hosts.yaml --become --become-user=root cluster.yml

This takes a little longer to run; it was under 25 minutes in my lab. As before, look for issues or failures and address them as needed.
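
If you want a record of the run to dig through later, plain shell redirection does the job:

ansible-playbook -i inventory/example/hosts.yaml --become --become-user=root cluster.yml 2>&1 | tee cluster-deploy.log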

Now you have a Kubernetes cluster deployed and ready to access. You need to grab the kubeconfig that gives you admin access:

ssh -i ~/.ssh/id_rsa linux-user@cp1
sudo cat /etc/kubernetes/admin.conf

You must edit or create your ~/.kube/config file and paste the above information.
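
As an alternative to copy/paste, you can pull the file down in one step. A minimal sketch, assuming linux-user has passwordless sudo on cp1:

mkdir -p ~/.kube
ssh -i ~/.ssh/id_rsa linux-user@cp1 'sudo cat /etc/kubernetes/admin.conf' > ~/.kube/config
chmod 600 ~/.kube/config

If the server: line in the resulting file points at 127.0.0.1, change it to one of your control-plane IPs (e.g. https://192.168.7.71:6443) so kubectl can reach the API server from your workstation.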

Moment of truth:

kubectl get nodes

If successful, you should receive output that looks similar to the following:

NAME       STATUS   ROLES           AGE   VERSION
cp1        Ready    control-plane   8d    v1.27.4
cp2        Ready    control-plane   8d    v1.27.4
cp3        Ready    control-plane   8d    v1.27.4
worker1    Ready    <none>          8d    v1.27.4
worker2    Ready    <none>          8d    v1.27.4
worker3    Ready    <none>          8d    v1.27.4

You should now have a working Kubernetes cluster to start your journey.
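
As a quick check that the three add-ons made it in, look for their pods (namespaces per the configuration above) and exercise metrics-server:

kubectl get pods -n ingress-nginx
kubectl get pods -n cert-manager
kubectl top nodes

If metrics-server is healthy, kubectl top nodes returns CPU and memory usage for each node instead of an error.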

Download Links

Individual files

Each of these links will download one of the example files shown above; place them at the matching paths under your kubespray/inventory/example directory.

Summary

We have come to the end, and I hope you have been successful.