Terraform
Here is where I share my Terraform HCL for deploying the machines that will become my Kubernetes cluster.
This is a follow-up post to phase 1, where I showed how I built my VMware VM template for deployment within my vSphere cluster.
Requirements
A few things to have set up or documented before you start:
- You need to have a working VM or VM Template ready for deployment
- See my phase 1 post for how I did this
- You need to know the name of your network(s)
- My deployment has a production/public network and an NFS storage network; the HCL below covers only a single-network deployment, but stubs for a secondary network are left commented out
- You need the `datastore-id` from your vSphere infrastructure (could use a `data.*` call, but call me lazy; a sketch of such a lookup follows this list)
  - If you have multiple storage destinations, then consider spreading your infrastructure across them
  - My examples utilize three different destinations to spread out my nodes
  - If you only have a singular `datastore-id`, then update the correct element in the `variables.tf` file for all systems
- I use environment variables for my vSphere credentials and endpoint (a matching provider block sketch also follows this list)
  - `VSPHERE_USER="user"` – this user needs permissions for what you will be doing, and you will need to make sure such access is granted
  - `VSPHERE_PASSWORD="me-and-my-password"` – self-explanatory
  - `VSPHERE_SERVER="hostname-not-url"` – again, self-explanatory
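If you would rather not copy `datastore-id` values out of vSphere by hand, here is a sketch of the `data.*` lookup mentioned above. It is not part of the downloadable files, and the datastore name is a placeholder you would swap for your own:

```hcl
# Sketch only - not part of the downloadable HCL.
# Looks up a datastore by name so its ID doesn't have to be hard-coded.
data "vsphere_datastore" "nvme01" {
  name          = "datastore1" # placeholder - use your datastore's display name
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
```

You could then reference `data.vsphere_datastore.nvme01.id` anywhere a `datastore-12346`-style ID appears below.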
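For completeness, a provider block that leans on those environment variables can be as small as the following sketch; the hashicorp/vsphere provider falls back to `VSPHERE_USER`, `VSPHERE_PASSWORD`, and `VSPHERE_SERVER` when its `user`, `password`, and `vsphere_server` arguments are omitted. The provider file in the download may look different.

```hcl
terraform {
  required_providers {
    vsphere = {
      source = "hashicorp/vsphere"
    }
  }
}

# Credentials and endpoint come from the environment:
#   VSPHERE_USER, VSPHERE_PASSWORD, VSPHERE_SERVER
provider "vsphere" {
  # Uncomment if your vCenter uses a self-signed certificate:
  # allow_unverified_ssl = true
}
```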
I used Ubuntu 22.04 LTS as my base and built out my VM Template using a minimal install, and my scripts are based on the `apt` system toolset. If you use a different distribution, you must update the scripts accordingly.
Required changes
In the `variables.tf` and `data.tf` files, you must update values to match your configuration.
File: variables.tf
- external or public network
  - example is using `192.168.4.0/22` with the nodes using the higher `/24`
  - IPv6 is listed using a non-routable unique local address network (`fc00::/7`; see RFC 4193)
    - to disable, comment out or remove the keys beginning with `ipv6_` in the HCL files
- secondary or NFS network (commented out in HCL)
  - example is using `192.168.8.0/22` with nodes in the lower `/24`
  - IPv6 is listed using a non-routable unique local address network (`fc00::/7`; see RFC 4193)
    - to disable, comment out or remove the keys beginning with `ipv6_` in the HCL files
- `vm_username` needs to be updated to the username you chose when you created your VM/VM Template (see phase 1 for hints)
File: data.tf
- `data "vsphere_datacenter" "datacenter"`
  - update `name` for your Datacenter object in vSphere
- `data "vsphere_virtual_machine" "k8s"`
  - update `name` to the name of your VM/VM Template in vSphere
- `data "vsphere_resource_pool" "pool"`
  - update `name` for your Resources object in vSphere
- `data "vsphere_network" "exterior"`
  - update `name` for your network object in vSphere for the exterior/primary network
- `data "vsphere_network" "interior"`
  - update `name` for your network object in vSphere for the interior/NFS network (commented out in the HCL files)
- `data "vsphere_folder" "k8s"`
  - update `path` for your vSphere folder
Here are the files; download links are provided in the next section.
File: data.tf

```hcl
# This file needs editing before it will pass any checks and plans

# The name of your datacenter
# Change 'Default' to the correct name
data "vsphere_datacenter" "datacenter" {
  name = "Default"
}

# Find the template/vm for cloning
# change 'ubuntu2204-template-20230801' to the correct name
data "vsphere_virtual_machine" "k8s" {
  name          = "ubuntu2204-template-20230801"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# The name of your resource pool
# change 'Default' to the correct name
data "vsphere_resource_pool" "pool" {
  name          = "Default/Resources"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# VLANing baby - you may have just one
# change 'vSwitch70' to the correct name
data "vsphere_network" "exterior" {
  name          = "vSwitch70"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# NFS network
# change 'vSwitch91' to the correct name
# data "vsphere_network" "interior" {
#   name          = "vSwitch91"
#   datacenter_id = data.vsphere_datacenter.datacenter.id
# }

# Your folder within vSphere
# And finally, change 'Kubernetes' to your VM and Template folder destination
data "vsphere_folder" "k8s" {
  path = "Kubernetes"
}
```
File: variables.tf

```hcl
# hosting .. hosts

# Your VM username for login over ssh
variable "vm_username" {
  type    = string
  default = "linux-user"
}

#####
# Basically:
# hostname
# domain name
# ipv4 primary IP
# ipv4 primary netmask
# ipv4 primary gateway
# ipv6 primary IP
# ipv6 primary netmask
# ipv6 primary gateway
# I don't remember :(
# the datastore-xxxxx ID (manual placement by me)
# ipv4 secondary IP (I use a secondary network for NFS on another VLAN)
# ipv4 secondary netmask
# ipv6 secondary IP
# ipv6 secondary netmask
#####

###
# The worker's block
###
variable "k8s_instance_workers" {
  type = list(tuple([string, string, string, number, string, string, number, string, number, string, string, number, string, number]))
  default = [
    ["worker1", "example.com", "192.168.7.65", 22, "192.168.7.254", "fc00:beef:c0ff:70::65", 64, "fc00:beef:c0ff:70::ffff", 0, "datastore-12346", "192.168.8.65", 22, "fc00:beef:c0ff:91::65", 64],
    ["worker2", "example.com", "192.168.7.66", 22, "192.168.7.254", "fc00:beef:c0ff:70::66", 64, "fc00:beef:c0ff:70::ffff", 0, "datastore-12347", "192.168.8.66", 22, "fc00:beef:c0ff:91::66", 64],
    ["worker3", "example.com", "192.168.7.67", 22, "192.168.7.254", "fc00:beef:c0ff:70::67", 64, "fc00:beef:c0ff:70::ffff", 0, "datastore-12346", "192.168.8.67", 22, "fc00:beef:c0ff:91::67", 64],
  ]
}

###
# the etcd/control-plane block
# or not - up to you
###
variable "k8s_instance_etcd" {
  type = list(tuple([string, string, string, number, string, string, number, string, number, string, string, number, string, number]))
  default = [
    ["cp1", "example.com", "192.168.7.124", 22, "192.168.7.254", "fc00:beef:c0ff:70::129", 64, "fc00:beef:c0ff:70::ffff", 0, "datastore-40746", "192.168.8.129", 22, "fc00:beef:c0ff:91::129", 64],
    ["cp2", "example.com", "192.168.7.130", 22, "192.168.7.254", "fc00:beef:c0ff:70::130", 64, "fc00:beef:c0ff:70::ffff", 0, "datastore-40747", "192.168.8.130", 22, "fc00:beef:c0ff:91::130", 64],
    ["cp3", "example.com", "192.168.7.131", 22, "192.168.7.254", "fc00:beef:c0ff:70::131", 64, "fc00:beef:c0ff:70::ffff", 0, "datastore-32784", "192.168.8.131", 22, "fc00:beef:c0ff:91::131", 64],
  ]
}
```
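The resource definitions that actually consume these tuples ship in the download and are not reproduced here. As a rough, hypothetical sketch of how the worker list can drive VM clones (the resource name and the CPU/memory sizing below are placeholders, not my actual values):

```hcl
# Illustrative sketch only - the real resource lives in the downloadable HCL.
# Clones one VM per tuple in var.k8s_instance_workers.
resource "vsphere_virtual_machine" "worker" {
  count = length(var.k8s_instance_workers)

  name             = var.k8s_instance_workers[count.index][0]
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = var.k8s_instance_workers[count.index][9]
  folder           = data.vsphere_folder.k8s.path

  num_cpus = 4    # placeholder sizing
  memory   = 8192 # placeholder sizing
  guest_id = data.vsphere_virtual_machine.k8s.guest_id

  network_interface {
    network_id = data.vsphere_network.exterior.id
  }

  disk {
    label = "disk0"
    size  = data.vsphere_virtual_machine.k8s.disks[0].size
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.k8s.id

    customize {
      linux_options {
        host_name = var.k8s_instance_workers[count.index][0]
        domain    = var.k8s_instance_workers[count.index][1]
      }

      network_interface {
        ipv4_address = var.k8s_instance_workers[count.index][2]
        ipv4_netmask = var.k8s_instance_workers[count.index][3]
      }

      ipv4_gateway = var.k8s_instance_workers[count.index][4]
    }
  }
}
```

A parallel block (or a loop over `var.k8s_instance_etcd`) handles the control-plane nodes, and the secondary NFS interface would add a second `network_interface` block plus matching `customize` entries using tuple indices 10 through 13.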
Download Links
Individual files
Each of these links downloads one of the files you need; place them all in the directory you will be running Terraform from.
Tar file
This tar file will extract a folder named `deploy-kubez` containing the HCL files and the provisioning script.
Summary
And onwards to the next phase: using `kubespray` to deploy Kubernetes onto these systems!