We are now at the final part of the blog series on vSphere with Kubernetes on VCF 4.0.1 Consolidated Architecture. In this article, we take a look at the TKG CLI for deployment, operations and lifecycle management of TKG compute clusters.
If you missed the previous parts, you can find them here:
Getting the TKG CLI
To get the TKG CLI, visit https://www.vmware.com/go/get-tkg. We need to download and install the TKG CLI on a management host. This is the same host where we previously installed the kubectl and kubectl-vsphere tools. CLI binaries are available for Linux, Mac OS, and Windows; choose the one matching your OS. At the time of writing, the latest available TKG CLI version is 1.1.0. Installing the TKG CLI is straightforward; please follow the instructions in the official documentation at:
Connecting TKG CLI with the Supervisor Cluster
Authenticate to the Supervisor cluster using ‘kubectl vsphere’ utility and set the context to the Supervisor cluster.
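As a quick sketch, the login and context switch look like this (the Supervisor Cluster endpoint 10.10.10.10 and the user name are placeholders for this lab; substitute your own values):

```shell
# Authenticate to the Supervisor Cluster control plane endpoint.
kubectl vsphere login --server=10.10.10.10 \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify

# Point kubectl at the Supervisor Cluster context (named after the endpoint).
kubectl config use-context 10.10.10.10
```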
Add the Supervisor Cluster as a Management Cluster to the TKG instance using ‘tkg add management-cluster‘ command.
If other management clusters were added previously, they all show up here. Management clusters can be:
- Other Supervisor Clusters
- TKG Standalone Management Clusters (on vSphere 6.7U3 only)
We then switch the context to the management cluster that we want to manage, using the ‘tkg set management-cluster‘ command.
This is how we switch the context to our Supervisor Cluster.
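Put together, the registration and context switch can be sketched as follows (the management cluster name ‘supervisor-10.10.10.10‘ is a placeholder; use whatever name ‘tkg get management-cluster‘ reports in your environment):

```shell
# Add the Supervisor Cluster (the current kubectl context) to the TKG instance.
tkg add management-cluster

# List all management clusters known to the TKG CLI.
tkg get management-cluster

# Switch the TKG CLI context to our Supervisor Cluster.
tkg set management-cluster supervisor-10.10.10.10
```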
We will now use the ‘kubectl’ utility to interact with the cluster.
Generating a TKG Cluster Config
We can use the TKG CLI to create YAML config files for TKG clusters, without actually creating the clusters, using the ‘tkg config cluster‘ command. From the saved YAML file, we can use the ‘tkg create cluster‘ command with the --manifest option to deploy a TKG cluster. Alternatively, we can use the ‘kubectl apply‘ command as we did in the previous post (but with an additional step).
To generate a config or to create a TKG cluster, we have to set a few environment variables for storage classes, virtual machine classes, Pod/Service CIDRs and the service domain.
Running the ‘tkg config cluster‘ command works the same way as specifying the --dry-run option with the ‘tkg create cluster‘ command.
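As a sketch, assuming a storage policy named ‘vsan-default-storage-policy‘ and the ‘best-effort-small‘ VM class (both placeholders for this lab; use the names exposed in your Supervisor Namespace), the variables and the config generation look like this:

```shell
# Storage classes for the node VMs (as exposed in the Supervisor Namespace).
export CONTROL_PLANE_STORAGE_CLASS=vsan-default-storage-policy
export WORKER_STORAGE_CLASS=vsan-default-storage-policy
export DEFAULT_STORAGE_CLASS=vsan-default-storage-policy
export STORAGE_CLASSES=vsan-default-storage-policy

# Virtual machine classes for control plane and worker nodes.
export CONTROL_PLANE_VM_CLASS=best-effort-small
export WORKER_VM_CLASS=best-effort-small

# Pod / Service CIDRs and the service domain for the new cluster.
export CLUSTER_CIDR=100.96.0.0/11
export SERVICE_CIDR=100.64.0.0/13
export SERVICE_DOMAIN=cluster.local

# Generate the cluster manifest without creating the cluster.
tkg config cluster tkg-c02-apps --plan=dev > tkg-c02-apps.yaml
```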
For more information, please visit the official documentation at https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.1/vmware-tanzu-kubernetes-grid-11/GUID-tanzu-k8s-clusters-config.html
Creating a TKG Compute Cluster
We create a TKG cluster using the ‘tkg create cluster‘ command. The command below creates a TKG cluster named ‘tkg-c02-apps’ with 3 control plane nodes and 3 worker nodes in the Supervisor Namespace ‘vxplanet-apps’.
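A sketch of the command, with flag names as documented for TKG CLI 1.1 (the cluster and namespace names match this lab):

```shell
# Create a 3 control plane / 3 worker TKG cluster in the
# 'vxplanet-apps' Supervisor Namespace, using the 'dev' plan.
tkg create cluster tkg-c02-apps \
  --plan=dev \
  --namespace=vxplanet-apps \
  --controlplane-machine-count=3 \
  --worker-machine-count=3
```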
The following NSX-T objects will also be created:
- A logical segment for the node VMs
- A dedicated Tier 1 Gateway connected upstream to the Tier 0 Gateway
- A ‘Small’ load balancer instantiated on this Tier 1 Gateway
- An L4 virtual server on this load balancer to handle Kubernetes API requests to the TKG cluster
- SNAT rules for the node VMs for outbound access
We can also export the cluster details to a file.
Connecting to the TKG Compute Cluster
To obtain the kubeconfig of the deployed TKG cluster ‘tkg-c02-apps’, run the ‘tkg get credentials‘ command.
Let’s also connect to the cluster ‘tkg-c01-apps’ that we created in the previous post using the declarative API.
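Both steps can be sketched as follows. The context name ‘tkg-c02-apps-admin@tkg-c02-apps‘ follows the pattern the TKG CLI typically generates; if yours differs, check ‘kubectl config get-contexts‘:

```shell
# Merge the kubeconfig of 'tkg-c02-apps' into the default kubeconfig.
tkg get credentials tkg-c02-apps

# Alternatively, write the credentials to a standalone kubeconfig file.
tkg get credentials tkg-c01-apps --export-file ./tkg-c01-apps.kubeconfig

# Switch to the new cluster's context and verify the nodes are up.
kubectl config use-context tkg-c02-apps-admin@tkg-c02-apps
kubectl get nodes
```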
Scaling out a TKG Cluster
We can scale out a TKG cluster using the ‘tkg scale cluster‘ command with the --worker-machine-count option. In the current version of vSphere with Kubernetes, we can only scale out the nodes; scale-in is not supported.
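For example, scaling the lab cluster's worker pool out to 5 nodes might look like this (the --namespace flag is assumed to target the Supervisor Namespace, as with cluster creation):

```shell
# Scale the worker pool of 'tkg-c02-apps' out to 5 nodes.
tkg scale cluster tkg-c02-apps \
  --namespace=vxplanet-apps \
  --worker-machine-count=5
```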
Upgrading a TKG Cluster
Both the Supervisor Cluster and TKG clusters use a rolling update model. We can update the Supervisor Cluster and the TKG clusters selectively; however, there is a dependency between them: we must update a Supervisor Cluster before updating the TKG clusters managed by that Supervisor Cluster. The Supervisor Cluster is upgraded via a Namespaces update after a successful vCenter Server upgrade. A Supervisor Cluster upgrade also upgrades the TKG service and its components (CNI, CSI and CPI), which are leveraged by the TKG clusters. In vSphere with Kubernetes, upstream Kubernetes distributions for TKG clusters are distributed via Content Libraries.
Check whether new TKG versions are available. At the time of writing, the most recent version is v1.17.7.
We use the ‘tkg upgrade cluster‘ command with the target upstream Kubernetes version to perform the upgrade.
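A hedged sketch of the two steps. Note that the exact version string passed to --kubernetes-version must match an image that is actually synced into the subscribed Content Library, and the --namespace flag is assumed as with the earlier commands:

```shell
# List the Kubernetes distributions synced from the Content Library.
kubectl get virtualmachineimages

# Upgrade the cluster to the target upstream version.
tkg upgrade cluster tkg-c02-apps \
  --namespace=vxplanet-apps \
  --kubernetes-version=v1.17.7
```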
The process takes a while to complete; once done, we should see the cluster on the new Kubernetes version, v1.17.7.
An easy option to try out vSphere with Kubernetes is to enroll into the VMware Hands-on-lab ‘HOL-2113-01-SDC’ which gives us a live nested environment to play with.
Now it’s time to conclude the blog series. This was a pretty long 4 part series and I hope this was informative.
Thanks for reading.
Continue reading? Here are the other parts of this series: