Upgrading Kubernetes version 1.18 to 1.21 on a Bright 9.1 cluster

1. Prerequisites
  • This article is written with Bright Cluster Manager 9.1 in mind, where Kubernetes is currently deployed with the default version 1.18.15.
  • The instructions are written with RHEL 8 and Ubuntu 20.04 in mind.
  • These instructions have been executed in production environments a couple of times, and the known caveats are covered by this KB article. We do, however, recommend making a backup of Etcd so that a rollback to the older version remains possible.

    This backup can be made without interrupting the running cluster.

    Please follow the instructions at the following URL to create a snapshot of Etcd (a minimal sketch follows below):

    https://kb.brightcomputing.com/knowledge-base/etcd-backup-and-restore-with-bright-9-0/
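
For convenience, here is a minimal sketch of taking a snapshot with etcdctl. The endpoint, certificate, and key paths below are placeholders (assumptions); the linked article remains the authoritative procedure for Bright clusters.

# Run on a node that runs Etcd (e.g. a head node); adjust the placeholder paths.
ETCDCTL_API=3 etcdctl snapshot save /root/etcd-backup-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/path/to/etcd-ca.pem \
  --cert=/path/to/etcd-client.pem \
  --key=/path/to/etcd-client.key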
2. Upgrade approach
  • Upgrading between these two versions is relatively safe, since no major deprecations or removals have been made in the API groups; see https://kubernetes.io/docs/reference/using-api/deprecation-guide/ for more details. This is also why the approach in the next bullet point is feasible.
  • We upgrade from 1.18.15 directly to 1.21.4, even though https://kubernetes.io/releases/version-skew-policy/ recommends going one minor release at a time. For historical reasons, BCM 9.1 does not provide packages for each intermediate Kubernetes version. In case we add them in the future, we will update this KB article.
  • For the purposes of this KB article we will use the following example deployment of six nodes: both head nodes and four compute nodes make up the Kubernetes cluster.
root@rb-kube91-a:~# module load kubernetes/default/1.18.15 

root@rb-kube91-a:~# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
node001       Ready    master   9m10s   v1.18.15
node002       Ready    worker   9m11s   v1.18.15
node003       Ready    worker   9m11s   v1.18.15
node004       Ready    worker   9m11s   v1.18.15
rb-kube91-a   Ready    master   8m39s   v1.18.15
rb-kube91-b   Ready    master   9m9s    v1.18.15

root@rb-kube91-a:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", ...}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", ...}
3. Prepare a configuration overlay for control-plane

We are upgrading from version 1.18 to 1.21, and as of version 1.20 the Kubernetes API server requires additional parameters. If we upgrade the kube-apiserver without them, it will no longer start because of the missing parameters.

For future use, we will create a configuration overlay without any nodes, categories, or head nodes assigned to it.

[rb-kube91-a->configurationoverlay]% clone kube-default-master kube-default-master-new
[rb-kube91-a->configurationoverlay*[kube-default-master-new*]]% set priority 520
[rb-kube91-a->configurationoverlay*[kube-default-master-new*]]% set allheadnodes no
[rb-kube91-a->configurationoverlay*[kube-default-master-new*]]% clear nodes
[rb-kube91-a->configurationoverlay*[kube-default-master-new*]]% clear categories 
[rb-kube91-a->configurationoverlay*[kube-default-master-new*]]% roles
[rb-kube91-a->configurationoverlay*[kube-default-master-new*]->roles*]% use kubernetes::apiserver
[rb-kube91-a->configurationoverlay*[kube-default-master-new*]->roles*[Kubernetes::ApiServer*]]% append options "--service-account-issuer=https://kubernetes.default.svc.cluster.local"
[rb-kube91-a->configurationoverlay*[kube-default-master-new*]->roles*[Kubernetes::ApiServer*]]% append options "--service-account-signing-key-file=/cm/local/apps/kubernetes/var/etc/sa-default.key"
[rb-kube91-a->configurationoverlay*[kube-default-master-new*]->roles*[Kubernetes::ApiServer*]]% commit
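
At this point the new overlay exists with priority 520 but is not applied to any nodes yet. Optionally, the overlays can be listed to double-check this (after navigating back to the configurationoverlay level; output omitted here):

[rb-kube91-a->configurationoverlay]% list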
4. Prepare software images

We will bump the kubernetes package in each software image that is relevant to the Kubernetes cluster. In this example scenario our four compute nodes are provisioned from /cm/images/default-image. We will use the cm-chroot-sw-img program to replace the kubernetes package.

root@rb-kube91-a:~# cm-chroot-sw-img /cm/images/default-image/  # enters chroot

$ apt install cm-kubernetes- cm-kubernetes121  # for ubuntu

$ yum swap cm-kubernetes cm-kubernetes121  # for RHEL

$ exit
5. Image update one of the workers

We start with a single worker to see if we can update one of the kubelets. This should give us some confidence before upgrading all of the kubelets. We do not start with the control plane (Kubernetes API server, etc.), since additional command-line flags are required as of Kubernetes version 1.20.

In our example we start with node001, which we will first drain. See https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ for more details. Draining is not strictly necessary, but usually recommended.

kubectl cordon node001                      # disables scheduling
kubectl drain node001 --ignore-daemonsets   # optionally drain as well

The drain command will evict all Pods and prevent anything from being scheduled on the node. After the command finishes successfully we will issue an imageupdate on node001 via cmsh.

root@rb-kube91-a:~# cmsh
[rb-kube91-a]% device
[rb-kube91-a->device]% imageupdate -w node001
Tue Aug 16 15:30:31 2022 [notice] rb-kube91-a: Provisioning started: sending rb-kube91-a:/cm/images/default-image to node001:/, mode UPDATE, dry run = no
Tue Aug 16 15:30:57 2022 [notice] rb-kube91-a: Provisioning completed: sent rb-kube91-a:/cm/images/default-image to node001:/, mode UPDATE, dry run = no
imageupdate -w node001 [ COMPLETED ]

We will now restart cmd, kubelet and kube-proxy services on the node.

pdsh -w node001 'systemctl daemon-reload; systemctl restart cmd; systemctl restart kubelet.service; systemctl restart kube-proxy.service'

After a few moments, verify that the kubelet has been updated correctly.

root@rb-kube91-a:~# kubectl get nodes
NAME          STATUS                     ROLES    AGE    VERSION
node001       Ready,SchedulingDisabled   master   149m   v1.21.4
node002       Ready                      worker   149m   v1.18.15
node003       Ready                      worker   149m   v1.18.15
node004       Ready                      worker   149m   v1.18.15
rb-kube91-a   Ready                      master   148m   v1.18.15
rb-kube91-b   Ready                      master   149m   v1.18.15

Now we can re-enable scheduling for the node.

root@rb-kube91-a:~# kubectl uncordon node001
node/node001 uncordoned
6. Image update the rest of the workers

This can be done similarly to Step 5, one by one or in batches. For this KB article we will update the remaining compute nodes node00[2-4] in one go, without draining them first.

  • We issue an imageupdate, but for the whole category in cmsh: device; imageupdate -c default -w
  • We restart the services: pdsh -w node00[2-4] 'systemctl daemon-reload; systemctl restart cmd; systemctl restart kubelet.service; systemctl restart kube-proxy.service'
  • We confirm the version has updated.
root@rb-kube91-a:~# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
node001       Ready    master   3h20m   v1.21.4
node002       Ready    worker   3h20m   v1.21.4
node003       Ready    worker   15m     v1.21.4
node004       Ready    worker   15m     v1.21.4
rb-kube91-a   Ready    master   3h19m   v1.18.15
rb-kube91-b   Ready    master   3h20m   v1.18.15
7. Update one of the control-plane nodes

We will pick node001 and add the node to the new overlay created in step 3. If your cluster does not have control-plane nodes running on compute nodes, see the next section on how to update the Head Nodes, and pick a Head Node that runs as a control-plane.

root@rb-kube91-a:~# cmsh
[rb-kube91-a]% configurationoverlay 
[rb-kube91-a->configurationoverlay]% use kube-default-master-new 
[rb-kube91-a->configurationoverlay[kube-default-master-new]]% append nodes node001
[rb-kube91-a->configurationoverlay*[kube-default-master-new*]]% commit
[rb-kube91-a->configurationoverlay[kube-default-master-new]]% 
Tue Aug 16 16:52:52 2022 [notice] node001: Service kube-apiserver was restarted

We expect the Kube API server to be restarted automatically; however, we also want to restart the scheduler and the controller-manager.

pdsh -w node001 "systemctl restart kube-scheduler; systemctl restart kube-controller-manager"

We can now exercise the API server on the node via curl:

root@rb-kube91-a:~# curl -k https://node001:6443; echo
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}

The authorization error is expected here and not important for now, but we mention it for completeness: one way to make an authenticated request is to use a token (which is embedded in the kubeconfig for the root user by default):

root@rb-kube91-a:~# grep token .kube/config-default 
    token: 'SOME_LONG_STRING'
root@rb-kube91-a:~# export TOKEN=SOME_LONG_STRING
root@rb-kube91-a:~# curl -s https://node001:6443/openapi/v2  --header "Authorization: Bearer $TOKEN" --cacert /cm/local/apps/kubernetes/var/etc/kubeca-default.pem | less
8. Updating Head Nodes

First we need to execute step 4 on the Head Nodes. In case there are two, execute the following on both.

root@rb-kube91-a:~# apt install cm-kubernetes- cm-kubernetes121  # for ubuntu

root@rb-kube91-a:~# yum swap cm-kubernetes cm-kubernetes121  # for RHEL

We can update the kubelet and kube-proxy first as before, or we can do all services at once. Sections 5 and 7 can be referenced for the detailed steps. The imageupdate steps can be omitted, since those are only relevant for Compute Nodes.

We will update the worker services on the active Head Node first, and verify that the version has updated.

root@rb-kube91-a:~# systemctl daemon-reload; systemctl restart kubelet; systemctl restart kube-proxy;
root@rb-kube91-a:~# kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
node001       Ready    master   4h8m   v1.21.4
node002       Ready    worker   4h8m   v1.21.4
node003       Ready    worker   63m    v1.21.4
node004       Ready    worker   63m    v1.21.4
rb-kube91-a   Ready    master   4h7m   v1.21.4
rb-kube91-b   Ready    master   4h8m   v1.18.15

We will now update the Kube API server.

root@rb-kube91-a:~# cmsh
[rb-kube91-a]% configurationoverlay 
[rb-kube91-a->configurationoverlay]% use kube-default-master-new 
[rb-kube91-a->configurationoverlay[kube-default-master-new]]% append nodes master
[rb-kube91-a->configurationoverlay*[kube-default-master-new*]]% commit
Tue Aug 16 17:21:03 2022 [notice] rb-kube91-a: Service kube-apiserver was restarted

And restart the Scheduler and Controller-Manager.

root@rb-kube91-a:~# systemctl restart kube-scheduler; systemctl restart kube-controller-manager;

Finally, we repeat these steps for the secondary Head Node. After that, the cluster should be fully updated.

9. Updating Addons (optional)

Because no API groups (at least not in the GA/stable and beta tracks) have been removed between Kubernetes 1.18 and 1.21, the original addons that shipped with the Kubernetes 1.18.15 installation will continue to work.

If there is no direct need to update addons, such as Calico and the Metrics Server, this section can be skipped.

If we want to update the addons anyway, we need to execute a few steps manually that would have been done automatically had Kubernetes 1.21 been set up from scratch.

A brief overview of the addon updates between Kubernetes 1.18.15 and 1.21.4 (a way to check the currently deployed versions is sketched after this list):

  • CNI is updated from 0.8.2 to 0.9.1.
  • Calico is updated from 3.10.0 to 3.16.4.
  • Helm is updated from 3.3.1-linux to 3.6.3-linux.
  • Kubernetes dashboard from 2.0.4 to 2.3.1.
  • CoreDNS from 1.7.0 to 1.8.4.
  • Flannel from 0.12.0 to 0.14.0.
  • Kube state metrics from 1.9.8 to 2.1.0.
  • Kube metrics server from 0.3.7 to 0.5.0.
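
For reference, the container image of a running addon can be inspected to see which version is currently deployed. A minimal sketch for Calico is shown below; the daemonset name calico-node and the kube-system namespace are assumptions based on the standard Calico manifests and may differ per cluster:

root@rb-kube91-a:~# kubectl -n kube-system get daemonset calico-node \
    -o jsonpath='{.spec.template.spec.containers[*].image}'; echo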

Prepare CNI Networking changes

Updates to CNI result in different interface names being used in certain cases. This requires us to modify the firewall role in cmsh before we perform the update.

In cmsh, check whether a tunl0 interface is defined for Calico, or a cni0 interface for Flannel:

[root@rb-kube91-a ~]# cmsh
[rb-kube91-a]% device use master
[rb-kube91-a->device[rb-kube91-a]]% roles
[rb-kube91-a->device[rb-kube91-a]->roles]% use firewall 
[rb-kube91-a->device[rb-kube91-a]->roles[firewall]]% interfaces
[rb-kube91-a->device[rb-kube91-a]->roles[firewall]->interfaces]% list
Index  Zone   Interface    Broadcast    Options     
------ ------ ------------ ------------ ------------
0      cal    cali+        detect       routeback   
1      cal    tunl0                                 

In the above example Calico networking is configured, and the tunl0 interface is already present. The cm-kubernetes-setup wizard adds it by default since version 9.1-9; older versions of Bright did not, so it might be missing. In that case, we have to add it.

[rb-kube91-a->device[rb-kube91-a]->roles[firewall]->interfaces]% add cal tunl0
[rb-kube91-a->device*[rb-kube91-a*]->roles*[firewall*]->interfaces[1]]% commit
Wed Aug 17 11:37:19 2022 [notice] rb-kube91-a: Service shorewall was restarted

For Flannel, add cni0 to the interfaces instead, with add flan cni0; commit. In either case, shorewall is restarted automatically after issuing the commit.
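
For completeness, a sketch of the equivalent cmsh session for Flannel; the interface index shown in the prompt is only an example and depends on the interfaces already present:

[rb-kube91-a->device[rb-kube91-a]->roles[firewall]->interfaces]% add flan cni0
[rb-kube91-a->device*[rb-kube91-a*]->roles*[firewall*]->interfaces[1]]% commit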

Update the addons

Issuing the following command updates the addons. The output of the command has been omitted to avoid cluttering this KB article, but backups of the original yaml are made in the directory /cm/local/apps/kubernetes/var/; this is also printed as part of the output.

cm-kubernetes-setup -v --update-addons

The update script will have backed up the old configuration inside cmdaemon as well:

[rb-kube91-a]% kubernetes 
[rb-kube91-a->kubernetes[default]]% appgroups 
[rb-kube91-a->kubernetes[default]->appgroups]% list
Name (key)                       Applications                  
-------------------------------- ------------------------------
system                           <12 in submode>               
system-backup-2022-08-17-114537  <12 in submode>               

Update ingress controller

The deployment name was changed upstream:

root@rb-kube91ubuntu2004:~# kubectl get deployment -A
NAMESPACE              NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx          ingress-nginx-controller    1/1     1            1           2m13s
ingress-nginx          nginx-ingress-controller    1/1     1            1           30m

Since we just need the newest, we’ll delete the old one:

kubectl delete deploy -n ingress-nginx nginx-ingress-controller

The same happened to the service name, but in the case of the service, cmdaemon will fail to add the new yaml, because two services attempting to claim the same Ingress port conflict. We need to manually clean up the existing service:

kubectl delete svc -n ingress-nginx ingress-nginx

Now cmdaemon should be able to apply all the updated yaml cleanly.
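
A quick way to verify that the updated resources are in place (the exact resource names depend on the yaml shipped with the addon):

root@rb-kube91-a:~# kubectl get deploy,svc -n ingress-nginx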

10. Finalize the update

Kubernetes should be ready at this point; we can get rid of the old module file and make one final change to the configuration overlays.

root@rb-kube91-a:~# pdsh -A rm -rf /cm/local/modulefiles/kubernetes/default/1.18.15

root@rb-kube91-a:~# cmsh
[rb-kube91-a]% configurationoverlay 
[rb-kube91-a->configurationoverlay]% remove kube-default-master
[rb-kube91-a->configurationoverlay*]% commit
Successfully removed 1 ConfigurationOverlays
Successfully committed 0 ConfigurationOverlays
[rb-kube91-a->configurationoverlay]% set kube-default-master-new priority 510
[rb-kube91-a->configurationoverlay*]% set kube-default-master-new name kube-default-master
[rb-kube91-a->configurationoverlay*]% commit
Successfully committed 1 ConfigurationOverlays
11. Roll back the update

In order to go back to the previous version 1.18, we essentially follow steps 1-10 in reverse, as detailed below.

Downgrade the addons

This is only needed if Step 9 was executed.

[root@rb-kube91-a ~]# cmsh
[rb-kube91-a]% kubernetes 
[rb-kube91-a->kubernetes[default]]% appgroups 
[rb-kube91-a->kubernetes[default]->appgroups]% list
Name (key)                       Applications                  
-------------------------------- ------------------------------
system                           <12 in submode>               
system-backup-2022-08-17-114537  <12 in submode>               
[rb-kube91-a->kubernetes[default]->appgroups]% set system enabled no
[rb-kube91-a->kubernetes*[default*]->appgroups*]% set system-backup-2022-08-17-114537 enabled yes
[rb-kube91-a->kubernetes*[default*]->appgroups*]% commit

This should keep Kubernetes busy for a minute. After it has finished restoring all the resources, manually reverse the steps from Step 9:

root@rb-kube91-a:~# kubectl delete deploy -n ingress-nginx ingress-nginx-controller
deployment.apps "ingress-nginx-controller" deleted
root@rb-kube91-a:~# kubectl delete svc -n ingress-nginx ingress-nginx-controller
service "ingress-nginx-controller" deleted

We do not have to undo our changes to the firewall role; we can keep them.

Downgrading the packages

We need to remove the newly installed cm-kubernetes121 package everywhere and replace it with cm-kubernetes (for version 1.18).

This means that the following commands need to be executed on both Head Nodes and in the relevant software images.

apt install cm-kubernetes121- cm-kubernetes  # for ubuntu

yum swap cm-kubernetes121 cm-kubernetes  # for RHEL

Image update relevant nodes

Next we need to image update the relevant nodes, so that all Kubernetes nodes have the Kubernetes 1.18 binaries again (e.g. imageupdate -c default -w in cmsh, as sketched below).
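
A minimal sketch, assuming the compute nodes are in the default category as in our example:

root@rb-kube91-a:~# cmsh
[rb-kube91-a]% device
[rb-kube91-a->device]% imageupdate -c default -w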

Restore the configuration overlay

The rollback differs depending on whether Step 10 was executed (in which case the original kube-default-master overlay was removed and kube-default-master-new was renamed to kube-default-master). If kube-default-master-new still exists, we can remove and commit it; the lower-priority original kube-default-master overlay will then take over the configuration.

[root@rb-kube91-a ~]# cmsh
[rb-kube91-a]% configurationoverlay
[rb-kube91-a->configurationoverlay]% remove kube-default-master-new
[rb-kube91-a->configurationoverlay*]% commit

In the second case, where kube-default-master was replaced in Step 10, we have to remove the extra parameters from the API server role as follows.

[root@rb-kube91-a ~]# cmsh
[rb-kube91-a]% configurationoverlay 
[rb-kube91-a->configurationoverlay]% use kube-default-master 
[rb-kube91-a->configurationoverlay[kube-default-master]]% roles
[rb-kube91-a->configurationoverlay[kube-default-master]->roles]% use kubernetes::apiserver
[rb-kube91-a->configurationoverlay[kube-default-master]->roles[Kubernetes::ApiServer]]% removefrom options "--service-account-issuer=https://kubernetes.default.svc.cluster.local"
[rb-kube91-a->configurationoverlay*[kube-default-master*]->roles*[Kubernetes::ApiServer*]]% removefrom options "--service-account-signing-key-file=/cm/local/apps/kubernetes/var/etc/sa-default.key"
[rb-kube91-a->configurationoverlay*[kube-default-master*]->roles*[Kubernetes::ApiServer*]]% commit

In both cases the Kube API servers may be restarted and can produce errors until we complete the next step.

Restart services

On all the nodes relevant to the Kubernetes cluster, we need to execute the following reload and restarts; in our example setup, as shown below. Please note that this includes a restart of the Bright Cluster Manager daemon (cmd).

root@rb-kube91-a:~# pdsh -w rb-kube91-a,rb-kube91-b,node00[1-4] "systemctl daemon-reload; systemctl restart cmd; systemctl restart '*kube*.service'"

We can clean up the module file for version 1.21 to prevent it from showing up in tab-completion.

[root@rb-kube91-a ~]# pdsh -A rm -rf /cm/local/modulefiles/kubernetes/default/1.21.4

All versions should be back at 1.18.15.

root@rb-kube91-a:~# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
node001       Ready    master   23h   v1.18.15
node002       Ready    worker   23h   v1.18.15
node003       Ready    worker   20h   v1.18.15
node004       Ready    worker   20h   v1.18.15
rb-kube91-a   Ready    master   23h   v1.18.15
rb-kube91-b   Ready    master   23h   v1.18.15

Hopefully the resources inside Kubernetes are also running in good health and without issues; a quick check is shown below.
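
A quick sanity check is to list all Pods and confirm they are in a Running or Completed state:

root@rb-kube91-a:~# kubectl get pods --all-namespaces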

It is very unlikely with this downgrade from 1.21 back to 1.18, but should something end up in an invalid, unrecoverable state, we can restore the Etcd database at this point from the snapshot created in Step 1. The instructions for this are explained in the same KB article referenced in Step 1.

Updated on November 17, 2022
