1. Prerequisites
- This KB article is written with BCM 10.25.06 in mind, but applies to all versions of BCM 10 (and BCM 11).
1.1. Downloading the helper script
This download is not necessary if the BCM version is >= 10.25.06 (for BCM 10) or >= 11.25.05 (for BCM 11).
# BCM 10+
wget -O /cm/local/apps/cmd/scripts/cm-kubeadm-manage https://support2.brightcomputing.com/etcd/cm-kubeadm-manage
chmod +x /cm/local/apps/cmd/scripts/cm-kubeadm-manage
1.2. Example usage
We assume the following alias is in place in order to keep the KB article output more readable:
alias cm-kubeadm-manage='/cm/local/apps/cmd/scripts/cm-kubeadm-manage'
These are the options for the cm-kubeadm-manage tool:
# cm-kubeadm-manage --help
usage: cm-kubeadm-manage [-h] --kube-cluster KUBE_CLUSTER
                         {status,update_configmap,update,update_apiserver,update_controller_manager,update_scheduler,update_apiserver_cert,update_certs,check_certs}
                         ...

Manage kubeadm cluster operations

positional arguments:
  {status,update_configmap,update,update_apiserver,update_controller_manager,update_scheduler,update_apiserver_cert,update_certs,check_certs}
                        Action to perform
    status              Show kube control-plane status
    update_configmap    Update the kubeadm-config configmap
    update              Update the kube control-plane (configmap, certs, apiserver, controller-manager, scheduler)
    update_apiserver    Update the kube-apiserver manifest + restart the pod using crictl
    update_controller_manager
                        Update the kube-controller-manager manifest
    update_scheduler    Update the kube-scheduler manifest
    update_apiserver_cert
                        Update the kube-apiserver certificate
    update_certs        Renew Kubernetes certificates
    check_certs         Check Kubernetes certificate expiration

options:
  -h, --help            show this help message and exit
  --kube-cluster KUBE_CLUSTER
                        Kubernetes cluster name (required)
- The --kube-cluster parameter is mandatory, and some actions (such as update_certs or update_apiserver) require an additional parameter, namely the node name.
- In this KB article we will assume the kube cluster we are dealing with has the label/name default. The kube clusters managed by BCM can be listed with cmsh as follows: cmsh -c 'kubernetes list' (see also the example below).
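For example (a minimal sketch; the status action is taken from the help output above, and we assume here that it needs no arguments beyond the mandatory --kube-cluster):

root@headnode# cmsh -c 'kubernetes list'
root@headnode# cm-kubeadm-manage --kube-cluster=default status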
2. Update the configmap on control-plane nodes and Head Nodes
Please always start by invoking the update_configmap action once. This is needed for two reasons:
- We want to ensure that the BCM configuration is synchronized to the kubeadm configmap.
- We want to ensure that the kubeadm configmap is synchronized to the /root/.kube/kubeadm-init-<cluster>.yaml file on the control-plane nodes.

Please note that no services are affected or restarted; only the configuration used by kubeadm is updated in the right places. Other tasks, such as updating manifests and certificates, rely on this configuration being up-to-date.
# cm-kubeadm-manage --kube-cluster=default update_configmap
...
2025-05-13 09:32:49,407 - cm-kubeadm-manage - INFO - Updated Etcd endpoints in kubeadm config
...
2025-05-13 09:32:49,409 - cm-kubeadm-manage - INFO - These are common options across all master nodes:
2025-05-13 09:32:49,409 - cm-kubeadm-manage - INFO - {
    "apiServer": {
        "default-watch-cache-size": "2000",
        "delete-collection-workers": "10",
        "event-ttl": "30m",
        "max-mutating-requests-inflight": "1600",
        "max-requests-inflight": "3200"
    },
    "controllerManager": {},
    "scheduler": {}
}
2025-05-13 09:32:49,410 - cm-kubeadm-manage - INFO - These are node specific options:
2025-05-13 09:32:49,410 - cm-kubeadm-manage - INFO - {
    "ci-tmp-100-u2204-field-perch-130706": {
        "apiServer": {}
    },
    "node001": {
        "apiServer": {}
    }
}
2025-05-13 09:32:49,419 - cm-kubeadm-manage - DEBUG - Executing: kubectl --kubeconfig=/root/.kube/config-default apply -f /tmp/tmpd0lu58xw.yaml
2025-05-13 09:32:50,143 - cm-kubeadm-manage - INFO - Successfully updated kubeadm config
2025-05-13 09:32:50,203 - cm-kubeadm-manage - INFO - Writing kubeadm-init file to ci-tmp-100-u2204-field-perch-130706
/cm/local/apps/python3/lib/python3.9/site-packages/paramiko/client.py:889: UserWarning: Unknown ssh-ed25519 host key for ci-tmp-100-u2204-field-perch-130706: b'9f57a95274cfe92101fff4c29ede167a'
  warnings.warn(
2025-05-13 09:32:50,794 - cm-kubeadm-manage - INFO - Successfully wrote /root/.kube/kubeadm-init-default.yaml on ci-tmp-100-u2204-field-perch-130706
2025-05-13 09:32:50,812 - cm-kubeadm-manage - INFO - Writing kubeadm-init file to node001
/cm/local/apps/python3/lib/python3.9/site-packages/paramiko/client.py:889: UserWarning: Unknown ssh-ed25519 host key for node001: b'c6659c4299b88cf0f60a2ea22b507f2c'
  warnings.warn(
2025-05-13 09:32:51,223 - cm-kubeadm-manage - INFO - Successfully wrote /root/.kube/kubeadm-init-default.yaml on node001
3. Task: Add a hostname or IP to the Kubernetes API server public-facing certificate
Various applications that talk to the Kubernetes API server through an IP address or hostname that is not recognized by the API server will run into problems, as that IP or hostname may not be present in the Subject Alternative Name (SAN) part of the certificate.
Let’s say we are using a kubeconfig file that refers to https://my-cluster.tld:10443 and we run into an issue. The relevant part of the error message is the last line.
user@my-laptop$ kubectl get nodes
E0513 15:04:21.538773 444333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://my-cluster.tld:10443/api?timeout=32s\": tls: failed to verify certificate: x509: certificate is valid for active, ci-tmp-100-u2204-field-perch-130706, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, localhost, master, not my-cluster.tld"
E0513 15:04:21.546104 444333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://my-cluster.tld:10443/api?timeout=32s\": tls: failed to verify certificate: x509: certificate is valid for active, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, localhost, master, node001, not my-cluster.tld"
E0513 15:04:21.556455 444333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://my-cluster.tld:10443/api?timeout=32s\": tls: failed to verify certificate: x509: certificate is valid for active, ci-tmp-100-u2204-field-perch-130706, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, localhost, master, not my-cluster.tld"
E0513 15:04:21.565265 444333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://my-cluster.tld:10443/api?timeout=32s\": tls: failed to verify certificate: x509: certificate is valid for active, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, localhost, master, node001, not my-cluster.tld"
E0513 15:04:21.575378 444333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://my-cluster.tld:10443/api?timeout=32s\": tls: failed to verify certificate: x509: certificate is valid for active, ci-tmp-100-u2204-field-perch-130706, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, localhost, master, not my-cluster.tld"
Unable to connect to the server: tls: failed to verify certificate: x509: certificate is valid for active, ci-tmp-100-u2204-field-perch-130706, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, localhost, master, not my-cluster.tld
The correct way to fix this is as follows. We first edit the configmap using:
root@headnode# kubectl edit configmap -n kube-system kubeadm-config
Next, in the editor that is presented, we add the additional hostname and/or IP. In our case we added the line "- my-cluster.tld" to the certSANs list, as shown below.
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - active
      - master
      - localhost
      - 10.141.255.254
      - 10.141.0.1
      - my-cluster.tld
      extraArgs:
      - name: default-watch-cache-size
        value: '2000'
      - name: delete-collection-workers
        value: '10'
      - name: event-ttl
        value: 30m
      - name: max-mutating-requests-inflight
        value: '1600'
      - name: max-requests-inflight
        value: '3200'
We repeat section 2 to propagate this configmap to the control-plane nodes:
root@headnode# cm-kubeadm-manage --kube-cluster=default update_configmap
Next we will update the certs one control-plane node at a time. We will start with node001 in our example.
# kubectl get nodes
NAME                                  STATUS   ROLES                         AGE     VERSION
ci-tmp-100-u2204-field-perch-130706   Ready    control-plane,master          7h39m   v1.31.8
node001                               Ready    control-plane,master,worker   7h37m   v1.31.8
node002                               Ready    worker                        7h37m   v1.31.8
node003                               Ready    worker                        7h37m   v1.31.8
node004                               Ready    worker                        7h38m   v1.31.8
node005                               Ready    worker                        7h38m   v1.31.8
node006                               Ready    worker                        7h38m   v1.31.8
Next:
# cm-kubeadm-manage --kube-cluster=default update_certs node001
...
2025-05-13 15:15:55,734 - cm-kubeadm-manage - INFO - Certificate expiration status:
2025-05-13 15:15:55,735 - cm-kubeadm-manage - INFO - CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 May 13, 2026 05:36 UTC   364d            ca                      no
apiserver                  May 13, 2026 05:36 UTC   364d            ca                      no
apiserver-kubelet-client   May 13, 2026 05:36 UTC   364d            ca                      no
controller-manager.conf    May 13, 2026 05:36 UTC   364d            ca                      no
front-proxy-client         May 13, 2026 05:36 UTC   364d            front-proxy-ca          no
scheduler.conf             May 13, 2026 05:36 UTC   364d            ca                      no
!MISSING! super-admin.conf

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      May 11, 2035 05:34 UTC   9y              no
front-proxy-ca          May 11, 2035 05:34 UTC   9y              no
...
certificate for serving the Kubernetes API renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
MISSING! certificate embedded in the kubeconfig file for the super-admin

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
...
renamed '/etc/kubernetes/pki/default/apiserver.crt' -> '/etc/kubernetes/pki/default/apiserver.crt.backup'
renamed '/etc/kubernetes/pki/default/apiserver.key' -> '/etc/kubernetes/pki/default/apiserver.key.backup'
2025-05-13 15:15:57,444 - cm-kubeadm-manage - DEBUG - Executing: kubeadm init phase certs apiserver --config /root/.kube/kubeadm-init-default.yaml --v=5
2025-05-13 15:15:57,827 - cm-kubeadm-manage - INFO - Successfully renewed certificates (please read the kubeadm instructions):
2025-05-13 15:15:57,828 - cm-kubeadm-manage - INFO - [certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [active kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost master my-cluster.tld node001] and IPs [10.150.0.1 10.141.0.1 127.0.0.1 10.141.255.254]
In the last line of the output we can already spot that it generated a cert with the additional DNS name my-cluster.tld.
We can validate this with the following command on the control-plane node we just updated.
root@node001:~# openssl x509 -noout -ext subjectAltName -in /etc/kubernetes/pki/default/apiserver.crt
X509v3 Subject Alternative Name:
    DNS:active, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:localhost, DNS:master, DNS:my-cluster.tld, DNS:node001, IP Address:10.150.0.1, IP Address:10.141.0.1, IP Address:127.0.0.1, IP Address:10.141.255.254
We see the newly added entry DNS:my-cluster.tld. kubeadm advises: “Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.” This is because updating the files on disk does not mean the Kubernetes API server will automatically start using them.
In our case we are only concerned with the Kubernetes API server using the new certificate, so we can choose to restart only the kube-apiserver, as follows.
# cm-kubeadm-manage --kube-cluster=default update_apiserver node001
...
2025-05-13 15:19:19,457 - cm-kubeadm-manage - INFO - Successfully removed kube-apiserver pod (the POD will be rescheduled)
When the kube-apiserver pod comes back up, it will be serving the new certificate.
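To confirm that the kube-apiserver pod came back, a couple of quick checks can be run (a sketch: the component=kube-apiserver label is the one kubeadm normally sets on its static pods, and the kubeconfig path is taken from the log output in section 2):

root@node001:~# crictl ps --name kube-apiserver
root@headnode# kubectl --kubeconfig=/root/.kube/config-default -n kube-system get pods -l component=kube-apiserver -o wide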
Next we do the same for the other control-plane nodes. In our example that means our head node (ci-tmp-100-u2204-field-perch-130706).
# cm-kubeadm-manage --kube-cluster=default update_certs ci-tmp-100-u2204-field-perch-130706
# cm-kubeadm-manage --kube-cluster=default update_apiserver ci-tmp-100-u2204-field-perch-130706
Our kubeconfig should now no longer have issues using my-cluster.tld for the endpoint, see below.
user@my-laptop$ grep server ~/.kube/config
    server: https://my-cluster.tld:10443
user@my-laptop$ kubectl get nodes
NAME                                  STATUS   ROLES                         AGE     VERSION
ci-tmp-100-u2204-field-perch-130706   Ready    control-plane,master          7h50m   v1.31.8
node001                               Ready    control-plane,master,worker   7h48m   v1.31.8
node002                               Ready    worker                        7h48m   v1.31.8
node003                               Ready    worker                        7h48m   v1.31.8
node004                               Ready    worker                        7h48m   v1.31.8
node005                               Ready    worker                        7h48m   v1.31.8
node006                               Ready    worker                        7h48m   v1.31.8
4. Task: Rotate all certificates on control-plane nodes
The previous section was specific to updating only the Kubernetes API server certificate. In order to rotate all certificates on the control-plane nodes, we basically have to perform the following actions:
- update_configmap
- for each <node>:
  - update_certs <node>
  - update_apiserver <node>
  - update_controller_manager <node>
  - update_scheduler <node>
Since four actions per node can be tedious to type, we can use the short-hand action update instead, which does all of the above for one node, including the configmap update for that node.
Assuming we still have the two control-plane nodes, the head node and node001, we execute the two commands below. It is recommended to run them one at a time.
# cm-kubeadm-manage --kube-cluster=default update node001
# cm-kubeadm-manage --kube-cluster=default update ci-tmp-100-u2204-field-perch-130706
Updating the apiserver, controller manager, etc. involves restarting Pods. Please check the status in between node updates and confirm that everything is healthy before continuing to the next node, for example as shown below.
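A few possible checks (a sketch; status and check_certs are taken from the help output above, and we assume here that check_certs accepts a node name in the same way update_certs does; the tier=control-plane label is the one kubeadm normally sets on its static pods):

root@headnode# cm-kubeadm-manage --kube-cluster=default status
root@headnode# cm-kubeadm-manage --kube-cluster=default check_certs node001
root@headnode# kubectl --kubeconfig=/root/.kube/config-default -n kube-system get pods -l tier=control-plane -o wide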
5. Task: Apply new configuration to Kubelet configuration (/var/lib/kubelet/config.yaml)
Testing new configuration on one specific kubelet
Sometimes it can be beneficial to try something out on one node first. Let’s say we want to add the following configuration to /var/lib/kubelet/config.yaml:
systemReserved:
  cpu: 200m
  memory: 600Mi
We can SSH to the node and simply append it to the file, to get the following result (some output omitted):
root@node003:~# cat /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/default/ca.crt
...
syncFrequency: 0s
volumeStatsAggPeriod: 0s
systemReserved:
  cpu: 200m
  memory: 600Mi
Next we restart the kubelet service with systemctl restart kubelet.
Validating the new configuration on the changed node
With kubectl we can query a specific kubelet <node> as follows.
kubectl get --raw /api/v1/nodes/<node>/proxy/configz | jq . | grep <something>
In this example:
root@node003:~# kubectl get --raw /api/v1/nodes/node003/proxy/configz | jq . | grep systemReserved -A 3
    "systemReserved": {
      "cpu": "200m",
      "memory": "600Mi"
    },
We can contrast it with a different node that we did not manually change (no output in this case means the defaults have not been changed):
root@node003:~# kubectl get --raw /api/v1/nodes/node004/proxy/configz | jq . | grep systemReserved -A 3
root@node003:~#
Updating multiple nodes directly using pdsh
In some cases we may still only want to test certain configuration, but on multiple nodes; we can do the following.
pdsh -w node00[1-3] "cat << 'EOT' >> /var/lib/kubelet/config.yaml
systemReserved:
  cpu: 200m
  memory: 600Mi
EOT"
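As with the single-node case, the kubelet service on those nodes still has to be restarted for the change to take effect, for example with the same pdsh node list:

pdsh -w node00[1-3] "systemctl restart kubelet"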
However, at some point it is recommended to let BCM handle the configuration changes. If the kubelet role is unassigned, the node is rejoined freshly to the cluster, or a FULL sync forces the kubelet to rejoin, the file will be regenerated and will no longer contain our manually applied changes.
Patching the yaml through BCM via the kubelet role
Please note that there are typically two configuration overlays: one for the control-plane nodes and one for the workers. In that case both kubelet roles have to be modified.
This example will show the change on the kube-default-master overlay for the control-plane nodes.
In the following example cluster, this will affect the configuration of ci-tmp-t-u2204-vine-clown-051608 and node001.
root@ci-tmp-t-u2204-vine-clown-051608:~# kubectl get nodes
NAME                               STATUS   ROLES                         AGE     VERSION
ci-tmp-t-u2204-vine-clown-051608   Ready    control-plane,master          5h35m   v1.32.5
node001                            Ready    control-plane,master,worker   5h34m   v1.32.5
node002                            Ready    worker                        5h34m   v1.32.5
node003                            Ready    worker                        5h34m   v1.32.5
node004                            Ready    worker                        5h34m   v1.32.5
node005                            Ready    worker                        5h34m   v1.32.5
node006                            Ready    worker                        5h34m   v1.32.5
In BCM 11 we can do the following:
root@ci-tmp-t-u2204-vine-clown-051608:~# cmsh
[ci-tmp-t-u2204-vine-clown-051608]% configurationoverlay
[ci-tmp-t-u2204-vine-clown-051608->configurationoverlay]% use kube-default-master
[ci-tmp-t-u2204-vine-clown-051608->configurationoverlay[kube-default-master]]% roles
[ci-tmp-t-u2204-vine-clown-051608->configurationoverlay[kube-default-master]->roles]% use kubelet
[ci-tmp-t-u2204-vine-clown-051608->configurationoverlay[kube-default-master]->roles[kubelet]]% set customyaml
This will present us with an editor where we can put the yaml that has to be “merged over” the /var/lib/kubelet/config.yaml file on the node(s) controlled by this configuration overlay. This means that if we write the following yaml:
systemReserved:
  cpu: 200m
  memory: 600Mi
runtimeRequestTimeout: 10m0s
The systemReserved block will be added, since it was not present in the configuration at all. In this case the existing “runtimeRequestTimeout: 0s” setting will also be overwritten, since that setting exists by default (at least on this cluster).
Please note that, again, no services are automatically restarted. For that we can go to the device submode and issue a restart of the kubelet service, either on all nodes in the overlay at once or one by one.
# all kubelet services for the overlay
[ci-tmp-t-u2204-vine-clown-051608->device]% foreach -e kube-default-master (services; restart kubelet)

# one specific node
[ci-tmp-t-u2204-vine-clown-051608->device]% use node001
[ci-tmp-t-u2204-vine-clown-051608->device[node001]]% services
[ci-tmp-t-u2204-vine-clown-051608->device[node001]->services]% restart kubelet
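After the restart we can check on one of the overlay's nodes that the merged settings ended up in /var/lib/kubelet/config.yaml. The output below is illustrative, based on the yaml we merged, and assumes CMDaemon has already written out the updated file:

root@node001:~# grep -A 2 systemReserved /var/lib/kubelet/config.yaml
systemReserved:
  cpu: 200m
  memory: 600Mi
root@node001:~# grep runtimeRequestTimeout /var/lib/kubelet/config.yaml
runtimeRequestTimeout: 10m0s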
It is also possible to clone a configuration overlay and, using its priority, have it override the kubelet role for a specific set of nodes. That way we can also roll out a configuration change gradually, by slowly moving nodes from one overlay to the other.
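A sketch of that approach in cmsh (the overlay name kube-default-worker-canary, the priority value and the node assignment are only illustrative; after the clone, the kubelet role's customyaml in the new overlay can be modified as shown earlier):

[headnode->configurationoverlay]% clone kube-default-worker kube-default-worker-canary
[headnode->configurationoverlay*[kube-default-worker-canary*]]% set priority 560
[headnode->configurationoverlay*[kube-default-worker-canary*]]% set nodes node003
[headnode->configurationoverlay*[kube-default-worker-canary*]]% commit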
One more example: let’s say we want to remove yaml configuration instead. We can repeat the set customyaml step from before in the kubelet role, and add two yaml documents, separated by the separator ---. Example below.
featureGates:
  DynamicResourceAllocation: true
---
systemReserved:
  cpu: 200m
  memory: 600Mi
runtimeRequestTimeout: 10m0s
The above will first remove the DynamicResourceAllocation: true setting from /var/lib/kubelet/config.yaml on the nodes, and then merge over the systemReserved, runtimeRequestTimeout, etc. settings, like before.
In BCM 10 this is slightly less convenient, and it is only available in BCM 10.25.06 and higher.
We need to place the patch.yaml file on the filesystem first, for example in /root/patch.yaml.
root@ci-tmp-100-u2204-swift-seed-051857:~# cat patch.yaml
featureGates:
  DynamicResourceAllocation: true
---
systemReserved:
  cpu: 200m
  memory: 600Mi
runtimeRequestTimeout: 10m0s
Next we have to “escape” it with a one-liner as follows. It wraps the multi-line patch.yaml file into one line, enclosed in double quotes.
root@rb-bcm10-ubuntu2404:~# python3 -c "import sys; print(f\"\\\"{repr(open('/root/patch.yaml').read())[1:-1]}\\\"\")"
"featureGates:\n  DynamicResourceAllocation: true\nsyncFrequency: 0s\n---\nsystemReserved:\n  cpu: 200m\n  memory: 600Mi\nruntimeRequestTimeout: 10m0s\n"
We copy the output of the python3 one-liner to our clipboard; we will use it inside cmsh as demonstrated below. Note that the instruction set -e custom_yaml <escaped_patched_yaml> is where the output is used.
root@rb-bcm10-ubuntu2404:~# cmsh
[rb-bcm10-ubuntu2404]% configurationoverlay
[rb-bcm10-ubuntu2404->configurationoverlay]% use kube-default-master
[rb-bcm10-ubuntu2404->configurationoverlay[kube-default-master]]% roles
[rb-bcm10-ubuntu2404->configurationoverlay[kube-default-master]->roles]% use kubelet
[rb-bcm10-ubuntu2404->configurationoverlay[kube-default-master]->roles[kubelet]]% set -e custom_yaml "featureGates:\n  DynamicResourceAllocation: true\nsyncFrequency: 0s\n---\nsystemReserved:\n  cpu: 200m\n  memory: 600Mi\nruntimeRequestTimeout: 10m0s\n"
[rb-bcm10-ubuntu2404->configurationoverlay*[kube-default-master*]->roles*[kubelet*]]% commit

# we verify that the output of the custom_yaml field matches our initial patch.yaml file
[rb-bcm10-ubuntu2404->configurationoverlay[kube-default-master]->roles[kubelet]]% get custom_yaml
featureGates:
  DynamicResourceAllocation: true
syncFrequency: 0s
---
systemReserved:
  cpu: 200m
  memory: 600Mi
runtimeRequestTimeout: 10m0s
The rest of the handling by BCM is the same. Note that in BCM 10 the kubelet service is also not restarted automatically, and this should be done manually.
6. Task: Add configuration to Kube API server, Controller Manager or Scheduler
The way to do this is more or less the same for all of these components. We will use configuring the MultiCIDRServiceAllocator feature as an example, since it requires configuration changes to both the Kube API server and the controller manager.
If it hasn’t already been done, please repeat section 2 (updating the configmap) as follows.
root@headnode# cm-kubeadm-manage --kube-cluster=default update_configmap
Next we have to update kubeadm’s ClusterConfiguration yaml, which is stored inside Kubernetes as a configmap.
root@headnode# kubectl edit configmap -n kube-system kubeadm-config
Please note that we will first show the correct syntax for Kubernetes >= v1.31, which introduced the new v1beta4 syntax that supersedes the previous v1beta3 version.
In the configmap, the apiVersion field specifies which version is being used by kubeadm:
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - rb-bcm10-ubuntu2404.openstacklocal
      - master
      - localhost
      - 10.141.0.1
      - 10.141.255.254
    apiVersion: kubeadm.k8s.io/v1beta4
    ...
Underneath the apiServer part, we have to add:
extraArgs:
- name: runtime-config
  value: networking.k8s.io/v1beta1=true
- name: feature-gates
  value: MultiCIDRServiceAllocator=true
Underneath the controllerManager part:
extraArgs:
- name: feature-gates
  value: MultiCIDRServiceAllocator=true
Below is a full example of how the configuration would look after these changes.
root@rb-bcm10-ubuntu2404:~# kubectl get configmap -n kube-system kubeadm-config -o yaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - rb-bcm10-ubuntu2404.openstacklocal
      - master
      - localhost
      - 10.141.0.1
      - 10.141.255.254
      extraArgs:
      - name: runtime-config
        value: networking.k8s.io/v1beta1=true
      - name: feature-gates
        value: MultiCIDRServiceAllocator=true
    apiVersion: kubeadm.k8s.io/v1beta4
    caCertificateValidityPeriod: 87600h0m0s
    certificateValidityPeriod: 8760h0m0s
    certificatesDir: /etc/kubernetes/pki/default
    clusterName: kubernetes
    controlPlaneEndpoint: 127.0.0.1:10443
    controllerManager:
      extraArgs:
      - name: feature-gates
        value: MultiCIDRServiceAllocator=true
    dns: {}
    encryptionAlgorithm: RSA-2048
    etcd:
      external:
        caFile: /etc/kubernetes/pki/default/etcd/ca.crt
        certFile: /etc/kubernetes/pki/default/apiserver-etcd-client.crt
        endpoints:
        - https://10.141.0.1:2379
        keyFile: /etc/kubernetes/pki/default/apiserver-etcd-client.key
    imageRepository: registry.k8s.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.31.9
    networking:
      dnsDomain: cluster.local
      podSubnet: 172.29.0.0/16
      serviceSubnet: 10.150.0.0/16
    proxy: {}
    scheduler: {}
kind: ConfigMap
metadata:
  creationTimestamp: "2025-06-05T21:09:23Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "273511"
  uid: fd23ff76-4c44-4a81-803a-0fbaa6eb9461
Similarly, the proxy and scheduler blocks allow for customization of extraArgs.
The pre-Kubernetes 1.31 (v1beta3) syntax is slightly different. The extraArgs should be specified as follows:
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - rb-bcm10-ubuntu2404.openstacklocal
      - master
      - localhost
      - 10.141.0.1
      - 10.141.255.254
      extraArgs:
        runtime-config: networking.k8s.io/v1beta1=true
        feature-gates: MultiCIDRServiceAllocator=true
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki/default
    clusterName: kubernetes
    controlPlaneEndpoint: 127.0.0.1:10443
    controllerManager:
      extraArgs:
        feature-gates: MultiCIDRServiceAllocator=true
    dns: {}
    ...
Next we can update the control-plane nodes, one at a time, using:
# cm-kubeadm-manage --kube-cluster=default update <node>
(Or, for a more granular approach, please refer to Section 4.)
After the above update, the configmap may look different, since BCM-configured options will also be merged into the configmap correctly.
root@rb-bcm10-ubuntu2404:~# kubectl get configmap -n kube-system kubeadm-config -o yaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      ...
      extraArgs:
      - name: runtime-config
        value: networking.k8s.io/v1beta1=true
      - name: feature-gates
        value: MultiCIDRServiceAllocator=true
      - name: default-watch-cache-size
        value: '2000'
      - name: delete-collection-workers
        value: '10'
      - name: event-ttl
        value: 30m
      - name: max-mutating-requests-inflight
        value: '1600'
      - name: max-requests-inflight
        value: '3200'
    apiVersion: kubeadm.k8s.io/v1beta4
    ...
We can validate our changes by having a look at the running Pods or at the manifest file on the node (e.g. /etc/kubernetes/manifests/kube-apiserver.yaml).
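For example (a sketch; the kube-apiserver-node001 pod name follows the usual kubeadm static-pod naming convention and node001 is only used for illustration):

root@node001:~# grep -E 'feature-gates|runtime-config' /etc/kubernetes/manifests/kube-apiserver.yaml
root@headnode# kubectl --kubeconfig=/root/.kube/config-default -n kube-system get pod kube-apiserver-node001 -o yaml | grep -E 'feature-gates|runtime-config'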