In this example we will be scaling the test-nodes instance group out of one of our KOPS clusters by draining and then removing its node.
Draining the node of pods before deletion
Log in to the KOPS node for one of the clusters and run this command to find the node:
kubectl get nodes --show-labels | grep test-nodes
That should show you a single node's name, which will be something like ip-10-99-229-241.ec2.internal.
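Depending on the kops version, the nodes may also carry an instance-group label, in which case a label selector can replace the grep; treat the label key below as an assumption and confirm it against the --show-labels output first:
kubectl get nodes -l kops.k8s.io/instancegroup=test-nodes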
Once you know the name you can drain the node:
kubectl drain ip-10-99-229-241.ec2.internal
At first you will receive two errors:
- cannot delete DaemonSet-managed Pods
To understand a DaemonSet, imagine a deployment with two rules: "you must run at most one pod per worker node" and "you must run one pod on every worker node you are allowed to run on" (i.e. every node the label selectors permit). That is effectively what a DaemonSet is; a minimal manifest sketch follows after this list.
What this error message is saying is that the DaemonSet pod cannot be evicted, because by its nature it can only run on this node.
We can ignore this behaviour by using: --ignore-daemonsets
- cannot delete Pods with local storage
In a pod you can mount storage into the pod's volumeMounts. Usually this is central storage, but it can also be local storage, either a specific directory on the host or an emptyDir (a temporary volume that lives and dies with the pod; a sketch follows after this list).
What this error message is saying is that the contents of those local volumes will not be carried across to another node. That is fine in our case (and should be in any Kubernetes system), because anything that needs persistent storage should be using central storage.
We can ignore this behaviour by using: --delete-emptydir-data
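For reference, here is a minimal sketch of what a DaemonSet manifest looks like; the name, labels and image are placeholders for illustration, not anything from our clusters:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent              # hypothetical name, for illustration only
spec:
  selector:
    matchLabels:
      app: example-agent
  template:
    metadata:
      labels:
        app: example-agent
    spec:
      containers:
        - name: agent
          image: example/agent:latest    # placeholder image
And a sketch of a pod using local storage via an emptyDir volume, which is exactly the sort of data --delete-emptydir-data tells the drain it may discard (again, names and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: example-scratch-pod        # hypothetical name
spec:
  containers:
    - name: worker
      image: example/worker:latest     # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch
  volumes:
    - name: scratch
      emptyDir: {}                 # temporary local volume, lost when the pod is evicted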
Run the command again with the two flags to ignore the errors:
kubectl drain ip-10-99-229-241.ec2.internal --ignore-daemonsets --delete-emptydir-data
Once the node has drained you should get a list of the pods which have been evicted. To make sure those pods come back to life elsewhere in the cluster, view the status of all pods:
kubectl get pods -A | less
If you see any pods restarting, just run this command again after a while until all pods are in either a Completed or Running state. Once everything is fine you can move on to the next step.
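As a convenience (not part of the original steps), you can also ask kubectl to show only pods that are not yet Running or Succeeded (Completed pods report the Succeeded phase). Note that pods stuck in a crash loop can still report the Running phase, so this is a quick filter rather than a replacement for the check above:
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded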
Deleting the node
Export the IAM creds for KOPS as normal (the pipeline admin ones), then edit the instance group you are scaling out (in our case test-nodes).
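The exact export depends on how the pipeline hands out the credentials; as a rough sketch, assuming the standard AWS environment variables and an S3-backed kops state store (every value below is a placeholder):
export AWS_ACCESS_KEY_ID=<pipeline-admin-access-key>
export AWS_SECRET_ACCESS_KEY=<pipeline-admin-secret-key>
export KOPS_STATE_STORE=s3://<state-store-bucket>
export KOPS_CLUSTER_NAME=<cluster-name>
With the credentials in place, open the instance group config: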
kops edit ig test-nodes
There will be a set of vars:
maxSize: 1
minSize: 1
(In our case they should always be set to the same value as we haven’t enabled auto-scaling)
Set both to 0.
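After the edit, the relevant fragment of the instance group spec should look roughly like this (a sketch with all other fields omitted, not the full spec from our state store):
spec:
  maxSize: 0
  minSize: 0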
Then run:
kops update cluster
If all has gone to plan it should show a single change where the instance group scales from 1 => 0. If so, you can now run:
kops update cluster --yes
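As an extra check (not part of the original steps), once the instance has terminated the node should drop out of the node list; this can take a few minutes:
kubectl get nodes --show-labels | grep test-nodes
When the grep returns nothing, the node is gone.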
Congratulations, you have just scaled a node out of a cluster.