Tear down K8S cluster
1) List all the nodes in the K8S cluster.
[root@tssperf09 ~]# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
tssperf09.lab   Ready     master    44d       v1.9.2
tssperf10.lab   Ready     <none>    44d       v1.9.2
tssperf11.lab   Ready     <none>    44d       v1.9.2
[root@tssperf09 ~]#
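If you only need the node names (for example, to feed into a loop in the later steps), a jsonpath query works. This is a convenience sketch, not part of the original walkthrough:

# Print only the node names, space-separated, for use in scripts
kubectl get nodes -o jsonpath='{.items[*].metadata.name}'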
2) Drain each node and make sure it is empty before shutting it down.
i) First drain all the non-master nodes.
[root@tssperf09 ~]# kubectl drain tssperf10.lab --delete-local-data --force --ignore-daemonsets
node "tssperf10.lab" cordoned
WARNING: Ignoring DaemonSet-managed pods: calico-node-k8wcz, kube-proxy-87mt7, mapr-volplugin-pk5q8; Deleting pods with local storage: mapr-volplugin-pk5q8
pod "calico-kube-controllers-d554689d5-4456k" evicted
node "tssperf10.lab" drained
[root@tssperf09 ~]#
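To drain every worker in one shot instead of repeating the command per node, a small loop works. This is a sketch using the same drain flags as above and assuming tssperf10.lab and tssperf11.lab are the only workers:

# Drain both worker nodes with the same flags used above
for node in tssperf10.lab tssperf11.lab; do
  kubectl drain "$node" --delete-local-data --force --ignore-daemonsets
done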
ii) Now drain the master node itself.
[root@tssperf09 ~]# kubectl drain tssperf09.lab --delete-local-data --force --ignore-daemonsets
node "tssperf09.lab" cordoned
WARNING: Ignoring DaemonSet-managed pods: calico-etcd-rvxmg, calico-node-qrw75, kube-proxy-2ztzj, mapr-volplugin-rcsz9; Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: etcd-tssperf09.lab, kube-apiserver-tssperf09.lab, kube-controller-manager-tssperf09.lab, kube-scheduler-tssperf09.lab, maprfs-volume-example; Deleting pods with local storage: mapr-volplugin-rcsz9
pod "calico-kube-controllers-d554689d5-6j5b5" evicted
pod "maprfs-volume-example" evicted
pod "kube-dns-6f4fd4bdf-jndmv" evicted
node "tssperf09.lab" drained
[root@tssperf09 ~]#
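To double-check what is still running on the drained master (only static pods and DaemonSet-managed pods should remain), a quick grep over the wide pod listing is enough. A minimal sketch:

# Show any pods still scheduled on the master after the drain
kubectl get pods --all-namespaces -o wide | grep tssperf09.lab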
3) The nodes should now show SchedulingDisabled, so no new pods will land on them.
[root@tssperf09 ~]# kubectl get nodes
NAME            STATUS                     ROLES     AGE       VERSION
tssperf09.lab   Ready,SchedulingDisabled   master    44d       v1.9.2
tssperf10.lab   Ready,SchedulingDisabled   <none>    44d       v1.9.2
tssperf11.lab   Ready,SchedulingDisabled   <none>    44d       v1.9.2
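Note that SchedulingDisabled is simply the cordon that kubectl drain applies. If you decide to abort the teardown at this point, uncordoning brings a node back into scheduling, for example:

# Re-enable scheduling on a node if you decide not to tear it down
kubectl uncordon tssperf10.lab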
4) Delete the slave (worker) nodes, followed by the master node.
[root@tssperf09 ~]# kubectl delete node tssperf10.lab
node "tssperf10.lab" deleted
[root@tssperf09 ~]# kubectl get nodes
NAME            STATUS                     ROLES     AGE       VERSION
tssperf09.lab   Ready,SchedulingDisabled   master    44d       v1.9.2
tssperf11.lab   Ready,SchedulingDisabled   <none>    44d       v1.9.2
[root@tssperf09 ~]# kubectl delete node tssperf11.lab
node "tssperf11.lab" deleted
[root@tssperf09 ~]# kubectl delete node tssperf09.lab
node "tssperf09.lab" deleted
[root@tssperf09 ~]# kubectl get nodes
No resources found.
[root@tssperf09 ~]#
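If you prefer not to delete nodes one by one, kubectl delete also accepts --all. A sketch that removes every node object in a single command (use with care, since it deletes the master's node object too):

# Delete all node objects from the cluster at once
kubectl delete nodes --all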
5) Finally, reset all kubeadm-installed state by running kubeadm reset on every node (master and workers).
[root@tssperf09 ~]# kubeadm reset
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[root@tssperf09 ~]#
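kubeadm reset does not clean up the iptables rules created by kube-proxy, nor the admin kubeconfig copied to the user's home directory, so a little manual cleanup on each node is commonly recommended. A sketch of that cleanup, assuming a standard kubeadm install:

# Flush iptables rules left behind by kube-proxy
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Remove the admin kubeconfig copied after 'kubeadm init'
rm -rf $HOME/.kube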