Upgrading the Configuration of Kubernetes Worker Nodes and Adding New Worker Nodes

Upgrading a Kubernetes worker node
First, check the node status of the cluster:
[root@kubemaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 17d v1.13.3
kubenode1 Ready <none> 17d v1.13.3
kubenode2 Ready <none> 17d v1.13.3
[root@kubemaster ~]#
Check which Pods are running on node kubenode1:
[root@kubemaster ~]# kubectl get pods -o wide|grep kubenode1
account-summary-689d96d949-49bjr 1/1 Running 0 7d15h 10.244.1.17 kubenode1 <none> <none>
compute-interest-api-5f54cc8dd9-44g9p 1/1 Running 0 7d15h 10.244.1.15 kubenode1 <none> <none>
send-notification-fc7c8ffc4-rk5wl 1/1 Running 0 7d15h 10.244.1.16 kubenode1 <none> <none>
transaction-generator-7cfccbbd57-8ts5s 1/1 Running 0 7d15h 10.244.1.18 kubenode1 <none> <none>
[root@kubemaster ~]#

If other namespaces also have Pods, include the namespace as well, e.g. kubectl get pods -n kube-system -o wide|grep kubenode1
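The grep-based check above can be wrapped in a small helper. A sketch (pods_on_node_cmd is a hypothetical name; it assumes a kubectl version where --field-selector is available, roughly 1.11 and later, and it only builds the command string rather than running it):

```shell
# Hypothetical helper: build the command that lists every Pod scheduled on
# a given node, across all namespaces, via a server-side field selector.
pods_on_node_cmd() {
  node="$1"
  echo "kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=${node}"
}

pods_on_node_cmd kubenode1
```

The field selector filters on the server, so it also catches Pods whose name happens to contain the node name, which a plain grep would misreport.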

Use the kubectl cordon command to mark kubenode1 as unschedulable:
[root@kubemaster ~]# kubectl cordon kubenode1
node/kubenode1 cordoned
[root@kubemaster ~]#
Check the running Pods again: they are still on kubenode1. kubectl cordon only prevents new Pods from being scheduled onto kubenode1; Pods already running on the node are not evicted.
[root@kubemaster ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 17d v1.13.3
kubenode1 Ready,SchedulingDisabled <none> 17d v1.13.3
kubenode2 Ready <none> 17d v1.13.3
[root@kubemaster ~]# kubectl get pods -n kube-system -o wide|grep kubenode1
kube-flannel-ds-amd64-7ghpg 1/1 Running 1 17d 10.83.32.138 kubenode1 <none> <none>
kube-proxy-2lfnm 1/1 Running 1 17d 10.83.32.138 kubenode1 <none> <none>
[root@kubemaster ~]#
Now the Pods need to be evicted, using kubectl drain. If DaemonSet-managed Pods are running on the node, add the --ignore-daemonsets flag.
[root@kubemaster ~]# kubectl drain kubenode1 --ignore-daemonsets
node/kubenode1 already cordoned
WARNING: Ignoring DaemonSet-managed pods: node-exporter-s5vfc, kube-flannel-ds-amd64-7ghpg, kube-proxy-2lfnm
pod/traefik-ingress-controller-7899bfbd87-wsl64 evicted
pod/grafana-57f7d594d9-vw5mp evicted
pod/tomcat-deploy-5fd9ffbdc7-cdnj8 evicted
pod/myapp-deploy-6b56d98b6b-rrb5b evicted
pod/transaction-generator-7cfccbbd57-8ts5s evicted
pod/prometheus-848d44c7bc-rtq7t evicted
pod/send-notification-fc7c8ffc4-rk5wl evicted
pod/compute-interest-api-5f54cc8dd9-44g9p evicted
pod/account-summary-689d96d949-49bjr evicted
node/kubenode1 evicted
[root@kubemaster ~]#
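The drain step can be generalized into a reusable maintenance helper. A sketch (drain_node_cmd is a hypothetical name; the flags are from kubectl 1.13, and the command is only printed here, not executed):

```shell
# Hypothetical helper: print the drain command for a node that is about
# to be taken down for maintenance.
drain_node_cmd() {
  node="$1"
  # --ignore-daemonsets : DaemonSet Pods cannot be evicted, so skip them
  # --delete-local-data : also evict Pods using emptyDir (their data is lost)
  # --grace-period=60   : give every Pod 60 seconds to terminate cleanly
  echo "kubectl drain ${node} --ignore-daemonsets --delete-local-data --grace-period=60"
}

drain_node_cmd kubenode1
```

--delete-local-data is worth adding only when you accept losing emptyDir contents; without it, drain refuses to evict such Pods.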
Check the Pods again to confirm none are still running on kubenode1. If the node is empty, shut it down, upgrade its configuration, and start it back up.
[root@kubemaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 17d v1.13.3
kubenode1 Ready,SchedulingDisabled <none> 17d v1.13.3
kubenode2 Ready <none> 17d v1.13.3
[root@kubemaster ~]#
# The node is still in an unschedulable state
[root@kubemaster ~]# kubectl uncordon kubenode1
# Mark the worker node schedulable again
node/kubenode1 uncordoned
[root@kubemaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 17d v1.13.3
kubenode1 Ready <none> 17d v1.13.3
kubenode2 Ready <none> 17d v1.13.3
[root@kubemaster ~]#
This completes the upgrade of one worker node. Now let's add a new worker node to the cluster.
Kubernetes cluster worker node scale-out
First, refer to my earlier post on installing a Kubernetes cluster with kubeadm:
  https://blog.51cto.com/zgui2000/2354852
  Set up the yum repositories and install docker-ce, kubelet, and the other packages.
[root@kubenode3 yum.repos.d]# cat /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[root@kubenode3 yum.repos.d]#
# Prepare the docker-ce yum repository file
[root@kubenode3 yum.repos.d]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
[root@kubenode3 yum.repos.d]#
# Prepare the kubernetes.repo yum repository file
[root@kubenode3 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.83.32.146 kubemaster
10.83.32.138 kubenode1
10.83.32.133 kubenode2
10.83.32.144 kubenode3
# Prepare the hosts file
[root@kubenode3 yum.repos.d]# getenforce
Disabled
# SELinux is disabled; it can be configured in /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld
# Disable the firewall
yum install docker-ce kubelet kubeadm kubectl
# Install docker, kubelet, and the other packages
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
# Configure a Docker registry mirror, then restart the docker service: systemctl restart docker
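What set_mirror.sh does is, roughly, write the mirror address into /etc/docker/daemon.json. A manual equivalent would look like the sketch below (treat the exact file contents as an assumption about what the script produces, not its verbatim output):

```shell
# Write the registry mirror into docker's daemon configuration by hand,
# then restart docker so the new mirror takes effect.
cat <<'EOF' > /etc/docker/daemon.json
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
EOF
systemctl restart docker
```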
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.13.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.13.3
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull carlziess/coredns-1.2.6
docker pull quay.io/coreos/flannel:v0.11.0-amd64
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag carlziess/coredns-1.2.6 k8s.gcr.io/coredns:1.2.6
# Pull the required images in advance: in a kubeadm-installed cluster, api-server, controller-manager, kube-scheduler, etcd, flannel and the other components run as containers, so the images are downloaded up front.
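The repetitive pull-and-retag commands above can be scripted. A sketch that only prints the commands (pull_and_tag_cmds is a hypothetical helper; coredns comes from a different mirror above, so it is left out of the loop):

```shell
# Build the pull/tag commands for the mirrorgooglecontainers images; the
# k8s.gcr.io name is the mirror name with the -amd64 suffix removed.
pull_and_tag_cmds() {
  for img in "$@"; do
    target="k8s.gcr.io/$(echo "$img" | sed 's/-amd64//')"
    echo "docker pull mirrorgooglecontainers/${img}"
    echo "docker tag mirrorgooglecontainers/${img} ${target}"
  done
}

pull_and_tag_cmds \
  kube-apiserver-amd64:v1.13.3 kube-controller-manager-amd64:v1.13.3 \
  kube-scheduler-amd64:v1.13.3 kube-proxy-amd64:v1.13.3 \
  pause-amd64:3.1 etcd-amd64:3.2.24
```

Pipe the output to sh once you have verified the printed commands match the list above.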
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl -p
# Let bridged traffic pass through iptables, a standard kubeadm prerequisite
[root@kubenode3 yum.repos.d]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@kubenode3 yum.repos.d]#
Now start adding the new worker node.
  Each token is valid for only 24 hours; if no valid token exists, create one with the following command:

[root@kubemaster ~]# kubeadm token create
fv93ud.33j7oxtdmodwfn7f
[root@kubemaster ~]#
# Create a token
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
c414ceda959552049efccc2d9fd1fc1a2006689006a5f3b05e6ca05b3ff1a93e
# Get the SHA-256 hash of the cluster CA certificate
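The openssl pipeline above can be wrapped in a function for reuse (ca_cert_hash is a hypothetical name; the pipeline is exactly the one shown). As an alternative, kubeadm also offers kubeadm token create --print-join-command, which prints a complete join command with the hash filled in:

```shell
# Compute the --discovery-token-ca-cert-hash value from a CA certificate:
# extract the public key, convert to DER, SHA-256 it, keep the hex digest.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```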
swapoff -a
# Disable the swap partition
kubeadm join 10.83.32.146:6443 --token fv93ud.33j7oxtdmodwfn7f --discovery-token-ca-cert-hash sha256:c414ceda959552049efccc2d9fd1fc1a2006689006a5f3b05e6ca05b3ff1a93e --ignore-preflight-errors=Swap
# Join the Kubernetes cluster
[root@kubemaster ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 18d v1.13.3
kubenode1 Ready <none> 17d v1.13.3
kubenode2 Ready <none> 17d v1.13.3
kubenode3 Ready <none> 2m22s v1.13.4
# Check the node status: kubenode3 has joined the cluster successfully
