Kubernetes High-Availability Cluster Setup

Preface

The previous article walked through building a single-master K8S deployment. This article covers making the K8S cluster highly available, so that if the primary master node goes down, the kubelet on the node servers can still reach the apiserver and the other components on the second master and keep working.

K8S Multi-Master Architecture

Starting from the single-master setup configured earlier, add a new master node, then build a load-balancing pair. Install nginx on the load balancer servers and configure layer-4 (TCP) forwarding (layer-7 forwarding would require creating certificates for verification, so a layer-4 deployment is somewhat simpler). nginx load-balances the requests that the node servers make to the kube-apiserver on the masters across the two k8s-master nodes. This makes the master layer highly available: if either master node goes down, nginx forwards the requests to the other master.

Lab Deployment

1. Lab Environment Plan

Hostname                OS          Spec           IP address      Packages
master1                 CentOS 7.6  2 CPU / 4 GB   192.168.7.100   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
master2                 CentOS 7.6  2 CPU / 4 GB   192.168.7.101   kube-apiserver, kube-controller-manager, kube-scheduler
node1                   CentOS 7.6  2 CPU / 4 GB   192.168.7.102   kubelet, kube-proxy, Docker, etcd, Flannel
node2                   CentOS 7.6  2 CPU / 4 GB   192.168.7.103   kubelet, kube-proxy, Docker, etcd, Flannel
load balancer (master)  CentOS 7.6  2 CPU / 4 GB   192.168.7.104   keepalived, nginx
load balancer (backup)  CentOS 7.6  2 CPU / 4 GB   192.168.7.105   keepalived, nginx

2. Single-Master Deployment
See the previous post: Kubernetes single-master binary deployment
3. Deploying master2
(1) On master1, copy the Kubernetes working directory to master2

[root@localhost ~]# scp -r /opt/kubernetes/ [email protected]:/opt

(2) On master1, copy the three service unit files kube-scheduler.service, kube-apiserver.service, and kube-controller-manager.service to master2

[root@localhost ~]# scp /usr/lib/systemd/system/{kube-scheduler,kube-apiserver,kube-controller-manager}.service [email protected]:/usr/lib/systemd/system/

(3) On master2, edit the kube-apiserver configuration file

[root@localhost ~]# cd /opt/kubernetes/cfg
[root@localhost cfg]# vim kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.7.100:2379,https://192.168.7.102:2379,https://192.168.7.103:2379 \
--bind-address=192.168.7.101 \		// change to this host's IP address
--secure-port=6443 \
--advertise-address=192.168.7.101 \	// change to this host's IP address
  • The kube-scheduler and kube-controller-manager configuration files only use the local master address, so they do not need to be changed
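A quick sanity check that only the address-related flags on master2 differ from master1 (just plain grep against the file edited above):

[root@localhost cfg]# grep -E 'bind-address|advertise-address' /opt/kubernetes/cfg/kube-apiserver
# both lines should show 192.168.7.101, i.e. master2's own address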

(4) Copy the existing etcd certificates from master1 to master2

  • master2 must have the etcd certificates, otherwise the apiserver service will not start
[root@localhost ~]# scp -r /opt/etcd/ [email protected]:/opt
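A quick check on master2 that the certificates actually arrived; the ssl subdirectory name here is assumed from the usual /opt/etcd layout used in the single-master article:

[root@localhost ~]# ls /opt/etcd/ssl/
# the CA and server certificate/key files should be listed here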

(5) Start the three services on master2 and enable them at boot

[root@localhost cfg]# systemctl start kube-apiserver.service 
[root@localhost cfg]# systemctl enable kube-apiserver.service 
[root@localhost cfg]# systemctl start kube-scheduler.service 
[root@localhost cfg]# systemctl enable kube-scheduler.service 
[root@localhost cfg]# systemctl start kube-controller-manager.service
[root@localhost cfg]# systemctl enable kube-controller-manager.service 

(6) Check the node status

# Set the environment variable
[root@localhost cfg]# vim /etc/profile
# Append at the end of the file
export PATH=$PATH:/opt/kubernetes/bin

[root@localhost cfg]# source /etc/profile
# Check the node status
[root@localhost cfg]# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
192.168.7.102   Ready    <none>   2d19h   v1.12.3
192.168.7.103   Ready    <none>   2d18h   v1.12.3
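At this point the control-plane components on master2 can also be checked directly; in this v1.12-era cluster the componentstatuses API is still available:

[root@localhost cfg]# kubectl get cs
# the scheduler, controller-manager and all three etcd members should report Healthy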

4. Load Balancer Deployment
(1) Configure the nginx service

# Configure the nginx yum repository
[root@localhost ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0

[root@localhost ~]# yum list
# Install the nginx service
[root@localhost ~]# yum install nginx -y
# Add layer-4 (stream) forwarding to nginx
[root@localhost ~]# vim /etc/nginx/nginx.conf 
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;		// log file location

    upstream k8s-apiserver {
        server 192.168.7.100:6443;   // master1
        server 192.168.7.101:6443;   // master2
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

# Check the syntax
[root@localhost ~]# nginx -t
# Start the service
[root@localhost ~]# systemctl start nginx
# Verify that the nginx service is reachable

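The original screenshot is omitted here; a rough command-line check on the load balancer covers the same thing (assuming the packaged default site on port 80 is still enabled):

# the default welcome page on port 80 shows nginx itself is serving
[root@localhost ~]# curl -I http://127.0.0.1
# the stream listener for the apiserver should be bound on port 6443
[root@localhost ~]# ss -lntp | grep 6443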
(2) Configure the keepalived service

# Install keepalived
[root@localhost ~]# yum install keepalived -y
# Edit the configuration file
[root@localhost ~]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   # Notification recipient addresses
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   # Notification sender address
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
# Health-check script that monitors the nginx service
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51		// VRRP router ID; must be unique for each instance
    priority 100				// priority; set this to 90 on the backup server
    advert_int 1				// VRRP advertisement (heartbeat) interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.7.99/24			// VIP address
    }
    track_script {
        check_nginx
    }
}
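On the backup load balancer (192.168.7.105) the same file is used; based on the inline comments above, only the lines below differ and everything else stays identical (the router_id value here is just an illustrative name):

global_defs {
   router_id NGINX_BACKUP		// any name, as long as it differs from the master's
}
vrrp_instance VI_1 {
    state BACKUP
    priority 90					// lower than the master's 100
}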

# Write the nginx health-check script
[root@localhost ~]# vim /etc/nginx/check_nginx.sh
#!/bin/bash
# Count running nginx processes, excluding the grep itself and this script's own shell
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
# If nginx is gone, stop keepalived so the VIP fails over to the backup node
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@localhost ~]# chmod +x /etc/nginx/check_nginx.sh
# Check the floating address (use the ip addr command; ifconfig will not show it)
[root@localhost ~]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:89:10:26 brd ff:ff:ff:ff:ff:ff
    inet 192.168.7.104/24 brd 192.168.7.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.7.99/24 scope global secondary ens33			// here you can see that the VIP has taken effect
       valid_lft forever preferred_lft forever
    inet6 fe80::8e4:29b5:f159:6cfd/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

(3) Failover test

# Stop the nginx service on the master load balancer and check the keepalived service status
[root@localhost ~]# pkill nginx
[root@localhost ~]# systemctl status keepalived.service 
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
# On the backup node, check for the floating address; seeing the VIP there means failover succeeded
[root@localhost ~]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:46:7a:5a brd ff:ff:ff:ff:ff:ff
    inet 192.168.7.105/24 brd 192.168.7.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.7.99/24 scope global secondary ens33			// the VIP has moved to the backup node
       valid_lft forever preferred_lft forever
    inet6 fe80::da5e:a139:b39d:9639/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
  • If the services on the master load balancer are brought back up, the VIP moves back to the master node, because the master's priority (100) is higher than the backup's (90) and keepalived always prefers the node with the higher priority. Note that since the health-check script stopped keepalived on the master, both nginx and keepalived have to be started again there.
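A minimal sketch of the fail-back steps implied by the note above:

[root@localhost ~]# systemctl start nginx
[root@localhost ~]# systemctl start keepalived
# the VIP should reappear on the master within a few seconds
[root@localhost ~]# ip addr | grep 192.168.7.99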

(4) On the node servers, change the apiserver address in the configuration files to the VIP

[root@localhost ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig 
# Change the address to the VIP
    server: https://192.168.7.99:6443
[root@localhost ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig 
# Change the address to the VIP
    server: https://192.168.7.99:6443
[root@localhost ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig 
# Change the address to the VIP
    server: https://192.168.7.99:6443

[root@localhost ~]# grep 99 /opt/kubernetes/cfg/*
/opt/kubernetes/cfg/bootstrap.kubeconfig:    server: https://192.168.7.99:6443
/opt/kubernetes/cfg/kubelet.kubeconfig:    server: https://192.168.7.99:6443
/opt/kubernetes/cfg/kube-proxy.kubeconfig:    server: https://192.168.7.99:6443
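For the new address to take effect, the node components have to be restarted; this assumes the kubelet and kube-proxy systemd units set up in the single-master article:

[root@localhost ~]# systemctl restart kubelet.service
[root@localhost ~]# systemctl restart kube-proxy.service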

# Check the access log on the load balancer node
[root@localhost ~]# tail /var/log/nginx/k8s-access.log 
192.168.7.102 192.168.7.100:6443 - [02/May/2020:21:34:01 +0800] 200 1103
192.168.7.102 192.168.7.101:6443, 192.168.7.100:6443 - [02/May/2020:21:34:01 +0800] 200 0, 1102

5. Testing
(1) Create a test pod

# List pods
[root@localhost ~]# kubectl get pods
No resources found.
# Create a pod
[root@localhost ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
# List the pod again. After creation, the pod goes through two states (ContainerCreating means it is still being created; Running means creation succeeded)
[root@localhost ~]# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-bhc27   0/1     ContainerCreating   0          4s
[root@localhost ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-bhc27   1/1     Running   0          49s

(2) Viewing pod logs

# After creating the pod, the master node cannot view the pod's logs
[root@localhost ~]# kubectl logs nginx-dbddb74b8-bhc27
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-bhc27)
# The request is made as the system:anonymous user (see the error above), so that user has to be granted permission first; binding cluster-admin to system:anonymous is a quick lab-only workaround
[root@localhost ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created

(3) Check pod networking

[root@localhost ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE
nginx-dbddb74b8-bhc27   1/1     Running   0          6m28s   172.17.79.3   192.168.7.102   <none>
# From a node on the corresponding flannel subnet, the pod IP can be accessed directly
[root@localhost ~]# curl 172.17.79.3
# View the pod's logs on the master node
[root@localhost ~]# kubectl logs nginx-dbddb74b8-bhc27
172.17.79.1 - - [02/May/2020:10:24:33 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"