Kubernetes: Dual-Master Binary Deployment

Preface:

  • This post extends the single-node environment from the previous post with a second master node, and adds an nginx load-balancing + keepalived high-availability cluster. Quick link to the single-node deployment: K8s single-node deployment

1. Analysis of the Dual-Master Binary Cluster

[Figure: dual-master binary cluster architecture]

  • Compared with a single-master binary cluster, a dual-master cluster is highly available: when one master goes down, the load balancer moves the VIP to the other master, keeping the control plane reliable
  • The core of the dual-master setup is a single shared address. In the previous single-node post, the VIP (192.168.226.100) was already defined in the certificate-generation scripts. The VIP fronts the apiserver, and both masters listen for the nodes' apiserver requests. A newly joining node does not contact a master directly: it sends its apiserver request to the VIP, which forwards it to one of the masters, and that master then issues the node its certificate (a minimal check of this path is sketched after this list)
  • The dual-master cluster also adds nginx load balancing, which relieves the pressure of node requests on the masters and reduces each master's resource load
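Once the load balancer from section 3.2 is in place, the VIP path can be exercised from any node; a minimal sketch, assuming the VIP and certificates from the single-node deployment (under RBAC an anonymous request may get a 403 reply, but any JSON answer proves the node -> VIP -> master path works):

#Query the apiserver through the VIP instead of an individual master
#-k skips certificate verification for this quick probe
curl -k https://192.168.226.100:6443/version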

2. Lab Environment

  • Roles in the dual-master deployment:
  • master1, IP: 192.168.226.128
    • Resources: CentOS 7, 2 CPUs, 4 GB RAM
    • Components: kube-apiserver kube-controller-manager kube-scheduler etcd
  • master2, IP: 192.168.226.137
    • Resources: CentOS 7, 2 CPUs, 4 GB RAM
    • Components: kube-apiserver kube-controller-manager kube-scheduler
  • node1, IP: 192.168.226.132
    • Resources: CentOS 7, 2 CPUs, 4 GB RAM
    • Components: kubelet kube-proxy docker-ce flannel etcd
  • node2, IP: 192.168.226.133
    • Resources: CentOS 7, 2 CPUs, 4 GB RAM
    • Components: kubelet kube-proxy docker-ce flannel etcd
  • nginx_lbm, IP: 192.168.226.148
    • Resources: CentOS 7, 2 CPUs, 4 GB RAM
    • Components: nginx keepalived
  • nginx_lbb, IP: 192.168.226.167
    • Resources: CentOS 7, 2 CPUs, 4 GB RAM
    • Components: nginx keepalived
  • VIP: 192.168.226.100

3. Deployment

  • The environment deployed in this post builds on the single-node cluster; see the link in the Preface
#On all nodes: disable the firewall, SELinux, and NetworkManager
##Permanently disable the security features
[root@master2 ~]# systemctl stop firewalld
[root@master2 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master2 ~]# setenforce 0
[root@master2 ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

##Disable NetworkManager so node IP addresses cannot change
systemctl stop NetworkManager
systemctl disable NetworkManager
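
A quick sanity check on each node; a minimal sketch, assuming the commands above succeeded:

#Both services disabled, SELinux no longer enforcing
systemctl is-enabled firewalld NetworkManager    #expect: disabled, disabled
getenforce                                       #expect: Permissive (Disabled after a reboot)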
3.1 Setting up the master2 node
  • On master1
[root@master ~]# scp -r /opt/kubernetes/ [email protected]:/opt
.............output omitted
[root@master ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service [email protected]:/usr/lib/systemd/system/
.....output omitted
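
Before editing anything, it is worth confirming on master2 that everything arrived; a simple check, not part of the original steps:

#On master2: the binaries, configs, certificates and unit files copied above
ls /opt/kubernetes/bin /opt/kubernetes/cfg /opt/kubernetes/ssl
ls /usr/lib/systemd/system/kube-*.service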
  • On master2

    Edit the IP addresses in the kube-apiserver configuration file


#api-server
[root@master2 cfg]# vim kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.226.128:2379,https://192.168.226.132:2379,https://192.168.226.133:2379 \
--bind-address=192.168.226.137 \		#bind address (this host's IP)
--secure-port=6443 \
--advertise-address=192.168.226.137 \	#advertised (externally visible) address
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
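
The same two fields can also be patched non-interactively; a sed sketch, assuming the copied file still carries master1's address (192.168.226.128):

#Swap master1's IP for master2's in the two address flags only
cd /opt/kubernetes/cfg
sed -i -e 's/--bind-address=192.168.226.128/--bind-address=192.168.226.137/' \
       -e 's/--advertise-address=192.168.226.128/--advertise-address=192.168.226.137/' kube-apiserver
grep -E 'bind-address|advertise-address' kube-apiserver    #both lines should now show .137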
  • On master1

    Copy master1's existing etcd certificates for master2 to use

    • Note: the new master also runs an apiserver, which must talk to etcd, so it too needs the etcd certificates for authentication
[root@master ~]# scp -r /opt/etcd/ [email protected]:/opt/
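With the certificates in place, master2 can already talk to etcd; a quick health check, assuming the etcdctl binary and the v2-style flags used in the single-node deployment:

#On master2: verify the copied certificates against all three etcd members
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.226.128:2379,https://192.168.226.132:2379,https://192.168.226.133:2379" \
cluster-health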
  • On master2, start the services

    Start the kube-apiserver, kube-controller-manager, and kube-scheduler services

#apiserver
[root@master2 ~]# systemctl start kube-apiserver.service 
[root@master2 ~]# systemctl enable kube-apiserver.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master2 ~]# systemctl status kube-apiserver.service 
active(running)

#controller manager
[root@master2 ~]# systemctl start kube-controller-manager.service
[root@master2 ~]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master2 ~]# systemctl status kube-controller-manager.service
active(running)

#scheduler
[root@master2 ~]# systemctl start kube-scheduler.service
[root@master2 ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master2 ~]# systemctl status kube-scheduler.service
active(running)
  • Add the environment variable
[root@master2 ~]# echo "export PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile
[root@master2 ~]# tail -2 /etc/profile
unset -f pathmunge
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/opt/kubernetes/bin/
[root@master2 ~]# source /etc/profile

#Check the cluster's node list
[root@master2 ~]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.226.132   Ready    <none>   45h   v1.12.3
192.168.226.133   Ready    <none>   43h   v1.12.3
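
Beyond the node list, master2's own control-plane components can be checked as well; `kubectl get componentstatuses` is still available in this 1.12-era cluster, and controller-manager, scheduler and the etcd members should all report Healthy:

[root@master2 ~]# kubectl get componentstatuses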
3.2 Deploying nginx load balancing
  • On both nginx servers, disable SELinux and flush the firewall rules
setenforce 0
iptables -F
  • On lb1 and lb2

    Add the official nginx yum repository and install nginx

[root@localhost ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
----> wq
#Install nginx from the official repository

#Reload the yum repository metadata
[root@localhost ~]# yum list

#Install nginx
[root@localhost ~]# yum install nginx -y
.....output omitted
  • nginx is used here mainly for its stream module, which provides layer-4 load balancing
  • Add the layer-4 forwarding block to the nginx configuration file
[root@localhost ~]# vim /etc/nginx/nginx.conf 
#Insert the stream block between the events block and the http block
stream {

   log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;	#log file location
    
    upstream k8s-apiserver {		#upstream pool: the two master apiservers
        server 192.168.226.128:6443;		#apiserver secure port 6443
        server 192.168.226.137:6443;
    }                 
    server { 
                listen 6443;
                proxy_pass k8s-apiserver;	#layer-4 proxy to the upstream pool
    }
    }
----> wq

#Check the syntax
[root@localhost ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
    
#Start the service
[root@localhost ~]# systemctl start nginx 
[root@localhost ~]# netstat -natp | grep nginx
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      29019/nginx: master 
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      29019/nginx: master
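
Even before keepalived and the VIP exist, the layer-4 proxying can be verified locally; a sketch (under RBAC an anonymous request is rejected with a 403, but any JSON reply proves nginx forwarded the stream to an apiserver):

#The TLS connection is passed through nginx:6443 to one of the two masters
curl -k https://127.0.0.1:6443/version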
  • Deploy keepalived on both nginx nodes

    Install keepalived

[root@localhost ~]# yum install keepalived -y
  • Upload the modified keepalived configuration file

    keepalived configuration on the nginx_lbm node

[root@localhost ~]# cp keepalived.conf /etc/keepalived/keepalived.conf 
cp: overwrite ‘/etc/keepalived/keepalived.conf’? yes
[root@localhost ~]# vim /etc/keepalived/keepalived.conf 

! Configuration File for keepalived

global_defs {
   # notification recipient addresses
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   # sender address
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {		#declare the check_nginx health-check script (this is what ties keepalived to nginx's state)
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51 # VRRP router ID; unique per instance
    priority 100    # priority; set to 90 on the backup server
    advert_int 1    # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.226.100/24
    }
    track_script {			#health-check script to track
        check_nginx			#the vrrp_script name defined above
    }
}
----> wq
  • Edit the keepalived configuration file on nginx_lbb
[root@localhost ~]# cp keepalived.conf /etc/keepalived/keepalived.conf 
cp: overwrite ‘/etc/keepalived/keepalived.conf’? yes
[root@localhost ~]# vim /etc/keepalived/keepalived.conf 

! Configuration File for keepalived

global_defs {
   # notification recipient addresses
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   # sender address
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP			#changed to BACKUP
    interface ens33
    virtual_router_id 51
    priority 90				#must be lower than the MASTER's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.226.100/24
    }
    track_script {
        check_nginx
    }
}
----> wq
  • Create the check_nginx.sh script on both nginx nodes

    Create the nginx health-check script under /etc/nginx/, as follows:

[root@localhost ~]# vim /etc/nginx/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")  #count the running nginx processes

#If nginx has died, stop keepalived as well
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
----> wq
[root@localhost ~]# chmod +x /etc/nginx/check_nginx.sh
  • Start the keepalived service
[root@localhost ~]# systemctl start keepalived.service
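
With both services up, the check script can be dry-run to confirm it leaves a healthy setup alone; a simple sanity test, not part of the original steps:

#nginx is running, so the process count is non-zero and keepalived survives
bash /etc/nginx/check_nginx.sh
systemctl is-active keepalived    #expect: active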
  • On the nginx MASTER node, check whether the floating (VIP) address is present

[Screenshot: ip addr on nginx_lbm showing the VIP 192.168.226.100 on ens33]

3.2.1 Verifying VIP failover
  • Kill the nginx service on the nginx_lbm node with pkill, then check on nginx_lbb whether the VIP has moved
[root@localhost ~]# pkill nginx
[root@localhost ~]# ip addr
#the VIP address is gone

#Check the keepalived service
[root@localhost ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
#keepalived stopped as well, because the nginx process was stopped (check_nginx.sh stopped it)
  • Check the floating address on the nginx_lbb (BACKUP) node

[Screenshot: ip addr on nginx_lbb showing the VIP 192.168.226.100]
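
The same check in command form:

#On nginx_lbb: the VIP should now be bound to ens33
ip addr show ens33 | grep 192.168.226.100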

  • Recovery

    On the nginx MASTER node, start the nginx service first, then keepalived (commands below)

[Screenshot: the VIP back on nginx_lbm after recovery]
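
In command form (nginx must come up first, otherwise check_nginx.sh would immediately stop keepalived again):

#On nginx_lbm
systemctl start nginx
systemctl start keepalived
ip addr show ens33 | grep 192.168.226.100    #the VIP preempts back (priority 100 > 90)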

3.3 Editing the node configuration files
  • Point both nodes' configuration files at the VIP

    The files to edit: bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig

    Change every server address in them to the VIP (192.168.226.100), then restart the node services as shown after the self-check below

[root@node1 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig 
server: https://192.168.226.100:6443

[root@node1 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig 
server: https://192.168.226.100:6443

[root@node1 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig 
server: https://192.168.226.100:6443
#make the same changes on node2

#self-check
[root@node1 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.226.100:6443
kubelet.kubeconfig:    server: https://192.168.226.100:6443
kube-proxy.kubeconfig:    server: https://192.168.226.100:6443
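
For the new server address to take effect, restart the services on both nodes (assuming the unit names from the single-node deployment); this restart is also what produces the access-log entries shown in the next step:

#On node1 and node2: reload the kubeconfigs now pointing at the VIP
systemctl restart kubelet.service
systemctl restart kube-proxy.service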
  • On nginx_lbm, check nginx's k8s access log

    Load-balancing entries are already there. Because the VIP sits on nginx_lbm, nginx_lbb has no access log.

    The entries exist because the kubelet service was restarted: on restart, kubelet contacts the VIP, the VIP is held by nginx_lbm, and nginx_lbm proxies the requests to the two backend masters, producing these log lines.

[root@localhost ~]# tail /var/log/nginx/k8s-access.log 
192.168.226.132 192.168.226.137:6443, 192.168.226.128:6443 - [04/May/2020:17:00:14 +0800] 200 0, 1121
192.168.226.132 192.168.226.128:6443 - [04/May/2020:17:00:14 +0800] 200 1121
192.168.226.133 192.168.226.128:6443 - [04/May/2020:17:00:35 +0800] 200 1122
192.168.226.133 192.168.226.137:6443, 192.168.226.128:6443 - [04/May/2020:17:00:35 +0800] 200 0, 1121

4. Testing

4.1 Operations on master1
  • Create a pod
[root@master ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created

#run: run the specified image in the cluster
#nginx: the name for the resource
#--image=nginx: the image name
 
#Check the status
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-nztm8   1/1     Running   0          111s
#Creation takes a moment; STATUS shows ContainerCreating while in progress and Running when finished
  • Find where the pod was scheduled
[root@master ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE              NOMINATED NODE
nginx-dbddb74b8-nztm8   1/1     Running   0          4m33s   172.17.6.3   192.168.226.132   <none>

#-o wide: show extra columns, including pod IP and node
#172.17.6.3: the container (pod) IP
#192.168.226.132: node1's IP
  • List the containers on node1
[root@node1 cfg]# docker ps -a
CONTAINER ID        IMAGE                                                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
#the first container is the one created by K8s
554969363e7f        nginx                                                                 "nginx -g 'daemon of…"   5 minutes ago       Up 5 minutes                            k8s_nginx_nginx-dbddb74b8-nztm8_default_3e5e6b63-8de7-11ea-aded-000c29e424dc_0
39cefc4b2584        registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0   "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD_nginx-dbddb74b8-nztm8_default_3e5e6b63-8de7-11ea-aded-000c29e424dc_0
b3546847c994        centos:7                                                              "/bin/bash"              4 days ago          Up 4 days                               jolly_perlman
  • Log access problem (logs cannot be viewed at first)

    Viewing the logs directly fails because the request is handled with reduced (anonymous) privileges, so that user must first be granted the necessary rights

    Grant the permission on the master node (this only needs to be done once per cluster)

[root@master ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master ~]# kubectl logs nginx-dbddb74b8-nztm8 
[root@master ~]# 
  • Access the pod IP from node1
[root@node1 cfg]# curl 172.17.6.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
  • Back on the master node, view the logs again
[root@master ~]# kubectl logs nginx-dbddb74b8-nztm8 
172.17.6.1 - - [04/May/2020:09:26:42 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
#When this container IP is accessed from the node1 host, the access log records docker0's IP as the client address
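
The 172.17.6.1 in the log above is docker0's address on node1: a request sent from the node host to a local container enters the bridge with docker0's IP as its source, which can be confirmed with:

#On node1: docker0 holds 172.17.6.1, the client address the pod logged
ip addr show docker0 | grep 'inet '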

Summary

  • The multi-master binary cluster is now essentially complete; a Harbor private registry will be added in a later post

  • A quick recap of the roles of some cluster components:

    ① Creating pod resources on a master depends on the apiserver

    ② The apiserver address the nodes use points at the load balancer

    ③ The load balancer handles load balancing for the cluster

    ④ All kubectl commands can be run on either master

    ⑤ The load balancer also provides the floating VIP, which balances requests across the two masters and keeps the control plane available

    ⑥ When resources are created on node1/node2, the master's scheduler runs its scheduling algorithm with a scoring mechanism and picks the backend node with the highest weight/score to host the pod
