k8s Multi-Master Node Deployment (Lab)

Preface

  • In the previous article we deployed a single-master k8s cluster; the core of that setup was creating and issuing the certificates, and the flannel network component was equally important. This article builds on that single-master deployment (https://blog.csdn.net/double_happy111/article/details/105858003) and upgrades it to a multi-master setup.

1. Multi-Node Deployment

  • Deploy the master02 node
  • Copy everything under the kubernetes directory
#Copy all files under /opt/kubernetes/ to the master02 node
[root@localhost kubeconfig]# scp -r /opt/kubernetes/ root@192.168.73.62:/opt
#Copy the unit files of the three master01 components: kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service
[root@localhost kubeconfig]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.73.62:/usr/lib/systemd/system/

#Modify the copied configuration files
[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@master2 cfg]# vim kube-apiserver 
#Change two settings to master02's own IP:
--bind-address=192.168.73.62
--advertise-address=192.168.73.62
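If you prefer to apply the change non-interactively, a minimal sed sketch is shown below; it assumes both flags currently carry master01's address (192.168.73.61), so only those two values are touched:
[root@master2 cfg]# sed -i 's/--bind-address=192.168.73.61/--bind-address=192.168.73.62/; s/--advertise-address=192.168.73.61/--advertise-address=192.168.73.62/' kube-apiserver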
  • Copy the existing etcd certificates from master01 for master02 to use
#Copy the existing etcd certificates from master01 to master02 (the apiserver needs them to connect to etcd)
[root@master1 ~]# scp -r /opt/etcd/ root@192.168.73.62:/opt/
root@192.168.73.62's password: 
etcd                                             100%  523   326.2KB/s   00:00    
etcd                                             100%   18MB  45.1MB/s   00:00    
etcdctl                                          100%   15MB  33.0MB/s   00:00    
ca-key.pem                                       100% 1679   160.2KB/s   00:00    
ca.pem                                           100% 1265   592.6KB/s   00:00    
server-key.pem                                   100% 1679   884.2KB/s   00:00    
server.pem                                       100% 1338   768.5KB/s   00:00   
  • Start the master02 components
#Start the three components on master02
//Start the apiserver component
[root@master2 cfg]# systemctl start kube-apiserver.service 
[root@master2 cfg]# systemctl enable kube-apiserver.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

//Start the controller-manager component
[root@master2 cfg]# systemctl start kube-controller-manager.service
[root@master2 cfg]# systemctl enable kube-controller-manager.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

//Start the scheduler component
[root@master2 cfg]# systemctl start kube-scheduler.service
[root@master2 cfg]# systemctl enable kube-scheduler.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
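Before moving on, it is worth confirming that all three services actually stayed up (an optional check that is not part of the original walkthrough):
[root@master2 cfg]# systemctl is-active kube-apiserver kube-controller-manager kube-scheduler    //each unit should report active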
  • Add an environment variable so kubectl can be used directly
#Add /opt/kubernetes/bin to PATH so the kubectl command works without a full path
[root@master2 cfg]# vim /etc/profile
Append at the end of the file:
export PATH=$PATH:/opt/kubernetes/bin/
[root@master2 cfg]# source /etc/profile    //apply the change
  • Verify that master02 has joined the k8s cluster
#Verify master02 has joined the k8s cluster: listing the worker nodes from master02 shows its apiserver is working
[root@localhost cfg]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.73.63   Ready    <none>   50m   v1.12.3
192.168.73.64   Ready    <none>   22s   v1.12.3
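Since master02 can already answer API queries, you can optionally also check the health of its control-plane components and the etcd members from there (not in the original run; cs is the short name for componentstatuses):
[root@master2 cfg]# kubectl get cs    //scheduler, controller-manager and the etcd members should all show Healthy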

2. Setting Up the nginx Load Balancer

  • Turn off the firewall and SELinux enforcement
[root@localhost ~]# systemctl stop firewalld.service 
[root@localhost ~]# setenforce 0
  • Configure the official nginx yum repository
//Create the repo file
[root@localhost ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1
  • Refresh the yum metadata and install nginx
[root@localhost ~]# yum list
[root@localhost ~]# yum install nginx -y   //install nginx
  • Edit the main nginx configuration file
[root@localhost ~]# vim /etc/nginx/nginx.conf
Add a stream block at the top level of the file, below the events block (alongside http, not inside it); it defines the log format, the log location, and an upstream pointing at the two apiservers:
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    #upstream pointing at the master nodes' IP addresses
    upstream k8s-apiserver {
        server 192.168.73.61:6443;
        server 192.168.73.62:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
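The stream {} block is a layer-4 (TCP) proxy, so this only works if nginx was built with the stream module. The official nginx.org package includes it, but a quick optional check avoids surprises; if the command prints nothing, the module is missing:
[root@localhost ~]# nginx -V 2>&1 | grep -o with-stream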
  • Check the configuration and start nginx
//Check the configuration for syntax errors
[root@localhost ~]# nginx -t   
[root@localhost ~]# systemctl start nginx     //start the service
[root@localhost ~]# netstat -natp | grep nginx    //nginx should now be listening on port 6443
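As an additional sanity check (hypothetical, not part of the original run), you can request the API version through the local 6443 listener; nginx forwards the TCP connection to one of the apiservers, and depending on the cluster's anonymous-auth settings you will get either the version JSON or an authorization error, either of which proves the proxy path works:
[root@localhost ~]# curl -k https://127.0.0.1:6443/version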

3. Configuring the keepalived High-Availability Service

  • Install the keepalived package
#Install keepalived with yum (do this on both nginx load balancers)
[root@localhost ~]# yum install keepalived -y
  • nginx01 as the MASTER node
#The nginx01 node acts as the keepalived MASTER
[root@nginx1 ~]# vim /etc/keepalived/keepalived.conf 
//Delete the entire existing configuration and add the following:

! Configuration File for keepalived
global_defs {

   # notification recipients

   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }

   # sender address

   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"   ##檢測nginx腳本的路徑,稍後會創建
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100     ##priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.100/24      ##virtual IP (VIP) address
    }
    track_script {
        check_nginx
    }
}
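One assumption baked into this file is the NIC name: interface ens33 must match the interface that actually carries the host's 192.168.73.x address. Check it before starting keepalived and adjust the config if your interface is named differently:
[root@nginx1 ~]# ip addr    //find the interface holding the 192.168.73.x address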
  • nginx02 as the BACKUP node
#The nginx02 node acts as the keepalived BACKUP
[root@nginx2 ~]# vim /etc/keepalived/keepalived.conf 
//Delete the entire existing configuration and add the following:
! Configuration File for keepalived
global_defs {

   # notification recipients

   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }

   # sender address

   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_SLAVE
}
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"   ##檢測腳本的路徑,稍後會創建
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90     ##lower priority than the MASTER
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.100/24      ##virtual IP (VIP) address
    }
    track_script {
        check_nginx
    }
}
  • Create the nginx check script
#Create the check script that keepalived runs on both nginx nodes: if no nginx process is found, keepalived is stopped so the VIP can fail over
[root@localhost ~]# vim /etc/nginx/check_nginx.sh

#!/bin/bash
#Count the running nginx processes, excluding the grep itself and this script's own PID
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

[root@localhost ~]# chmod +x /etc/nginx/check_nginx.sh   //make it executable
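You can sanity-check the process count the script relies on before wiring it into keepalived (an optional manual test): with nginx running the count is non-zero, and once nginx stops it drops to 0, which is what makes the script stop keepalived.
[root@localhost ~]# ps -ef | grep nginx | egrep -cv "grep|$$"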

#Start the keepalived service (on both nginx nodes)
[root@localhost ~]# systemctl start keepalived.service   
  • Check the floating (virtual) IP address
#Check the IP addresses: the VIP sits on the MASTER node of the HA pair (nginx01) and is absent on the BACKUP
[root@localhost ~]# ip a   
#Stop nginx on nginx01 to trigger a failover
[root@localhost ~]# systemctl stop nginx
[root@localhost ~]# ip a
#On nginx02, the VIP should now have drifted over
[root@localhost ~]# ip a
  • Then start nginx again on nginx01
[root@localhost ~]# systemctl start nginx
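Note that check_nginx.sh stopped keepalived on nginx01 during the failover test, so starting nginx alone is not enough for the VIP to move back; keepalived must be started again as well (a step the original leaves implicit):
[root@localhost ~]# systemctl start keepalived.service
[root@localhost ~]# ip a    //the VIP should return to nginx01, which has the higher priority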

4. Updating the Two Worker Nodes

  • Edit the configuration files on both worker nodes
#On both worker nodes, point the server address in all three kubeconfig files at the same VIP
//Change to: server: https://192.168.100.100:6443 (the VIP) in each file
[root@localhost cfg]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@localhost cfg]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@localhost cfg]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
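The same edit can be scripted; a minimal sed sketch, assuming all three files currently point at master01 (https://192.168.73.61:6443):
[root@localhost cfg]# sed -i 's#https://192.168.73.61:6443#https://192.168.100.100:6443#' /opt/kubernetes/cfg/{bootstrap,kubelet,kube-proxy}.kubeconfig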
  • Restart the services
#Restart the kubelet and kube-proxy services
[root@localhost cfg]# systemctl restart kubelet.service 
[root@localhost cfg]# systemctl restart kube-proxy.service
  • Verify the change
#Verify the modification
//Make sure the grep is run from this directory
[root@localhost ~]# cd /opt/kubernetes/cfg/
[root@localhost cfg]# grep 100 *    //every file should now reference the 192.168.100.100 VIP

#Next, on nginx01, check nginx's k8s access log; entries here show that the nodes' API traffic now flows through the load balancer:
[root@localhost ~]# tail /var/log/nginx/k8s-access.log

5. Testing

  • Work on master01
#On master01, create a Pod to test with
[root@localhost kubeconfig]# kubectl run nginx --image=nginx
  • Check the Pod status
#Check the Pod status
[root@master1 ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-vj4wk   1/1     Running   0          16s
The Pod has been created and is running.
  • View the logs of the nginx Pod just created
#View the logs of the nginx Pod just created
[root@master1 ~]# kubectl logs nginx-dbddb74b8-vj4wk
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-vj4wk)
  • How to fix the error
#The error above is caused by insufficient permissions; grant access to fix it.
Fix (bind the anonymous user to the cluster-admin role):
[root@master1 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous 
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
Running the logs command again now succeeds, but produces no output yet because nothing has accessed the container.
  • Check the Pod's network details
#Check the Pod's network details; the -o wide output shows the pod IP and the node it is running on
[root@localhost kubeconfig]# kubectl get pods -o wide
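To make kubectl logs actually show something, access the nginx container once, for example with curl from one of the worker nodes against the pod IP printed by the -o wide output (172.17.26.2 below is only a placeholder; use the address from your own output), then re-run the logs command on master01:
[root@localhost ~]# curl 172.17.26.2    //run on a worker node; the pod IP here is a placeholder
[root@master1 ~]# kubectl logs nginx-dbddb74b8-vj4wk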