Step-by-Step Kubernetes Binary Deployment (Part 4): Multi-Node Deployment

Preface

The previous three articles covered deploying single-node Kubernetes from binaries. Building on that single-node configuration, this article completes a multi-node binary deployment of Kubernetes.

Environment and address plan

master01: 192.168.0.128/24

master02: 192.168.0.131/24

node01: 192.168.0.129/24

node02: 192.168.0.130/24

Load balancer (primary), nginx01: 192.168.0.132/24

Load balancer (backup), nginx02: 192.168.0.133/24

Harbor private registry: 192.168.0.134/24

I. master02 configuration

First, turn off the firewall and the kernel protection (SELinux); this has been covered before and is not repeated in detail here.
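
For reference, the commands are the same ones used in the earlier posts (run on master02):

[root@master02 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@master02 ~]# setenforce 0
[root@master02 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config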

1. Add the master02 node: copy the core files from master01 over to master02

Copy all files under the /opt/kubernetes/ directory to the master02 node:

[root@master01 ~]# scp -r /opt/kubernetes/ [email protected]:/opt
The authenticity of host '192.168.0.131 (192.168.0.131)' can't be established.
ECDSA key fingerprint is SHA256:Px4bb9N3Hsv7XF4EtyC5lHdA8EwXyQ2r5yeUJ+QqnrM.
ECDSA key fingerprint is MD5:cc:7c:68:15:75:7e:f5:bd:63:e3:ce:9e:df:06:06:b7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.131' (ECDSA) to the list of known hosts.
[email protected]'s password: 
token.csv                                                               100%   84    45.1KB/s   00:00    
kube-apiserver                                                          100%  929   786.0KB/s   00:00    
kube-scheduler                                                          100%   94    92.6KB/s   00:00    
kube-controller-manager                                                 100%  483   351.0KB/s   00:00    
kube-apiserver                                                          100%  184MB 108.7MB/s   00:01    
kubectl                                                                 100%   55MB 117.9MB/s   00:00    
kube-controller-manager                                                 100%  155MB 127.1MB/s   00:01    
kube-scheduler                                                          100%   55MB 118.9MB/s   00:00    
ca-key.pem                                                              100% 1679     1.8MB/s   00:00    
ca.pem                                                                  100% 1359     1.5MB/s   00:00    
server-key.pem                                                          100% 1675     1.8MB/s   00:00    
server.pem                                                              100% 1643     1.6MB/s   00:00    
[root@master01 ~]# 

2. Copy the systemd unit files that start the three component services from master01 to master02

[root@master01 ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service [email protected]:/usr/lib/systemd/system/
[email protected]'s password: 
kube-apiserver.service                                                                  100%  282    79.7KB/s   00:00    
kube-controller-manager.service                                                         100%  317   273.0KB/s   00:00    
kube-scheduler.service                                                                  100%  281   290.5KB/s   00:00    
[root@master01 ~]# 

3. Modify the configuration files on the master02 node

Only the IP addresses in the kube-apiserver file need to be changed.

[root@master02 ~]# cd /opt/kubernetes/cfg/
[root@master02 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
Edit kube-apiserver as follows
#change the ip addresses on lines 5 and 7
  2 KUBE_APISERVER_OPTS="--logtostderr=true \
  3 --v=4 \
  4 --etcd-servers=https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379 \
  5 --bind-address=192.168.0.131 \
  6 --secure-port=6443 \
  7 --advertise-address=192.168.0.131 \
  8 --allow-privileged=true \
  9 --service-cluster-ip-range=10.0.0.0/24 \
 10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
 11 --authorization-mode=RBAC,Node \
 12 --kubelet-https=true \
 13 --enable-bootstrap-token-auth \
 14 --token-auth-file=/opt/kubernetes/cfg/token.csv \
 15 --service-node-port-range=30000-50000 \
 16 --tls-cert-file=/opt/kubernetes/ssl/server.pem  \
 17 --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
 18 --client-ca-file=/opt/kubernetes/ssl/ca.pem \
 19 --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
 20 --etcd-cafile=/opt/etcd/ssl/ca.pem \
 21 --etcd-certfile=/opt/etcd/ssl/server.pem \
 22 --etcd-keyfile=/opt/etcd/ssl/server-key.pem"
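
If you prefer not to edit by hand, the two lines can also be changed with sed (a sketch; the substitutions are anchored to the two flags because the copied file also contains 192.168.0.128 in the --etcd-servers list, which must stay unchanged):

[root@master02 cfg]# sed -i -e 's/--bind-address=192.168.0.128/--bind-address=192.168.0.131/' \
                            -e 's/--advertise-address=192.168.0.128/--advertise-address=192.168.0.131/' kube-apiserver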

4. Provide master02 with the etcd certificates

master02's apiserver also needs to talk to the etcd cluster to store and read cluster state, so it needs the corresponding etcd certificates.

We can simply copy master01's certificates (the whole /opt/etcd/ directory) for master02 to use.

Run this on master01:

[root@master01 ~]# scp -r /opt/etcd/ [email protected]:/opt
[email protected]'s password: 
etcd.sh                                                                 100% 1812   516.2KB/s   00:00    
etcd                                                                    100%  509   431.6KB/s   00:00    
etcd                                                                    100%   18MB 128.2MB/s   00:00    
etcdctl                                                                 100%   15MB 123.7MB/s   00:00    
ca-key.pem                                                              100% 1679   278.2KB/s   00:00    
ca.pem                                                                  100% 1265   533.1KB/s   00:00    
server-key.pem                                                          100% 1675     1.4MB/s   00:00    
server.pem                                                              100% 1338     1.7MB/s   00:00    
[root@master01 ~]# 
#it is a good idea to verify on master02 that the copy succeeded
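
A quick check on master02 that the copy landed intact (just a sanity check; it assumes the usual bin/cfg/ssl layout under /opt/etcd/ from the earlier posts):

[root@master02 ~]# ls -R /opt/etcd/
#expect the etcd/etcdctl binaries, the etcd config file, and the four .pem certificates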

5. Start the three component services on master02

1) Start the kube-apiserver service

[root@master2 cfg]# systemctl start kube-apiserver.service 
[root@master2 cfg]# systemctl enable kube-apiserver.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

2) Start the kube-controller-manager service

[root@master02 cfg]# systemctl start kube-controller-manager.service
[root@master02 cfg]#  systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

3) Start the kube-scheduler service

[root@master02 cfg]#  systemctl start kube-scheduler.service
[root@master02 cfg]# systemctl enable kube-scheduler.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master02 cfg]# 

4) Use the systemctl status command to check that all three services are in the active (running) state:

systemctl status kube-controller-manager.service 
systemctl status kube-apiserver.service 
systemctl status kube-scheduler.service 

Make kubectl easier to use (just set an environment variable)

#append one line to the end of the file below to declare where the kubectl binary lives
[root@master02 cfg]# vim /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/
[root@master02 cfg]# source /etc/profile
[root@master02 cfg]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/opt/kubernetes/bin/
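
To check all three services at a glance, a small loop is handy (just a convenience sketch; each line should print active):

[root@master02 cfg]# for svc in kube-apiserver kube-controller-manager kube-scheduler; do systemctl is-active $svc; done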

6. Verify that master02 has joined the Kubernetes cluster

Simply use kubectl to check that the node information is visible. (In my lab the command occasionally hung; switching to another terminal or rebooting the node server and trying again worked around it.)

[root@master02 kubernetes]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.0.129   Ready    <none>   18h   v1.12.3
192.168.0.130   Ready    <none>   19h   v1.12.3

II. Building the nginx load balancer

Of the two nginx servers, one acts as master and the other as backup; the keepalived service makes the load balancer highly available.

nginx01:192.168.0.132/24

nginx02:192.168.0.133/24

Bring up both servers, set the hostnames, and turn off the firewall and kernel protection, as configured below (nginx01 shown as the example):

[root@localhost ~]# hostnamectl set-hostname nginx01
[root@localhost ~]# su
[root@nginx01 ~]# systemctl stop firewalld.service 
[root@nginx01 ~]# systemctl disable firewalld.service 
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@nginx01 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

#switch to a static ip address; after restarting the network service, disable the NetworkManager service
[root@nginx01 ~]# systemctl stop NetworkManager
[root@nginx01 ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.

1. Configure the official nginx yum repository and install nginx

[root@nginx01 ~]# vi /etc/yum.repos.d/nginx.repo 
[nginx]
name=nginx.repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
enabled=1
gpgcheck=0

[root@nginx01 ~]# yum list
[root@nginx01 ~]# yum -y install nginx

2. Configure the main nginx configuration file on both servers

The work is all in a stream block: add a log format, a log path, and the upstream used for load balancing. The stream module is used (rather than an http block) because the apiserver speaks TLS on port 6443, so the traffic has to be proxied at layer 4.

nginx01 is used for the demonstration here.

[root@nginx01 ~]# vim /etc/nginx/nginx.conf
events {
    worker_connections  1024;
}

stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.0.128:6443;
        server 192.168.0.131:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

3. Start and verify the service (verify on both servers)

[root@nginx01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@nginx01 ~]# systemctl start nginx
[root@nginx01 ~]# netstat -natp | grep nginx
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      41576/nginx: master 
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      41576/nginx: master 

If the service starts without problems, the nginx load-balancing configuration is done; the next step is to use keepalived to make it highly available.

III. High availability with keepalived

1. Install the keepalived package and edit keepalived.conf on the corresponding nodes

[root@nginx01 ~]# yum install keepalived -y
#back up the original configuration file first, then modify it
[root@nginx01 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

#the configuration file on the master nginx node is modified as follows
cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"   #health-check script path (written below)
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100     #higher priority than the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.100/24      #the VIP shared with the backup node
    }
    track_script {
        check_nginx
    }
}

#the configuration file on the backup nginx node is modified as follows
! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP   #use a distinct router_id on the backup node
}
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"   #path to the health-check script, which you write yourself below
}
vrrp_instance VI_1 {
    state BACKUP #this node is the backup
    interface ens33
    virtual_router_id 51
    priority 90    #lower priority than the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.100/24 #the virtual IP, planned in the earlier posts
    }
    track_script {
        check_nginx
    }
}

2. Create the health-check script

[root@nginx01 ~]# vim /etc/nginx/check_nginx.sh
#!/bin/bash
#count running nginx processes, excluding the grep itself and this script's own pid
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
#if nginx is down, stop keepalived so the VIP fails over to the backup node
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
#make it executable
[root@nginx01 ~]# chmod +x /etc/nginx/check_nginx.sh
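
Before relying on it, the script can be run by hand as a quick sanity check (with nginx running it should do nothing and exit 0):

[root@nginx01 ~]# bash /etc/nginx/check_nginx.sh; echo $?
0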

3. Start the keepalived service

On nginx01:

[root@nginx01 ~]# systemctl start keepalived
[root@nginx01 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since 二 2020-05-05 19:16:30 CST; 3s ago
  Process: 41849 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 41850 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─41850 /usr/sbin/keepalived -D
           ├─41851 /usr/sbin/keepalived -D
           └─41852 /usr/sbin/keepalived -D

On nginx02:

[root@nginx02 ~]# systemctl start keepalived
[root@nginx02 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since 二 2020-05-05 19:16:44 CST; 4s ago
  Process: 41995 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 41996 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─41996 /usr/sbin/keepalived -D
           ├─41997 /usr/sbin/keepalived -D
           └─41998 /usr/sbin/keepalived -D

4. Check the addresses on both nodes with the ip a command

At this point the VIP sits on nginx01, i.e. on the nginx master node, while the backup has no VIP yet. To verify that the keepalived high-availability setup works, kill the nginx process on nginx01 with pkill and check whether the VIP drifts over to nginx02.

[root@nginx01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ce:99:a4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.132/24 brd 192.168.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.0.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::5663:b305:ba28:b102/64 scope link 
       valid_lft forever preferred_lft forever

[root@nginx02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:71:48:82 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.133/24 brd 192.168.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::915f:60c5:1086:1e04/64 scope link 
       valid_lft forever preferred_lft forever

5. Test: pkill the nginx service on nginx01

[root@nginx01 ~]# pkill nginx
#once the nginx process is gone, the keepalived service stops as well
[root@nginx01 ~]# systemctl status keepalived.service 
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

5月 05 19:16:37 nginx01 Keepalived_vrrp[41852]: Sending gratuitous ARP on ens33 for 192.168.0.100
5月 05 19:16:37 nginx01 Keepalived_vrrp[41852]: Sending gratuitous ARP on ens33 for 192.168.0.100
5月 05 19:16:37 nginx01 Keepalived_vrrp[41852]: Sending gratuitous ARP on ens33 for 192.168.0.100
5月 05 19:16:37 nginx01 Keepalived_vrrp[41852]: Sending gratuitous ARP on ens33 for 192.168.0.100
5月 05 19:21:19 nginx01 Keepalived[41850]: Stopping
5月 05 19:21:19 nginx01 systemd[1]: Stopping LVS and VRRP High Availability Monitor...
5月 05 19:21:19 nginx01 Keepalived_vrrp[41852]: VRRP_Instance(VI_1) sent 0 priority
5月 05 19:21:19 nginx01 Keepalived_vrrp[41852]: VRRP_Instance(VI_1) removing protocol VIPs.
5月 05 19:21:19 nginx01 Keepalived_healthcheckers[41851]: Stopped
5月 05 19:21:20 nginx01 systemd[1]: Stopped LVS and VRRP High Availability Monitor.

Check the VIP: it has moved over to nginx02.

[root@nginx02 ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:71:48:82 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.133/24 brd 192.168.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.0.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::915f:60c5:1086:1e04/64 scope link 
       valid_lft forever preferred_lft forever

6. Restore keepalived

[root@nginx01 ~]# systemctl start nginx
[root@nginx01 ~]# systemctl start keepalived.service
[root@nginx01 ~]# systemctl status keepalived.service 
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since 二 2020-05-05 19:24:30 CST; 3s ago
  Process: 44012 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 44013 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─44013 /usr/sbin/keepalived -D
           ├─44014 /usr/sbin/keepalived -D
           └─44015 /usr/sbin/keepalived -D

Check the VIP again: it is back on nginx01.

[root@nginx01 ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ce:99:a4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.132/24 brd 192.168.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.0.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::5663:b305:ba28:b102/64 scope link 
       valid_lft forever preferred_lft forever

7. Adjust the Kubernetes configuration files for the multi-node setup

So far we have built an nginx high-availability cluster based on keepalived. The next, very important, step is to edit every kubeconfig-format file on the two node servers. In the single-node setup these files pointed at the master01 server; in a multi-node setup, if they kept pointing at master01, the cluster would become unusable as soon as master01 failed. They therefore have to point at the VIP instead, so that requests go through the load balancer.

[root@node01 ~]# cd /opt/kubernetes/cfg/
[root@node01 cfg]# ls
bootstrap.kubeconfig  flanneld  kubelet  kubelet.config  kubelet.kubeconfig  kube-proxy  kube-proxy.kubeconfig

[root@node01 cfg]# vim bootstrap.kubeconfig 
[root@node01 cfg]# awk 'NR==5' bootstrap.kubeconfig 
    server: https://192.168.0.100:6443

[root@node01 cfg]# vim kubelet.kubeconfig 
[root@node01 cfg]# awk 'NR==5' kubelet.kubeconfig 
    server: https://192.168.0.100:6443

[root@node01 cfg]# vim kube-proxy.kubeconfig 
[root@node01 cfg]# awk 'NR==5' kube-proxy.kubeconfig 
    server: https://192.168.0.100:6443
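
The same change can be made in one pass with sed instead of editing each file (a sketch, assuming the files previously pointed at master01's address 192.168.0.128):

[root@node01 cfg]# sed -i 's/192.168.0.128/192.168.0.100/g' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig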

After the changes, restart the services on the node:

[root@node01 cfg]#  systemctl restart kubelet
[root@node01 cfg]#  systemctl restart kube-proxy
[root@node01 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.0.100:6443
kubelet.kubeconfig:    server: https://192.168.0.100:6443
kube-proxy.kubeconfig:    server: https://192.168.0.100:6443

After the restart, look at the Kubernetes access log on nginx01: requests are being distributed to the two apiservers in round-robin fashion.

[root@nginx01 ~]# tail /var/log/nginx/k8s-access.log
192.168.0.129 192.168.0.128:6443 - [05/May/2020:19:36:55 +0800] 200 1120
192.168.0.129 192.168.0.131:6443 - [05/May/2020:19:36:55 +0800] 200 1118

With the deployment finished, test the cluster

Create a pod resource on master01 to test:

[root@master01 ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@master01 ~]# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-djr5h   0/1     ContainerCreating   0          14s

The pod is still in the creating state here; check again shortly (about 30 seconds to a minute is enough).

[root@master01 ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-djr5h   1/1     Running   0          75s

It is now in the Running state.

Use the kubectl tool to look at the nginx log inside the pod we just created:

[root@master01 ~]# kubectl logs nginx-dbddb74b8-djr5h
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-r5xz9)

The Error is caused by a permission problem; granting the anonymous user the required rights fixes it:

[root@master01 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master01 ~]# kubectl logs nginx-dbddb74b8-djr5h
[root@master01 ~]# 
#since nothing has accessed the pod yet, no log entries have been generated

Now look at the pod's network information:

[root@master01 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE
nginx-dbddb74b8-djr5h   1/1     Running   0          5m52s   172.17.91.3   192.168.0.130   <none>
[root@master01 ~]# 

The pod was created on the .130 server, i.e. our node02 server.

So we can access it from node02:

[root@node02 ~]# curl 172.17.91.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
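
After this request, re-running the logs command on master01 should now show an access-log entry for it (the official nginx image sends its access log to stdout, which is what kubectl logs reads):

[root@master01 ~]# kubectl logs nginx-dbddb74b8-djr5h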

Since the flannel component is installed, we should also be able to open 172.17.91.3 in a browser on node01. In my case, though, the flannel component had gone down and had to be set up again; after that the flannel networks of the two nodes could reach each other (and the pod subnet had changed).

Re-run the flannel.sh script, then reload the daemon and restart the docker service (a sketch of the sequence follows below).

Because the old flannel address is no longer valid, the address assigned to the pod created earlier on master01 is invalid too. The pod can simply be deleted; the deployment automatically creates a new one.
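
For reference, the recovery sequence on each node looks roughly like this (a sketch; the flannel.sh script and its etcd endpoints are the ones used in part three of this series):

[root@node01 ~]# bash flannel.sh https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart flanneld
[root@node01 ~]# systemctl restart docker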

[root@master01 ~]# kubectl get  pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
nginx-dbddb74b8-djr5h   1/1     Running   0          20s   172.17.91.3   192.168.0.130   <none>
[root@master01 ~]# kubectl delete pod nginx-dbddb74b8-djr5h 
pod "nginx-dbddb74b8-djr5h" deleted
#the new pod resource is shown below
[root@master01 ~]# kubectl get  pods -o wide
NAME                    READY   STATUS              RESTARTS   AGE   IP       NODE            NOMINATED NODE
nginx-dbddb74b8-cnhsl   0/1     ContainerCreating   0          12s   <none>   192.168.0.130   <none>
[root@master01 ~]# kubectl get  pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
nginx-dbddb74b8-cnhsl   1/1     Running   0          91s   172.17.70.2   192.168.0.130   <none>

Then access that address directly from a browser on node01; the resulting screenshot is below:

(Screenshot: accessing the new pod address from a browser on node01.)

The result in the screenshot shows that the test succeeded.

And that is the multi-node Kubernetes cluster deployed from binaries.

Summary

My takeaway from this deployment: when building a fairly complex architecture, back up files as you go and verify each step before moving on, so that mistakes do not compound; check the environment and the state of the required services before starting; and work through the problems you hit along the way. The flannel component going down just now even turned out to be a blessing in disguise, because it demonstrated one of Kubernetes' traits: automatic recovery from failures.

Thank you for reading!
