Kubernetes: Single-Node Deployment (Detailed Walkthrough~)


1. Environment Overview

1.1 Single-Master Cluster Architecture

(figure: single-master cluster architecture diagram)

  • The previous article's environment ended with the flannel network component deployed and container-to-container communication working; in this walkthrough we first deploy the Master components, which are also the core components
  • ① kube-apiserver
    • Role: the cluster's unified entry point and the coordinator of the other components; all create/read/update/delete and watch operations on object resources are handled by the APIServer and then committed to Etcd for storage
  • ② kube-controller-manager
    • Role: handles the cluster's routine background tasks; each resource has a corresponding controller, and controller-manager is responsible for managing these controllers
  • ③ kube-scheduler
    • Role: unless a node is assigned manually, the scheduler uses its scheduling algorithm to select a Node for each newly created Pod; Pods can be placed freely, on the same node or on different nodes
  • Configuration approach: config file ----> manage the component with systemd ----> start it
1.2 apiserver Startup Flow on the Master Node

(figure: apiserver startup flow)

  • kubectl: the basic client command
  • ① When we run this command, for example kubectl get nodes to view node information, the request first goes through the master node to query each node's state, and along the way bootstrap authorization is required (the permissions configured in bootstrap.kubeconfig)
  • ② Only once that permission configuration exists and the permission check passes does the request go on to the apiserver for processing
  • ③ The apiserver first validates the token from the node
    • If validation succeeds, the certificate behind the token is verified again for identity (CA verification); that identity check requires the CSR to be signed, and once the signing succeeds the corresponding certificate is issued. Only after that grant does the apiserver authorize the service matching the requested command (authorization ultimately goes to bootstrap)
    • If any one of these validation steps fails, the apiserver will not proceed
  • ④ The API ultimately delegates authorization to bootstrap, so when a command request arrives it must carry bootstrap's authorization, otherwise no result is returned
  • In this walkthrough we therefore need to generate the apiserver's token, certificate, and signature, and produce the certificate files at the end (a condensed command preview follows)
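  • Condensed preview of the flow above as commands, all shown in detail later in this article (<node-csr-name> is a placeholder for the real CSR name that appears then):
#1. master: generate a random bootstrap token and record it in token.csv (section 2.2)
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
#2. node: kubelet presents the token from bootstrap.kubeconfig and files a CSR (section 3.1)
#3. master: the pending CSR appears and is approved, which issues the node's certificate
kubectl get csr
kubectl certificate approve <node-csr-name>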

2. Deploying the Master Components

2.1 Generating Certificates on the Master Node

Create the k8s working directory and the apiserver certificate directory

[root@localhost k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p

[root@localhost k8s]# mkdir k8s-cert
  • Generate the certificates
#create the k8s-cert.sh script
[root@localhost k8s]# cd k8s-cert
[root@localhost k8s-cert]# ls
k8s-cert.sh

#script contents (the inline #annotations on the hosts list below are explanatory only; JSON does not allow comments, so remove them in the real file)
[root@localhost k8s-cert]# cat k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.226.128",			#master1節點
      "192.168.226.137",			#master2節點(爲之後做多節點做準備)
      "192.168.226.100",			#VIP飄逸地址
      "192.168.226.148",			#nginx1負載均衡地址(主)
      "192.168.226.167",			#nginx2負載均衡地址(備)
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
  • Run the script to generate the K8S certificates
[root@localhost k8s-cert]# bash k8s-cert.sh 
2020/04/30 10:29:11 [INFO] generating a new CA key and certificate from CSR
2020/04/30 10:29:11 [INFO] generate received request
2020/04/30 10:29:11 [INFO] received CSR
2020/04/30 10:29:11 [INFO] generating key: rsa-2048
2020/04/30 10:29:11 [INFO] encoded CSR
2020/04/30 10:29:11 [INFO] signed certificate with serial number 224110310130036441128901384530959388908827247297
2020/04/30 10:29:11 [INFO] generate received request
2020/04/30 10:29:11 [INFO] received CSR
2020/04/30 10:29:11 [INFO] generating key: rsa-2048
2020/04/30 10:29:11 [INFO] encoded CSR
2020/04/30 10:29:11 [INFO] signed certificate with serial number 267360500176784479148260945996903128404212361049
2020/04/30 10:29:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/04/30 10:29:11 [INFO] generate received request
2020/04/30 10:29:11 [INFO] received CSR
2020/04/30 10:29:11 [INFO] generating key: rsa-2048
2020/04/30 10:29:12 [INFO] encoded CSR
2020/04/30 10:29:12 [INFO] signed certificate with serial number 587192726043635083177090914333196245411381668467
2020/04/30 10:29:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/04/30 10:29:12 [INFO] generate received request
2020/04/30 10:29:12 [INFO] received CSR
2020/04/30 10:29:12 [INFO] generating key: rsa-2048
2020/04/30 10:29:12 [INFO] encoded CSR
2020/04/30 10:29:12 [INFO] signed certificate with serial number 447965470562604260700012266801582192724109772025
2020/04/30 10:29:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").


#now check the certificate files in the local directory; there should be 8
[root@localhost k8s-cert]# ls *.pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem
  • Copy the CA and server certificates to the k8s working directory
[root@localhost k8s-cert]# cp ca*.pem server*.pem /opt/kubernetes/ssl
[root@localhost k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
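  • Optionally, sanity-check the issued server certificate with openssl (assuming it is installed); the SANs baked in from server-csr.json should be listed
#inspect the Subject Alternative Names on the server certificate
openssl x509 -in /opt/kubernetes/ssl/server.pem -noout -text | grep -A1 "Subject Alternative Name"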
2.2 Generating the Token and Binding the Role (bootstrap)
  • Unpack the kubernetes tarball
[root@localhost k8s-cert]# cd ..
[root@localhost k8s]# ls
etcd-cert                 etcd-v3.3.10-linux-amd64.tar.gz     kubernetes-server-linux-amd64.tar.gz
etcd.sh                   flannel-v0.10.0-linux-amd64.tar.gz
etcd-v3.3.10-linux-amd64  k8s-cert

[root@localhost k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz 

[root@localhost k8s]# ls
etcd-cert                 etcd-v3.3.10-linux-amd64.tar.gz     kubernetes
etcd.sh                   flannel-v0.10.0-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
etcd-v3.3.10-linux-amd64  k8s-cert
[root@localhost k8s]# cd kubernetes/
[root@localhost kubernetes]# ls
addons  kubernetes-src.tar.gz  LICENSES  server
[root@localhost kubernetes]# cd server/bin
[root@localhost bin]# ls
apiextensions-apiserver              kube-apiserver.docker_tag           kube-proxy
cloud-controller-manager             kube-apiserver.tar                  kube-proxy.docker_tag
cloud-controller-manager.docker_tag  kube-controller-manager             kube-proxy.tar
cloud-controller-manager.tar         kube-controller-manager.docker_tag  kube-scheduler
hyperkube                            kube-controller-manager.tar         kube-scheduler.docker_tag
kubeadm                              kubectl                             kube-scheduler.tar
kube-apiserver                       kubelet                             mounter
#both the master and node components are included here
  • Copy the key binaries into the k8s working directory
[root@localhost bin]# cp kube-controller-manager kubectl kube-apiserver kube-scheduler /opt/kubernetes/bin
[root@localhost bin]# ls /opt/kubernetes/bin/
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler
  • Use head -c 16 /dev/urandom | od -An -t x | tr -d ' ' to generate a random serial number

    This random value is used as the token's serial value

[root@localhost bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
664f2017059d58e78f6cce2e47ef383b
#copy the token serial number above
  • Create the token file
[root@localhost bin]# cd /opt/kubernetes/cfg
[root@localhost cfg]# ls
[root@localhost cfg]# vim token.csv
664f2017059d58e78f6cce2e47ef383b,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#serial number, user name, user ID, system role (identity)
-----》wq
  • The positioning and purpose of this role are as follows (a scripted one-step version follows this list)
    • ① Where it is created: the bootstrap role is created on the master node
    • ② It manages the kubelet on the node nodes
    • ③ kubelet-bootstrap manages and authorizes system:kubelet-bootstrap
    • ④ system:kubelet-bootstrap in turn manages the kubelet on the nodes
    • ⑤ The token is the authorization granted to the system:kubelet-bootstrap role; without the token's authorization, this role cannot manage the kubelet on the nodes
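  • A minimal scripted sketch combining the two manual steps above (generate the token, then write token.csv):
#generate a random bootstrap token and write token.csv in a single step
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo "token: ${BOOTSTRAP_TOKEN}"	#keep it handy: kubeconfig.sh needs it later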
2.3 Starting the apiserver, scheduler, and controller-manager Services
2.3.1 Starting the apiserver Service
  • With the binaries, token, and certificates all in place, start the apiserver
#upload master.zip, unpack it, and review apiserver.sh
[root@localhost cfg]# cd /root/k8s
[root@localhost k8s]# unzip master.zip 
Archive:  master.zip
  inflating: apiserver.sh            
  inflating: controller-manager.sh   
  inflating: scheduler.sh            
[root@localhost k8s]# ls
apiserver.sh           etcd-v3.3.10-linux-amd64            kubernetes
controller-manager.sh  etcd-v3.3.10-linux-amd64.tar.gz     kubernetes-server-linux-amd64.tar.gz
etcd-cert              flannel-v0.10.0-linux-amd64.tar.gz  master.zip
etcd.sh                k8s-cert                            scheduler.sh
[root@localhost k8s]# chmod +x controller-manager.sh 
  • apiserver.sh script walkthrough (the inline # annotations below are explanatory only; strip them from the real script, since a comment after a line-continuation backslash would corrupt the generated config)
#!/bin/bash

MASTER_ADDRESS=$1					#local (master) address
ETCD_SERVERS=$2						#etcd cluster endpoints

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver		#generate the config file in the k8s working directory

KUBE_APISERVER_OPTS="--logtostderr=true \\		#log to standard error
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\			#etcd endpoints to read from and write to
--bind-address=${MASTER_ADDRESS} \\			#bind address
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\		#the master's own advertised address
--allow-privileged=true \\				#allow privileged containers
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\				#admission plugins: namespace lifecycle, limits, service accounts, quota, node restriction
--authorization-mode=RBAC,Node \\			#authorize with RBAC plus the Node authorizer
--kubelet-https=true \\					#talk to the kubelets over https
--enable-bootstrap-token-auth \\			#enable bootstrap token authentication
--token-auth-file=/opt/kubernetes/cfg/token.csv \\	#path to the token file
--service-node-port-range=30000-50000 \\		#NodePort range to expose
#the options below are all certificate files
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service		#systemd unit file
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
  • Start the apiserver
[root@localhost k8s]# bash apiserver.sh 192.168.226.128 https://192.168.226.128:2379,https://192.168.226.132:2379,https://192.168.226.133:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

#check the apiserver process to verify it started
[root@localhost k8s]# ps aux | grep kube
root      12924  103  5.2 293320 202744 ?       Ssl  11:42   0:07 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.226.128:2379,https://192.168.226.132:2379,https://192.168.226.133:2379 --bind-address=192.168.226.128 --secure-port=6443 --advertise-address=192.168.226.128 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      13066  0.0  0.0 112660   968 pts/0    R+   11:42   0:00 grep --color=auto kube

  • Check that the generated config file looks right
[root@localhost k8s]# cat /opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.226.128:2379,https://192.168.226.132:2379,https://192.168.226.133:2379 \
--bind-address=192.168.226.128 \
--secure-port=6443 \				#6443 is the secure https port (the k8s counterpart of 443)
--advertise-address=192.168.226.128 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
  • Check that the listening port is open
[root@localhost k8s]# netstat -natp | grep 6443
tcp        0      0 192.168.226.128:6443    0.0.0.0:*               LISTEN      12924/kube-apiserve 
tcp        0      0 192.168.226.128:6443    192.168.226.128:43702   ESTABLISHED 12924/kube-apiserve 
tcp        0      0 192.168.226.128:43702   192.168.226.128:6443    ESTABLISHED 12924/kube-apiserve
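  • Optionally, hit the health endpoint: this apiserver version also serves an insecure local port 8080 (the scheduler below points at it), so a plain curl should answer
#quick health check over the insecure local port
curl -s http://127.0.0.1:8080/healthz && echo
#expected output: ok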
2.3.2 Starting the scheduler Service
  • Review the scheduler startup script
[root@localhost k8s]# vim scheduler.sh 

#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \\  #log to standard error
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\			#master address, pointing at the insecure port 8080
--leader-elect"						#enable leader election

EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service					#systemd unit file
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
  • Start the scheduler service
[root@localhost k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

#check the process
[root@localhost k8s]# ps aux | grep sch
root          9  0.1  0.0      0     0 ?        S    11:32   0:01 [rcu_sched]
root      20958  1.9  0.5  45616 20360 ?        Ssl  11:49   0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root      21251  0.0  0.0 112664   972 pts/0    R+   11:49   0:00 grep --color=auto sch

#check the service
[root@localhost k8s]# systemctl status kube-scheduler.service 
#active (running)
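  • Optionally, since --leader-elect is on, the scheduler records its leadership in an Endpoints object in the kube-system namespace
#peek at the leader-election record; the control-plane.alpha.kubernetes.io/leader annotation names the current leader
/opt/kubernetes/bin/kubectl get endpoints kube-scheduler -n kube-system -o yaml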
2.3.3 Starting the controller-manager Service
  • Run the script directly
[root@localhost k8s]# ./controller-manager.sh 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

#check the status
[root@localhost k8s]# systemctl status kube-controller-manager.service 
#active (running)
  • Finally, check the master node's component status
[root@localhost k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"} 

3. Node Deployment

3.1 Deploying node1
  • First, copy the kubelet and kube-proxy binaries from the master node to the node nodes
[root@localhost k8s]# cd kubernetes/server/bin/
[root@localhost bin]# ls
apiextensions-apiserver              kube-apiserver.docker_tag           kube-proxy
cloud-controller-manager             kube-apiserver.tar                  kube-proxy.docker_tag
cloud-controller-manager.docker_tag  kube-controller-manager             kube-proxy.tar
cloud-controller-manager.tar         kube-controller-manager.docker_tag  kube-scheduler
hyperkube                            kube-controller-manager.tar         kube-scheduler.docker_tag
kubeadm                              kubectl                             kube-scheduler.tar
kube-apiserver                       kubelet                             mounter

#copy the binaries to both nodes
[root@localhost bin]# scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
[root@localhost bin]# scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
  • On the node
#upload the node package and unpack it (the scripts inside are used shortly)
[root@node1 ~]# unzip node.zip 
Archive:  node.zip
  inflating: proxy.sh                
  inflating: kubelet.sh              
[root@node1 ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  kubelet.sh  proxy.sh   下載  圖片  桌面  視頻
flannel.sh       initial-setup-ks.cfg                node.zip    README.md 
  • Back on the master node, set up the kubeconfig files (this must be done before configuring the nodes)

    The kubeconfig setup is the prerequisite for master-node communication

[root@localhost k8s]# mkdir kubeconfig
[root@localhost k8s]# cd kubeconfig
#upload the kubeconfig.sh script (its inline #annotation is explanatory; remove it in the real script)
[root@localhost kubeconfig]# ls
kubeconfig.sh
[root@localhost kubeconfig]# mv kubeconfig.sh kubeconfig
[root@localhost kubeconfig]# vim kubeconfig         
APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=664f2017059d58e78f6cce2e47ef383b \		#only this token serial number needs editing; take it from /opt/kubernetes/cfg/token.csv
  --kubeconfig=bootstrap.kubeconfig

# Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
-----》wq
  • Set the environment variable
Option 1:
[root@localhost kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin/
Option 2:
[root@localhost kubeconfig]# vim /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/
[root@localhost kubeconfig]# source /etc/profile

#after either option, the kubectl command can be completed and used
[root@localhost kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"} 
  • Run the kubeconfig script
[root@localhost kubeconfig]# bash kubeconfig 192.168.226.128 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".

#the certificates under this directory are used
[root@localhost kubeconfig]# ls /root/k8s/k8s-cert/

#two config files are generated at the same time
[root@localhost kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig
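  • Optionally, confirm the cluster data really was embedded; kubectl can pretty-print a kubeconfig (certificate and token data are shown redacted)
#inspect the generated bootstrap kubeconfig
kubectl config view --kubeconfig=bootstrap.kubeconfig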
  • Copy the two generated config files to the node nodes

    These two config files are what let a node communicate with, and be controlled by, the master node

[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/
[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/

[root@node1 ~]# ls /opt/kubernetes/cfg/
bootstrap.kubeconfig  flanneld  kube-proxy.kubeconfig
  • Create the bootstrap role binding, granting the permission to connect to the apiserver and request signing ⭐⭐

    Only after the bootstrap authorization is a node fully added to the cluster and manageable by the master node

[root@localhost kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
#clusterrolebinding: bind a cluster role
#kubelet-bootstrap: the name of the binding
#--clusterrole=system:node-bootstrapper: the role being bound
#--user=kubelet-bootstrap: the user being granted it (the authorization)
    
#once this is in place, the master manages the node nodes through the bootstrap role
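  • Optionally, verify the binding
#expect Role system:node-bootstrapper bound to User kubelet-bootstrap
kubectl describe clusterrolebinding kubelet-bootstrap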
  • Operations on node1
  • Run the kubelet script, which requests a connection to the master host
[root@node1 ~]# bash kubelet.sh 192.168.226.132
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

#check the kubelet process
[root@node1 ~]# ps aux | grep kubelet
root      82511  0.4  1.1 550648 43044 ?        Ssl  12:38   0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.226.132 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      83501  0.0  0.0 112660   968 pts/0    S+   12:39   0:00 grep --color=auto kubelet

#check the service status
[root@node1 ~]# systemctl status kubelet.service
active (running)

#with that, the master's kubelet agent on the node is in place
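  • If kubelet does not reach active (running), its logs usually say why (bad token, unreachable apiserver, etc.)
#tail the kubelet logs on the node when troubleshooting
journalctl -u kubelet.service -f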
  • On the master, check node1's request
[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-NuDGm_zrrpO-2TwBvSyjYyBZvD7MVRCVySygFTagfoE   100s   kubelet-bootstrap   Pending

#Pending: the request is waiting for the cluster to issue this node a certificate
  • Issue the certificate
[root@localhost kubeconfig]# kubectl certificate approve node-csr-NuDGm_zrrpO-2TwBvSyjYyBZvD7MVRCVySygFTagfoE
certificatesigningrequest.certificates.k8s.io/node-csr-NuDGm_zrrpO-2TwBvSyjYyBZvD7MVRCVySygFTagfoE approved
#certificate: operate on certificate resources
#approve: approve a certificate signing request


#check the csr again
[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-NuDGm_zrrpO-2TwBvSyjYyBZvD7MVRCVySygFTagfoE   7m58s   kubelet-bootstrap   Approved,Issued

#the condition has changed to Approved,Issued (the node has been admitted)

#check the cluster nodes
[root@localhost kubeconfig]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.226.132   Ready    <none>   21s   v1.12.3
#node1 has joined the cluster
#If a node is offline, its STATUS shows NotReady, Pods stay stuck in Pending when created, and no resources can be created. (If this happens, check the apiserver first; if the node cannot be found at all, check kubelet first.)
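  • Optionally, drill into a registered node
#show the node's conditions, capacity, and allocated resources
kubectl describe node 192.168.226.132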
  • Start the kube-proxy service on node1
[root@node1 ~]# bash proxy.sh 192.168.226.132
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node1 ~]# systemctl status kube-proxy.service 
active (running)
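  • Optionally, since kube-proxy runs in ipvs mode here (see the kube-proxy config in section 3.2), its virtual servers can be listed, assuming the ipvsadm tool is installed on the node
#list the ipvs virtual servers programmed by kube-proxy
ipvsadm -Ln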
  • With that, node1 is fully deployed
3.2 Deploying node2
  • Copy node1's /opt/kubernetes directory to node2
#copy the k8s working directory
[root@node1 ~]# scp -r /opt/kubernetes/ [email protected]:/opt
[email protected]'s password: 
flanneld                                                            100%  241   179.4KB/s   00:00    
bootstrap.kubeconfig                                                100% 2169     1.6MB/s   00:00    
kube-proxy.kubeconfig                                               100% 6275     3.0MB/s   00:00    
kubelet                                                             100%  379   182.1KB/s   00:00    
kubelet.config                                                      100%  269   171.5KB/s   00:00    
kubelet.kubeconfig                                                  100% 2298     1.3MB/s   00:00    
kube-proxy                                                          100%  191    85.9KB/s   00:00    
mk-docker-opts.sh                                                   100% 2139     1.3MB/s   00:00    
flanneld                                                            100%   35MB  22.1MB/s   00:01    
kubelet                                                             100%  168MB  33.6MB/s   00:05    
kube-proxy                                                          100%   48MB  32.1MB/s   00:01    
kubelet.crt                                                         100% 2197     1.2MB/s   00:00    
kubelet.key                                                         100% 1675   992.2KB/s   00:00    
kubelet-client-2020-05-02-17-13-11.pem                              100% 1277   406.7KB/s   00:00    
kubelet-client-current.pem                                          100% 1277   631.8KB/s   00:00 
 
#copy the systemd unit files
[root@node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service [email protected]:/usr/lib/systemd/system/
[email protected]'s password: 
kubelet.service                                                     100%  264   159.4KB/s   00:00    
kube-proxy.service                                                  100%  231   148.4KB/s   00:00   
  • Edit node2's configuration files

    First delete the certificates that were copied over; node2 will request its own certificates later

    Then edit these configuration files: kubelet, kubelet.config, kube-proxy

#delete all certificate files
[root@node2 cfg]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# ls
kubelet-client-2020-05-02-17-13-11.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key
[root@node2 ssl]# rm -rf *
  • Change the IP address in the kubelet config file
[root@node2 ssl]# cd ../cfg
[root@node2 cfg]# ls
bootstrap.kubeconfig  kubelet         kubelet.kubeconfig  kube-proxy.kubeconfig
flanneld              kubelet.config  kube-proxy
[root@node2 cfg]# vim kubelet


KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.226.133 \	#change to node2's own local address
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
  • Edit the kubelet.config file
[root@node2 cfg]# vim kubelet.config 

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.226.133				#change to the local address
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2						#cluster DNS address; make a note of it
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
----》wq
  • Edit the kube-proxy config file
[root@node2 cfg]# vim kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.226.133 \		#change to the local address
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
  • PS: the address in kube-proxy.kubeconfig points at the master (no change is needed for a single master; in the multi-master setup it must point at the floating VIP: 192.168.226.100)
  • Start the services
[root@node2 cfg]# systemctl start kubelet
[root@node2 cfg]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
#once kubelet starts successfully it sends a cluster-join request to the master; check and approve it on the master shortly
    
[root@node2 cfg]# systemctl start kube-proxy
[root@node2 cfg]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
  • Approve on the master node
[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-NuDGm_zrrpO-2TwBvSyjYyBZvD7MVRCVySygFTagfoE   11m32s  kubelet-bootstrap   Approved,Issued
node-csr-586-E0EQCqCoxLqokyyQDcOTtuE8fbjD0qMqQ1rbJUw   3m27s   kubelet-bootstrap   Pending
#the second entry is node2's request

#approve it
[root@localhost kubeconfig]# kubectl certificate approve node-csr-586-E0EQCqCoxLqokyyQDcOTtuE8fbjD0qMqQ1rbJUw
certificatesigningrequest.certificates.k8s.io/node-csr-586-E0EQCqCoxLqokyyQDcOTtuE8fbjD0qMqQ1rbJUw approved
  • Finally, check the cluster status
[root@localhost kubeconfig]# kubectl get node
NAME              STATUS   ROLES    AGE     VERSION
192.168.226.132   Ready    <none>   7m58s   v1.12.3
192.168.226.133   Ready    <none>   22s     v1.12.3
  • With that, the single-node k8s deployment is complete

Summary:

Next, a multi-node setup will be deployed on top of this single-node environment~~
