Kubernetes (k8s) 1.14 Offline Cluster - Deploying Worker Nodes

Disclaimer:
If you have better techniques to share with the author, or would like to discuss business cooperation, please leave a message on the author's website: http://www.esqabc.com/view/message.html
If this article infringes your patent, please leave a message at http://www.esqabc.com/view/message.html explaining the reason; once verified, the author will remove the content immediately.

1. Notes before deployment

a. The following components run on each kubernetes worker node:

docker
kubelet
kube-proxy
flanneld
kube-nginx

Unless otherwise noted, commands are run on the k8s-01 server.

For prerequisites and the server setup, see: https://blog.csdn.net/esqabc/article/details/102726771

2. Install dependency packages

Note: run on all servers.

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 work]# yum install -y epel-release
[root@k8s-01 work]# yum install -y conntrack ipvsadm ntp ntpdate ipset jq iptables curl sysstat libseccomp && modprobe ip_vs
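
Optionally, you can confirm that the ip_vs kernel module actually loaded (a quick sanity check, not part of the original steps):

[root@k8s-01 work]# lsmod | grep ip_vs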

3. Deploy the Docker component

Note: run on all servers.

a. Create the configuration file

[root@k8s-01 ~]# mkdir -p /etc/docker/
[root@k8s-01 ~]# cat > /etc/docker/daemon.json <<EOF
Add the following content:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://hjvrgh7a.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Note: to also use our Harbor registry (www.esqabc.com is the registry address), write /etc/docker/daemon.json with the following content instead:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://hjvrgh7a.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["www.esqabc.com"],
  "storage-driver": "overlay2"
}

b. To install Docker itself, see this article: https://blog.csdn.net/esqabc/article/details/89881374

c. Modify the Docker startup parameters

[root@k8s-01 ~]# vi /usr/lib/systemd/system/docker.service
Add the following content:

EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock

Or simply replace the whole file; the complete configuration is as follows:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd  $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
EnvironmentFile=-/run/flannel/docker
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

d. Restart Docker

[root@k8s-01 work]# systemctl daemon-reload && systemctl enable docker && systemctl restart docker
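
After the restart, it is worth confirming that Docker picked up the daemon.json settings (an optional check, not part of the original steps):

[root@k8s-01 work]# docker info | grep -E 'Cgroup Driver|Storage Driver'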

e. Check the service status

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status docker|grep Active"
  done

f. Check the docker0 bridge

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0"
  done
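
On every node, flannel.1 and docker0 should sit in the same flannel network segment. If docker0 still shows Docker's default 172.17.0.0/16 address, Docker most likely did not pick up the flannel-generated options; a quick way to check (an optional step, assuming flanneld writes its Docker options to /run/flannel/docker as referenced in the unit file above):

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "cat /run/flannel/docker"
  done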

4. Deploy the kubelet component

  • kubelet runs on every worker node: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands.
  • On startup, kubelet automatically registers node information with kube-apiserver; the built-in cAdvisor collects and monitors the node's resource usage.
  • For security, this deployment disables kubelet's insecure HTTP port; all requests are authenticated and authorized, and unauthorized access is rejected.

a. Create the kubelet bootstrap kubeconfig files

[root@k8s-01 ~]# cd /opt/k8s/work

for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    # Create a bootstrap token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_name} \
      --kubeconfig ~/.kube/config)
    # Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
    # Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
    # Set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  done
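
The loop leaves one kubelet-bootstrap-<node_name>.kubeconfig file per node in the working directory. To spot-check one of them before distributing (optional; k8s-01 is used here only as an example node name):

[root@k8s-01 work]# ls kubelet-bootstrap-*.kubeconfig
[root@k8s-01 work]# kubectl config view --kubeconfig=kubelet-bootstrap-k8s-01.kubeconfig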

b. View the tokens created by kubeadm for each node

[root@k8s-01 ~]# kubeadm token list --kubeconfig ~/.kube/config

c. View the Secret associated with each token

[root@k8s-01 ~]# kubectl get secrets -n kube-system|grep bootstrap-token

d. Distribute the bootstrap kubeconfig files to all worker nodes

[root@k8s-01 ~]# cd /opt/k8s/work

for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kubelet-bootstrap-${node_name}.kubeconfig root@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig
  done

e. Create the kubelet parameter configuration template

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 work]# cat > kubelet-config.yaml.template <<EOF

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "##NODE_IP##"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "##NODE_IP##"
clusterDomain: "${CLUSTER_DNS_DOMAIN}"
clusterDNS:
 - "${CLUSTER_DNS_SVC_IP}"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: systemd
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "${CLUSTER_CIDR}"
podPidsLimit: -1
resolvConf: /etc/resolv.conf
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF

Notes:

  • address: the address the kubelet secure port (HTTPS, 10250) listens on; it must not be 127.0.0.1, otherwise kube-apiserver, heapster, etc. cannot call the kubelet API.
  • readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unset.
  • authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed.
  • authentication.x509.clientCAFile: specifies the CA certificate that signed the client certificates, enabling HTTPS certificate authentication.
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication.
  • Requests that pass neither x509 certificate nor webhook authentication (from kube-apiserver or any other client) are rejected with "Unauthorized".
  • authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user/group is allowed to operate on a resource (RBAC).
  • featureGates.RotateKubeletClientCertificate and featureGates.RotateKubeletServerCertificate: rotate certificates automatically; the certificate lifetime is determined by kube-controller-manager's --experimental-cluster-signing-duration parameter.

Note: kubelet must be run as the root account.

f. Create and distribute each node's kubelet configuration file

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do 
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_ip}.yaml.template
    scp kubelet-config-${node_ip}.yaml.template root@${node_ip}:/etc/kubernetes/kubelet-config.yaml
  done
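
Once distributed, you can spot-check that each node's file has its own IP substituted for ##NODE_IP## (an optional verification, not part of the original steps):

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "grep -E 'address|healthzBindAddress' /etc/kubernetes/kubelet-config.yaml"
  done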

g. Create and distribute the kubelet startup file
(1) Create

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 ~]# cat > kubelet.service.template <<EOF
Add the following content:

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=${K8S_DIR}/kubelet
ExecStart=/opt/k8s/bin/kubelet \\
  --allow-privileged=true \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/cert \\
  --cni-conf-dir=/etc/cni/net.d \\
  --container-runtime=docker \\
  --container-runtime-endpoint=unix:///var/run/dockershim.sock \\
  --root-dir=${K8S_DIR}/kubelet \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --hostname-override=##NODE_NAME## \\
  --pod-infra-container-image=gcr.azk8s.cn/google_containers/pause-amd64:3.1 \\
  --image-pull-progress-deadline=15m \\
  --volume-plugin-dir=${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/ \\
  --logtostderr=true \\
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
EOF

Notes:

  • If the --hostname-override option is set, kube-proxy must be given the same option, otherwise the Node will not be found.
  • --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the username and token in that file to send a TLS Bootstrapping request to kube-apiserver.
  • After K8S approves the kubelet's CSR, it creates the certificate and private key in the --cert-dir directory and then writes them into the --kubeconfig file.
  • --pod-infra-container-image deliberately avoids Red Hat's pod-infrastructure:latest image, which cannot reap zombie processes in containers.

(2) Distribute

[root@k8s-01 ~]# cd /opt/k8s/work

for node_name in ${NODE_NAMES[@]}
  do 
    echo ">>> ${node_name}"
    sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service
    scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
  done

Note: grant the bootstrap user and group permission to create CSRs; if this ClusterRoleBinding is not created, kubelet will fail to start.

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
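
To confirm the binding exists before starting kubelet (an optional check):

[root@k8s-01 ~]# kubectl describe clusterrolebinding kubelet-bootstrap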

h. Start the kubelet service

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    ssh root@${node_ip} "/usr/sbin/swapoff -a"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
  done

i. Check the status

[root@k8s-01 ~]# kubectl get csr
NAME        AGE   REQUESTOR                 CONDITION
csr-22kt2   38s   system:bootstrap:pkkcl0   Pending
csr-f9trc   37s   system:bootstrap:tubfqq   Pending
csr-v7jt2   38s   system:bootstrap:ds9td8   Pending
csr-zrww2   37s   system:bootstrap:hy5ssz   Pending

All four nodes' CSRs are in the Pending state here.

j. Auto-approve CSR requests: create three ClusterRoleBindings, used to automatically approve client certificates, renew client certificates, and renew server certificates respectively

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 ~]# cat > csr-crb.yaml <<EOF
Add the following content:

# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF

[root@k8s-01 ~]# kubectl apply -f csr-crb.yaml

Notes:

  • auto-approve-csrs-for-group: automatically approves a node's first CSR; note that for the first CSR the requesting Group is system:bootstrappers.
  • node-client-cert-renewal: automatically approves renewal of a node's expiring client certificate; the Group of the automatically generated certificate is system:nodes.
  • node-server-cert-renewal: automatically approves renewal of a node's expiring server certificate; the Group of the automatically generated certificate is system:nodes.
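
To confirm that the bindings were created (an optional check):

[root@k8s-01 ~]# kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal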

k. Check kubelet
Wait 1-10 minutes; the CSRs of all four nodes will be approved automatically.

[root@k8s-01 ~]# kubectl get csr
NAME        AGE     REQUESTOR                 CONDITION
csr-22kt2   4m48s   system:bootstrap:pkkcl0   Approved,Issued
csr-d8tvc   77s     system:node:k8s-01        Pending
csr-f9trc   4m47s   system:bootstrap:tubfqq   Approved,Issued
csr-kcdvx   76s     system:node:k8s-02        Pending
csr-m8k8t   75s     system:node:k8s-04        Pending
csr-v7jt2   4m48s   system:bootstrap:ds9td8   Approved,Issued
csr-wwvwd   76s     system:node:k8s-03        Pending
csr-zrww2   4m47s   system:bootstrap:hy5ssz   Approved,Issued

All nodes are now in the Ready state:

[root@k8s-01 ~]# kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
k8s-01   Ready    <none>   2m29s   v1.14.2
k8s-02   Ready    <none>   2m28s   v1.14.2
k8s-03   Ready    <none>   2m28s   v1.14.2
k8s-04   Ready    <none>   2m27s   v1.14.2

kube-controller-manager has generated a kubeconfig file and certificate for each node:

[root@k8s-01 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
[root@k8s-01 ~]# ls -l /etc/kubernetes/cert/|grep kubelet

l. Manually approve the server cert CSRs

[root@k8s-01 ~]# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
m. Check the kubelet API ports
[root@k8s-01 ~]# netstat -lntup|grep kubelet
Notes:

  • 10248: the healthz HTTP endpoint;
  • 10250: the HTTPS endpoint; requests to this port require authentication and authorization (even requests to /healthz);
  • the read-only port 10255 is not enabled;
  • since K8S v1.10 the --cadvisor-port parameter (default port 4194) has been removed, so the cAdvisor UI & API are no longer accessible.
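
A quick way to see the two ports behave as described (an optional check; the healthz port answers plain HTTP, while an unauthenticated request to the secure port is rejected):

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    # healthz endpoint, plain HTTP on port 10248
    curl -s http://${node_ip}:10248/healthz && echo
    # secure port 10250: without a client cert or bearer token the request is rejected
    curl -s --cacert /etc/kubernetes/cert/ca.pem https://${node_ip}:10250/metrics | head -n 1
  done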

n. Bearer token authentication and authorization

kubectl create sa kubelet-api-test

kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test

SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')

TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')

echo ${TOKEN}
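
With ${TOKEN} set, you can call the kubelet API using the bearer token (a sketch; it assumes the kubelet serving certificates have already been approved as above, so the cluster CA verifies the connection):

curl -s --cacert /etc/kubernetes/cert/ca.pem \
  -H "Authorization: Bearer ${TOKEN}" \
  https://${NODE_IPS[0]}:10250/metrics | head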

5. Deploy the kube-proxy component
a. Create the kube-proxy certificate signing request

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 ~]# cat > kube-proxy-csr.json <<EOF
Add the following content:

{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

Notes:

  • CN: specifies that the User for this certificate is system:kube-proxy;
  • the predefined RoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call kube-apiserver's Proxy-related APIs;
  • this certificate is only used by kube-proxy as a client certificate, so the hosts field is empty.

b. Generate the certificate and private key:

[root@k8s-01 ~]# cd /opt/k8s/work

cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

[root@k8s-01 ~]# ls kube-proxy*
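
If you want to inspect the generated certificate (an optional check, assuming openssl is available on the host):

[root@k8s-01 work]# openssl x509 -in kube-proxy.pem -noout -subject -dates
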
c. Create and distribute the kubeconfig file
(1) Create
[root@k8s-01 ~]# cd /opt/k8s/work

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

(2) Distribute

[root@k8s-01 ~]# cd /opt/k8s/work

for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kube-proxy.kubeconfig root@${node_name}:/etc/kubernetes/
  done

d. Create the kube-proxy configuration template

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 ~]# cat > kube-proxy-config.yaml.template <<EOF
Add the following content:

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
  qps: 100
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
metricsBindAddress: ##NODE_IP##:10249
enableProfiling: true
clusterCIDR: ${CLUSTER_CIDR}
hostnameOverride: ##NODE_NAME##
mode: "ipvs"
portRange: ""
kubeProxyIPTablesConfiguration:
  masqueradeAll: false
kubeProxyIPVSConfiguration:
  scheduler: rr
  excludeCIDRs: []
EOF

Notes:

  • bindAddress: the listening address;
  • clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;
  • clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic inside and outside the cluster; only when --cluster-cidr or --masquerade-all is specified will kube-proxy SNAT requests to Service IPs;
  • hostnameOverride: must be identical to the kubelet value, otherwise kube-proxy will not find the Node after it starts and will not create any ipvs rules (see the cross-check after the distribution step below);
  • mode: use the ipvs mode.

e. Distribute the kube-proxy configuration files

[root@k8s-01 ~]# cd /opt/k8s/work

for (( i=0; i < 4; i++ ))
  do 
    echo ">>> ${NODE_NAMES[i]}"
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${NODE_NAMES[i]}.yaml.template
    scp kube-proxy-config-${NODE_NAMES[i]}.yaml.template root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
  done
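
As noted above, hostnameOverride must match the kubelet's --hostname-override. A quick cross-check on a node (an optional step, not in the original walkthrough):

ssh root@${NODE_NAMES[0]} "grep hostname-override /etc/systemd/system/kubelet.service; grep hostnameOverride /etc/kubernetes/kube-proxy-config.yaml"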

f. Create and distribute the kube-proxy systemd unit file
(1) Create

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 ~]# cat > kube-proxy.service <<EOF
Add the following content:

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=${K8S_DIR}/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy-config.yaml \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

(2) Distribute

[root@k8s-01 ~]# cd /opt/k8s/work

for node_name in ${NODE_NAMES[@]}
  do 
    echo ">>> ${node_name}"
    scp kube-proxy.service root@${node_name}:/etc/systemd/system/
  done

g. Start the kube-proxy service

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@${node_ip} "modprobe ip_vs_rr"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  done

h. Check the startup result

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl status kube-proxy|grep Active"
  done

i. Check the listening ports

[root@k8s-01 ~]# cd /opt/k8s/work

netstat -lnpt|grep kube-proxy

j. Check the ipvs routing rules

[root@k8s-01 ~]# cd /opt/k8s/work

for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"
  done

k. Verify cluster functionality
Now use a DaemonSet to verify that the master and worker nodes work properly.

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 work]# kubectl get node

NAME     STATUS   ROLES    AGE   VERSION
k8s-01   Ready    <none>   20m   v1.14.2
k8s-02   Ready    <none>   20m   v1.14.2
k8s-03   Ready    <none>   20m   v1.14.2
k8s-04   Ready    <none>   20m   v1.14.2

Create the test yaml file

[root@k8s-01 work]# cat > nginx-ds.yml <<EOF
Add the following content:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: daocloud.io/library/nginx:1.13.0-alpine
        ports:
        - containerPort: 80
EOF

[root@k8s-01 ~]# kubectl create -f nginx-ds.yml

l. Check Pod startup status

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 work]# kubectl get pod -o wide

m. Check Pod IP connectivity between nodes

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 ~]# ping -c 3 172.30.48.2

n. Check service IP and port reachability

[root@k8s-01 ~]# cd /opt/k8s/work
[root@k8s-01 work]# kubectl get svc |grep nginx-ds
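
The cluster IP and NodePort reported above can then be exercised from every node (a sketch, querying the values with kubectl jsonpath rather than hard-coding them):

SVC_IP=$(kubectl get svc nginx-ds -o jsonpath='{.spec.clusterIP}')
NODE_PORT=$(kubectl get svc nginx-ds -o jsonpath='{.spec.ports[0].nodePort}')
for node_ip in ${NODE_IPS[@]}
  do
    echo ">>> ${node_ip}"
    # service cluster IP on port 80
    ssh root@${node_ip} "curl -s ${SVC_IP} | grep -i title"
    # NodePort on the node itself
    ssh root@${node_ip} "curl -s ${node_ip}:${NODE_PORT} | grep -i title"
  done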
