CentOS 8 k8s Installation and Deployment

1. Pre-deployment preparation

(1) Images required for the installation

On a machine with good network access, pull the corresponding versions of the images:

docker pull k8s.gcr.io/kube-apiserver:v1.18.1
docker pull k8s.gcr.io/kube-controller-manager:v1.18.1
docker pull k8s.gcr.io/kube-scheduler:v1.18.1
docker pull k8s.gcr.io/kube-proxy:v1.18.1
docker pull k8s.gcr.io/pause:3.2
docker pull k8s.gcr.io/etcd:3.4.3-0
docker pull k8s.gcr.io/coredns:1.6.7
docker pull quay.io/coreos/flannel:v0.12.0-amd64
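If this machine cannot reach k8s.gcr.io directly, a common workaround is to pull the same versions from a public mirror and re-tag them. This is only a sketch and not part of the original steps; the mirror namespace registry.aliyuncs.com/google_containers is an assumption and may need adjusting:

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.1
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.1 k8s.gcr.io/kube-apiserver:v1.18.1
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.1
# repeat for kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd and coredns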

Save the pulled images to archives, then transfer them to our installation servers:

docker save k8s.gcr.io/kube-apiserver:v1.18.1 > apiserver-1.18.1.tar.gz
docker save k8s.gcr.io/etcd:3.4.3-0 > etcd-3.4.3-0.tar.gz
docker save k8s.gcr.io/pause:3.2 > pause-3.2.tar.gz
docker save k8s.gcr.io/kube-controller-manager:v1.18.1 > kube-controller-manager-1.18.1.tar.gz
docker save k8s.gcr.io/kube-scheduler:v1.18.1 > kube-scheduler-1.18.1.tar.gz
docker save k8s.gcr.io/kube-proxy:v1.18.1 > kube-proxy-1.18.1.tar.gz
docker save k8s.gcr.io/coredns:1.6.7 > coredns-1.6.7.tar.gz
docker save quay.io/coreos/flannel:v0.12.0-amd64 > flannel-0.12.0.tar.gz
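One way to move the archives over, assuming SSH access from this machine and the /root/kubenet working directory seen in the prompts below (both assumptions), is scp; the nodes only need the kube-proxy, pause, coredns and flannel images:

scp *.tar.gz root@192.168.0.155:/root/kubenet/
scp kube-proxy-1.18.1.tar.gz pause-3.2.tar.gz coredns-1.6.7.tar.gz flannel-0.12.0.tar.gz root@192.168.0.154:/root/
scp kube-proxy-1.18.1.tar.gz pause-3.2.tar.gz coredns-1.6.7.tar.gz flannel-0.12.0.tar.gz root@192.168.0.153:/root/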

(2) Prepare three machines

Hostname   IP address      Service
master     192.168.0.155   docker
node01     192.168.0.154   docker
node02     192.168.0.153   docker

(3) Set the hostnames and configure the hosts file

Note: Kubernetes node names must be valid RFC 1123 DNS names and may not contain underscores, so a hostname such as k8s_master will be rejected by kubeadm; the command output later in this guide shows the nodes registered simply as master, node1 and node2.

[root@localhost kubenet]# hostnamectl set-hostname k8s_master
[root@localhost kubenet]# hostname
k8s_master
[root@localhost kubenet]# su -

[root@k8s_master kubenet]# tail -3 /etc/hosts 
192.168.0.155 k8s_master
192.168.0.154 k8s_node1
192.168.0.153 k8s_node2
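The same entries need to exist on the two node machines as well; a simple way to do that (a sketch, assuming root SSH access between the machines) is to copy the file over:

scp /etc/hosts root@192.168.0.154:/etc/hosts
scp /etc/hosts root@192.168.0.153:/etc/hosts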

(4) Disable the swap partition

[root@master ~]# swapoff -a
[root@master ~]# vim /etc/fstab 
[root@master ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:          2.7Gi       278Mi       1.6Gi       8.0Mi       812Mi       2.2Gi
Swap:            0B          0B          0B
[root@master ~]# tail -1 /etc/fstab 
#/dev/mapper/cl-swap     swap                    swap    defaults        0 0
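Instead of editing /etc/fstab by hand in vim, the swap entry can be commented out with a one-line sed (a sketch; verify the result with tail as above):

swapoff -a
# comment out any active swap entry so it stays off after a reboot
sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab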

(5) Disable the firewall in the test environment (in production, add the appropriate firewall rules instead)

[root@master updates]# systemctl stop firewalld
[root@master updates]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

(6) Disable SELinux

[root@master ~]# vim /etc/sysconfig/selinux 
[root@master ~]# grep "^SELINUX" /etc/sysconfig/selinux 
SELINUX=disabled
SELINUXTYPE=targeted
[root@master ~]# setenforce 0
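The same change can be scripted; on CentOS 8 /etc/sysconfig/selinux is a symlink to /etc/selinux/config, so the following sketch (assuming the default SELINUX=enforcing) is equivalent to the manual edit:

setenforce 0                                                            # disable immediately for the running system
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config    # persist across reboots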

2. Import the required images onto the installation servers and install the Docker service; for details see https://blog.csdn.net/baidu_38432732/article/details/105315880

3. Enable IP forwarding and iptables bridging (all three machines)

[root@k8s_master kubenet]# cat /etc/sysctl.d/k8s.conf  # enable iptables bridging
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@localhost kubenet]# echo net.ipv4.ip_forward = 1 >> /etc/sysctl.conf  # enable IP forwarding
[root@localhost kubenet]# sysctl -p /etc/sysctl.d/k8s.conf  # reload the settings
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@localhost kubenet]# sysctl -p 
net.ipv4.ip_forward = 1
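The net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded; if sysctl -p reports that the key is missing, load the module first and make it load at boot, for example:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load the module automatically at boot
sysctl -p /etc/sysctl.d/k8s.conf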

4. Install and deploy k8s on the master node

1) Configure the yum repository for installing Kubernetes (all three machines)

[root@k8s_master kubenet]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2) Check that the repository is available

[root@k8s_master kubenet]# yum repolist 
Repository AppStream is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository PowerTools is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Kubernetes                                                                                                                                                    2.5 kB/s | 454  B     00:00    
Kubernetes                                                                                                                                                     19 kB/s | 1.8 kB     00:00    
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <[email protected]>"
 Fingerprint: D0BC 747F D8CA F711 7500 D6FA 3746 C208 A731 7B0F
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Importing GPG key 0xBA07F4FB:
 Userid     : "Google Cloud Packages Automatic Signing Key <[email protected]>"
 Fingerprint: 54A6 47F9 048D 5688 D7DA 2ABE 6A03 0B21 BA07 F4FB
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Kubernetes                                                                                                                                                     22 kB/s | 975  B     00:00    
Importing GPG key 0x3E1BA8D5:
 Userid     : "Google Cloud Packages RPM Signing Key <[email protected]>"
 Fingerprint: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Is this ok [y/N]: ty
Is this ok [y/N]: y
Kubernetes                                                                                                                                                    166 kB/s |  90 kB     00:00    
repo id                                                                          repo name                                                                                              status
AppStream                                                                        CentOS-8 - AppStream                                                                                   5,281
base                                                                             CentOS-8 - Base - mirrors.aliyun.com                                                                   2,231
docker-ce-stable                                                                 Docker CE Stable - x86_64                                                                                 63
extras                                                                           CentOS-8 - Extras - mirrors.aliyun.com                                                                    15
kubernetes                                                                       Kubernetes  

3) Build the local metadata cache (all three machines)

[root@k8s_master ~]# yum makecache
Repository AppStream is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository PowerTools is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
CentOS-8 - AppStream                                                                                                                                          6.9 kB/s | 4.3 kB     00:00    
CentOS-8 - Base - mirrors.aliyun.com                                                                                                                           91 kB/s | 3.8 kB     00:00    
CentOS-8 - Extras - mirrors.aliyun.com                                                                                                                         28 kB/s | 1.5 kB     00:00    
Docker CE Stable - x86_64                                                                                                                                     3.9 kB/s | 3.5 kB     00:00    
Kubernetes                                                                                                                                                    4.1 kB/s | 454  B     00:00    
Metadata cache created.

5. Install the required packages on all nodes

1) Install the following on the master

[root@k8s_master ~]# yum -y install kubeadm kubelet kubectl

2) Install on node1 and node2

[root@k8s_node1 ~]# yum -y install kubeadm kubelet
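Note that the offline images prepared earlier are v1.18.1, while an unpinned yum install pulls the newest packages from the repository (the node listings later in this guide show v1.18.2). To keep the packages in step with the images, the versions can be pinned on all three machines, for example:

# master
yum -y install kubeadm-1.18.1 kubelet-1.18.1 kubectl-1.18.1
# node1 and node2
yum -y install kubeadm-1.18.1 kubelet-1.18.1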

3) Enable kubelet at boot on all three hosts

[root@k8s_master ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

6. Import the previously prepared images on the master

[root@k8s_master kubenet]# ls
apiserver-1.18.1.tar.gz  etcd-3.4.3-0.tar.gz                    kube-proxy-1.18.1.tar.gz      pause-3.2.tar.gz
coredns-1.6.7.tar.gz     kube-controller-manager-1.18.1.tar.gz  kube-scheduler-1.18.1.tar.gz  v2.0.0-rc7.tar.gz
[root@k8s_master kubenet]# docker load <apiserver-1.18.1.tar.gz 
fc4976bd934b: Loading layer [==================================================>]  53.88MB/53.88MB
58d9c9d2174e: Loading layer [==================================================>]  120.7MB/120.7MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.18.1
[root@k8s_master kubenet]# docker load <etcd-3.4.3-0.tar.gz 
fe9a8b4f1dcc: Loading layer [==================================================>]  43.87MB/43.87MB
ce04b89b7def: Loading layer [==================================================>]  224.9MB/224.9MB
1b2bc745b46f: Loading layer [==================================================>]  21.22MB/21.22MB
Loaded image: k8s.gcr.io/etcd:3.4.3-0
[root@k8s_master kubenet]# docker load <kube-proxy-1.18.1.tar.gz 
682fbb19de80: Loading layer [==================================================>]  21.06MB/21.06MB
2dc2f2423ad1: Loading layer [==================================================>]  5.168MB/5.168MB
ad9fb2411669: Loading layer [==================================================>]  4.608kB/4.608kB
597151d24476: Loading layer [==================================================>]  8.192kB/8.192kB
0d8d54147a3a: Loading layer [==================================================>]  8.704kB/8.704kB
310c81aa788d: Loading layer [==================================================>]  38.38MB/38.38MB
Loaded image: k8s.gcr.io/kube-proxy:v1.18.1
[root@k8s_master kubenet]# docker load <pause-3.2.tar.gz 
ba0dae6243cc: Loading layer [==================================================>]  684.5kB/684.5kB
Loaded image: k8s.gcr.io/pause:3.2
[root@k8s_master kubenet]# docker load <coredns-1.6.7.tar.gz 
225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB
c965b38a6629: Loading layer [==================================================>]  43.58MB/43.58MB
Loaded image: k8s.gcr.io/coredns:1.6.7
[root@k8s_master kubenet]# docker load <kube-controller-manager-1.18.1.tar.gz 
13d57fb64a59: Loading layer [==================================================>]  110.1MB/110.1MB
Loaded image: k8s.gcr.io/kube-controller-manager:v1.18.1
[root@k8s_master kubenet]# docker load <kube-scheduler-1.18.1.tar.gz 
b74ff62d98bf: Loading layer [==================================================>]  42.95MB/42.95MB
Loaded image: k8s.gcr.io/kube-scheduler:v1.18.1
[root@k8s_master kubenet]# docker load <flannel-0.12.0.tar.gz

Check the images we just imported:

[root@k8s_master kubenet]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.18.1             4e68534e24f6        13 days ago         117MB
k8s.gcr.io/kube-controller-manager   v1.18.1             d1ccdd18e6ed        13 days ago         162MB
k8s.gcr.io/kube-apiserver            v1.18.1             a595af0107f9        13 days ago         173MB
k8s.gcr.io/kube-scheduler            v1.18.1             6c9320041a7b        13 days ago         95.3MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        2 months ago        683kB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        2 months ago        43.8MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        5 months ago        288MB
quay.io/coreos/flannel               v0.12.0-amd64       4e9f801d2217        5 weeks ago         52.8MB

7. Initialize the Kubernetes cluster

kubeadm init --kubernetes-version=v1.18.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[root@master ~]# kubeadm init --kubernetes-version=v1.18.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
W0422 18:41:24.971483   14853 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0422 18:41:26.045499   14853 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0422 18:41:26.046076   14853 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.003005 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ywhj33.7n34chv9k2ouitvk
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.155:6443 --token nc5pfp.wrp5mutpwb4blufb     --discovery-token-ca-cert-hash sha256:a675056664f0d65bda6ef36d48243b372b5d59e259aa599d34d3f9ef2077f3c2

1) The following steps must be performed

As the root user:

[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf"  >> /etc/profile
[root@master ~]# source /etc/profile

As a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

2) Import the prepared images on the other two node machines

[root@node1 ~]# docker images
REPOSITORY                                             TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                                  v1.18.1             4e68534e24f6        13 days ago         117MB
quay.io/coreos/flannel                                 v0.12.0-amd64       4e9f801d2217        5 weeks ago         52.8MB
k8s.gcr.io/pause                                       3.2                 80d28bedfe5d        2 months ago        683kB
k8s.gcr.io/coredns                                     1.6.7               67da37a9a360        2 months ago        43.8MB

3) Run the join command on the other node hosts

kubeadm join 192.168.0.155:6443 --token nc5pfp.wrp5mutpwb4blufb     --discovery-token-ca-cert-hash sha256:a675056664f0d65bda6ef36d48243b372b5d59e259aa599d34d3f9ef2077f3c2
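The bootstrap token embedded in this command is only valid for 24 hours by default; if it has expired, a fresh join command can be printed on the master with:

kubeadm token create --print-join-command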

Result on node1:

[root@node1 ~]# kubeadm join 192.168.0.155:6443 --token nc5pfp.wrp5mutpwb4blufb     --discovery-token-ca-cert-hash sha256:a675056664f0d65bda6ef36d48243b372b5d59e259aa599d34d3f9ef2077f3c2
W0422 19:08:45.598062    4205 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Result on node2:

[root@node2 pods]# kubeadm join 192.168.0.155:6443 --token nc5pfp.wrp5mutpwb4blufb     --discovery-token-ca-cert-hash sha256:a675056664f0d65bda6ef36d48243b372b5d59e259aa599d34d3f9ef2077f3c2
W0422 19:10:55.127994   17734 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

4) Check the node status

[root@master updates]# kubectl get node
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   master   5m2s   v1.18.2

5) Check the component health status

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}  

8. Add the flannel network component

[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2020-04-22 17:57:12--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.108.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14366 (14K) [text/plain]
Saving to: 'kube-flannel.yml'

kube-flannel.yml                       100%[============================================================================>]  14.03K  25.0KB/s    in 0.6s    

2020-04-22 17:57:15 (25.0 KB/s) - 'kube-flannel.yml' saved [14366/14366]

[root@master ~]# ls
 kube-flannel.yml         
[root@master ~]# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Check that the corresponding pods have started and are running (the result should show the three kube-proxy pods and the flannel network pods running); see the sketch below.
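For example, the following lists the kube-system pods together with the node each one is scheduled on:

kubectl get pods -n kube-system -o wide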

Then query the node status again:

[root@master ~]# kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   16m     v1.18.2
node1    Ready    <none>   3m13s   v1.18.2
node2    Ready    <none>   63s     v1.18.2

 
