Installing and Using a k8s Cluster on Ubuntu from Scratch, with Error Fixes


This article was first published on my personal Jekyll blog: zhang0peter's personal blog.


I've spent the past few days learning how to install and use K8S, and this post records the process.

It follows the bilibili video tutorial "Two Hours of Kubernetes (K8S): From Clueless to Proficient, a Shortcut to Deploying a Large Distributed Cluster".

Error fixes are collected at the end of the article.

Installing Docker

First install Docker:

curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update && sudo apt install docker-ce
docker run hello-world
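
Note that kubeadm's preflight check later warns that Docker is using the cgroupfs cgroup driver and recommends systemd (the warning shows up in the init output below). You can optionally switch the driver now; a minimal sketch that overwrites /etc/docker/daemon.json, so merge by hand if that file already has content:

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker # restart so the new cgroup driver takes effect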

Installing Kubernetes

Once Docker runs correctly, configure the k8s package source; the Aliyun mirror is recommended:

echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo gpg --keyserver keyserver.ubuntu.com --recv-keys BA07F4FB # fetch the key the packages are signed with
sudo gpg --export --armor BA07F4FB | sudo apt-key add -
sudo apt-get update
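
To confirm the new source works and see which versions it offers (an optional check, not in the original steps):

apt-cache madison kubeadm # lists the kubeadm versions available from the configured sources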

Disabling Swap

sudo swapoff -a # temporary: swap stays off until the next reboot
sudo nano /etc/fstab # permanent: comment out the swap line (recommended)
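
If you'd rather not edit the file by hand, a sed one-liner can comment out the swap entry (a sketch; it keeps a backup in /etc/fstab.bak, and you should verify the file afterwards):

sudo sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab # comment out every line containing a swap mount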

Install the latest version of k8s:

sudo apt-get install kubelet kubeadm kubectl kubernetes-cni
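
Optionally put the packages on hold so a routine apt upgrade doesn't move the cluster to a new version unexpectedly (a common precaution, not part of the original steps):

sudo apt-mark hold kubelet kubeadm kubectl # undo later with: sudo apt-mark unhold kubelet kubeadm kubectl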

Here kubeadm initializes the environment, and kubectl is the command-line client for operating the cluster (it talks to the API server, not to kubelet directly).
Set kubelet to start on boot:

sudo systemctl enable kubelet && sudo systemctl start kubelet

Check the kubectl version (the "connection refused" line below is expected at this point, since no cluster has been initialized yet):

root@ubuntu:/home/ubuntu# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Configuring the k8s Cluster

One VM now has k8s installed. Next, set up two more VMs, three in total, to form a k8s cluster.

The recommended approach is to use VMware's built-in clone feature, which avoids reinstalling everything from scratch.

The three machines are master, node1, and node2.

Configuring the VM Network

In /etc/hostname, set the hostname of the main node to master, of the first worker to node1, and of the second to node2.

Edit /etc/netplan/50-cloud-init.yaml on each machine, replacing the DHCP setup with a static IP:

network:
    ethernets:
        ens33:
            addresses: [192.168.32.134/24]
            dhcp4: false
            gateway4: 192.168.32.2
            nameservers:
                addresses: [192.168.32.2]
            optional: true
    version: 2
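
Apply the new network configuration without rebooting (netplan manages the interface on a stock Ubuntu server):

sudo netplan apply # re-reads /etc/netplan/*.yaml and reconfigures the interface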

Add all three machines to /etc/hosts:

192.168.32.132 master
192.168.32.133 node1
192.168.32.134 node2

After rebooting, the machines should be able to ping each other by name:

ubuntu@node2:~$ ping master
PING master (192.168.32.132) 56(84) bytes of data.
64 bytes from master (192.168.32.132): icmp_seq=1 ttl=64 time=0.837 ms
64 bytes from master (192.168.32.132): icmp_seq=2 ttl=64 time=0.358 ms

Configuring the k8s Network on the Master Node

Create a working directory:

mkdir ~/k8s
cd ~/k8s

Generate the default configuration file:

ubuntu@master:~/k8s$ kubeadm config print init-defaults ClusterConfiguration > kubeadm.conf
W0130 00:57:12.673237    9359 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0130 00:57:12.673539    9359 validation.go:28] Cannot validate kubelet config - no validator is available

Edit the IP addresses in kubeadm.conf:

# set advertiseAddress to the master node's IP address
localAPIEndpoint:
  advertiseAddress: 192.168.32.132
# configure the pod and service networks
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12

Pulling the Images k8s Needs

The official registry k8s.gcr.io is blocked in China, so first list the required images and their versions, then fetch them from a domestic mirror and retag them.

ubuntu@master:~/k8s$ kubeadm config images list --config kubeadm.conf
W0130 01:31:26.536909   15911 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0130 01:31:26.536973   15911 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.0
k8s.gcr.io/kube-controller-manager:v1.17.0
k8s.gcr.io/kube-scheduler:v1.17.0
k8s.gcr.io/kube-proxy:v1.17.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
# download all images the current k8s version depends on
images=(  # strip the "k8s.gcr.io/" prefix and use the versions obtained above
kube-apiserver:v1.17.0
kube-controller-manager:v1.17.0
kube-scheduler:v1.17.0
kube-proxy:v1.17.0
pause:3.1
etcd:3.4.3-0
coredns:1.6.5
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
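
Once the loop finishes, a quick check that all seven images were retagged (an optional verification, not in the original):

docker images | grep k8s.gcr.io # should list apiserver, controller-manager, scheduler, proxy, pause, etcd and coredns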

Starting kubeadm and kubelet

Once the images are pulled, initialize the cluster:

ubuntu@master:~/k8s$ sudo swapoff -a
ubuntu@master:~/k8s$ sudo kubeadm init --config ./kubeadm.conf
W0130 01:33:17.642133   16358 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0130 01:33:17.642176   16358 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
...........
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.32.132:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:b6392e2c7aa72df336e178f3688ba6ca69374937a30a0fe429aaae0ffa76d5f5 

ubuntu@master:~/k8s$   mkdir -p $HOME/.kube
ubuntu@master:~/k8s$   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
ubuntu@master:~/k8s$   sudo chown $(id -u):$(id -g) $HOME/.kube/config

Creating and Starting the System Service

# enable kubelet so it starts on boot
sudo systemctl enable kubelet
# start the kubelet service
sudo systemctl start kubelet

Check the startup status:

ubuntu@master:~/k8s$ kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   7m53s   v1.17.2
ubuntu@master:~/k8s$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}  

So far the cluster has only the master node; next, add the worker nodes.

First set up flannel, the network used for internal communication:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Make sure podSubnet in kubeadm.conf matches the network configured in kube-flannel.yml; a quick way to compare the two is shown below.
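
A minimal check, assuming both files sit in the current directory:

grep podSubnet kubeadm.conf # should print 10.244.0.0/16
grep '"Network"' kube-flannel.yml # the flannel net-conf.json must use the same CIDR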

Apply the manifest:

ubuntu@master:~/k8s$ kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

The master's status changes to Ready:

ubuntu@master:~/k8s$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   39m   v1.17.2

Configuring the Worker Nodes

On each worker node, disable swap and start kubelet:

sudo swapoff -a
sudo systemctl enable kubelet
sudo systemctl start kubelet

Copy the config files from the master to each node:

scp /etc/kubernetes/admin.conf ubuntu@node1:/home/ubuntu/
scp /home/ubuntu/k8s/kube-flannel.yml ubuntu@node1:/home/ubuntu/

scp /etc/kubernetes/admin.conf ubuntu@node2:/home/ubuntu/
scp /home/ubuntu/k8s/kube-flannel.yml ubuntu@node2:/home/ubuntu/

Configure kubectl on each node and join it to the cluster; the token and hash are the ones generated by kubeadm init earlier.

mkdir -p $HOME/.kube
sudo cp -i $HOME/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo kubeadm join 192.168.32.132:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:b6392e2c7aa72df336e178f3688ba6ca69374937a30a0fe429aaae0ffa76d5f5
kubectl apply -f kube-flannel.yml 
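
The join token from kubeadm init expires after 24 hours by default. If it has expired, generate a fresh join command on the master (a standard kubeadm command, not shown in the original walkthrough):

sudo kubeadm token create --print-join-command # prints a complete kubeadm join command with a new token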

Check whether the nodes have joined the k8s cluster (it takes a while for them to become Ready):

ubuntu@master:~$ kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   Ready      master   5h8m    v1.17.2
node1    Ready      <none>   3h21m   v1.17.2
node2    Ready      <none>   3h20m   v1.17.2

If errors come up, see the troubleshooting section at the end.

Deploying an Application

Write the configuration file mysql-rc.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1                                          # desired number of Pod replicas
  selector:
    app: mysql                                         # Pods with this label belong to this RC
  template:                                            # template used to create the Pod replicas
    metadata:
      labels:
        app: mysql                                     # label on each replica; must match the RC selector
    spec:
      containers:                                      # container definitions for the Pod
      - name: mysql                                    # container name
        image: hub.c.163.com/library/mysql             # Docker image for the container
        ports:
        - containerPort: 3306                          # port the application listens on
        env:                                           # environment variables injected into the container
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"

Load the file into the cluster, then wait a few minutes for Docker to finish pulling the image.

ubuntu@master:~/k8s$ kubectl create -f mysql-rc.yaml
replicationcontroller/mysql created
ubuntu@master:~/k8s$ kubectl get pods
NAME          READY   STATUS              RESTARTS   AGE
mysql-chv9n   0/1     ContainerCreating   0          29s
ubuntu@master:~/k8s$ kubectl get pods 
NAME          READY   STATUS    RESTARTS   AGE
mysql-chv9n   1/1     Running   0          5m56s

The cluster is now fully set up.


Troubleshooting

When I first followed the video's configuration, I got this error:

ubuntu@master:~/k8s$ kubeadm config images pull --config ./kubeadm.conf
W0130 01:11:49.990838   11959 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0130 01:11:49.991229   11959 validation.go:28] Cannot validate kubelet config - no validator is available
failed to pull image "registry.cn-beijing.aliyuncs.com/imcto/kube-apiserver:v1.17.0": output: Error response from daemon: manifest for registry.cn-beijing.aliyuncs.com/imcto/kube-apiserver:v1.17.0 not found: manifest unknown: manifest unknown
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

So I switched to a different installation method, the one described earlier in this article.

If swap is not disabled, kubeadm init fails:

ubuntu@master:~/k8s$ sudo kubeadm init --config ./kubeadm.conf
[sudo] password for ubuntu: 
W0130 01:32:14.915442   16070 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0130 01:32:14.915742   16070 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Applying an old version of kube-flannel.yml also fails; download the latest version instead:

ubuntu@master:~/k8s$ kubectl apply -f kube-flannel.yml 
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
unable to recognize "kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

Running kubectl as the wrong user also fails (root has no kubeconfig here, while the ubuntu user does):

root@node2:/home/ubuntu# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
ubuntu@node2:~$ kubectl get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   Ready      master   147m   v1.17.2
node1    NotReady   <none>   40m    v1.17.2
node2    NotReady   <none>   39m    v1.17.2

If a k8s node stays NotReady, check the kubelet logs:

ubuntu@node1:~/.kube$ journalctl -f -u kubelet
-- Logs begin at Tue 2020-01-28 11:02:32 UTC. --
Jan 30 04:25:10 node1 kubelet[1893]: W0130 04:25:10.042232    1893 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 30 04:25:11 node1 kubelet[1893]: E0130 04:25:11.637588    1893 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 30 04:25:11 node1 kubelet[1893]: E0130 04:25:11.637625    1893 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-proxy-pk22k_kube-system(ad0d231e-e5a5-421d-944d-7f860d1119fa)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 30 04:25:11 node1 kubelet[1893]: E0130 04:25:11.637685    1893 kuberuntime_manager.go:729] createPodSandbox for pod "kube-proxy-pk22k_kube-system(ad0d231e-e5a5-421d-944d-7f860d1119fa)" failed: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 30 04:25:11 node1 kubelet[1893]: E0130 04:25:11.637737    1893 pod_workers.go:191] Error syncing pod ad0d231e-e5a5-421d-944d-7f860d1119fa ("kube-proxy-pk22k_kube-system(ad0d231e-e5a5-421d-944d-7f860d1119fa)"), skipping: failed to "CreatePodSandbox" for "kube-proxy-pk22k_kube-system(ad0d231e-e5a5-421d-944d-7f860d1119fa)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-proxy-pk22k_kube-system(ad0d231e-e5a5-421d-944d-7f860d1119fa)\" failed: rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.1\": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 04:25:12 node1 kubelet[1893]: E0130 04:25:12.608103    1893 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Jan 30 04:25:12 node1 kubelet[1893]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Jan 30 04:25:13 node1 kubelet[1893]: E0130 04:25:13.662938    1893 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jan 30 04:25:15 node1 kubelet[1893]: W0130 04:25:15.043972    1893 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 30 04:25:18 node1 kubelet[1893]: E0130 04:25:18.671967    1893 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The log messages point straight at the problem.

Here they show that an image failed to download, so pull it manually.
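
For example, the missing pause image can be fetched with the same Aliyun-mirror trick used earlier, run on each affected node:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1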

After that, the following errors appeared:

ubuntu@node1:~$ journalctl -f -u kubelet
-- Logs begin at Tue 2020-01-28 11:02:32 UTC. --
Jan 30 04:32:26 node1 kubelet[1893]: E0130 04:32:26.252152    1893 pod_workers.go:191] Error syncing pod 9e1020f5-06a0-469b-8340-adff61fb2f56 ("kube-flannel-ds-amd64-rcvjv_kube-system(9e1020f5-06a0-469b-8340-adff61fb2f56)"), skipping: failed to "StartContainer" for "install-cni" with ImagePullBackOff: "Back-off pulling image \"quay.io/coreos/flannel:v0.11.0-amd64\""
Jan 30 04:32:30 node1 kubelet[1893]: E0130 04:32:30.115061    1893 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jan 30 04:32:30 node1 kubelet[1893]: W0130 04:32:30.149915    1893 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 30 04:32:35 node1 kubelet[1893]: E0130 04:32:35.125483    1893 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jan 30 04:32:35 node1 kubelet[1893]: W0130 04:32:35.150265    1893 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 30 04:32:39 node1 kubelet[1893]: E0130 04:32:39.251675    1893 pod_workers.go:191] Error syncing pod 9e1020f5-06a0-469b-8340-adff61fb2f56 ("kube-flannel-ds-amd64-rcvjv_kube-system(9e1020f5-06a0-469b-8340-adff61fb2f56)"), skipping: failed to "StartContainer" for "install-cni" with ImagePullBackOff: "Back-off pulling image \"quay.io/coreos/flannel:v0.11.0-amd64\""
Jan 30 04:32:40 node1 kubelet[1893]: E0130 04:32:40.134950    1893 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jan 30 04:32:40 node1 kubelet[1893]: W0130 04:32:40.151451    1893 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Jan 30 04:32:45 node1 kubelet[1893]: E0130 04:32:45.145834    1893 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jan 30 04:32:45 node1 kubelet[1893]: W0130 04:32:45.151693    1893 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

The error "Unable to update cni config: no networks found in /etc/cni/net.d" didn't tell me where the problem was.

So I looked at the situation in more detail:

ubuntu@master:~$ kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS     RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
kube-flannel-ds-amd64-gtlwv      1/1     Running    4          4h23m   192.168.32.132   master   <none>           <none>
kube-flannel-ds-amd64-m78z2      0/1     Init:0/1   0          3h13m   192.168.32.134   node2    <none>           <none>
kube-flannel-ds-amd64-rcvjv      1/1     Running    1          3h13m   192.168.32.133   node1    <none>           <none>
ubuntu@master:~$ kubectl --namespace kube-system logs kube-flannel-ds-amd64-m78z2
Error from server (BadRequest): container "kube-flannel" in pod "kube-flannel-ds-amd64-m78z2" is waiting to start: PodInitializing
ubuntu@master:~$ kubectl describe pod kube-flannel-ds-amd64-m78z2 --namespace=kube-system
Name:         kube-flannel-ds-amd64-m78z2
Namespace:    kube-system
............................
Events:
  Type     Reason                  Age                     From               Message
  ----     ------                  ----                    ----               -------
  Normal   Scheduled               <unknown>               default-scheduler  Successfully assigned kube-system/kube-flannel-ds-amd64-m78z2 to node2
  Warning  FailedCreatePodSandBox  3h17m (x22 over 3h27m)  kubelet, node2     Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  FailedCreatePodSandBox  139m (x63 over 169m)    kubelet, node2     Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed                  23m (x3 over 26m)       kubelet, node2     Failed to pull image "quay.io/coreos/flannel:v0.11.0-amd64": rpc error: code = Unknown desc = context canceled
  Warning  Failed                  23m (x3 over 26m)       kubelet, node2     Error: ErrImagePull
  Normal   BackOff                 23m (x5 over 26m)       kubelet, node2     Back-off pulling image "quay.io/coreos/flannel:v0.11.0-amd64"
  Warning  Failed                  23m (x5 over 26m)       kubelet, node2     Error: ImagePullBackOff
  Normal   Pulling                 22m (x4 over 30m)       kubelet, node2     Pulling image "quay.io/coreos/flannel:v0.11.0-amd64"
  Normal   SandboxChanged          19m                     kubelet, node2     Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling                 18m                     kubelet, node2     Pulling image "quay.io/coreos/flannel:v0.11.0-amd64"
  Normal   Pulling                 10m                     kubelet, node2     Pulling image "quay.io/coreos/flannel:v0.11.0-amd64"
  Normal   Pulling                 3m4s                    kubelet, node2     Pulling image "quay.io/coreos/flannel:v0.11.0-amd64"

The problem is clearly a failed image pull: Pulling image "quay.io/coreos/flannel:v0.11.0-amd64". Pull it manually from a mirror:

docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 # pull from the Qiniu mirror of quay.io
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64 # retag so kubelet finds it under the original name
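
If the pod doesn't recover on its own once the image is in place, deleting it makes the DaemonSet recreate it immediately (pod name taken from the kubectl output above):

kubectl delete pod -n kube-system kube-flannel-ds-amd64-m78z2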

Problem solved. Reference: k8s 部署問題解決 - 簡書 (Jianshu).
