Deploying a node -- making a pod run on another node in k8s

The weather is nice today, so I decided to get a pod running on another node. The approach I found: put a label on the other node, then have the pod select that label.
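The label half of that plan looks roughly like this once the node is registered. The node name, the role=backend key/value, and the assumption that myweb is a Deployment are all mine, so treat this as a sketch:

# Put an example label on the target node
kubectl label node 192.168.122.234 role=backend

# Have the pod template select that label, e.g. by patching the
# existing myweb Deployment with a nodeSelector
kubectl patch deployment myweb -p '{"spec":{"template":{"spec":{"nodeSelector":{"role":"backend"}}}}}'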

But kubectl get node didn't show this node at all.

So first, let's get this node added. After some searching online, it turns out this comes down to the kubelet configuration. The fix is below:

1: This file has to be created from scratch:

Create the kubelet service unit file.

File location: /etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_API_SERVER \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
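Both EnvironmentFile lines pull variables from files under /etc/kubernetes, and the leading "-" tells systemd to skip a file that doesn't exist. For reference, the shared /etc/kubernetes/config on a stock CentOS kubernetes install looks roughly like this; I'm reconstructing the values to match the --logtostderr=true --v=0 --master=http://192.168.122.168:8080 flags visible in the kube-proxy log further down, so double-check against your own file:

###
# kubernetes system config, shared by the k8s daemons on this host

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_MASTER="--master=http://192.168.122.168:8080"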

2: After saving that, edit the next file:

Note: you need to create the /var/lib/kubelet directory first (see the command below), otherwise kubelet will later fail to start with this error:
Failed at step CHDIR spawning /usr/bin/kubelet: No such file or directory
Also, in the kubelet config file /etc/kubernetes/kubelet, change the IP addresses to each node's own IP address.
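One command takes care of the directory (it matches the WorkingDirectory in the unit file above):

mkdir -p /var/lib/kubelet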

Note: this was exactly my problem. I originally hadn't set this IP to the node's IP, which is why the node's hostname never showed up.

Change the file to the following:

[root@k8s-node system]# cat /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.122.234"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.122.168:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
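After restarting kubelet in the next step, a quick way to confirm which flags it actually picked up from these files:

ps -ef | grep [k]ubelet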

3: Restart the kubelet service:

[root@k8s-node system]# systemctl daemon-reload

[root@k8s-node system]# systemctl enable kubelet
Failed to execute operation: File exists
[root@k8s-node system]# systemctl restart kubelet

The "Failed to execute operation: File exists" from systemctl enable usually just means the unit was already enabled, so it's harmless. Now check the nodes from the master machine:

[root@k8s-master kubernetes]# kubectl get node
NAME              STATUS    AGE
192.168.122.168   Ready     3d
192.168.122.234   Ready     17s

Success!
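To dig into the new node's details (conditions, capacity, and which pods landed on it), describe works too:

kubectl describe node 192.168.122.234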

Now check how the pods are doing:

[root@k8s-master kubernetes]# kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-4144028371-l6d0m   1/1       Unknown   1          18h       172.17.64.2   192.168.122.168
mysql-4144028371-qr1dg   1/1       Running   0          11m       172.17.64.2   192.168.122.234
myweb-3659005716-p6lwf   1/1       Unknown   1          17h       172.17.64.3   192.168.122.168
myweb-3659005716-xmq4d   1/1       Running   0          11m       172.17.64.3   192.168.122.234
[root@k8s-master kubernetes]#

4: To get traffic forwarding for services working, add the kube-proxy service on the node:

Configure kube-proxy.

Create the kube-proxy service unit file.

File path: /etc/systemd/system/kube-proxy.service

[root@k8s-node system]# cat /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
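As with kubelet, the unit pulls extra flags from /etc/kubernetes/proxy. On a stock install that file is nearly empty; a sketch of what I'd expect there (assumed, not copied from my machine):

###
# kubernetes proxy config

# Add your own!
KUBE_PROXY_ARGS=""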

Now restart kube-proxy:

# systemctl daemon-reload
# systemctl enable kube-proxy
# systemctl start kube-proxy
# systemctl status kube-proxy

[root@k8s-node system]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-22 23:09:31 EDT; 5min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 21828 (kube-proxy)
    Tasks: 8
   Memory: 51.4M
   CGroup: /system.slice/kube-proxy.service
           └─21828 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.122.168:8080

Oct 22 23:09:31 k8s-node kube-proxy[21828]: E1022 23:09:31.811947   21828 server.go:421] Can't get Node "k8s-node", assuming iptables proxy, err: nodes "k... not found
Oct 22 23:09:31 k8s-node kube-proxy[21828]: I1022 23:09:31.813840   21828 server.go:215] Using iptables Proxier.
Oct 22 23:09:31 k8s-node kube-proxy[21828]: W1022 23:09:31.815094   21828 server.go:468] Failed to retrieve node info: nodes "k8s-node" not found
Oct 22 23:09:31 k8s-node kube-proxy[21828]: W1022 23:09:31.815185   21828 proxier.go:248] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
Oct 22 23:09:31 k8s-node kube-proxy[21828]: W1022 23:09:31.815192   21828 proxier.go:253] clusterCIDR not specified, unable to distinguish between interna...al traffic
Oct 22 23:09:31 k8s-node kube-proxy[21828]: I1022 23:09:31.815233   21828 server.go:227] Tearing down userspace rules.
Oct 22 23:09:31 k8s-node kube-proxy[21828]: I1022 23:09:31.831103   21828 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Oct 22 23:09:31 k8s-node kube-proxy[21828]: I1022 23:09:31.832439   21828 conntrack.go:66] Setting conntrack hashsize to 32768
Oct 22 23:09:31 k8s-node kube-proxy[21828]: I1022 23:09:31.836280   21828 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Oct 22 23:09:31 k8s-node kube-proxy[21828]: I1022 23:09:31.836344   21828 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Hint: Some lines were ellipsized, use -l to show in full.

There are still some errors in the log. The nodes "k8s-node" not found lines are most likely because kube-proxy looks itself up by the machine's hostname (k8s-node), while the node registered under its IP address via --hostname-override; the clusterCIDR warning is a separate issue. More digging to do.
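If that reading is right, a plausible (untested) fix would be to give kube-proxy the same override, plus a --cluster-cidr matching the pod network (172.17.0.0/16 is a guess based on the pod IPs above), in /etc/kubernetes/proxy:

KUBE_PROXY_ARGS="--hostname-override=192.168.122.234 --cluster-cidr=172.17.0.0/16"

Then systemctl restart kube-proxy and re-check the log.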
