k8s cluster deployment v1.15, practice 12: deploying kube-proxy on the worker nodes

Deploying kube-proxy on the worker nodes

Note: the binaries were downloaded and distributed in an earlier step.

1. Create the kube-proxy certificate and key

Create the signing request

[root@k8s-node1 kube-proxy]# cat kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "SZ",
      "L": "SZ",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
[root@k8s-node1 kube-proxy]#

CN: sets the certificate's User to system:kube-proxy. The predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the kube-apiserver's Proxy-related APIs. kube-proxy uses this certificate only as a client certificate, so the hosts field is empty.
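To illustrate how the subject fields map to the authenticated identity (CN becomes the username, O the group), you can inspect a CSR's subject with openssl. This is a standalone sketch using a hypothetical throwaway key, not the cluster's CA:

```shell
# Generate a throwaway key and CSR carrying the same subject fields
# (demo only; it is never signed by the cluster CA).
openssl req -new -newkey rsa:2048 -nodes \
  -keyout demo-kube-proxy.key \
  -subj "/C=CN/ST=SZ/L=SZ/O=k8s/OU=4Paradigm/CN=system:kube-proxy" \
  -out demo-kube-proxy.csr 2>/dev/null

# The apiserver derives the user from the certificate subject:
# CN -> username (system:kube-proxy), O -> group (k8s).
openssl req -in demo-kube-proxy.csr -noout -subject
```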

Generate the certificate and key

[root@k8s-node1 kube-proxy]#  cfssl gencert -ca=/etc/kubernetes/cert/ca.pem -ca-key=/etc/kubernetes/cert/ca-key.pem -config=/etc/kubernetes/cert/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/11/05 21:03:03 [INFO] generate received request
2019/11/05 21:03:03 [INFO] received CSR
2019/11/05 21:03:03 [INFO] generating key: rsa-2048
2019/11/05 21:03:04 [INFO] encoded CSR
2019/11/05 21:03:04 [INFO] signed certificate with serial number 257083627823849004077905552203274968448941860993
2019/11/05 21:03:04 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node1 kube-proxy]# ls
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
[root@k8s-node1 kube-proxy]#

2. Create and distribute the kubeconfig file

Create the kubeconfig file

[root@k8s-node1 kube-proxy]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/cert/ca.pem --embed-certs=true --server=https://192.168.174.127:8443 --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
[root@k8s-node1 kube-proxy]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@k8s-node1 kube-proxy]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
[root@k8s-node1 kube-proxy]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
[root@k8s-node1 kube-proxy]# ls |grep config
kube-proxy.kubeconfig
[root@k8s-node1 kube-proxy]#

Distribute the kubeconfig file

[root@k8s-node1 kube-proxy]# cp kube-proxy.kubeconfig /etc/kubernetes/
[root@k8s-node1 kube-proxy]# scp kube-proxy.kubeconfig root@k8s-node2:/etc/kubernetes/
kube-proxy.kubeconfig                                                                                        100% 6219     5.4MB/s   00:00    
[root@k8s-node1 kube-proxy]# scp kube-proxy.kubeconfig root@k8s-node3:/etc/kubernetes/
kube-proxy.kubeconfig 

3. Create the kube-proxy config file

Template

[root@k8s-node1 kube-proxy]# cat kube-proxy.config.yaml.template 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ##NODE_IP##
clientConnection:
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: ${CLUSTER_CIDR}
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
kind: KubeProxyConfiguration
metricsBindAddress: ##NODE_IP##:10249
mode: "ipvs"
[root@k8s-node1 kube-proxy]#

bindAddress: the address kube-proxy listens on.

clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver.

clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic inside the cluster from traffic outside it; only when --cluster-cidr or --masquerade-all is specified does kube-proxy SNAT requests to Service IPs.

hostnameOverride: must match the value used by kubelet, otherwise kube-proxy will not find its Node after starting and will not create any ipvs rules.

mode: use ipvs mode.
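Since mode is set to ipvs, the ip_vs kernel modules must be available on every node. A minimal pre-flight sketch (the module list below is an assumption based on a typical ipvs setup and varies by kernel version; load missing ones with modprobe as root):

```shell
# Check whether the ipvs-related kernel modules are currently loaded.
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
  if grep -qw "^$mod" /proc/modules 2>/dev/null; then
    echo "$mod: loaded"
  else
    echo "$mod: not loaded (load with: modprobe $mod)"
  fi
done
```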

Substitute the variables

[root@k8s-node1 kube-proxy]# echo ${CLUSTER_CIDR}
172.30.0.0/16
[root@k8s-node1 kube-proxy]# cat kube-proxy.config.yaml.template 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ##NODE_IP##
clientConnection:
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.30.0.0/16
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
kind: KubeProxyConfiguration
metricsBindAddress: ##NODE_IP##:10249
mode: "ipvs"
[root@k8s-node1 kube-proxy]#

Distribute

[root@k8s-node1 kube-proxy]# cp kube-proxy.config.yaml.template /etc/kubernetes/kube-proxy.config.yaml
[root@k8s-node1 kube-proxy]# scp kube-proxy.config.yaml.template root@k8s-node2:/etc/kubernetes/kube-proxy.config.yaml
kube-proxy.config.yaml.template                                                                              100%  315   283.0KB/s   00:00    
[root@k8s-node1 kube-proxy]# scp kube-proxy.config.yaml.template root@k8s-node3:/etc/kubernetes/kube-proxy.config.yaml
kube-proxy.config.yaml.template                                                                              100%  315   326.6KB/s   00:00    
[root@k8s-node1 kube-proxy]#

Replace NODE_IP and NODE_NAME on every node to match that node's IP address and hostname, for example:

sed -i -e 's/##NODE_IP##/192\.168\.174\.128/g'  -e 's/##NODE_NAME##/k8s\-node1/g' /etc/kubernetes/kube-proxy.config.yaml
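Running the sed by hand on every node is error-prone; the substitution can be scripted from one machine. A sketch that renders the template locally per node (the node IP/hostname pairs are the ones used in this walkthrough; the template here is abbreviated to the placeholder lines):

```shell
# Abbreviated template containing only the placeholder lines (demo).
cat > kube-proxy.config.yaml.template <<'EOF'
bindAddress: ##NODE_IP##
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
metricsBindAddress: ##NODE_IP##:10249
EOF

# Render one config per node; in practice scp each result to that node's
# /etc/kubernetes/kube-proxy.config.yaml.
for node in "192.168.174.128 k8s-node1" \
            "192.168.174.129 k8s-node2" \
            "192.168.174.130 k8s-node3"; do
  set -- $node
  ip=$1; name=$2
  sed -e "s/##NODE_IP##/$ip/g" -e "s/##NODE_NAME##/$name/g" \
    kube-proxy.config.yaml.template > "kube-proxy.config.yaml.$name"
done

grep hostnameOverride kube-proxy.config.yaml.k8s-node1
# -> hostnameOverride: k8s-node1
```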

Create and distribute the kube-proxy systemd unit file

[root@k8s-node1 kube-proxy]# cat kube-proxy.service 
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.config.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Manually create the WorkingDirectory (/var/lib/kube-proxy)

[root@k8s-node1 kube-proxy]# mkdir -p /var/lib/kube-proxy
[root@k8s-node1 kube-proxy]# ssh root@k8s-node2 "mkdir -p /var/lib/kube-proxy"
[root@k8s-node1 kube-proxy]# ssh root@k8s-node3 "mkdir -p /var/lib/kube-proxy"

Distribute the files

[root@k8s-node1 kube-proxy]# cp kube-proxy.service /etc/systemd/system
[root@k8s-node1 kube-proxy]# scp kube-proxy.service root@k8s-node2:/etc/systemd/system
kube-proxy.service                                                                                           100%  450   525.1KB/s   00:00    
[root@k8s-node1 kube-proxy]# scp kube-proxy.service root@k8s-node3:/etc/systemd/system
kube-proxy.service 

Add execute permission (note: systemd does not actually require the execute bit on unit files, so this step is optional)

chmod +x -R /etc/systemd/system

4. Start the service

systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy

Startup error

[root@k8s-node1 kubernetes]# cat kube-proxy.ERROR 
Log file created at: 2019/11/05 21:56:48
Running on machine: k8s-node1
Binary: Built with gc go1.12.10 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
F1105 21:56:48.913044   30996 server.go:449] unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined

The cause is a file-format problem; see the correct format below for reference.

[root@k8s-master1 kubernetes]# cat /etc/kubernetes/kube-proxy.config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.211.128
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig  
clusterCIDR: 172.30.0.0/16
healthzBindAddress: 192.168.211.128:10256
hostnameOverride: k8s-master1
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.211.128:10249
mode: "ipvs"
[root@k8s-master1 kubernetes]#
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig

Note the leading spaces on this line: kubeconfig must be indented under clientConnection. Without the indentation, kube-proxy ignores the kubeconfig setting and fails with the error above.
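A quick way to catch this indentation problem before starting the service is to check that the kubeconfig key appears indented (i.e. as a child of clientConnection). A simple grep-based sketch; a real YAML parser would be more robust:

```shell
# Write the broken variant (kubeconfig at top level) and the fixed one.
cat > broken.yaml <<'EOF'
clientConnection:
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
EOF
cat > fixed.yaml <<'EOF'
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
EOF

check_indent() {
  # kubeconfig must appear indented, i.e. nested under clientConnection.
  if grep -qE '^[[:space:]]+kubeconfig:' "$1"; then
    echo "$1: OK"
  else
    echo "$1: kubeconfig is not indented under clientConnection"
  fi
}
check_indent broken.yaml   # -> broken.yaml: kubeconfig is not indented under clientConnection
check_indent fixed.yaml    # -> fixed.yaml: OK
```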

Start it again and the service comes up.

[root@k8s-node1 kubernetes]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-11-05 21:59:54 EST; 8s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 32608 (kube-proxy)
    Tasks: 0
   Memory: 10.6M
   CGroup: /system.slice/kube-proxy.service
           ‣ 32608 /opt/k8s/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.config.yaml --alsologtostderr=true --logtostderr=false --log-...

Nov 05 21:59:54 k8s-node1 kube-proxy[32608]: I1105 21:59:54.931228   32608 config.go:187] Starting service config controller
Nov 05 21:59:54 k8s-node1 kube-proxy[32608]: I1105 21:59:54.931248   32608 controller_utils.go:1029] Waiting for caches to sync for s...troller
Nov 05 21:59:54 k8s-node1 kube-proxy[32608]: I1105 21:59:54.931422   32608 config.go:96] Starting endpoints config controller
Nov 05 21:59:54 k8s-node1 kube-proxy[32608]: I1105 21:59:54.931431   32608 controller_utils.go:1029] Waiting for caches to sync for e...troller
Nov 05 21:59:55 k8s-node1 kube-proxy[32608]: I1105 21:59:55.032212   32608 controller_utils.go:1036] Caches are synced for endpoints ...troller
Nov 05 21:59:55 k8s-node1 kube-proxy[32608]: I1105 21:59:55.032320   32608 proxier.go:748] Not syncing ipvs rules until Services and ... master
Nov 05 21:59:55 k8s-node1 kube-proxy[32608]: I1105 21:59:55.032338   32608 controller_utils.go:1036] Caches are synced for service co...troller
Nov 05 21:59:55 k8s-node1 kube-proxy[32608]: I1105 21:59:55.032376   32608 service.go:332] Adding new service port "default/httpd-svc...:80/TCP
Nov 05 21:59:55 k8s-node1 kube-proxy[32608]: I1105 21:59:55.032393   32608 service.go:332] Adding new service port "default/kubernete...443/TCP
Nov 05 21:59:55 k8s-node1 kube-proxy[32608]: I1105 21:59:55.075261   32608 proxier.go:1797] Opened local port "nodePort for default/h...36/tcp)
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-node1 kubernetes]#