Kubernetes Cluster Installation Guide: Deploying the Master Component kube-controller-manager

The kube-controller-manager cluster consists of 3 nodes. After startup, a leader is chosen through a competitive election; the other nodes remain in a blocked (standby) state. If the leader becomes unavailable, the remaining nodes hold a new election and produce a new leader, which keeps the service highly available.

1 Installation Preparation

Note: all of the operations below are executed on the devops host.

1.1 Environment Variable Definitions

#################### Variable parameter setting ######################
KUBE_NAME=kube-controller-manager
K8S_INSTALL_PATH=/data/apps/k8s/kubernetes
K8S_BIN_PATH=${K8S_INSTALL_PATH}/sbin
K8S_LOG_DIR=${K8S_INSTALL_PATH}/logs
K8S_CONF_PATH=/etc/k8s/kubernetes
KUBE_CONFIG_PATH=/etc/k8s/kubeconfig
CA_DIR=/etc/k8s/ssl
SOFTWARE=/root/software
VERSION=v1.14.2
PACKAGE="kubernetes-server-${VERSION}-linux-amd64.tar.gz"
DOWNLOAD_URL="https://github.com/devops-apps/download/raw/master/kubernetes/${PACKAGE}"
ETCD_ENDPOINTS=https://10.10.10.22:2379,https://10.10.10.23:2379,https://10.10.10.24:2379
ETH_INTERFACE=eth1
LISTEN_IP=$(ifconfig | grep -A 1 ${ETH_INTERFACE} |grep inet |awk '{print $2}')
USER=k8s
SERVICE_CIDR=10.254.0.0/22
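
The ifconfig pipeline above silently yields an empty LISTEN_IP if the interface name is wrong, so a quick sanity check is worthwhile (a small sketch):

if [ -z "${LISTEN_IP}" ]; then
    echo "ERROR: could not determine the IP address of ${ETH_INTERFACE}" >&2
    exit 1
fi
echo "kube-controller-manager will bind to ${LISTEN_IP}"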

1.2 Download and Distribute the Kubernetes Binaries

Download a stable release package from the official Kubernetes GitHub repository to the local machine:

wget  $DOWNLOAD_URL -P $SOFTWARE

Distribute the Kubernetes package to each master node:

sudo ansible master_k8s_vgs -m copy -a "src=${SOFTWARE}/$PACKAGE dest=${SOFTWARE}/" -b
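
Optionally confirm the package is present on every master (a quick check against the same inventory group):

sudo ansible master_k8s_vgs -m shell -a "ls -lh ${SOFTWARE}/${PACKAGE}" -b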

2 Deploying the kube-controller-manager Cluster

2.1 Install the kube-controller-manager Binary

### 1.Check if the install directory exists.
if [ ! -d "$K8S_BIN_PATH" ]; then
     mkdir -p $K8S_BIN_PATH
fi

if [ ! -d "$K8S_LOG_DIR/$KUBE_NAME" ]; then
     mkdir -p $K8S_LOG_DIR/$KUBE_NAME
fi

if [ ! -d "$K8S_CONF_PATH" ]; then
     mkdir -p $K8S_CONF_PATH
fi

if [ ! -d "$KUBE_CONFIG_PATH" ]; then
     mkdir -p $KUBE_CONFIG_PATH
fi

### 2.Install the kube-controller-manager binary of kubernetes.
if [ ! -f "$SOFTWARE/kubernetes-server-${VERSION}-linux-amd64.tar.gz" ]; then
     wget $DOWNLOAD_URL -P $SOFTWARE >>/tmp/install.log  2>&1
fi
cd $SOFTWARE && tar -xzf kubernetes-server-${VERSION}-linux-amd64.tar.gz -C ./
cp -fp kubernetes/server/bin/$KUBE_NAME $K8S_BIN_PATH
ln -sf $K8S_BIN_PATH/$KUBE_NAME /usr/local/bin
chown -R $USER:$USER $K8S_INSTALL_PATH
chmod -R 755 $K8S_INSTALL_PATH
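
A quick check that the binary is installed and linked onto the PATH:

kube-controller-manager --version
# Expected output for this package: Kubernetes v1.14.2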

2.2 Distribute the kubeconfig File and Certificates

Distribute the certificates:

sudo ansible master_k8s_vgs -m synchronize -a \
  "src=${CA_DIR}/kube-controller-manager* \
  dest=${CA_DIR}/ mode=push delete=yes rsync_opts=-avz" -b
Distribute the kubeconfig authentication file:

kube-controller-manager uses a kubeconfig file to connect to and access the apiserver. The file contains the apiserver address, the embedded CA certificate, and the kube-controller-manager client certificate:

sudo ansible master_k8s_vgs -m synchronize -a \
  "src=${KUBE_CONFIG_PATH}/ \
  dest=${KUBE_CONFIG_PATH}/ mode=push delete=yes rsync_opts=-avz" -b

Note: if the kubeconfig and certificate files for each component were already synchronized in an earlier section, this step can be skipped.
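
For reference, the kube-controller-manager.kubeconfig used above is typically produced with kubectl config. A minimal sketch, assuming the apiserver is reachable at https://10.10.10.22:6443 and the certificates carry the names used elsewhere in this guide (adjust both to your environment):

cd ${KUBE_CONFIG_PATH}
# NOTE: the apiserver address below is an assumption for illustration
kubectl config set-cluster kubernetes \
  --certificate-authority=${CA_DIR}/ca.pem \
  --embed-certs=true \
  --server=https://10.10.10.22:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=${CA_DIR}/kube-controller-manager.pem \
  --client-key=${CA_DIR}/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=kube-controller-manager.kubeconfig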

2.3 Create the kube-controller-manager systemd Service

cat >/usr/lib/systemd/system/${KUBE_NAME}.service<<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
User=${USER}
WorkingDirectory=${K8S_INSTALL_PATH}
ExecStart=${K8S_BIN_PATH}/${KUBE_NAME} \\
  --port=10252 \\
  --secure-port=10257 \\
  --bind-address=${LISTEN_IP} \\
  --address=127.0.0.1 \\
  --kubeconfig=${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig \\
  --authentication-kubeconfig=${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig \\
  --authorization-kubeconfig=${KUBE_CONFIG_PATH}/${KUBE_NAME}.kubeconfig \\
  --client-ca-file=${CA_DIR}/ca.pem \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=${CA_DIR}/ca.pem \\
  --cluster-signing-key-file=${CA_DIR}/ca-key.pem \\
  --root-ca-file=${CA_DIR}/ca.pem \\
  --service-account-private-key-file=${CA_DIR}/ca-key.pem \\
  --leader-elect=true \\
  --feature-gates=RotateKubeletServerCertificate=true \\
  --horizontal-pod-autoscaler-use-rest-clients=true \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --concurrent-service-syncs=2 \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --concurrent-gc-syncs=30 \\
  --concurrent-deployment-syncs=10 \\
  --terminated-pod-gc-threshold=10000 \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --requestheader-allowed-names="" \\
  --requestheader-client-ca-file=${CA_DIR}/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --tls-cert-file=${CA_DIR}/kube-controller-manager.pem \\
  --tls-private-key-file=${CA_DIR}/kube-controller-manager-key.pem \\
  --use-service-account-credentials=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=${K8S_LOG_DIR}/${KUBE_NAME} \\
  --flex-volume-plugin-dir=${K8S_INSTALL_PATH}/libexec/kubernetes \\
  --v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
  • --port=10252, --address=127.0.0.1: keep the insecure HTTP port open but bind it to the loopback address only; the apiserver's componentstatuses check (kubectl get cs) probes this port on 127.0.0.1 (see section 2.4);
  • --secure-port=10257, --bind-address=${LISTEN_IP}: serve HTTPS /metrics requests on port 10257 on the node's network interface;
  • --kubeconfig: path to the kubeconfig file kube-controller-manager uses to connect to and authenticate against kube-apiserver;
  • --authentication-kubeconfig and --authorization-kubeconfig: used by kube-controller-manager to connect to the apiserver and to authenticate and authorize client requests. kube-controller-manager no longer uses --tls-ca-file to validate the client certificates of HTTPS metrics requests. If these two kubeconfig parameters are not configured, client requests to the kube-controller-manager HTTPS port are rejected with an insufficient-permissions error;
  • --cluster-signing-*-file: sign the certificates created by TLS Bootstrap;
  • --experimental-cluster-signing-duration: validity period of the TLS Bootstrap certificates (not set in the unit file above, so the default applies);
  • --root-ca-file: the CA certificate placed into container ServiceAccounts, used to verify the kube-apiserver certificate;
  • --service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must be paired with the public key file given by kube-apiserver's --service-account-key-file;
  • --service-cluster-ip-range: the Service Cluster IP range; it must match the parameter of the same name on kube-apiserver;
  • --leader-elect=true: cluster mode; enables leader election. The node elected as leader does the work while the other nodes stay blocked (standby);
  • --controllers=*,bootstrapsigner,tokencleaner: the list of controllers to enable; tokencleaner automatically cleans up expired Bootstrap tokens;
  • --horizontal-pod-autoscaler-*: custom-metrics-related parameters, supporting autoscaling/v2alpha1;
  • --tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over HTTPS;
  • --use-service-account-credentials=true: each controller inside kube-controller-manager uses its own ServiceAccount to access kube-apiserver;
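
With the unit file in place, reload systemd and start the service (a minimal sketch; run it on every master node, for example via ansible's shell or script module):

sudo systemctl daemon-reload
sudo systemctl enable kube-controller-manager
sudo systemctl restart kube-controller-manager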

2.4 Check the Service Status

kube-controller-manager listens on ports 10252 and 10257; both expose /metrics and /healthz.

  • 10252: accepts HTTP requests; insecure port, no authentication or authorization required. For safety it should listen only on 127.0.0.1;
  • 10257: accepts HTTPS requests; secure port, requires authentication and authorization, and may listen on any address;

sudo netstat -ntlp | grep kube-con
tcp  0      0 127.0.0.1:10252         0.0.0.0:*      LISTEN      2450/kube-controlle 
tcp  0      0 10.10.10.22:10257       0.0.0.0:*      LISTEN      2450/kube-controlle 
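
A quick liveness check against both ports (a sketch; the HTTPS call assumes an admin client certificate signed by the cluster CA exists under /etc/k8s/ssl):

# Insecure port: no authentication required, should print "ok"
curl -s http://127.0.0.1:10252/healthz

# Secure port: needs a client certificate trusted by the cluster CA (paths are assumptions)
curl -s --cacert /etc/k8s/ssl/ca.pem \
  --cert /etc/k8s/ssl/admin.pem \
  --key /etc/k8s/ssl/admin-key.pem \
  https://10.10.10.22:10257/healthz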

Note: many installation guides disable the insecure port and expose only the secure HTTPS port. This produces the error shown below when checking cluster status: when you run kubectl get cs, the apiserver sends its health-check requests to 127.0.0.1 over HTTP by default. When controller-manager and scheduler run in cluster mode they may not even be on the same machine as kube-apiserver and are only reachable over HTTPS, so their status is reported as Unhealthy even though they are working normally. The error below is therefore misleading; the cluster itself is healthy.

kubectl get componentstatuses
NAME                 STATUS      MESSAGE    ERROR
controller-manager  Unhealthy  dial tcp  127.0.0.1:10252: connect: connection refused
scheduler          Unhealthy  dial tcp  127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}
etcd-2               Healthy     {"health":"true"}
etcd-1               Healthy     {"health":"true"}

The normal output should be:
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   

Check whether the service is running:

systemctl status kube-controller-manager|grep Active

Make sure the status is active (running); otherwise check the logs to find the cause:

sudo journalctl -u kube-controller-manager

2.5 View the Exported Metrics

Note: the following commands are executed on a kube-controller-manager node.

Access via HTTPS:
curl -s --cacert /etc/k8s/ssl/ca.pem \
  --cert /etc/k8s/ssl/admin.pem \
  --key /etc/k8s/ssl/admin-key.pem \
  https://10.10.10.22:10257/metrics | head

Access via HTTP:
curl -s http://127.0.0.1:10252/metrics |head

2.6 kube-controller-manager Permissions

The ClusterRole system:kube-controller-manager has very limited permissions: it can only create resource objects such as secrets and serviceaccounts. The permissions of the individual controllers are split out into ClusterRoles named system:controller:XXX:

 $ kubectl describe clusterrole system:kube-controller-manager
Name:         system:kube-controller-manager
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources                  Non-Resource URLs  Resource Names  Verbs
  ---------                  -----------------  --------------  -----
  secrets                                   []   []    [create delete get update]
  endpoints                                 []   []    [create get update]
  serviceaccounts                           []   []    [create get update]
  events                                    []   []    [create patch update]
  tokenreviews.authentication.k8s.io        []   []    [create]
  subjectaccessreviews.authorization.k8s.io []   []    [create]
  configmaps                                []   []    [get]
  namespaces                                []   []    [get]
  *.*                                       []   []    [list watch]

The "--use-service-account-credentials=true" flag must be added to the kube-controller-manager startup parameters; the main controller then creates a ServiceAccount named XXX-controller for each controller. The built-in ClusterRoleBinding system:controller:XXX grants each XXX-controller ServiceAccount the permissions of the corresponding ClusterRole system:controller:XXX, as listed below (see the check after the list).

 $ kubectl get clusterrole|grep controller
system:controller:attachdetach-controller                              17d
system:controller:certificate-controller                               17d
system:controller:clusterrole-aggregation-controller                   17d
system:controller:cronjob-controller                                   17d
system:controller:daemon-set-controller                                17d
system:controller:deployment-controller                                17d
system:controller:disruption-controller                                17d
system:controller:endpoint-controller                                  17d
system:controller:expand-controller                                    17d
system:controller:generic-garbage-collector                            17d
system:controller:horizontal-pod-autoscaler                            17d
system:controller:job-controller                                       17d
system:controller:namespace-controller                                 17d
system:controller:node-controller                                      17d
system:controller:persistent-volume-binder                             17d
system:controller:pod-garbage-collector                                17d
system:controller:pv-protection-controller                             17d
system:controller:pvc-protection-controller                            17d
system:controller:replicaset-controller                                17d
system:controller:replication-controller                               17d
system:controller:resourcequota-controller                             17d
system:controller:route-controller                                     17d
system:controller:service-account-controller                           17d
system:controller:service-controller                                   17d
system:controller:statefulset-controller                               17d
system:controller:ttl-controller                                       17d
system:kube-controller-manager                                         17d
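
To see this wiring for one controller, inspect its ServiceAccount in kube-system and the matching ClusterRoleBinding (a quick check, using the deployment controller as an example):

kubectl get serviceaccount deployment-controller -n kube-system
kubectl get clusterrolebinding system:controller:deployment-controller -o yaml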

Take the deployment controller as an example:

$ kubectl describe clusterrole system:controller:deployment-controller
Name:         system:controller:deployment-controller
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources                        Non-Resource URLs  Resource Names  Verbs
  ---------                        -----------------  --------------  -----
  replicasets.apps                 []   []  [create delete get list patch update watch]
  replicasets.extensions           []   []  [create delete get list patch update watch]
  events                           []   []  [create patch update]
  pods                             []   []  [get list update watch]
  deployments.apps                 []   []  [get list update watch]
  deployments.extensions           []   []  [get list update watch]
  deployments.apps/finalizers      []   []  [update]
  deployments.apps/status          []   []  [update]
  deployments.extensions/finalizers []  []  [update]
  deployments.extensions/status    []   []  [update]

2.7 View the Current Leader

kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
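
The leader is recorded in the control-plane.alpha.kubernetes.io/leader annotation of that endpoints object; the following one-liner extracts just the holderIdentity (a sketch, assuming jq is installed):

kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}' | jq -r .holderIdentity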

2.8 Test kube-controller-manager High Availability

Pick one or two master nodes at random, stop the kube-controller-manager service on them, and check whether another node acquires the leader lock, as in the sketch below.
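
A minimal test sketch (run the stop command on the node currently reported as leader in section 2.7):

# On the current leader node, stop the service
sudo systemctl stop kube-controller-manager

# From any node with kubectl access, watch the leader record change;
# holderIdentity should switch to another master within the lease duration
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity

# Restore the service afterwards
sudo systemctl start kube-controller-manager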

References


On controller permissions and the use-service-account-credentials parameter:
https://github.com/kubernetes/kubernetes/issues/48208
Kubelet authentication and authorization:
https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization
