Kubernetes v1.10 Deployment Notes

This article documents a verification deployment based on an excellent deployment tutorial recently published on GitHub. The difference is that the original tutorial uses 3 nodes, while 4 nodes are used here. The GitHub tutorial address is given below, and I recommend following the original author's article for your own experiments. Some of the problems I ran into have been fed back through GitHub issues and discussions, alongside problems reported or discovered by many other readers, most of which have already been corrected in the GitHub tutorial.

https://github.com/opsnull/follow-me-install-kubernetes-cluster

Contents:

Problem log

System initialization and configuration

Create the CA certificate and key

Deploy the kubectl command-line tool

Deploy the etcd cluster

Deploy the flannel network

Deploy the master node

Deploy the node machines

Verify cluster functionality

Deploy cluster add-ons

Problem log:

I am opening with the problem log mainly because you will get better results by following the original GitHub tutorial for your own lab deployment; treat the problems I hit as a reference. The original tutorial is detailed and vivid, focused on the deployment operations themselves; the underlying theory you will need to study separately. The original has also contained and corrected some minor errors over time, so do not trust any single source blindly: when you hit a problem, check what other references say. This article is only my own working notes, kept for later reference.

1. The hairpin_mode setting for the docker NIC

The environment-preparation steps ask you to set hairpin_mode on the docker network interface. It is unclear why this is required before docker is installed, and at that point it genuinely cannot be set, because docker has not been installed yet.

Note: hairpin_mode allows a bridge port to forward frames back out the same port they arrived on; without it, that kind of hairpinned traffic between VMs or containers would have to pass through the physical switch to communicate.

2. Set the kernel parameter net.bridge.bridge-nf-call-iptables=1 (let iptables filter bridged traffic)

Run the following commands on each node:

modprobe br_netfilter

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF

sysctl -p /etc/sysctl.d/kubernetes.conf

The original put modprobe br_netfilter at the end; in practice this command must be run first, otherwise the bridge sysctl keys do not exist yet.

3. The command that grants the kubernetes certificate access to the kubelet API is executed in the wrong order

That command should only be executed after the kube-apiserver service has been started successfully.

4. When deploying kube-apiserver, the certificate signing request used the unparsable domain name kubernetes.default.svc.cluster.local.

This has been confirmed as a domain-name validation bug in Go 1.9. The problem was found and corrected in the latest version of the deployment material on June 29. But the coreDNS deployment failure it caused, with the error below, cost me two days of hunting for an answer!

E0628 08:10:41.256264 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:319: Failed to list *v1.Namespace: Get https://10.254.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: tls: failed to parse certificate from server: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local."

The fix for this bug is described here:

https://github.com/opsnull/follow-me-install-kubernetes-cluster/commit/719e5f01e9dcbf96e1a19159ae68a18c7fa9171b

5. How to access the API with the admin key

Below is the correct way:

curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /home/k8s/admin.pem --key /home/k8s/admin-key.pem https://172.16.10.100:6443/api/v1/endpoints

Note: in the original, the key files were not given as absolute paths, which leads to a file-not-found error.

1. System initialization and configuration

SSH login information:

kube-node1,localhost:2222

kube-node2,localhost:2200

kube-node3,localhost:2201

kube-server,localhost:2202

Edit the /etc/hosts file on every machine to add the hostname-to-IP mappings:

172.16.10.100 kube-server

172.16.10.101 kube-node1

172.16.10.102 kube-node2

172.16.10.103 kube-node3

Add a k8s account on every machine, with passwordless sudo:

useradd -m k8s

visudo

Add the following line at the end:

k8s ALL=(ALL) NOPASSWD: ALL

On all node machines, add a docker account, add the k8s account to the docker group, and configure the dockerd parameters:

useradd -m docker

gpasswd -a k8s docker

mkdir -p /etc/docker/

cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
"max-concurrent-downloads": 20
}
EOF

Allow kube-server to log in to the k8s and root accounts of every node without a password:

[k8s@kube-server k8s]$ ssh-keygen -t rsa

[k8s@kube-server k8s]$ ssh-copy-id root@kube-node1

[k8s@kube-server k8s]$ ssh-copy-id root@kube-node2

[k8s@kube-server k8s]$ ssh-copy-id root@kube-node3

[k8s@kube-server k8s]$ ssh-copy-id k8s@kube-node1

[k8s@kube-server k8s]$ ssh-copy-id k8s@kube-node2

[k8s@kube-server k8s]$ ssh-copy-id k8s@kube-node3

On every machine, add the executable path /opt/k8s/bin to the PATH variable:

[root@kube-server ~]# echo 'PATH=/opt/k8s/bin:$PATH:$HOME/bin:$JAVA_HOME/bin' >>/root/.bashrc

[root@kube-server ~]# su - k8s

[k8s@kube-server ~]$ echo 'PATH=/opt/k8s/bin:$PATH:$HOME/bin:$JAVA_HOME/bin' >>~/.bashrc

Install the dependency packages on every machine:

yum install -y epel-release

yum install -y conntrack ipvsadm ipset jq sysstat curl iptables

Disable the firewall on every machine:

systemctl stop firewalld

systemctl disable firewalld

iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat

iptables -P FORWARD ACCEPT

Disable the swap partition on every machine. Kubernetes requires swap to be off by default since v1.8, mainly to guarantee predictable performance:

swapoff -a

Also edit the /etc/fstab file and comment out the swap entry.

Note: alternatively, you can set the kubelet flag --fail-swap-on to false to tolerate swap being on.
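The fstab edit can be scripted instead of done by hand. A minimal sketch (the disable_swap_entries helper and the demo file are my own; on a real node you would point it at /etc/fstab after taking a backup):

```shell
# Comment out any fstab entry whose filesystem type is "swap".
# Demonstrated on a throwaway copy; on a real node, edit /etc/fstab itself.
disable_swap_entries() {
    local fstab="$1"
    # Prefix non-comment lines containing a whitespace-delimited "swap" field with '#'
    sed -i -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$fstab"
}

tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/dev/mapper/centos-root /       xfs     defaults 0 0
/dev/mapper/centos-swap swap    swap    defaults 0 0
EOF
disable_swap_entries "$tmp"
cat "$tmp"
rm -f "$tmp"
```

After running this, only the swap line is commented out, so the root filesystem entry keeps working on the next boot.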

Set kernel parameters

net.bridge.bridge-nf-call-iptables=1 (let iptables filter bridged traffic)

Run the following commands on each node:

modprobe br_netfilter

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF

sysctl -p /etc/sysctl.d/kubernetes.conf
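Before running sysctl -p it is worth confirming that every required key actually made it into the conf file. A sketch (the check_sysctl_conf helper name is mine; on a real node point it at /etc/sysctl.d/kubernetes.conf):

```shell
# Verify that each required kernel parameter appears with value 1 in a
# sysctl conf file; print any missing key and return nonzero if one is absent.
check_sysctl_conf() {
    local conf="$1" missing=0
    local keys="net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward"
    for key in $keys; do
        if ! grep -q "^${key}=1$" "$conf"; then
            echo "missing or wrong: $key"
            missing=1
        fi
    done
    return $missing
}

# Demo against a throwaway copy of the file written above
tmp=$(mktemp)
printf 'net.bridge.bridge-nf-call-iptables=1\nnet.bridge.bridge-nf-call-ip6tables=1\nnet.ipv4.ip_forward=1\n' > "$tmp"
check_sysctl_conf "$tmp" && echo "all keys present"
rm -f "$tmp"
```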

As the k8s user, create the directories on every machine:

sudo mkdir -p /opt/k8s/bin

sudo chown -R k8s /opt/k8s

sudo mkdir -p /etc/kubernetes/cert

sudo chown -R k8s /etc/kubernetes

sudo mkdir -p /etc/etcd/cert

sudo chown -R k8s /etc/etcd/cert

sudo mkdir -p /var/lib/etcd && sudo chown -R k8s /var/lib/etcd

The deployment steps that follow use the global environment variables defined below; adjust them for your own machines and network:

#!/usr/bin/bash

# Encryption key needed to generate the EncryptionConfig
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# Preferably use currently unused address ranges for the service and Pod networks.

# Service network: unroutable before deployment; routable inside the cluster
# afterwards (guaranteed by kube-proxy and ipvs)
SERVICE_CIDR="10.254.0.0/16"

# Pod network: a /16 range is recommended; unroutable before deployment; routable
# inside the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.30.0.0/16"

# Service port range (NodePort range)
export NODE_PORT_RANGE="8400-9000"

# Array of cluster machine IPs
export NODE_IPS=(172.16.10.101 172.16.10.102 172.16.10.103)

# Array of hostnames corresponding to the cluster IPs
export NODE_NAMES=(kube-node1 kube-node2 kube-node3)

# kube-apiserver node IP
export MASTER_IP=172.16.10.100

# kube-apiserver https address
export KUBE_APISERVER="https://${MASTER_IP}:6443"

# etcd cluster service address list
export ETCD_ENDPOINTS="https://172.16.10.101:2379,https://172.16.10.102:2379,https://172.16.10.103:2379"

# IPs and ports for communication between etcd cluster members
export ETCD_NODES="kube-node1=https://172.16.10.101:2380,kube-node2=https://172.16.10.102:2380,kube-node3=https://172.16.10.103:2380"

# flanneld network configuration prefix
export FLANNEL_ETCD_PREFIX="/kubernetes/network"

# kubernetes service IP (usually the first IP of SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"

# Cluster DNS domain
export CLUSTER_DNS_DOMAIN="cluster.local."

# Add the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:$PATH

Save the content above as /opt/k8s/bin/environment.sh and distribute it to each node:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "mkdir -p /opt/k8s/bin && chown -R k8s /opt/k8s/bin"

scp /opt/k8s/bin/environment.sh k8s@${node_ip}:/opt/k8s/bin/environment.sh

done
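The ENCRYPTION_KEY defined in environment.sh must base64-decode back to exactly 32 bytes, since the aescbc provider used later expects an AES-256 key. A quick sanity check (a sketch; on a real node you would source environment.sh instead of regenerating the key):

```shell
# Generate a key the same way environment.sh does, then verify it
# decodes to exactly 32 bytes (the AES-256 key length aescbc expects).
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
decoded_len=$(printf '%s' "$ENCRYPTION_KEY" | base64 -d | wc -c)
echo "decoded key length: $decoded_len"
[ "$decoded_len" -eq 32 ] && echo "ENCRYPTION_KEY looks valid"
```

If the decoded length is anything other than 32, kube-apiserver will reject the EncryptionConfig at startup, so this one-liner can save a confusing restart loop later.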

2. Create the CA certificate and key

All certificates are created with cfssl, CloudFlare's PKI toolkit.

Install the cfssl toolkit

sudo mkdir -p /opt/k8s/cert && sudo chown -R k8s /opt/k8s && cd /opt/k8s

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

mv cfssl_linux-amd64 /opt/k8s/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo

chmod +x /opt/k8s/bin/*

export PATH=/opt/k8s/bin:$PATH

Create the root certificate (CA)

The CA config file defines the usage profiles of the root certificate and their concrete parameters (usages, expiry, server auth, client auth, encryption, etc.); a specific profile is selected later when signing other certificates.

cat > ca-config.json <<EOF

{

"signing": {

"default": {

"expiry": "87600h"

},

"profiles": {

"kubernetes": {

"usages": [

"signing",

"key encipherment",

"server auth",

"client auth"

],

"expiry": "87600h"

}

}

}

}

EOF

signing: the certificate may be used to sign other certificates; the generated ca.pem will contain CA=TRUE;

server auth: a client may use this certificate to verify certificates presented by servers;

client auth: a server may use this certificate to verify certificates presented by clients;

Create the certificate signing request file

cat > ca-csr.json <<EOF

{

"CN": "kubernetes",

"key": {

"algo": "rsa",

"size": 2048

},

"names": [

{

"C": "CN",

"ST": "BeiJing",

"L": "BeiJing",

"O": "k8s",

"OU": "testcorp"

}

]

}

EOF

CN: Common Name. kube-apiserver extracts this field from the certificate as the request's User Name; browsers use this field to verify whether a site is legitimate;

O: Organization. kube-apiserver extracts this field from the certificate as the Group the requesting user belongs to;

kube-apiserver uses the extracted User (kubernetes) and Group (k8s) as the identity for RBAC authorization;

Generate the CA certificate and private key

[k8s@kube-server ~]$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca

2018/06/25 13:45:47 [INFO] generating a new CA key and certificate from CSR

2018/06/25 13:45:47 [INFO] generate received request

2018/06/25 13:45:47 [INFO] received CSR

2018/06/25 13:45:47 [INFO] generating key: rsa-2048

2018/06/25 13:45:47 [INFO] encoded CSR

2018/06/25 13:45:47 [INFO] signed certificate with serial number 333080208048507116448165428577682216785536827857

[k8s@kube-server ~]$ ls ca*

ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem

[k8s@kube-server ~]$
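To confirm which CN and O a generated certificate actually carries, inspect its Subject with openssl. A sketch (it creates a throwaway self-signed certificate with the same Subject fields as ca-csr.json, standing in for ca.pem; on a real node, point openssl at ca.pem itself):

```shell
# Inspect the Subject of a certificate to see the CN (user name) and
# O (group) that kube-apiserver will extract from it.
workdir=$(mktemp -d)

# Throwaway self-signed cert standing in for ca.pem (same Subject as ca-csr.json)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/C=CN/ST=BeiJing/L=BeiJing/O=k8s/OU=testcorp/CN=kubernetes" \
    -keyout "$workdir/ca-key.pem" -out "$workdir/ca.pem" 2>/dev/null

subject=$(openssl x509 -in "$workdir/ca.pem" -noout -subject)
echo "$subject"
rm -rf "$workdir"
```

The printed subject line should show CN=kubernetes and O=k8s, matching the User and Group discussed above.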

Distribute the certificate files

Copy the generated CA certificate, private key file, and config file to the /etc/kubernetes/cert directory on all nodes:

source /opt/k8s/bin/environment.sh # import the NODE_IPS variable

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp ca*.pem ca-config.json k8s@${node_ip}:/etc/kubernetes/cert

done

3. Deploy the kubectl command-line tool

By default, kubectl reads the kube-apiserver address, certificates, user name, and other information from the ~/.kube/config file.

Download and distribute the kubectl binary

Download and unpack:

wget https://dl.k8s.io/v1.10.4/kubernetes-client-linux-amd64.tar.gz

tar -xzvf kubernetes-client-linux-amd64.tar.gz

Distribute it to all nodes that will use kubectl:

source /opt/k8s/bin/environment.sh # import the NODE_IPS variable

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp kubernetes/client/bin/kubectl k8s@${node_ip}:/opt/k8s/bin/

ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"

done

cp kubernetes/client/bin/kubectl /opt/k8s/bin/

chmod +x /opt/k8s/bin/*

Create the admin certificate and private key

kubectl talks to the apiserver's https secure port; the apiserver authenticates and authorizes the certificate that is presented.

As the cluster's administration tool, kubectl needs to be granted the highest privileges, so here we create an admin certificate with full privileges.

Create the certificate signing request:

cat > admin-csr.json <<EOF

{

"CN": "admin",

"hosts": [],

"key": {

"algo": "rsa",

"size": 2048

},

"names": [

{

"C": "CN",

"ST": "BeiJing",

"L": "BeiJing",

"O": "system:masters",

"OU": "testcorp"

}

]

}

EOF

O is system:masters; on receiving this certificate, kube-apiserver sets the request's Group to system:masters;

the predefined ClusterRoleBinding cluster-admin binds Group system:masters to Role cluster-admin, which grants access to all APIs;

this certificate is only used by kubectl as a client certificate, so the hosts field is empty;

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem -ca-key=/etc/kubernetes/cert/ca-key.pem -config=/etc/kubernetes/cert/ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

ls admin*

Create the kubeconfig file

The kubeconfig is kubectl's configuration file and contains all the information needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate.

source /opt/k8s/bin/environment.sh

Set the cluster parameters:

kubectl config set-cluster kubernetes \

--certificate-authority=/etc/kubernetes/cert/ca.pem \

--embed-certs=true \

--server=${KUBE_APISERVER} \

--kubeconfig=kubectl.kubeconfig

Set the client credentials:

kubectl config set-credentials admin \

--client-certificate=admin.pem \

--client-key=admin-key.pem \

--embed-certs=true \

--kubeconfig=kubectl.kubeconfig

Set the context parameters:

kubectl config set-context kubernetes \

--cluster=kubernetes \

--user=admin \

--kubeconfig=kubectl.kubeconfig

Set the default context:

kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

--certificate-authority: the root certificate used to verify the kube-apiserver certificate;

--client-certificate, --client-key: the admin certificate and private key just generated, used when connecting to kube-apiserver;

--embed-certs=true: embed the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig file (without this flag, only the certificate file paths are written);
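The effect of --embed-certs=true is easy to verify: an embedding kubeconfig carries certificate-authority-data entries instead of certificate-authority file paths. A sketch (the helper and the tiny hand-written sample are my own; on a real node, run the grep against kubectl.kubeconfig):

```shell
# Report whether a kubeconfig embeds certificate data inline
# ("certificate-authority-data") rather than referencing file paths.
kubeconfig_embeds_certs() {
    grep -q 'certificate-authority-data:' "$1"
}

# Demo on a tiny hand-written sample standing in for kubectl.kubeconfig
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTi...
    server: https://172.16.10.100:6443
  name: kubernetes
EOF
kubeconfig_embeds_certs "$tmp" && echo "certs are embedded"
rm -f "$tmp"
```

Embedding matters here because the kubeconfig is copied to other nodes where the original certificate file paths may not exist.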

Distribute the kubeconfig file

Distribute it to all nodes that run kubectl commands:

source /opt/k8s/bin/environment.sh # import the NODE_IPS variable

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh k8s@${node_ip} "mkdir -p ~/.kube"

scp kubectl.kubeconfig k8s@${node_ip}:~/.kube/config

ssh root@${node_ip} "mkdir -p ~/.kube"

scp kubectl.kubeconfig root@${node_ip}:~/.kube/config

done

cp kubectl.kubeconfig /home/k8s/.kube/config

sudo mkdir -p /root/.kube

sudo cp kubectl.kubeconfig /root/.kube/config

It is saved as the user's ~/.kube/config file.

4. Deploy the etcd cluster

We deploy a highly available etcd service cluster on kube-node1, kube-node2, and kube-node3.

Download and distribute the etcd binaries

Download the latest release from the https://github.com/coreos/etcd/releases page:

wget https://github.com/coreos/etcd/releases/download/v3.3.7/etcd-v3.3.7-linux-amd64.tar.gz

tar -xvf etcd-v3.3.7-linux-amd64.tar.gz

Distribute the binaries to all cluster nodes:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp etcd-v3.3.7-linux-amd64/etcd* k8s@${node_ip}:/opt/k8s/bin

ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"

done

Create the etcd certificate and private key

Create the certificate signing request:

cat > etcd-csr.json <<EOF

{

"CN": "etcd",

"hosts": [

"127.0.0.1",

"172.16.10.101",

"172.16.10.102",

"172.16.10.103"

],

"key": {

"algo": "rsa",

"size": 2048

},

"names": [

{

"C": "CN",

"ST": "BeiJing",

"L": "BeiJing",

"O": "k8s",

"OU": "testcorp"

}

]

}

EOF

The hosts field lists the etcd node IPs or domain names authorized to use this certificate; here all three etcd cluster node IPs are listed.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

-ca-key=/etc/kubernetes/cert/ca-key.pem \

-config=/etc/kubernetes/cert/ca-config.json \

-profile=kubernetes etcd-csr.json | cfssljson -bare etcd

Distribute the generated certificate and private key to each etcd node:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "mkdir -p /etc/etcd/cert && chown -R k8s /etc/etcd/cert"

scp etcd*.pem k8s@${node_ip}:/etc/etcd/cert/

done

Create the etcd systemd unit template file

source /opt/k8s/bin/environment.sh

cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]

User=k8s

Type=notify

WorkingDirectory=/var/lib/etcd/

ExecStart=/opt/k8s/bin/etcd \

--data-dir=/var/lib/etcd \

--name=##NODE_NAME## \

--cert-file=/etc/etcd/cert/etcd.pem \

--key-file=/etc/etcd/cert/etcd-key.pem \

--trusted-ca-file=/etc/kubernetes/cert/ca.pem \

--peer-cert-file=/etc/etcd/cert/etcd.pem \

--peer-key-file=/etc/etcd/cert/etcd-key.pem \

--peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \

--peer-client-cert-auth \

--client-cert-auth \

--listen-peer-urls=https://##NODE_IP##:2380 \

--initial-advertise-peer-urls=https://##NODE_IP##:2380 \

--listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \

--advertise-client-urls=https://##NODE_IP##:2379 \

--initial-cluster-token=etcd-cluster-0 \

--initial-cluster=${ETCD_NODES} \

--initial-cluster-state=new

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF

User: run the service as the k8s account;

WorkingDirectory, --data-dir: use /var/lib/etcd as the working and data directory; this directory must be created before starting the service;

--name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;

--cert-file, --key-file: the certificate and private key the etcd server uses when talking to clients;

--peer-client-cert-auth, --client-cert-auth: enable certificate-authenticated, encrypted communication with clients;

--trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;

--peer-cert-file, --peer-key-file: the certificate and private key etcd uses when talking to peers;

--peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them;

Create and distribute the per-node etcd systemd unit files

Substitute the variables in the template to create a systemd unit file for each node:

source /opt/k8s/bin/environment.sh

for (( i=0; i < 3; i++ ))

do

sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service

done

ls *.service

NODE_NAMES and NODE_IPS are bash arrays of equal length, holding the node names and their corresponding IPs.
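The substitution step above can be checked in isolation. A sketch that runs the same sed expressions over a one-line stand-in template instead of the full etcd.service.template:

```shell
# Same per-node substitution as the loop above, demonstrated on a
# one-line stand-in template.
NODE_NAMES=(kube-node1 kube-node2 kube-node3)
NODE_IPS=(172.16.10.101 172.16.10.102 172.16.10.103)

template='--name=##NODE_NAME## --listen-peer-urls=https://##NODE_IP##:2380'
for (( i=0; i < 3; i++ ))
do
    echo "$template" | sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/"
done
# First line printed: --name=kube-node1 --listen-peer-urls=https://172.16.10.101:2380
```

This confirms that the ##NODE_NAME## and ##NODE_IP## placeholders are replaced pairwise, one node per iteration.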

Distribute the generated systemd unit files:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "mkdir -p /var/lib/etcd && chown -R k8s /var/lib/etcd" # create the etcd data and working directory

scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service

done

The file is renamed to etcd.service on each target node.

Start the etcd service

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "source /opt/k8s/bin/environment.sh && systemctl daemon-reload && systemctl enable etcd && systemctl start etcd"

done

When an etcd process starts for the first time it waits for the other nodes' etcd instances to join the cluster, so systemctl start etcd appears to hang for a while; this is normal.
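Rather than watching systemctl hang, you can poll until the service (or any other condition) comes up. A generic sketch (the wait_for helper is my own, not part of the tutorial):

```shell
# Poll a command once per interval until it succeeds or attempts run out.
# Usage: wait_for <attempts> <sleep_seconds> <command...>
wait_for() {
    local attempts=$1 interval=$2; shift 2
    local i
    for (( i = 1; i <= attempts; i++ )); do
        if "$@"; then
            return 0
        fi
        sleep "$interval"
    done
    echo "condition not met after $attempts attempts" >&2
    return 1
}

# On a real node you might wait for etcd to report active, e.g.:
#   wait_for 30 2 systemctl --quiet is-active etcd
wait_for 3 0 true && echo "ok"
```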

Check the startup results

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh k8s@${node_ip} "systemctl status etcd|grep Active"

done

172.16.10.101

Active: active (running) since Mon 2018-06-25 17:17:52 UTC; 58s ago

172.16.10.102

Active: active (running) since Mon 2018-06-25 17:17:52 UTC; 58s ago

172.16.10.103

Active: active (running) since Mon 2018-06-25 17:17:58 UTC; 53s ago

Verify the service status

After the etcd cluster is deployed, run the following on any etcd node:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ETCDCTL_API=3 /opt/k8s/bin/etcdctl \

--endpoints=https://${node_ip}:2379 \

--cacert=/etc/kubernetes/cert/ca.pem \

--cert=/etc/etcd/cert/etcd.pem \

--key=/etc/etcd/cert/etcd-key.pem endpoint health

done

172.16.10.101

https://172.16.10.101:2379 is healthy: successfully committed proposal: took = 1.918083ms

172.16.10.102

https://172.16.10.102:2379 is healthy: successfully committed proposal: took = 2.779171ms

172.16.10.103

https://172.16.10.103:2379 is healthy: successfully committed proposal: took = 2.684327ms

When every endpoint reports healthy, the cluster service is working normally.

5. Deploy the flannel network

Download and distribute the flanneld binaries

Download the latest release from the https://github.com/coreos/flannel/releases page:

mkdir flannel

wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz -C flannel

Distribute the flanneld binaries to all cluster nodes:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp flannel/{flanneld,mk-docker-opts.sh} k8s@${node_ip}:/opt/k8s/bin/

ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"

done

These operations run on the kube-server node, so also copy a set to this node itself:

cp flannel/flanneld flannel/mk-docker-opts.sh /opt/k8s/bin/

chmod +x /opt/k8s/bin/*

Create the flannel certificate and private key

flannel reads and writes subnet allocation information in the etcd cluster, and etcd has mutual x509 certificate authentication enabled, so flanneld needs its own certificate and private key.

Create the certificate signing request:

cat > flanneld-csr.json <<EOF

{

"CN": "flanneld",

"hosts": [],

"key": {

"algo": "rsa",

"size": 2048

},

"names": [

{

"C": "CN",

"ST": "BeiJing",

"L": "BeiJing",

"O": "k8s",

"OU": "testcorp"

}

]

}

EOF

This certificate is only used by flanneld as a client certificate, so the hosts field is empty.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

-ca-key=/etc/kubernetes/cert/ca-key.pem \

-config=/etc/kubernetes/cert/ca-config.json \

-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

ls flanneld*pem

Distribute the generated certificate and private key to all nodes (master and workers):

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "mkdir -p /etc/flanneld/cert && chown -R k8s /etc/flanneld"

scp flanneld*.pem k8s@${node_ip}:/etc/flanneld/cert

done

Distribute a copy to kube-server itself as well:

sudo mkdir -p /etc/flanneld/cert && sudo chown -R k8s /etc/flanneld

cp flanneld*.pem /etc/flanneld/cert

Write the cluster Pod network configuration into etcd

Note: this step only needs to be executed once, on any one node.

source /opt/k8s/bin/environment.sh

etcdctl \

--endpoints=${ETCD_ENDPOINTS} \

--ca-file=/etc/kubernetes/cert/ca.pem \

--cert-file=/etc/flanneld/cert/flanneld.pem \

--key-file=/etc/flanneld/cert/flanneld-key.pem \

set ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'

Expected output, similar to:

{"Network":"172.30.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}

The current flanneld release (v0.10.0) does not support etcd v3, so the configuration key and subnet data are written with the etcd v2 API;

the Pod network ${CLUSTER_CIDR} written here must be a /16 range, and must match the --cluster-cidr parameter of kube-controller-manager;

Create the flanneld systemd unit file

source /opt/k8s/bin/environment.sh

export IFACE=enp0s8 # name of the network interface the nodes use to reach each other

cat > flanneld.service << EOF

[Unit]

Description=Flanneld overlay address etcd agent

After=network.target

After=network-online.target

Wants=network-online.target

After=etcd.service

Before=docker.service

[Service]

Type=notify

ExecStart=/opt/k8s/bin/flanneld \

-etcd-cafile=/etc/kubernetes/cert/ca.pem \

-etcd-certfile=/etc/flanneld/cert/flanneld.pem \

-etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \

-etcd-endpoints=${ETCD_ENDPOINTS} \

-etcd-prefix=${FLANNEL_ETCD_PREFIX} \

-iface=${IFACE}

ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker

Restart=on-failure

[Install]

WantedBy=multi-user.target

RequiredBy=docker.service

EOF

The mk-docker-opts.sh script writes the Pod subnet allocated to flanneld into the /run/flannel/docker file; when docker starts later, it uses the environment variables in this file to configure the docker0 bridge;

flanneld communicates with other nodes over the interface of the system default route; on nodes with multiple interfaces (e.g. internal and public), use the -iface parameter to pick the communication interface;

flanneld needs root privileges at runtime;

Distribute the flanneld systemd unit file to all nodes

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp flanneld.service root@${node_ip}:/etc/systemd/system/

done

Including the kube-server node itself:

sudo cp flanneld.service /etc/systemd/system/

Start the flanneld service

On the kube-server node:

sudo systemctl daemon-reload && sudo systemctl enable flanneld && sudo systemctl start flanneld

Then start the flanneld service on the other node machines, still from kube-server:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl start flanneld"

done

Check the startup results

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh k8s@${node_ip} "systemctl status flanneld|grep Active"

done

Check the Pod subnets allocated to each flanneld

On a node, view the cluster Pod network (/16):

source /opt/k8s/bin/environment.sh

etcdctl \

--endpoints=${ETCD_ENDPOINTS} \

--ca-file=/etc/kubernetes/cert/ca.pem \

--cert-file=/etc/flanneld/cert/flanneld.pem \

--key-file=/etc/flanneld/cert/flanneld-key.pem \

get ${FLANNEL_ETCD_PREFIX}/config

Output:

{"Network":"172.30.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}

List the allocated Pod subnets (/24):

source /opt/k8s/bin/environment.sh

etcdctl \

--endpoints=${ETCD_ENDPOINTS} \

--ca-file=/etc/kubernetes/cert/ca.pem \

--cert-file=/etc/flanneld/cert/flanneld.pem \

--key-file=/etc/flanneld/cert/flanneld-key.pem \

ls ${FLANNEL_ETCD_PREFIX}/subnets

Output:

/kubernetes/network/subnets/172.30.46.0-24

/kubernetes/network/subnets/172.30.49.0-24

/kubernetes/network/subnets/172.30.7.0-24

/kubernetes/network/subnets/172.30.48.0-24

View the node IP and flannel interface address for one Pod subnet:

source /opt/k8s/bin/environment.sh

etcdctl \

--endpoints=${ETCD_ENDPOINTS} \

--ca-file=/etc/kubernetes/cert/ca.pem \

--cert-file=/etc/flanneld/cert/flanneld.pem \

--key-file=/etc/flanneld/cert/flanneld-key.pem \

get ${FLANNEL_ETCD_PREFIX}/subnets/172.30.46.0-24

Output:

{"PublicIP":"172.16.10.100","BackendType":"vxlan","BackendData":{"VtepMAC":"92:dc:8d:eb:f2:bf"}}
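The subnet record is one line of JSON, so the node IP and VTEP MAC can be pulled out with standard tools. A sketch using the output above as sample input (the json_field helper is mine; on a node, pipe the etcdctl get output in instead; jq would be cleaner if available):

```shell
# Extract a string field from a one-line JSON record using sed,
# keeping the example dependency-free (no jq required).
json_field() {
    local field="$1"
    sed -n "s/.*\"${field}\":\"\([^\"]*\)\".*/\1/p"
}

record='{"PublicIP":"172.16.10.100","BackendType":"vxlan","BackendData":{"VtepMAC":"92:dc:8d:eb:f2:bf"}}'
echo "$record" | json_field PublicIP     # 172.16.10.100
echo "$record" | json_field VtepMAC      # 92:dc:8d:eb:f2:bf
```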

Verify that the nodes can reach each other over the Pod network

After flannel is deployed on each node, check that a flannel interface was created (its name may be flannel0, flannel.0, flannel.1, etc.):

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh ${node_ip} "/usr/sbin/ip addr show flannel.1|grep -w inet"

done

Output:

172.16.10.101

inet 172.30.49.0/32 scope global flannel.1

172.16.10.102

inet 172.30.7.0/32 scope global flannel.1

172.16.10.103

inet 172.30.48.0/32 scope global flannel.1

From each node, ping all flannel interface IPs to make sure they are reachable:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh ${node_ip} "ping -c 1 172.30.46.0"

ssh ${node_ip} "ping -c 1 172.30.49.0"

ssh ${node_ip} "ping -c 1 172.30.7.0"

ssh ${node_ip} "ping -c 1 172.30.48.0"

done

6. Deploy the master node

The kubernetes master node runs the following components:

kube-apiserver

kube-scheduler

kube-controller-manager

All three components can run with multiple replicas; kube-scheduler and kube-controller-manager elect a leader to do the work while the other processes block.

Download the latest binaries

Download the server tarball from the CHANGELOG page.

wget https://dl.k8s.io/v1.10.4/kubernetes-server-linux-amd64.tar.gz

tar -xzvf kubernetes-server-linux-amd64.tar.gz

Copy the binaries to all master nodes

Since there is only one master node here:

cp server/bin/* /opt/k8s/bin/

chmod +x /opt/k8s/bin/*

6.1 Deploy the kube-apiserver component

Create the kubernetes certificate and private key

Create the certificate signing request:

source /opt/k8s/bin/environment.sh

cat > kubernetes-csr.json <<EOF

{

"CN": "kubernetes",

"hosts": [

"127.0.0.1",

"172.16.10.100",

"10.254.0.1",

"kubernetes",

"kubernetes.default",

"kubernetes.default.svc",

"kubernetes.default.svc.cluster",

"kubernetes.default.svc.cluster.local"

],

"key": {

"algo": "rsa",

"size": 2048

},

"names": [

{

"C": "CN",

"ST": "BeiJing",

"L": "BeiJing",

"O": "k8s",

"OU": "testcorp"

}

]

}

EOF

The hosts field lists the IPs and domain names authorized to use this certificate; here it contains the apiserver node IP, the kubernetes service IP, and its domain names;

the last character of a domain name must not be . (e.g. kubernetes.default.svc.cluster.local. is invalid), otherwise parsing fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.";

if you use a domain other than cluster.local, such as opsnull.com, change the last two names in the list to kubernetes.default.svc.opsnull and kubernetes.default.svc.opsnull.com;

the kubernetes service IP is created automatically by the apiserver, and is usually the first IP of the range given by --service-cluster-ip-range;
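Given how much time the trailing-dot bug cost (problem 4 above), it is worth linting the CSR before generating certificates. A sketch (the check_csr_hosts helper is my own; point it at kubernetes-csr.json):

```shell
# Flag any quoted host entry in a CSR json that ends with a dot,
# which Go 1.9's dnsName validation rejects (see problem 4 above).
check_csr_hosts() {
    local csr="$1"
    if grep -nE '"[A-Za-z0-9.-]+\."' "$csr"; then
        echo "ERROR: host names must not end with '.'" >&2
        return 1
    fi
    return 0
}

# Demo on a throwaway file containing the bad form
tmp=$(mktemp)
printf '"hosts": [ "kubernetes.default.svc.cluster.local." ]\n' > "$tmp"
check_csr_hosts "$tmp" || echo "bad host entry detected"
rm -f "$tmp"
```

Running this before cfssl gencert turns a two-day coreDNS debugging session into a one-line error message.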

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

-ca-key=/etc/kubernetes/cert/ca-key.pem \

-config=/etc/kubernetes/cert/ca-config.json \

-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

ls kubernetes*

Copy the generated certificate and private key files to the master node:

sudo mkdir -p /etc/kubernetes/cert/ && sudo chown -R k8s /etc/kubernetes/cert/

cp kubernetes*.pem /etc/kubernetes/cert/

Note: the k8s account must be able to read and write the /etc/kubernetes/cert/ directory;

Create the encryption config file

source /opt/k8s/bin/environment.sh

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Copy the encryption config file into the /etc/kubernetes directory on the master node:

cp encryption-config.yaml /etc/kubernetes/

Create and distribute the kube-apiserver systemd unit file

source /opt/k8s/bin/environment.sh

cat > kube-apiserver.service <<EOF

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

[Service]

ExecStart=/opt/k8s/bin/kube-apiserver \

--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \

--anonymous-auth=false \

--experimental-encryption-provider-config=/etc/kubernetes/encryption-config.yaml \

--advertise-address=${MASTER_IP} \

--bind-address=${MASTER_IP} \

--insecure-port=0 \

--authorization-mode=Node,RBAC \

--runtime-config=api/all \

--enable-bootstrap-token-auth \

--service-cluster-ip-range=${SERVICE_CIDR} \

--service-node-port-range=${NODE_PORT_RANGE} \

--tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \

--tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \

--client-ca-file=/etc/kubernetes/cert/ca.pem \

--kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \

--kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \

--service-account-key-file=/etc/kubernetes/cert/ca-key.pem \

--etcd-cafile=/etc/kubernetes/cert/ca.pem \

--etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \

--etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \

--etcd-servers=${ETCD_ENDPOINTS} \

--enable-swagger-ui=true \

--allow-privileged=true \

--apiserver-count=3 \

--audit-log-maxage=30 \

--audit-log-maxbackup=3 \

--audit-log-maxsize=100 \

--audit-log-path=/var/log/audit.log \

--event-ttl=1h \

--v=2

Restart=on-failure

RestartSec=5

Type=notify

User=k8s

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF

--experimental-encryption-provider-config: enable the encryption-at-rest feature;

--authorization-mode=Node,RBAC: enable the Node and RBAC authorization modes, rejecting unauthorized requests;

--enable-admission-plugins: enable admission plugins including ServiceAccount and NodeRestriction;

--service-account-key-file: the public key used to verify ServiceAccount token signatures; it pairs with the private key given to kube-controller-manager via --service-account-private-key-file;

--tls-*-file: the certificate, private key, and CA files used by the apiserver; --client-ca-file verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);

--kubelet-client-certificate, --kubelet-client-key: if specified, the apiserver accesses the kubelet APIs over https; RBAC rules must be defined for the kubernetes user, otherwise it has no access to the kubelet API;

--bind-address: must not be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside;

--insecure-port=0: close the insecure port (8080);

--service-cluster-ip-range: the Service cluster IP address range;

--service-node-port-range: the NodePort port range;

--runtime-config=api/all=true: enable APIs of all versions, such as autoscaling/v2alpha1;

--enable-bootstrap-token-auth: enable token authentication for kubelet bootstrap;

--apiserver-count=3: the number of kube-apiserver instances running in the cluster (the apiserver is stateless, so all instances serve requests concurrently);

User=k8s: run as the k8s account;

Distribute the systemd unit file to the master node:

sudo cp kube-apiserver.service /etc/systemd/system/

Start the kube-apiserver service

sudo su -

mkdir -p /var/run/kubernetes && chown -R k8s /var/run/kubernetes

systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver

Check the service status:

[k8s@kube-server ~]$ sudo systemctl status kube-apiserver |grep 'Active:'

Active: active (running) since Tue 2018-06-26 03:38:43 UTC; 4min 14s ago

Grant the kubernetes certificate access to the kubelet API

When running kubectl exec, run, logs, and similar commands, the apiserver forwards the request to the kubelet. Here we define an RBAC rule authorizing the apiserver to call the kubelet APIs.

[k8s@kube-server ~]$ source /opt/k8s/bin/environment.sh

[k8s@kube-server ~]$ kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

clusterrolebinding.rbac.authorization.k8s.io "kube-apiserver:kubelet-apis" created

Print the data kube-apiserver has written into etcd

source /opt/k8s/bin/environment.sh

ETCDCTL_API=3 etcdctl \

--endpoints=${ETCD_ENDPOINTS} \

--cacert=/etc/kubernetes/cert/ca.pem \

--cert=/etc/etcd/cert/etcd.pem \

--key=/etc/etcd/cert/etcd-key.pem \

get /registry/ --prefix --keys-only

Check the cluster information

[k8s@kube-server ~]$ kubectl cluster-info

Kubernetes master is running at https://172.16.10.100:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[k8s@kube-server ~]$ kubectl get all --all-namespaces

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

default service/kubernetes ClusterIP 10.254.0.1 443/TCP 49m

[k8s@kube-server ~]$ kubectl get componentstatuses

NAME STATUS MESSAGE ERROR

scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused

controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: getsockopt: connection refused

etcd-1 Healthy {"health":"true"}

etcd-2 Healthy {"health":"true"}

etcd-0 Healthy {"health":"true"}

Notes:

  1. If running a kubectl command prints the following error, the ~/.kube/config file in use is wrong; switch to the correct account and run the command again:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

  2. When running kubectl get componentstatuses, the apiserver sends the health checks to 127.0.0.1 by default. When controller-manager and scheduler run in cluster mode, they may not be on the same machine as kube-apiserver, in which case their status shows Unhealthy even though they are actually working fine.

Check the ports kube-apiserver listens on

[k8s@kube-server ~]$ sudo netstat -lnpt|grep kube

tcp 0 0 172.16.10.100:6443 0.0.0.0:* LISTEN 11066/kube-apiserve

6443: the secure port that receives https requests; all requests are authenticated and authorized;

since the insecure port is disabled, nothing listens on 8080;

6.2 Deploy a highly available kube-controller-manager cluster

This cluster has 3 members, built on node1, node2, and node3. After startup, a leader is elected through competition while the other nodes block. When the leader becomes unavailable, the remaining nodes elect a new leader, keeping the service available.

For secure communication, this document first generates an x509 certificate and private key. kube-controller-manager uses the certificate in two situations:

  1. when talking to kube-apiserver's secure port;
  2. when serving prometheus-format metrics on its secure https port (10252);

Create the kube-controller-manager certificate and private key

Create the certificate signing request:

cat > kube-controller-manager-csr.json <<EOF

{

"CN": "system:kube-controller-manager",

"key": {

"algo": "rsa",

"size": 2048

},

"hosts": [

"127.0.0.1",

"172.16.10.101",

"172.16.10.102",

"172.16.10.103"

],

"names": [

{

"C": "CN",

"ST": "BeiJing",

"L": "BeiJing",

"O": "system:kube-controller-manager",

"OU": "testcorp"

}

]

}

EOF

The hosts list contains all kube-controller-manager node IPs;

CN and O are both system:kube-controller-manager; the built-in kubernetes ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to work.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

-ca-key=/etc/kubernetes/cert/ca-key.pem \

-config=/etc/kubernetes/cert/ca-config.json \

-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Distribute the generated certificate and private key to all kube-controller-manager nodes:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp kube-controller-manager*.pem k8s@${node_ip}:/etc/kubernetes/cert/

done

Create and distribute the kubeconfig file

The kubeconfig file contains all the information needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate.

source /opt/k8s/bin/environment.sh

kubectl config set-cluster kubernetes \

--certificate-authority=/etc/kubernetes/cert/ca.pem \

--embed-certs=true \

--server=${KUBE_APISERVER} \

--kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \

--client-certificate=kube-controller-manager.pem \

--client-key=kube-controller-manager-key.pem \

--embed-certs=true \

--kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \

--cluster=kubernetes \

--user=system:kube-controller-manager \

--kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Distribute the kubeconfig to all kube-controller-manager nodes:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp kube-controller-manager.kubeconfig k8s@${node_ip}:/etc/kubernetes/

done

Create and distribute the kube-controller-manager systemd unit file

source /opt/k8s/bin/environment.sh

cat > kube-controller-manager.service <<EOF

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]

ExecStart=/opt/k8s/bin/kube-controller-manager \

--port=0 \

--secure-port=10252 \

--bind-address=127.0.0.1 \

--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \

--service-cluster-ip-range=${SERVICE_CIDR} \

--cluster-name=kubernetes \

--cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \

--cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \

--experimental-cluster-signing-duration=8760h \

--root-ca-file=/etc/kubernetes/cert/ca.pem \

--service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \

--leader-elect=true \

--feature-gates=RotateKubeletServerCertificate=true \

--controllers=*,bootstrapsigner,tokencleaner \

--horizontal-pod-autoscaler-use-rest-clients=true \

--horizontal-pod-autoscaler-sync-period=10s \

--tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \

--tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \

--use-service-account-credentials=true \

--v=2

Restart=on-failure

RestartSec=5

User=k8s

[Install]

WantedBy=multi-user.target

EOF

--port=0: disables the insecure http /metrics listener; with this setting --address is ignored and --bind-address takes effect;

--secure-port=10252, --bind-address=127.0.0.1: serve https /metrics requests on 127.0.0.1:10252 (matching the unit file above; set --bind-address=0.0.0.0 to listen on all interfaces);

--kubeconfig: path to the kubeconfig file that kube-controller-manager uses to connect to and authenticate against kube-apiserver;

--cluster-signing-*-file: the CA certificate and key used to sign certificates requested via TLS Bootstrap;

--experimental-cluster-signing-duration: validity period of the TLS Bootstrap certificates;

--root-ca-file: the CA certificate placed into each container's ServiceAccount, used to verify the kube-apiserver certificate;

--service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must pair with the public key passed to kube-apiserver via --service-account-key-file;

--service-cluster-ip-range: the Service Cluster IP range; must match the flag of the same name on kube-apiserver;

--leader-elect=true: clustered mode of operation; enables leader election, where the elected leader does the work and the other nodes stay blocked;

--feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;

--controllers=*,bootstrapsigner,tokencleaner: the list of controllers to enable; tokencleaner automatically removes expired Bootstrap tokens;

--horizontal-pod-autoscaler-*: custom-metrics parameters, supporting autoscaling/v2alpha1;

--tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over https;

--use-service-account-credentials=true: run each controller with its own ServiceAccount credentials (see the permissions section below);

User=k8s: run as the k8s user;

kube-controller-manager does not verify client certificates on https metrics requests, so no --tls-ca-file flag is needed; that flag has in any case been deprecated.

Distribute the systemd unit file to all kube-controller-manager nodes:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp kube-controller-manager.service root@${node_ip}:/etc/systemd/system/

done

Permissions of kube-controller-manager

The ClusterRole system:kube-controller-manager grants only minimal permissions (creating secrets, serviceaccounts, and a few other objects); the per-controller permissions are split out into the ClusterRoles system:controller:XXX.

Adding --use-service-account-credentials=true to the kube-controller-manager startup flags makes the main controller create a ServiceAccount XXX-controller for each controller.

The built-in ClusterRoleBindings system:controller:XXX then grant each XXX-controller ServiceAccount the corresponding ClusterRole system:controller:XXX.

Start the kube-controller-manager service

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"

done

Check the service status

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh k8s@${node_ip} "systemctl status kube-controller-manager|grep Active"

done

Output:

172.16.10.101

Active: active (running) since Tue 2018-06-26 05:15:00 UTC; 21s ago

172.16.10.102

Active: active (running) since Tue 2018-06-26 05:15:01 UTC; 20s ago

172.16.10.103

Active: active (running) since Tue 2018-06-26 05:15:02 UTC; 20s ago

View the exported metrics

Note: run the following commands on a kube-controller-manager node.

kube-controller-manager listens on port 10252 for https requests:

[k8s@kube-node1 system]$ sudo netstat -lnpt|grep kube-controll

tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 28523/kube-controll

[k8s@kube-node1 system]$ curl -s --cacert /etc/kubernetes/cert/ca.pem https://127.0.0.1:10252/metrics |head

# HELP ClusterRoleAggregator_adds Total number of adds handled by workqueue: ClusterRoleAggregator

# TYPE ClusterRoleAggregator_adds counter

ClusterRoleAggregator_adds 9

# HELP ClusterRoleAggregator_depth Current depth of workqueue: ClusterRoleAggregator

# TYPE ClusterRoleAggregator_depth gauge

ClusterRoleAggregator_depth 0

# HELP ClusterRoleAggregator_queue_latency How long an item stays in workqueue ClusterRoleAggregator before being requested.

# TYPE ClusterRoleAggregator_queue_latency summary

ClusterRoleAggregator_queue_latency{quantile="0.5"} 304

ClusterRoleAggregator_queue_latency{quantile="0.9"} 73770

[k8s@kube-node1 system]$
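Prometheus text-format output like the above is easy to post-process in the shell. A small sketch that pulls one counter's value out of sample scrape output (the metrics text is inlined here as sample data; against a live node you would pipe in the curl command shown above instead):

```shell
# Sample Prometheus text-format scrape output ("#" lines are metadata).
metrics='# HELP ClusterRoleAggregator_adds Total number of adds handled by workqueue: ClusterRoleAggregator
# TYPE ClusterRoleAggregator_adds counter
ClusterRoleAggregator_adds 9
# TYPE ClusterRoleAggregator_depth gauge
ClusterRoleAggregator_depth 0'

# Drop comment lines, then print the value column of the wanted metric.
adds=$(printf '%s\n' "$metrics" | grep -v '^#' | awk '$1 == "ClusterRoleAggregator_adds" {print $2}')

echo "adds=$adds"
```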

Note: the CA certificate passed to curl --cacert is used to verify the kube-controller-manager https server certificate.

Test the high availability of the kube-controller-manager cluster

Stop the kube-controller-manager service on one or two nodes, then watch the logs of the other nodes to see whether one of them acquires leadership.

View the current leader

[k8s@kube-server ~]$ kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml

apiVersion: v1

kind: Endpoints

metadata:

annotations:

control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube-node1_e213fdca-78ff-11e8-bb39-080027395360","leaseDurationSeconds":15,"acquireTime":"2018-06-26T05:15:03Z","renewTime":"2018-06-26T05:18:20Z","leaderTransitions":0}'

creationTimestamp: 2018-06-26T05:15:04Z

name: kube-controller-manager

namespace: kube-system

resourceVersion: "340"

selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager

uid: e2192a53-78ff-11e8-b2bd-080027395360

As shown, the current leader is the kube-node1 node.
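The leader's identity can be pulled out of that annotation with ordinary shell tools. A small sketch (the annotation JSON is pasted in as sample data; against a live cluster you would feed in the output of the kubectl command above instead):

```shell
# Sample leader-election annotation, as written by kube-controller-manager.
annotation='{"holderIdentity":"kube-node1_e213fdca-78ff-11e8-bb39-080027395360","leaseDurationSeconds":15,"acquireTime":"2018-06-26T05:15:03Z","renewTime":"2018-06-26T05:18:20Z","leaderTransitions":0}'

# holderIdentity has the form "<node>_<uid>": strip the JSON wrapper, then the uid suffix.
holder=$(printf '%s' "$annotation" | sed -n 's/.*"holderIdentity":"\([^"]*\)".*/\1/p')
leader=${holder%%_*}

echo "current leader: $leader"
```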

On kube-node1, stop the kube-controller-manager service and then observe again:

[k8s@kube-server ~]$ kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml

apiVersion: v1

kind: Endpoints

metadata:

annotations:

control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube-node3_e2895c01-78ff-11e8-8ff5-080027395360","leaseDurationSeconds":15,"acquireTime":"2018-06-26T05:19:38Z","renewTime":"2018-06-26T05:19:38Z","leaderTransitions":1}'

creationTimestamp: 2018-06-26T05:15:04Z

name: kube-controller-manager

namespace: kube-system

resourceVersion: "372"

selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager

uid: e2192a53-78ff-11e8-b2bd-080027395360

[k8s@kube-server ~]$

As shown, the leader has moved to the kube-node3 node.

6.3 Deploying a highly available kube-scheduler cluster

This section describes how to deploy a highly available kube-scheduler cluster.

The cluster consists of three nodes: kube-node1, kube-node2, and kube-node3. After startup, a leader is chosen by competitive election and the other nodes stay blocked. When the leader becomes unavailable, the remaining nodes elect a new leader, keeping the service available.

To secure communication, this section first generates an x509 certificate and private key. kube-scheduler uses the certificate in two cases:

  1. Communicating with the secure port of kube-apiserver;
  2. Serving prometheus-format metrics on the secure port (https, 10251);

Create the kube-scheduler certificate and private key

Create the certificate signing request:

cat > kube-scheduler-csr.json <<EOF

{

"CN": "system:kube-scheduler",

"hosts": [

"127.0.0.1",

"172.16.10.101",

"172.16.10.102",

"172.16.10.103"

],

"key": {

"algo": "rsa",

"size": 2048

},

"names": [

{

"C": "CN",

"ST": "BeiJing",

"L": "BeiJing",

"O": "system:kube-scheduler",

"OU": "testcorp"

}

]

}

EOF

The hosts list contains the IPs of all kube-scheduler nodes;

The CN is system:kube-scheduler and the O is system:kube-scheduler; the built-in Kubernetes ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

-ca-key=/etc/kubernetes/cert/ca-key.pem \

-config=/etc/kubernetes/cert/ca-config.json \

-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Create and distribute the kubeconfig file

The kubeconfig file contains everything needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate.

source /opt/k8s/bin/environment.sh

kubectl config set-cluster kubernetes \

--certificate-authority=/etc/kubernetes/cert/ca.pem \

--embed-certs=true \

--server=${KUBE_APISERVER} \

--kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \

--client-certificate=kube-scheduler.pem \

--client-key=kube-scheduler-key.pem \

--embed-certs=true \

--kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \

--cluster=kubernetes \

--user=system:kube-scheduler \

--kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

The certificate, private key, and kube-apiserver address created in the previous steps are written into the kubeconfig file.

Distribute the kubeconfig to all kube-scheduler nodes:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp kube-scheduler.kubeconfig k8s@${node_ip}:/etc/kubernetes/

done

Create and distribute the kube-scheduler systemd unit file

cat > kube-scheduler.service <<EOF

[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]

ExecStart=/opt/k8s/bin/kube-scheduler \

--address=127.0.0.1 \

--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \

--leader-elect=true \

--v=2

Restart=on-failure

RestartSec=5

User=k8s

[Install]

WantedBy=multi-user.target

EOF

--address: serve http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support https;

--kubeconfig: path to the kubeconfig file that kube-scheduler uses to connect to and authenticate against kube-apiserver;

--leader-elect=true: clustered mode of operation; enables leader election, where the elected leader does the work and the other nodes stay blocked;

User=k8s: run as the k8s user;

Distribute the systemd unit file to all kube-scheduler nodes:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp kube-scheduler.service root@${node_ip}:/etc/systemd/system/

done

Start the kube-scheduler service

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler"

done

Check the service status

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh k8s@${node_ip} "systemctl status kube-scheduler|grep Active"

done

Output:

172.16.10.101

Active: active (running) since Tue 2018-06-26 05:36:31 UTC; 17s ago

172.16.10.102

Active: active (running) since Tue 2018-06-26 05:36:32 UTC; 16s ago

172.16.10.103

Active: active (running) since Tue 2018-06-26 05:36:33 UTC; 16s ago

[k8s@kube-server ~]$

View the exported metrics

Note: run the following commands on a kube-scheduler node.

kube-scheduler listens on port 10251 for http requests:

[k8s@kube-node1 system]$ sudo netstat -lnpt|grep kube-sche

tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 30200/kube-schedule

[k8s@kube-node1 system]$ curl -s http://127.0.0.1:10251/metrics |head

# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.

# TYPE apiserver_audit_event_total counter

apiserver_audit_event_total 0

# HELP go_gc_duration_seconds A summary of the GC invocation durations.

# TYPE go_gc_duration_seconds summary

go_gc_duration_seconds{quantile="0"} 1.1776e-05

go_gc_duration_seconds{quantile="0.25"} 1.2644e-05

go_gc_duration_seconds{quantile="0.5"} 1.7374e-05

go_gc_duration_seconds{quantile="0.75"} 2.2085e-05

go_gc_duration_seconds{quantile="1"} 4.5083e-05

[k8s@kube-node1 system]$

Test the high availability of the kube-scheduler cluster

Pick one or two master nodes, stop the kube-scheduler service on them, and check (via the systemd logs) whether another node acquires leadership.

View the current leader

[k8s@kube-node1 system]$ kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml

apiVersion: v1

kind: Endpoints

metadata:

annotations:

control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube-node1_e1d9a10f-7902-11e8-814d-080027395360","leaseDurationSeconds":15,"acquireTime":"2018-06-26T05:36:33Z","renewTime":"2018-06-26T05:38:42Z","leaderTransitions":0}'

creationTimestamp: 2018-06-26T05:36:33Z

name: kube-scheduler

namespace: kube-system

resourceVersion: "1008"

selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler

uid: e2769a5d-7902-11e8-b2bd-080027395360

As shown, the current leader is the kube-node1 node.

7. Deploying the node components

Each Kubernetes node runs the following components:

docker

kubelet

kube-proxy

7.1 Deploying docker

kubelet interacts with docker through the Container Runtime Interface (CRI).

Download and distribute the docker binaries

Download the latest release from https://download.docker.com/linux/static/stable/x86_64/:

wget https://download.docker.com/linux/static/stable/x86_64/docker-18.03.1-ce.tgz

tar -xvf docker-18.03.1-ce.tgz

Distribute the binaries to all worker nodes:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp docker/docker* k8s@${node_ip}:/opt/k8s/bin/

ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"

done

Create and distribute the systemd unit file

cat > docker.service <<"EOF"

[Unit]

Description=Docker Application Container Engine

Documentation=http://docs.docker.io

[Service]

Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"

EnvironmentFile=-/run/flannel/docker

ExecStart=/opt/k8s/bin/dockerd --log-level=error $DOCKER_NETWORK_OPTIONS

ExecReload=/bin/kill -s HUP $MAINPID

Restart=on-failure

RestartSec=5

LimitNOFILE=infinity

LimitNPROC=infinity

LimitCORE=infinity

Delegate=yes

KillMode=process

[Install]

WantedBy=multi-user.target

EOF

The delimiter EOF is quoted so that bash does not substitute variables inside the here-document, such as $DOCKER_NETWORK_OPTIONS;
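The effect of quoting the here-document delimiter can be seen in isolation (the variable value below is a hypothetical example, chosen only to illustrate the quoting rule):

```shell
DOCKER_NETWORK_OPTIONS="--bip=172.30.49.1/24"

# Unquoted delimiter: the shell expands variables inside the here-document.
expanded=$(cat <<EOF
ExecStart=dockerd $DOCKER_NETWORK_OPTIONS
EOF
)

# Quoted delimiter: the text is taken literally, which is what docker.service needs.
literal=$(cat <<"EOF"
ExecStart=dockerd $DOCKER_NETWORK_OPTIONS
EOF
)

echo "$expanded"
echo "$literal"
```

Keeping the literal `$DOCKER_NETWORK_OPTIONS` in the unit file is what lets systemd substitute the value from /run/flannel/docker at service start, rather than bash substituting it (as empty) at file-creation time.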

dockerd invokes other docker commands at runtime, such as docker-proxy, so the directory containing the docker binaries must be added to the PATH environment variable;

At startup, flanneld writes the network configuration into the variable DOCKER_NETWORK_OPTIONS in /run/flannel/docker; passing this variable on the dockerd command line configures the docker0 bridge;

If multiple EnvironmentFile options are specified, /run/flannel/docker must come last (to ensure docker0 uses the bip parameter generated by flanneld);

docker must run as the root user;

Starting with docker 1.13, the default policy of the iptables FORWARD chain may be set to DROP, which makes pinging Pod IPs on other nodes fail; when this happens, manually set the policy to ACCEPT:

Distribute the systemd unit file to all worker machines:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp docker.service root@${node_ip}:/etc/systemd/system/

done

Start the docker service

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "systemctl stop firewalld && systemctl disable firewalld"

ssh root@${node_ip} "iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat"

ssh root@${node_ip} "iptables -P FORWARD ACCEPT"

ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl start docker"

done

Stop firewalld (CentOS 7) / ufw (Ubuntu 16.04) first, otherwise iptables rules may be created repeatedly;

Clean out old iptables rules and chains;

Check the service status

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh k8s@${node_ip} "systemctl status docker|grep Active"

done

Check the docker0 bridge

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh k8s@${node_ip} "/usr/sbin/ip addr show"

done

Confirm that on each worker node the docker0 bridge and the flannel.1 interface have IPs in the same subnet (here 172.30.49.0 and 172.30.49.1):

[k8s@kube-node1 system]$ ip a

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

valid_lft forever preferred_lft forever

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: enp0s3: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

link/ether 08:00:27:39:53:60 brd ff:ff:ff:ff:ff:ff

inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3

valid_lft 69101sec preferred_lft 69101sec

inet6 fe80::953e:9248:d505:388f/64 scope link noprefixroute

valid_lft forever preferred_lft forever

3: enp0s8: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

link/ether 08:00:27:e5:7e:fd brd ff:ff:ff:ff:ff:ff

inet 172.16.10.101/24 brd 172.16.10.255 scope global noprefixroute enp0s8

valid_lft forever preferred_lft forever

inet6 fe80::a00:27ff:fee5:7efd/64 scope link

valid_lft forever preferred_lft forever

4: flannel.1: mtu 1450 qdisc noqueue state UNKNOWN group default

link/ether e6:1a:e1:53:92:ef brd ff:ff:ff:ff:ff:ff

inet 172.30.49.0/32 scope global flannel.1

valid_lft forever preferred_lft forever

inet6 fe80::e41a:e1ff:fe53:92ef/64 scope link

valid_lft forever preferred_lft forever

5: docker0: mtu 1500 qdisc noqueue state DOWN group default

link/ether 02:42:3f:5a:25:80 brd ff:ff:ff:ff:ff:ff

inet 172.30.49.1/24 brd 172.30.49.255 scope global docker0

valid_lft forever preferred_lft forever

[k8s@kube-node1 system]$
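For a /24 bridge network like the one above, the subnet check itself can be scripted: both addresses should share the first three octets. A sketch using the sample values from this output:

```shell
# Sample addresses taken from the ip output above.
flannel_ip=172.30.49.0
docker0_ip=172.30.49.1

# For a /24 network, the prefix is simply the first three octets.
prefix() { echo "${1%.*}"; }

if [ "$(prefix "$flannel_ip")" = "$(prefix "$docker0_ip")" ]; then
  result="same subnet"
else
  result="different subnets"
fi
echo "$result"
```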

7.2 Deploying kubelet

kubelet runs on every worker node: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.

At startup, kubelet automatically registers its node information with kube-apiserver; its built-in cadvisor collects and reports the node's resource usage.

For security, this section only opens the secure port that accepts https requests; requests (e.g. from apiserver or heapster) are authenticated and authorized, and unauthorized access is rejected.

Download the latest binaries

Download the server tarball from the CHANGELOG page.

wget https://dl.k8s.io/v1.10.4/kubernetes-server-linux-amd64.tar.gz

tar -xzvf kubernetes-server-linux-amd64.tar.gz

Copy the binaries to all nodes:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

scp server/bin/* k8s@${node_ip}:/opt/k8s/bin/

ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"

done

Create the kubelet bootstrap kubeconfig files

source /opt/k8s/bin/environment.sh

for node_name in ${NODE_NAMES[@]}

do

echo ">>> ${node_name}"

# create a token

export BOOTSTRAP_TOKEN=$(kubeadm token create \

--description kubelet-bootstrap-token \

--groups system:bootstrappers:${node_name} \

--kubeconfig ~/.kube/config)

# set cluster parameters

kubectl config set-cluster kubernetes \

--certificate-authority=/etc/kubernetes/cert/ca.pem \

--embed-certs=true \

--server=${KUBE_APISERVER} \

--kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

# set client authentication parameters

kubectl config set-credentials kubelet-bootstrap \

--token=${BOOTSTRAP_TOKEN} \

--kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

# set context parameters

kubectl config set-context default \

--cluster=kubernetes \

--user=kubelet-bootstrap \

--kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

# set the default context

kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

done

What gets written into the kubeconfig is a token rather than a certificate; the certificate is created later by kube-controller-manager.

View the tokens that kubeadm created for each node:

[k8s@kube-server ~]$ kubeadm token list --kubeconfig ~/.kube/config

TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS

34jnny.c7ks4atqkqpclbe1 23h 2018-06-28T01:32:46Z authentication,signing kubelet-bootstrap-token system:bootstrappers:kube-node2

jlzg9x.la6w75ab1jf9dsg2 23h 2018-06-28T01:32:45Z authentication,signing kubelet-bootstrap-token system:bootstrappers:kube-node1

qgfb6a.8w0fm4i8kwh8y5gd 23h 2018-06-28T01:32:46Z authentication,signing kubelet-bootstrap-token system:bootstrappers:kube-node3
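The token for a particular node can be looked up from that listing with awk. A sketch over the sample output above (pasted in as data; against a live cluster you would pipe `kubeadm token list` in instead):

```shell
# Sample output of "kubeadm token list" (header plus two data rows).
token_list='TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
34jnny.c7ks4atqkqpclbe1 23h 2018-06-28T01:32:46Z authentication,signing kubelet-bootstrap-token system:bootstrappers:kube-node2
jlzg9x.la6w75ab1jf9dsg2 23h 2018-06-28T01:32:45Z authentication,signing kubelet-bootstrap-token system:bootstrappers:kube-node1'

# Match the bootstrap group of the wanted node and print column 1 (the token).
node=kube-node1
token=$(printf '%s\n' "$token_list" | awk -v g="system:bootstrappers:$node" '$NF == g {print $1}')

echo "token for $node: $token"
```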

Distribute the bootstrap kubeconfig files to all nodes

source /opt/k8s/bin/environment.sh

for node_name in ${NODE_NAMES[@]}

do

echo ">>> ${node_name}"

scp kubelet-bootstrap-${node_name}.kubeconfig k8s@${node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig

done

Create and distribute the kubelet configuration file

Starting with v1.10, some kubelet parameters must be set in a configuration file, as kubelet --help points out:

[k8s@kube-server ~]$ kubelet --help|grep DEPRECATED

--address 0.0.0.0 The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and :: for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)

Create the kubelet configuration template file:

source /opt/k8s/bin/environment.sh

cat > kubelet.config.json.template <<EOF

{

"kind": "KubeletConfiguration",

"apiVersion": "kubelet.config.k8s.io/v1beta1",

"authentication": {

"x509": {

"clientCAFile": "/etc/kubernetes/cert/ca.pem"

},

"webhook": {

"enabled": true,

"cacheTTL": "2m0s"

},

"anonymous": {

"enabled": false

}

},

"authorization": {

"mode": "Webhook",

"webhook": {

"cacheAuthorizedTTL": "5m0s",

"cacheUnauthorizedTTL": "30s"

}

},

"address": "##NODE_IP##",

"port": 10250,

"readOnlyPort": 0,

"cgroupDriver": "cgroupfs",

"hairpinMode": "promiscuous-bridge",

"serializeImagePulls": false,

"featureGates": {

"RotateKubeletClientCertificate": true,

"RotateKubeletServerCertificate": true

},

"clusterDomain": "cluster.local.",

"clusterDNS": ["10.254.0.2"]

}

EOF

address: the kubelet API listen address; it must not be 127.0.0.1, otherwise kube-apiserver, heapster, and others cannot call the kubelet API;

readOnlyPort=0: disables the read-only port (default 10255);

authentication.anonymous.enabled: set to false, disallowing anonymous access to port 10250;

authentication.x509.clientCAFile: the CA certificate that signed the client certificates; enables x509 client-certificate authentication;

authentication.webhook.enabled=true: enables https bearer token authentication;

Requests that pass neither x509 certificate nor webhook authentication (whether from kube-apiserver or any other client) are rejected with Unauthorized;

authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether the requesting user/group is allowed to operate on the resource (RBAC);

featureGates.RotateKubeletClientCertificate, featureGates.RotateKubeletServerCertificate: rotate certificates automatically; their validity is controlled by the --experimental-cluster-signing-duration flag of kube-controller-manager;

kubelet must run as root;

Create and distribute a kubelet configuration file for each node:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

sed -e "s/##NODE_IP##/${node_ip}/" kubelet.config.json.template > kubelet.config-${node_ip}.json

scp kubelet.config-${node_ip}.json root@${node_ip}:/etc/kubernetes/kubelet.config.json

done
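The sed substitution used in this loop can be checked on a minimal stand-in for the template (a single sample line; the real input is the kubelet.config.json.template file created above):

```shell
# A one-line stand-in for kubelet.config.json.template.
template='"address": "##NODE_IP##",'

# Same substitution the distribution loop performs per node.
node_ip=172.16.10.101
rendered=$(printf '%s\n' "$template" | sed -e "s/##NODE_IP##/${node_ip}/")

echo "$rendered"
```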

Create and distribute the kubelet systemd unit file

Create the kubelet systemd unit file template:

cat > kubelet.service.template <<EOF

[Unit]

Description=Kubernetes Kubelet

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=docker.service

Requires=docker.service

[Service]

WorkingDirectory=/var/lib/kubelet

ExecStart=/opt/k8s/bin/kubelet \

--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \

--cert-dir=/etc/kubernetes/cert \

--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \

--config=/etc/kubernetes/kubelet.config.json \

--hostname-override=##NODE_NAME## \

--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \

--logtostderr=true \

--v=2

Restart=on-failure

RestartSec=5

[Install]

WantedBy=multi-user.target

EOF

If --hostname-override is set, the same option must also be set for kube-proxy, otherwise the Node may not be found;

--bootstrap-kubeconfig: points at the bootstrap kubeconfig file; kubelet uses the username and token in it to send a TLS Bootstrapping request to kube-apiserver;

After Kubernetes approves the kubelet CSR, the certificate and private key are created in the --cert-dir directory and then written into the --kubeconfig file;

--feature-gates: enables kubelet certificate rotation (in this setup the equivalent featureGates are set in the kubelet configuration file instead);

Create and distribute a kubelet systemd unit file for each node:

source /opt/k8s/bin/environment.sh

for node_name in ${NODE_NAMES[@]}

do

echo ">>> ${node_name}"

sed -e "s/##NODE_NAME##/${node_name}/" kubelet.service.template > kubelet-${node_name}.service

scp kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service

done

Bootstrap Token Auth and granting permissions

At startup, kubelet checks whether the file given by --kubeconfig exists; if it does not, kubelet uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

When kube-apiserver receives the CSR, it authenticates the embedded token (created earlier with kubeadm); on success it sets the requesting user to system:bootstrap: and the group to system:bootstrappers. This process is called Bootstrap Token Auth.

By default, this user and group have no permission to create CSRs, so kubelet fails to start with an error like:

$ sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'

May 06 06:42:36 kube-node1 kubelet[26986]: F0506 06:42:36.314378 26986 server.go:233] failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:lemy40" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope

May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a

May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.

The fix is to create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:

$ kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

Start the kubelet service

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "mkdir -p /var/lib/kubelet" # the working directory must be created first

ssh root@${node_ip} "swapoff -a" # turn off the swap partition

ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"

done

After startup, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates the kubelet's TLS client certificate and private key and writes the --kubeconfig file.

Note: kube-controller-manager must be started with --cluster-signing-cert-file and --cluster-signing-key-file, otherwise it will not create certificates and keys for TLS Bootstrap.

[k8s@kube-server ~]$ kubectl get csr

NAME AGE REQUESTOR CONDITION

node-csr-9UuHCTss6Mxs4FTcuNqU9sBe6FC1of_Da7t8luoVL_0 2m system:bootstrap:jlzg9x Pending

node-csr-9WiUTwjqsFmNLiV3wqKYRY_MCy-V6lxNauLHJuuUxpc 2m system:bootstrap:34jnny Pending

node-csr-j0SQAP6ODUDrP0QQUto0yfCc41Kp_yMYhXYLS3IluCY 2m system:bootstrap:qgfb6a Pending

[k8s@kube-server ~]$ kubectl get nodes

No resources found.

[k8s@kube-server ~]$

The CSRs from all three nodes are in the Pending state.
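When approving manually, the pending CSR names can be collected in one pipeline. A sketch over sample `kubectl get csr` output (inlined as data; with a live cluster you would pipe the real command in and feed the names to `kubectl certificate approve`):

```shell
# Sample "kubectl get csr" output (header plus two pending rows).
csr_output='NAME AGE REQUESTOR CONDITION
node-csr-9UuHCTss6Mxs4FTcuNqU9sBe6FC1of_Da7t8luoVL_0 2m system:bootstrap:jlzg9x Pending
node-csr-9WiUTwjqsFmNLiV3wqKYRY_MCy-V6lxNauLHJuuUxpc 2m system:bootstrap:34jnny Pending'

# Column 1 of every row whose CONDITION column is Pending.
pending=$(printf '%s\n' "$csr_output" | awk '$NF == "Pending" {print $1}')

echo "$pending"
```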

Approve the kubelet CSR requests

CSRs can be approved manually or automatically. The automatic approach is recommended, because starting with v1.8 the certificates generated from approved CSRs can be rotated automatically.

Manually approve a CSR request

[k8s@kube-server ~]$ kubectl certificate approve node-csr-9UuHCTss6Mxs4FTcuNqU9sBe6FC1of_Da7t8luoVL_0

certificatesigningrequest.certificates.k8s.io "node-csr-9UuHCTss6Mxs4FTcuNqU9sBe6FC1of_Da7t8luoVL_0" approved

[k8s@kube-server ~]$ kubectl describe csr node-csr-9UuHCTss6Mxs4FTcuNqU9sBe6FC1of_Da7t8luoVL_0

Name: node-csr-9UuHCTss6Mxs4FTcuNqU9sBe6FC1of_Da7t8luoVL_0

Labels:

Annotations:

CreationTimestamp: Wed, 27 Jun 2018 04:44:59 +0000

Requesting User: system:bootstrap:jlzg9x

Status: Approved,Issued

Subject:

Common Name: system:node:kube-node1

Serial Number:

Organization: system:nodes

Events:

Requesting User: the user that submitted the CSR; kube-apiserver authenticates and authorizes it;

Subject: the certificate information being requested;

The certificate's CN is system:node:kube-node1 and its Organization is system:nodes; the Node authorization mode of kube-apiserver grants certificates with these attributes the corresponding permissions;

Automatically approve CSR requests

Create three ClusterRoleBindings, used respectively to auto-approve client certificates and to renew client and server certificates:

cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:bootstrappers" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF

Apply the configuration:

[k8s@kube-server ~]$ kubectl apply -f csr-crb.yaml

clusterrolebinding.rbac.authorization.k8s.io "auto-approve-csrs-for-group" created

clusterrolebinding.rbac.authorization.k8s.io "node-client-cert-renewal" created

clusterrole.rbac.authorization.k8s.io "approve-node-server-renewal-csr" created

clusterrolebinding.rbac.authorization.k8s.io "node-server-cert-renewal" created

Check the kubelet status

After waiting a while (1-10 minutes), the CSRs of all three nodes are automatically approved:

[k8s@kube-server ~]$ kubectl get csr

NAME AGE REQUESTOR CONDITION

csr-72dq4 5m system:node:kube-node1 Pending

node-csr-9UuHCTss6Mxs4FTcuNqU9sBe6FC1of_Da7t8luoVL_0 10m system:bootstrap:jlzg9x Approved,Issued

node-csr-9WiUTwjqsFmNLiV3wqKYRY_MCy-V6lxNauLHJuuUxpc 10m system:bootstrap:34jnny Pending

node-csr-j0SQAP6ODUDrP0QQUto0yfCc41Kp_yMYhXYLS3IluCY 10m system:bootstrap:qgfb6a Pending

[k8s@kube-server ~]$ kubectl get nodes

NAME STATUS ROLES AGE VERSION

kube-node1 Ready 5m v1.10.4

[k8s@kube-server ~]$ kubectl get csr

NAME AGE REQUESTOR CONDITION

csr-72dq4 23m system:node:kube-node1 Approved,Issued

csr-wnkj8 14m system:node:kube-node2 Approved,Issued

csr-zxkbr 14m system:node:kube-node3 Approved,Issued

node-csr-9UuHCTss6Mxs4FTcuNqU9sBe6FC1of_Da7t8luoVL_0 27m system:bootstrap:jlzg9x Approved,Issued

node-csr-9WiUTwjqsFmNLiV3wqKYRY_MCy-V6lxNauLHJuuUxpc 27m system:bootstrap:34jnny Approved,Issued

node-csr-j0SQAP6ODUDrP0QQUto0yfCc41Kp_yMYhXYLS3IluCY 27m system:bootstrap:qgfb6a Approved,Issued

[k8s@kube-server ~]$ kubectl get nodes

NAME STATUS ROLES AGE VERSION

kube-node1 Ready 23m v1.10.4

kube-node2 Ready 14m v1.10.4

kube-node3 Ready 14m v1.10.4

[k8s@kube-server ~]$

kube-controller-manager generated a kubeconfig file and a key pair for each node:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "ls -l /etc/kubernetes/kubelet.kubeconfig"

ssh root@${node_ip} "ls -l /etc/kubernetes/cert/|grep kubelet"

done

Output:

172.16.10.101

-rw-------. 1 root root 2290 Jun 27 04:49 /etc/kubernetes/kubelet.kubeconfig

-rw-r--r--. 1 root root 1046 Jun 27 04:49 kubelet-client.crt

-rw-------. 1 root root 227 Jun 27 04:44 kubelet-client.key

-rw-------. 1 root root 1330 Jun 27 04:56 kubelet-server-2018-06-27-04-56-32.pem

lrwxrwxrwx. 1 root root 59 Jun 27 04:56 kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2018-06-27-04-56-32.pem

172.16.10.102

-rw-------. 1 root root 2290 Jun 27 04:58 /etc/kubernetes/kubelet.kubeconfig

-rw-r--r--. 1 root root 1046 Jun 27 04:58 kubelet-client.crt

-rw-------. 1 root root 227 Jun 27 04:45 kubelet-client.key

-rw-------. 1 root root 1330 Jun 27 04:58 kubelet-server-2018-06-27-04-58-41.pem

lrwxrwxrwx. 1 root root 59 Jun 27 04:58 kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2018-06-27-04-58-41.pem

172.16.10.103

-rw-------. 1 root root 2290 Jun 27 04:58 /etc/kubernetes/kubelet.kubeconfig

-rw-r--r--. 1 root root 1046 Jun 27 04:58 kubelet-client.crt

-rw-------. 1 root root 227 Jun 27 04:45 kubelet-client.key

-rw-------. 1 root root 1330 Jun 27 04:58 kubelet-server-2018-06-27-04-58-42.pem

lrwxrwxrwx. 1 root root 59 Jun 27 04:58 kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2018-06-27-04-58-42.pem

The kubelet-server certificate is rotated periodically.

API endpoints exposed by kubelet

After startup, kubelet listens on several ports for requests from kube-apiserver and other components:

[k8s@kube-node1 ~]$ sudo netstat -lnpt|grep kubelet

tcp 0 0 172.16.10.101:4194 0.0.0.0:* LISTEN 9191/kubelet

tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 9191/kubelet

tcp 0 0 172.16.10.101:10250 0.0.0.0:* LISTEN 9191/kubelet

4194: cadvisor http service;

10248: healthz http service;

10250: https API service; note that the read-only port 10255 is not opened;

For example, when running kubectl exec -it nginx-ds-5rmws -- sh, kube-apiserver sends kubelet a request such as:

POST /exec/default/nginx-ds-5rmws/my-nginx?command=sh&input=1&output=1&tty=1

kubelet serves the following https endpoints on port 10250:

/pods、/runningpods

/metrics、/metrics/cadvisor、/metrics/probes

/spec

/stats、/stats/container

/logs

/run/, "/exec/", "/attach/", "/portForward/", "/containerLogs/" and other management endpoints;

For details see:

https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L434:3

Because anonymous authentication is disabled and webhook authorization is enabled, every request to the https API on port 10250 must be authenticated and authorized.

The predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs:

[k8s@kube-server ~]$ kubectl describe clusterrole system:kubelet-api-admin

Name: system:kubelet-api-admin

Labels: kubernetes.io/bootstrapping=rbac-defaults

Annotations: rbac.authorization.kubernetes.io/autoupdate=true

PolicyRule:

Resources Non-Resource URLs Resource Names Verbs

--------- ----------------- -------------- -----

nodes [] [] [get list watch proxy]

nodes/log [] [] [*]

nodes/metrics [] [] [*]

nodes/proxy [] [] [*]

nodes/spec [] [] [*]

nodes/stats [] [] [*]

kubelet API authentication and authorization

kubelet is configured with the following authentication parameters:

authentication.anonymous.enabled: set to false, disallowing anonymous access to port 10250;

authentication.x509.clientCAFile: the CA certificate that signed the client certificates; enables https certificate authentication;

authentication.webhook.enabled=true: enables https bearer token authentication;

and with the following authorization parameter:

authorization.mode=Webhook: enables RBAC authorization;

On receiving a request, kubelet either verifies the client certificate against clientCAFile or checks whether the bearer token is valid. If both fail, the request is rejected with Unauthorized:

[k8s@kube-server ~]$ curl -s --cacert /etc/kubernetes/cert/ca.pem https://172.16.10.101:10250/metrics

Unauthorized

[k8s@kube-server ~]$

[k8s@kube-server ~]$ curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://172.16.10.101:10250/metrics

Unauthorized

[k8s@kube-server ~]$

通過認證後,kubelet 使用 SubjectAccessReview API 向 kube-apiserver 發送請求,查詢證書或 token 對應的 user、group 是否有操作資源的權限(RBAC);

Certificate authentication and authorization:

Using a certificate with insufficient permissions:

[k8s@kube-node1 ~]$ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://172.16.10.101:10250/metrics

Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)

[k8s@kube-node1 ~]$

Using the admin certificate (highest privileges) created when deploying the kubectl command-line tool:

$ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/admin.pem --key /opt/k8s/admin-key.pem https://172.16.10.101:10250/metrics|head

Note: if the admin key files are not given as absolute paths, they will not be found and the command fails.

Bearer token authentication and authorization:

Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it has permission to call the kubelet API:

kubectl create sa kubelet-api-test

kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test

SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')

TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')

echo ${TOKEN}
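The grep/awk pipeline above can be exercised locally against a canned sample of `kubectl describe secret` output (the secret name and token value below are made up):

```shell
# Sample of what `kubectl describe secret <name>` prints (token value is fake).
cat > /tmp/secret-describe.txt <<'EOF'
Name:         kubelet-api-test-token-gdj7g
Namespace:    default

Type:  kubernetes.io/service-account-token

Data
====
ca.pem:     1367 bytes
namespace:  7 bytes
token:      header.payload.signature
EOF

# Same extraction as in the text: keep the line starting with "token",
# take the second whitespace-separated field.
TOKEN=$(grep -E '^token' /tmp/secret-describe.txt | awk '{print $2}')
echo "${TOKEN}"
```

The `-E '^token'` anchor matters: it skips the `ca.pem:`/`namespace:` data lines and only matches the token row.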

[k8s@kube-server ~]$ echo ${TOKEN}

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Imt1YmVsZXQtYXBpLXRlc3QtdG9rZW4tZ2RqN2ciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3ViZWxldC1hcGktdGVzdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImY0OWEyMGJlLTc5ZDYtMTFlOC04MTM4LTA4MDAyNzM5NTM2MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Omt1YmVsZXQtYXBpLXRlc3QifQ.isa6PPJtg0WEstKwuozjT-CHs6EEonq12mpGqCk4SIaPe2TWjPDDHiczRf9Yt4ivmOakquhiYBs9vnDuPuINXWHNCzEudYMDz2mIYHwXH0s26CT-eSxXCnPRH54H9zVjJzNSZ9LhYgLLxOPSFldNaLd8E0MCCjGwBWqucSAraxHNyNmVALbi8LKaPt6u3JiHV02cGhqhG7xEiS5oeSdXh8kWSxd1wOtGc7bmQerrVDNnTqPNflb926zRGuPrELghdm0SeFVWtFGTjILvpgPugq3biLRt199ct8afaIqqH9tuDlpd32Cv4IVPKvvnIutamOILnb04FfrkzwPb6iv4xw

[k8s@kube-server ~]$ curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://172.16.10.101:10250/metrics|head

# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.

# TYPE apiserver_client_certificate_expiration_seconds histogram

apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0

cAdvisor and metrics

cAdvisor collects the resource usage (CPU, memory, disk, network) of the containers on its node and exposes the data both on its own HTTP web page (port 4194) and on port 10250 in Prometheus metrics format.
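The Prometheus text format is just `name{labels} value` lines, so quick checks are possible with awk alone; a local sketch on canned data (the metric values below are invented):

```shell
# Canned excerpt in Prometheus exposition format.
cat > /tmp/metrics.txt <<'EOF'
# HELP container_cpu_usage_seconds_total Cumulative cpu time consumed per cpu in seconds.
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{container_name="my-nginx"} 12.5
container_cpu_usage_seconds_total{container_name="coredns"} 3.5
EOF

# Sum the values of all samples of this metric, skipping the # comment lines.
TOTAL=$(awk '/^container_cpu_usage_seconds_total/ {sum += $2} END {print sum}' /tmp/metrics.txt)
echo "${TOTAL}"
```

The same one-liner works against the real `curl .../metrics` output once authentication is set up as described above.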

Open http://172.16.10.101:4194/containers/ in a browser to view the cAdvisor monitoring page:

Because the test environment runs in VirtualBox VMs, a forwarded port is needed so that the services on kube-node1 can be reached from outside the test environment.

The configuration method is shown below.

Get the kubelet configuration

Fetch each node's configuration from kube-apiserver:

[k8s@kube-server ~]$ curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /home/k8s/admin.pem --key /home/k8s/admin-key.pem https://${MASTER_IP}:6443/api/v1/nodes/kube-node1/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'

{
  "syncFrequency": "1m0s",
  "fileCheckFrequency": "20s",
  "httpCheckFrequency": "20s",
  "address": "172.16.10.101",
  "port": 10250,
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "registryPullQPS": 5,
  "registryBurst": 10,
  "eventRecordQPS": 5,
  "eventBurst": 10,
  "enableDebuggingHandlers": true,
  "healthzPort": 10248,
  "healthzBindAddress": "127.0.0.1",
  "oomScoreAdj": -999,
  "clusterDomain": "cluster.local.",
  "clusterDNS": [
    "10.254.0.2"
  ],
  "streamingConnectionIdleTimeout": "4h0m0s",
  "nodeStatusUpdateFrequency": "10s",
  "imageMinimumGCAge": "2m0s",
  "imageGCHighThresholdPercent": 85,
  "imageGCLowThresholdPercent": 80,
  "volumeStatsAggPeriod": "1m0s",
  "cgroupsPerQOS": true,
  "cgroupDriver": "cgroupfs",
  "cpuManagerPolicy": "none",
  "cpuManagerReconcilePeriod": "10s",
  "runtimeRequestTimeout": "2m0s",
  "hairpinMode": "promiscuous-bridge",
  "maxPods": 110,
  "podPidsLimit": -1,
  "resolvConf": "/etc/resolv.conf",
  "cpuCFSQuota": true,
  "maxOpenFiles": 1000000,
  "contentType": "application/vnd.kubernetes.protobuf",
  "kubeAPIQPS": 5,
  "kubeAPIBurst": 10,
  "serializeImagePulls": false,
  "evictionHard": {
    "imagefs.available": "15%",
    "memory.available": "100Mi",
    "nodefs.available": "10%",
    "nodefs.inodesFree": "5%"
  },
  "evictionPressureTransitionPeriod": "5m0s",
  "enableControllerAttachDetach": true,
  "makeIPTablesUtilChains": true,
  "iptablesMasqueradeBit": 14,
  "iptablesDropBit": 15,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "failSwapOn": true,
  "containerLogMaxSize": "10Mi",
  "containerLogMaxFiles": 5,
  "enforceNodeAllocatable": [
    "pods"
  ],
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1"
}

[k8s@kube-server ~]$

7.3 Deploy the kube-proxy component

Create the kube-proxy certificate

Create the certificate signing request:

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "testcorp"
    }
  ]
}
EOF

CN: sets the certificate User to system:kube-proxy;

The predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call kube-apiserver's proxy-related APIs;

The certificate is only used by kube-proxy as a client certificate, so the hosts field is empty;

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

-ca-key=/etc/kubernetes/cert/ca-key.pem \

-config=/etc/kubernetes/cert/ca-config.json \

-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Create and distribute the kubeconfig file

source /opt/k8s/bin/environment.sh

Set cluster parameters

kubectl config set-cluster kubernetes \

--certificate-authority=/etc/kubernetes/cert/ca.pem \

--embed-certs=true \

--server=${KUBE_APISERVER} \

--kubeconfig=kube-proxy.kubeconfig

Set client authentication parameters

kubectl config set-credentials kube-proxy \

--client-certificate=kube-proxy.pem \

--client-key=kube-proxy-key.pem \

--embed-certs=true \

--kubeconfig=kube-proxy.kubeconfig

Set context parameters

kubectl config set-context default \

--cluster=kubernetes \

--user=kube-proxy \

--kubeconfig=kube-proxy.kubeconfig

Set the default context

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

--embed-certs=true: embeds the contents of ca.pem and kube-proxy.pem into the generated kube-proxy.kubeconfig file (without it, only the certificate file paths are written);

Distribute the kubeconfig file:

source /opt/k8s/bin/environment.sh

for node_name in ${NODE_NAMES[@]}

do

echo ">>> ${node_name}"

scp kube-proxy.kubeconfig k8s@${node_name}:/etc/kubernetes/

done

Create the kube-proxy configuration file

Starting with v1.10, some kube-proxy parameters can be set in a configuration file. Generate it with the --write-config-to option, or refer to the kubeproxyconfig type definitions: https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/apis/kubeproxyconfig/types.go

Create the kube-proxy config file template:

cat > kube-proxy.config.yaml.template <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ##NODE_IP##
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.30.0.0/16
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
kind: KubeProxyConfiguration
metricsBindAddress: ##NODE_IP##:10249
mode: "ipvs"
EOF

bindAddress: the listen address;

clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;

clusterCIDR: must match kube-controller-manager's --cluster-cidr value; kube-proxy uses it to distinguish in-cluster from external traffic, and only performs SNAT on requests to Service IPs when --cluster-cidr or --masquerade-all is specified;

hostnameOverride: must match the kubelet's value, otherwise kube-proxy will not find its Node after starting and will not create any ipvs rules;

mode: use ipvs mode;

Create and distribute a kube-proxy configuration file for each node:

source /opt/k8s/bin/environment.sh

for (( i=0; i < 3; i++ ))

do

echo ">>> ${NODE_NAMES[i]}"

sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" kube-proxy.config.yaml.template > kube-proxy-${NODE_NAMES[i]}.config.yaml

scp kube-proxy-${NODE_NAMES[i]}.config.yaml root@${NODE_NAMES[i]}:/etc/kubernetes/kube-proxy.config.yaml

done
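The sed substitution in the loop above can be checked locally with a trimmed-down template (the node name/IP below are just the first entries used in this environment):

```shell
# Minimal stand-in for kube-proxy.config.yaml.template.
cat > /tmp/t.yaml.template <<'EOF'
bindAddress: ##NODE_IP##
hostnameOverride: ##NODE_NAME##
EOF

node_name=kube-node1
node_ip=172.16.10.101

# Same substitution the distribution loop performs per node.
sed -e "s/##NODE_NAME##/${node_name}/" \
    -e "s/##NODE_IP##/${node_ip}/" \
    /tmp/t.yaml.template > /tmp/t-${node_name}.yaml

cat /tmp/t-${node_name}.yaml
```

Note the placeholders use `##...##` delimiters precisely so that plain `/`-delimited sed expressions work without escaping.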

Create and distribute the kube-proxy systemd unit file

source /opt/k8s/bin/environment.sh

cat > kube-proxy.service <<EOF

[Unit]

Description=Kubernetes Kube-Proxy Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

[Service]

WorkingDirectory=/var/lib/kube-proxy

ExecStart=/opt/k8s/bin/kube-proxy \

--config=/etc/kubernetes/kube-proxy.config.yaml \

--logtostderr=true \

--v=2

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF

Distribute the kube-proxy systemd unit file:

source /opt/k8s/bin/environment.sh

for node_name in ${NODE_NAMES[@]}

do

echo ">>> ${node_name}"

scp kube-proxy.service root@${node_name}:/etc/systemd/system/

done

Start the kube-proxy service

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "mkdir -p /var/lib/kube-proxy" # the working directory must be created first

ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy"

done

Check the startup result

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh k8s@${node_ip} "systemctl status kube-proxy|grep Active"

done

Make sure the status is active (running).
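The `systemctl status ... | grep Active` check can be turned into an explicit pass/fail by matching the exact state string; a local sketch on a canned status line:

```shell
# Canned excerpt of `systemctl status kube-proxy` output.
STATUS_LINE='   Active: active (running) since Fri 2018-06-29 10:00:00 CST; 2h ago'

# Succeed only when the unit is really in the "active (running)" state;
# "activating" or "failed" would not match.
if echo "$STATUS_LINE" | grep -q 'Active: active (running)'; then
  RESULT=ok
else
  RESULT=failed
fi
echo "$RESULT"
```

Matching the full phrase avoids false positives from lines like `Active: failed (Result: exit-code)`.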

Check listening ports and metrics

[k8s@kube-node1 cert]$ sudo netstat -lnpt|grep kube-prox

tcp 0 0 172.16.10.101:10249 0.0.0.0:* LISTEN 17534/kube-proxy

tcp 0 0 172.16.10.101:10256 0.0.0.0:* LISTEN 17534/kube-proxy

10249: HTTP Prometheus metrics port;

10256: HTTP healthz port;

Check ipvs routing rules

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh root@${node_ip} "/usr/sbin/ipvsadm -ln"

done

Output:

172.16.10.101

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 10.254.0.1:443 rr persistent 10800

-> 172.16.10.100:6443 Masq 1 0 0

172.16.10.102

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 10.254.0.1:443 rr persistent 10800

-> 172.16.10.100:6443 Masq 1 0 0

172.16.10.103

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 10.254.0.1:443 rr persistent 10800

-> 172.16.10.100:6443 Masq 1 0 0
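To spot-check that every node programmed the same virtual server, the `ipvsadm -ln` output can be reduced to `VIP backend` pairs with awk; a local sketch against the output pasted above:

```shell
# Canned excerpt of `ipvsadm -ln` output from one node.
cat > /tmp/ipvs.txt <<'EOF'
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.254.0.1:443 rr persistent 10800
  -> 172.16.10.100:6443 Masq 1 0 0
EOF

# Remember the current virtual server, then print "VIP backend" for each
# real-server line; the header "-> RemoteAddress:Port" row is filtered out.
PAIR=$(awk '$1 == "TCP" {vip=$2}
            $1 == "->" && vip != "" && $2 != "RemoteAddress:Port" {print vip, $2}' /tmp/ipvs.txt)
echo "$PAIR"
```

Running this on all three nodes and diffing the results confirms they all forward 10.254.0.1:443 to the same apiserver backend.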

8. Verify cluster functionality

Check node status

[k8s@kube-server ~]$ kubectl get nodes

NAME STATUS ROLES AGE VERSION

kube-node1 Ready <none> 4h v1.10.4

kube-node2 Ready <none> 4h v1.10.4

kube-node3 Ready <none> 4h v1.10.4

Create a test file

cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

Apply the definition file

[k8s@kube-server ~]$ kubectl create -f nginx-ds.yml

service "nginx-ds" created

daemonset.extensions "nginx-ds" created

Check Pod IP connectivity from each Node

[k8s@kube-server ~]$ kubectl get pods -o wide

NAME READY STATUS RESTARTS AGE IP NODE

nginx-ds-8h4j5 1/1 Running 0 5m 172.30.49.2 kube-node1

nginx-ds-kxx7r 1/1 Running 0 5m 172.30.7.2 kube-node2

nginx-ds-ndnf5 1/1 Running 0 5m 172.30.48.2 kube-node3

As shown, the nginx-ds Pod IPs are 172.30.49.2, 172.30.7.2, and 172.30.48.2. Ping these three IPs from every Node to check connectivity:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh ${node_ip} "ping -c 1 172.30.49.2"

ssh ${node_ip} "ping -c 1 172.30.7.2"

ssh ${node_ip} "ping -c 1 172.30.48.2"

done

Check Service IP and port reachability

[k8s@kube-server ~]$ kubectl get svc |grep nginx-ds

nginx-ds NodePort 10.254.220.112 <none> 80:8880/TCP 4h

As shown:

Service Cluster IP: 10.254.220.112

Service port: 80

NodePort: 8880
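Rather than reading the NodePort by eye, it can be pulled out of the `kubectl get svc` line; a sketch against the output shown above:

```shell
# The svc line as printed above (PORT(S) is "port:nodePort/proto").
SVC_LINE='nginx-ds NodePort 10.254.220.112 <none> 80:8880/TCP 4h'

# Field 5 is PORT(S); cut out the nodePort between ":" and "/".
NODE_PORT=$(echo "$SVC_LINE" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "$NODE_PORT"
```

On a live cluster the same extraction works as `kubectl get svc nginx-ds --no-headers | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1`.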

curl the Service IP on all Nodes:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh ${node_ip} "curl 10.254.220.112"

done

The nginx welcome page content is expected as output.

Check the Service's NodePort reachability

Run on all Nodes:

source /opt/k8s/bin/environment.sh

for node_ip in ${NODE_IPS[@]}

do

echo ">>> ${node_ip}"

ssh ${node_ip} "curl ${node_ip}:8880"

done

The nginx welcome page content is expected as output.

9. Deploy cluster add-ons

Add-ons are optional components that enrich and extend the cluster's functionality.

coredns

Dashboard

Heapster (influxdb、grafana)

Metrics Server

EFK (elasticsearch、fluentd、kibana)

9.1 Deploy the coredns add-on

Modify the configuration file

After extracting the downloaded kubernetes-server-linux-amd64.tar.gz, also extract the kubernetes-src.tar.gz inside it.

coredns lives in the cluster/addons/dns directory.

[k8s@kube-server dns]$ pwd

/home/k8s/kubernetes/cluster/addons/dns

[k8s@kube-server dns]$ ls

coredns.yaml.base coredns.yaml.sed kube-dns.yaml.in Makefile README.md transforms2sed.sed

coredns.yaml.in kube-dns.yaml.base kube-dns.yaml.sed OWNERS transforms2salt.sed

[k8s@kube-server dns]$

[k8s@kube-server dns]$ cp coredns.yaml.base coredns.yaml

[k8s@kube-server dns]$ vi coredns.yaml

[k8s@kube-server dns]$ diff coredns.yaml.base coredns.yaml

61c61
<         kubernetes __PILLAR__DNS__DOMAIN__ in-addr.arpa ip6.arpa {
---
>         kubernetes cluster.local. in-addr.arpa ip6.arpa {
153c153
<   clusterIP: __PILLAR__DNS__SERVER__
---
>   clusterIP: 10.254.0.2
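Instead of editing coredns.yaml by hand, the two placeholders can be substituted with sed (which is what the transforms2sed.sed file in the same directory is for); a local sketch with the file trimmed to the two relevant lines:

```shell
# Trimmed stand-in for coredns.yaml.base with its two __PILLAR__ placeholders.
cat > /tmp/coredns.yaml.base <<'EOF'
        kubernetes __PILLAR__DNS__DOMAIN__ in-addr.arpa ip6.arpa {
  clusterIP: __PILLAR__DNS__SERVER__
EOF

# Substitute the cluster domain and the cluster DNS Service IP.
sed -e 's/__PILLAR__DNS__DOMAIN__/cluster.local./' \
    -e 's/__PILLAR__DNS__SERVER__/10.254.0.2/' \
    /tmp/coredns.yaml.base > /tmp/coredns.yaml

cat /tmp/coredns.yaml
```

The same two expressions applied to the full coredns.yaml.base produce the coredns.yaml shown in the diff above.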

[k8s@kube-server dns]$ kubectl create -f coredns.yaml

serviceaccount "coredns" created

clusterrole.rbac.authorization.k8s.io "system:coredns" created

clusterrolebinding.rbac.authorization.k8s.io "system:coredns" created

configmap "coredns" created

deployment.extensions "coredns" created

service "coredns" created

[k8s@kube-server dns]$

Check coredns functionality

[k8s@kube-server dns]$ kubectl get all -n kube-system

NAME READY STATUS RESTARTS AGE

pod/coredns-77c989547b-bq6ff 1/1 Running 0 31s

pod/coredns-77c989547b-m8qhw 1/1 Running 0 31s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

service/coredns ClusterIP 10.254.0.2 <none> 53/UDP,53/TCP 31s

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE

deployment.apps/coredns 2 2 2 2 31s

NAME DESIRED CURRENT READY AGE

replicaset.apps/coredns-77c989547b 2 2 2 31s

[k8s@kube-server dns]$

Create a test Deployment

cat > my-nginx.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

[k8s@kube-server ~]$ kubectl create -f my-nginx.yaml

deployment.extensions "my-nginx" created

Expose the Deployment to create the my-nginx service:

[k8s@kube-server ~]$ kubectl expose deploy my-nginx

service "my-nginx" exposed

[k8s@kube-server ~]$ kubectl get services --all-namespaces |grep my-nginx

default my-nginx ClusterIP 10.254.191.237 <none> 80/TCP 8s

Create another test Pod

Check whether /etc/resolv.conf contains the --cluster-dns and --cluster-domain settings configured for kubelet, and whether the service name my-nginx resolves to the Cluster IP 10.254.191.237 shown above.

cat > pod-nginx.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
EOF

[k8s@kube-server ~]$ kubectl exec -it nginx -c nginx /bin/bash

root@nginx:/# ip a

1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1000

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

valid_lft forever preferred_lft forever

14: eth0@if15: mtu 1450 qdisc noqueue state UP

link/ether 02:42:ac:1e:30:04 brd ff:ff:ff:ff:ff:ff

inet 172.30.48.4/24 brd 172.30.48.255 scope global eth0

valid_lft forever preferred_lft forever

root@nginx:/# ping kubernetes

PING kubernetes.default.svc.cluster.local (10.254.0.1): 48 data bytes

56 bytes from 10.254.0.1: icmp_seq=0 ttl=64 time=0.050 ms

56 bytes from 10.254.0.1: icmp_seq=1 ttl=64 time=0.076 ms

56 bytes from 10.254.0.1: icmp_seq=2 ttl=64 time=0.143 ms

56 bytes from 10.254.0.1: icmp_seq=3 ttl=64 time=0.079 ms

^C--- kubernetes.default.svc.cluster.local ping statistics ---

4 packets transmitted, 4 packets received, 0% packet loss

round-trip min/avg/max/stddev = 0.050/0.087/0.143/0.034 ms

root@nginx:/# ping my-nginx

PING my-nginx.default.svc.cluster.local (10.254.191.237): 48 data bytes

56 bytes from 10.254.191.237: icmp_seq=0 ttl=64 time=0.094 ms

56 bytes from 10.254.191.237: icmp_seq=1 ttl=64 time=0.113 ms

^C--- my-nginx.default.svc.cluster.local ping statistics ---

2 packets transmitted, 2 packets received, 0% packet loss

round-trip min/avg/max/stddev = 0.094/0.104/0.113/0.000 ms

root@nginx:/# ping coredns

ping: unknown host

root@nginx:/# ping coredns.kube-system.svc.cluster.local

PING coredns.kube-system.svc.cluster.local (10.254.0.2): 48 data bytes

56 bytes from 10.254.0.2: icmp_seq=0 ttl=64 time=0.042 ms

56 bytes from 10.254.0.2: icmp_seq=1 ttl=64 time=0.095 ms

^C--- coredns.kube-system.svc.cluster.local ping statistics ---

2 packets transmitted, 2 packets received, 0% packet loss

round-trip min/avg/max/stddev = 0.042/0.069/0.095/0.027 ms

root@nginx:/#

root@nginx:/# cat /etc/resolv.conf

nameserver 10.254.0.2

search default.svc.cluster.local. svc.cluster.local. cluster.local.

options ndots:5

root@nginx:/#
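The reason a bare `my-nginx` resolves at all is the search list plus `ndots:5` in the resolv.conf above: any name with fewer than 5 dots is first tried with each search suffix appended, and the literal name last. The candidate list the resolver walks can be sketched as:

```shell
# Reproduce the candidate names the resolver tries for a short name,
# given the search domains from the resolv.conf shown above.
name=my-nginx
CANDIDATES=""
for suffix in default.svc.cluster.local svc.cluster.local cluster.local; do
  CANDIDATES="$CANDIDATES $name.$suffix"
done
CANDIDATES="$CANDIDATES $name"   # the literal name is tried last
echo "$CANDIDATES"
```

That is why `ping my-nginx` resolved to my-nginx.default.svc.cluster.local on the first try, while `ping coredns` failed: coredns lives in kube-system, so only the fully qualified coredns.kube-system.svc.cluster.local matches.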

9.2 Deploy the dashboard add-on

Modify the configuration files

After extracting the downloaded kubernetes-server-linux-amd64.tar.gz, also extract the kubernetes-src.tar.gz inside it.

dashboard lives in the cluster/addons/dashboard directory.

$ pwd

/opt/k8s/kubernetes/cluster/addons/dashboard

$ cp dashboard-controller.yaml{,.orig}

$ diff dashboard-controller.yaml{,.orig}

33c33
<         image: siriuszg/kubernetes-dashboard-amd64:v1.8.3
---
>         image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

$ cp dashboard-service.yaml{,.orig}

$ diff dashboard-service.yaml.orig dashboard-service.yaml

10a11
>   type: NodePort

Set the port type to NodePort so the dashboard can be reached from outside at nodeIP:nodePort.

The container image was switched to a mirror that can actually be pulled.

Apply all definition files

[k8s@kube-server dashboard]$ ls *.yaml

dashboard-configmap.yaml dashboard-controller.yaml dashboard-rbac.yaml dashboard-secret.yaml dashboard-service.yaml

[k8s@kube-server dashboard]$ kubectl create -f .

configmap "kubernetes-dashboard-settings" created

serviceaccount "kubernetes-dashboard" created

deployment.apps "kubernetes-dashboard" created

role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created

rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created

secret "kubernetes-dashboard-certs" created

secret "kubernetes-dashboard-key-holder" created

service "kubernetes-dashboard" created

[k8s@kube-server dashboard]$

Check the allocated NodePort

[k8s@kube-server dashboard]$ kubectl get deployment kubernetes-dashboard -n kube-system

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE

kubernetes-dashboard 1 1 1 1 50s

[k8s@kube-server dashboard]$ kubectl --namespace kube-system get pods -o wide

NAME READY STATUS RESTARTS AGE IP NODE

coredns-77c989547b-bq6ff 1/1 Running 558 1d 172.30.49.3 kube-node1

coredns-77c989547b-m8qhw 1/1 Running 556 1d 172.30.48.3 kube-node3

kubernetes-dashboard-65f7b4f486-j659c 1/1 Running 0 7m 172.30.7.2 kube-node2

[k8s@kube-server dashboard]$ kubectl get services kubernetes-dashboard -n kube-system

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes-dashboard NodePort 10.254.56.169 <none> 443:8645/TCP 8m

NodePort 8645 maps to port 443 of the dashboard pod.

The dashboard's --authentication-mode supports token and basic, defaulting to token. To use basic, kube-apiserver must be configured with the '--authorization-mode=ABAC' and '--basic-auth-file' parameters.

View the command-line flags supported by the dashboard:

kubectl exec --namespace kube-system -it kubernetes-dashboard-65f7b4f486-j659c -- /dashboard --help

Access the dashboard

For cluster security, since 1.7 the dashboard only allows access over https. When using kubectl proxy it must listen on localhost or 127.0.0.1; NodePort has no such restriction, but is recommended only for development environments.

For login attempts that do not meet these conditions, the browser does not redirect after a successful login and stays on the login page.

  1. The kubernetes-dashboard service exposes a NodePort, so the dashboard can be reached at https://NodeIP:NodePort;
  2. Access the dashboard through kube-apiserver;
  3. Access the dashboard through kubectl proxy:

If VirtualBox is used, enable VirtualBox port forwarding to bind the port the VM listens on to a local port on the host.

Access the dashboard through kubectl proxy

Start the proxy:

[k8s@kube-node2 ~]$ kubectl proxy --address='localhost' --port=8086 --accept-hosts='^*$'

Starting to serve on 127.0.0.1:8086

--address must be localhost or 127.0.0.1;

--accept-hosts must be specified, otherwise the browser reports "Unauthorized" when opening the dashboard page;

Browse to: http://127.0.0.1:8086/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

Note: I never got this approach working; I am not sure which part of the port forwarding was misconfigured.

Access the dashboard through kube-apiserver

Get the list of cluster service URLs:

[root@kube-server ~]# kubectl cluster-info

Kubernetes master is running at https://172.16.10.100:6443

CoreDNS is running at https://172.16.10.100:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy

kubernetes-dashboard is running at https://172.16.10.100:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The dashboard must be accessed through kube-apiserver's secure (https) port; the browser needs the custom certificate installed, otherwise kube-apiserver rejects the request.

Create a token and kubeconfig file for logging in to the Dashboard

As mentioned above, the Dashboard by default only supports token authentication, so a KubeConfig file must carry a token; client certificate authentication is not supported.

Create a login token

[k8s@kube-server ~]$ kubectl create sa dashboard-admin -n kube-system

serviceaccount "dashboard-admin" created

[k8s@kube-server ~]$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

clusterrolebinding.rbac.authorization.k8s.io "dashboard-admin" created

[k8s@kube-server ~]$ ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')

[k8s@kube-server ~]$ DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')

[k8s@kube-server ~]$ echo ${DASHBOARD_LOGIN_TOKEN}

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbWQycWIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMDAxMjA1ZTUtN2I3YS0xMWU4LTkzNjAtMDgwMDI3Mzk1MzYwIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.jgQo6TtcGugKQOlcXbe9-dqoP1_YkKshbeeqMudZOFVigDgSKPAUYNH4LbIqOBoAMnsZxKJPFFd36wR5JRzqUy5hI6cSRhBZr7_XiAZYeAdt0ZmbTq_ZM-Y0HDnokhxonwmV08TkVffj85uLnHUY5IZFYKmiiEUuSecek8LWVqvUAgBj1TIeKyGr5FGYxk2KCzlkHU90yFlhSjN4VqE-YkG7TJuV-2ge2sBWhmnqodrWhOHMD7_CQP-WzZxjPZY-WbznYNrBbuVOkJVjOyaf6EB0lzx1bpMSSeVhkWA3a_BdxOEWEx-OuvQgIxqqn0cY27om5xKItR-B4DiyrKyu6w

[k8s@kube-server ~]$

Use the output token to log in to the Dashboard.
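The ServiceAccount token is a JWT, so its payload (the second dot-separated segment, base64url-encoded) can be decoded locally to inspect the identity before logging in. A sketch on a toy token whose payload is just {"sub":"test"} (header and signature are dummies, not a real token):

```shell
# Toy JWT: header.payload.signature; only the payload is a real encoding.
TOKEN='hdr.eyJzdWIiOiJ0ZXN0In0.sig'

# Take segment 2 and map base64url characters back to standard base64.
payload=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')

# Restore the '=' padding that JWTs strip.
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done

DECODED=$(printf '%s' "$payload" | base64 -d)
echo "$DECODED"
```

Applied to the real dashboard-admin token, the decoded payload shows the kubernetes.io/serviceaccount claims (namespace, secret name, service-account name).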

Create a KubeConfig file that uses the token

source /opt/k8s/bin/environment.sh

Set cluster parameters

kubectl config set-cluster kubernetes \

--certificate-authority=/etc/kubernetes/cert/ca.pem \

--embed-certs=true \

--server=${KUBE_APISERVER} \

--kubeconfig=dashboard.kubeconfig

Set client authentication parameters, using the token created above

kubectl config set-credentials dashboard_user \

--token=${DASHBOARD_LOGIN_TOKEN} \

--kubeconfig=dashboard.kubeconfig

Set context parameters

kubectl config set-context default \

--cluster=kubernetes \

--user=dashboard_user \

--kubeconfig=dashboard.kubeconfig

Set the default context

kubectl config use-context default --kubeconfig=dashboard.kubeconfig

Log in to the Dashboard with the generated dashboard.kubeconfig file.

Without the Heapster add-on, the dashboard cannot currently display CPU/memory statistics or charts for Pods and Nodes.

9.3 Deploy the heapster add-on

Heapster is a collector: it aggregates the cAdvisor data from every Node and forwards it to a third-party backend (such as InfluxDB).

Heapster obtains cAdvisor metrics data by calling the kubelet HTTP API.

Since kubelet only accepts HTTPS requests on port 10250, the heapster deployment configuration must be modified accordingly, and the kube-system:heapster ServiceAccount must be granted permission to call the kubelet API.

Download the heapster files

Download the latest heapster from the heapster releases page:

wget https://github.com/kubernetes/heapster/archive/v1.5.3.tar.gz

tar -xzvf v1.5.3.tar.gz

mv v1.5.3.tar.gz heapster-1.5.3.tar.gz

Official files directory: heapster-1.5.3/deploy/kube-config/influxdb

Modify the configuration

[k8s@kube-server ~]$ cd heapster-1.5.3/deploy/kube-config/influxdb

[k8s@kube-server influxdb]$ ls

grafana.yaml heapster.yaml influxdb.yaml

[k8s@kube-server influxdb]$ cp grafana.yaml{,.orig}

[k8s@kube-server influxdb]$ vi grafana.yaml

[k8s@kube-server influxdb]$ diff grafana.yaml.orig grafana.yaml

16c16
< image: gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
---
> image: wanghkkk/heapster-grafana-amd64-v4.4.3:v4.4.3
67c67
< # type: NodePort
---
> type: NodePort

[k8s@kube-server influxdb]$

Switch to an image mirror that is reachable from mainland China, and enable NodePort.

[k8s@kube-server influxdb]$ cp heapster.yaml{,.orig}

[k8s@kube-server influxdb]$ vi heapster.yaml

[k8s@kube-server influxdb]$ diff heapster.yaml.orig heapster.yaml

23c23
< image: gcr.io/google_containers/heapster-amd64:v1.5.3
---
> image: fishchen/heapster-amd64:v1.5.3
27c27
< - --source=kubernetes:https://kubernetes.default
---
> - --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250

[k8s@kube-server influxdb]$

Since kubelet only listens for HTTPS requests on 10250, the relevant parameters are added.

[k8s@kube-server influxdb]$ cp influxdb.yaml{,.orig}

[k8s@kube-server influxdb]$ vi influxdb.yaml

[k8s@kube-server influxdb]$ diff influxdb.yaml.orig influxdb.yaml

16c16
< image: gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
---
> image: fishchen/heapster-influxdb-amd64:v1.3.3

[k8s@kube-server influxdb]$

Apply all definition files

[k8s@kube-server influxdb]$ pwd

/home/k8s/heapster-1.5.3/deploy/kube-config/influxdb

[k8s@kube-server influxdb]$ ls *.yaml

grafana.yaml heapster.yaml influxdb.yaml

[k8s@kube-server influxdb]$ kubectl create -f .

deployment.extensions "monitoring-grafana" created

service "monitoring-grafana" created

serviceaccount "heapster" created

deployment.extensions "heapster" created

service "heapster" created

deployment.extensions "monitoring-influxdb" created

service "monitoring-influxdb" created

[k8s@kube-server influxdb]$

$ cd ../rbac/

$ pwd

/home/k8s/heapster-1.5.3/deploy/kube-config/rbac

$ ls

heapster-rbac.yaml

[k8s@kube-server rbac]$ cp heapster-rbac.yaml{,.orig}

[k8s@kube-server rbac]$ vi heapster-rbac.yaml

[k8s@kube-server rbac]$ diff heapster-rbac.yaml.orig heapster-rbac.yaml

4c4
< name: heapster
---
> name: heapster-kubelet-api
8c8
< name: system:heapster
---
> name: system:kubelet-api-admin

[k8s@kube-server rbac]$ kubectl create -f heapster-rbac.yaml

clusterrolebinding.rbac.authorization.k8s.io "heapster-kubelet-api" created

[k8s@kube-server rbac]$

Bind the ServiceAccount kube-system:heapster to the ClusterRole system:kubelet-api-admin, granting it permission to call the kubelet API.

Check the results

[k8s@kube-server rbac]$ kubectl get pods -n kube-system | grep -E 'heapster|monitoring'

heapster-7648ffc7c9-qfvtd 1/1 Running 0 18m

monitoring-grafana-5986995c7b-dlqn4 0/1 ImagePullBackOff 0 18m

monitoring-influxdb-f75847d48-pd97v 1/1 Running 0 18m

Check the kubernetes dashboard UI: it can now correctly display CPU, memory, load, and other statistics and charts for each Node and Pod.

Access grafana

1. Access through kube-apiserver:

Get the monitoring-grafana service URL:

[k8s@kube-server influxdb]$ kubectl cluster-info

Kubernetes master is running at https://172.16.10.100:6443

CoreDNS is running at https://172.16.10.100:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy

Heapster is running at https://172.16.10.100:6443/api/v1/namespaces/kube-system/services/heapster/proxy

kubernetes-dashboard is running at https://172.16.10.100:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

monitoring-grafana is running at https://172.16.10.100:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy

monitoring-influxdb is running at https://172.16.10.100:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

Browse to: https://172.16.10.100:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy

Note: I assumed a VirtualBox port forward would be needed, but it turns out the Host-Only network attached as the second NIC of kube-server and kube-node1/2/3 is directly reachable from the host PC, including the VMs' service ports.

  2. Access through kubectl proxy:

Start the proxy:

kubectl proxy --address='172.16.10.100' --port=8086 --accept-hosts='^*$'

Starting to serve on 172.16.10.100:8086

Browse to: http://172.16.10.100:8086/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/?orgId=1

  3. Access through NodePort:

[k8s@kube-server influxdb]$ kubectl get svc -n kube-system|grep -E 'monitoring|heapster'

heapster ClusterIP 10.254.14.104 <none> 80/TCP 58m

monitoring-grafana NodePort 10.254.36.0 <none> 80:8995/TCP 58m

monitoring-influxdb ClusterIP 10.254.206.219 <none> 8086/TCP 58m

grafana listens on NodePort 8995;

Browse to: http://172.16.10.101:8995/?orgId=1

9.4 Deploy the metrics-server add-on

metrics-server provides node and pod resource-usage metrics through the API; it is currently used mainly by HPA autoscaling and the scheduler.

Create the certificate used by metrics-server

Create the metrics-server certificate signing request:

cat > metrics-server-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "testcorp"
    }
  ]
}
EOF

Note: the CN is aggregator, and it must match kube-apiserver's --requestheader-allowed-names parameter.

Generate the metrics-server certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

-ca-key=/etc/kubernetes/cert/ca-key.pem \

-config=/etc/kubernetes/cert/ca-config.json \

-profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server

Copy the generated certificate and private key to the kube-apiserver node:

cp metrics-server*.pem /etc/kubernetes/cert/

Modify the kubernetes control-plane component configuration to support metrics-server

Add the following parameters to kube-apiserver:

--requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem

--requestheader-allowed-names=aggregator

--requestheader-extra-headers-prefix="X-Remote-Extra-"

--requestheader-group-headers=X-Remote-Group

--requestheader-username-headers=X-Remote-User

--proxy-client-cert-file=/etc/kubernetes/cert/metrics-server.pem

--proxy-client-key-file=/etc/kubernetes/cert/metrics-server-key.pem

--runtime-config=api/all=true

--enable-aggregator-routing=true

--requestheader-XXX and --proxy-client-XXX are kube-apiserver aggregation-layer parameters, required by metrics-server & HPA;

--requestheader-client-ca-file: the CA used to sign the certificates specified by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;

If --requestheader-allowed-names is not empty, the CN of the --proxy-client-cert-file certificate must appear in the allowed-names list; the default is aggregator;

If kube-proxy is not running on the kube-apiserver machine, the --enable-aggregator-routing=true parameter is also required.

Note: the CA certificate specified by requestheader-client-ca-file must support both client auth and server auth.

Add the following parameter to kube-controller-manager:

--horizontal-pod-autoscaler-use-rest-clients=true

It configures the HPA controller to fetch metrics data with a REST client.

After changing startup parameters, restart the services for the changes to take effect.

systemctl daemon-reload

systemctl restart kube-apiserver && systemctl status kube-apiserver

systemctl daemon-reload

systemctl restart kube-controller-manager && systemctl status kube-controller-manager

Modify the add-on configuration files

The metrics-server add-on lives in kubernetes' cluster/addons/metrics-server/ directory.

Modify the metrics-server-deployment file:

[k8s@kube-server metrics-server]$ cp metrics-server-deployment.yaml{,.orig}

[k8s@kube-server metrics-server]$ vi metrics-server-deployment.yaml

[k8s@kube-server metrics-server]$ diff metrics-server-deployment.yaml.orig metrics-server-deployment.yaml

51c51
< image: k8s.gcr.io/metrics-server-amd64:v0.2.1
---
> image: mirrorgooglecontainers/metrics-server-amd64:v0.2.1
54c54
< - --source=kubernetes.summary_api:''
---
> - --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250
60c60
< image: k8s.gcr.io/addon-resizer:1.8.1
---
> image: siriuszg/addon-resizer:1.8.1

[k8s@kube-server metrics-server]$

metrics-server's parameter format is similar to heapster's. Since kubelet only listens for HTTPS requests on 10250, the relevant parameters are added.

Grant the kube-system:metrics-server ServiceAccount permission to access the kubelet API:

[k8s@kube-server metrics-server]$ cat auth-kubelet.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:kubelet-api-admin
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
[k8s@kube-server metrics-server]$

A new ClusterRoleBinding definition file grants the required permission.

Create metrics-server

[k8s@kube-server metrics-server]$ pwd

/home/k8s/kubernetes/cluster/addons/metrics-server

[k8s@kube-server metrics-server]$ ls -l *.yaml

-rw-rw-r--. 1 k8s k8s 398 Jun 4 23:17 auth-delegator.yaml

-rw-rw-r--. 1 k8s k8s 404 Jun 29 13:40 auth-kubelet.yaml

-rw-rw-r--. 1 k8s k8s 419 Jun 4 23:17 auth-reader.yaml

-rw-rw-r--. 1 k8s k8s 393 Jun 4 23:17 metrics-apiservice.yaml

-rw-rw-r--. 1 k8s k8s 2650 Jun 29 13:21 metrics-server-deployment.yaml

-rw-rw-r--. 1 k8s k8s 336 Jun 4 23:17 metrics-server-service.yaml

-rw-rw-r--. 1 k8s k8s 801 Jun 4 23:17 resource-reader.yaml

[k8s@kube-server metrics-server]$ kubectl create -f .

clusterrolebinding.rbac.authorization.k8s.io "metrics-server:system:auth-delegator" created

clusterrolebinding.rbac.authorization.k8s.io "metrics-server:system:kubelet-api-admin" created

rolebinding.rbac.authorization.k8s.io "metrics-server-auth-reader" created

apiservice.apiregistration.k8s.io "v1beta1.metrics.k8s.io" created

serviceaccount "metrics-server" created

configmap "metrics-server-config" created

deployment.extensions "metrics-server-v0.2.1" created

service "metrics-server" created

clusterrole.rbac.authorization.k8s.io "system:metrics-server" created

clusterrolebinding.rbac.authorization.k8s.io "system:metrics-server" created

[k8s@kube-server metrics-server]$

Check that it is running

[k8s@kube-server metrics-server]$ kubectl get pods -n kube-system |grep metrics-server

metrics-server-v0.2.1-86946dfbfb-4fxvz 2/2 Running 0 5m

[k8s@kube-server metrics-server]$ kubectl get svc -n kube-system|grep metrics-server

metrics-server ClusterIP 10.254.71.71 <none> 443/TCP 6m

View the metrics exposed by metrics-server

APIs exposed by metrics-server: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md

1. Access through kube-apiserver or kubectl proxy:

https://172.16.10.100:6443/apis/metrics.k8s.io/v1beta1/nodes

https://172.16.10.100:6443/apis/metrics.k8s.io/v1beta1/nodes/<node-name>

https://172.16.10.100:6443/apis/metrics.k8s.io/v1beta1/pods

https://172.16.10.100:6443/apis/metrics.k8s.io/v1beta1/namespace/<namespace-name>/pods/<pod-name>
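These endpoints require client authentication. For example, they can be queried with curl and the admin certificate, reusing the same cert and key paths this document's problem notes use elsewhere (adjust to your environment):

```shell
# Query the metrics API directly through kube-apiserver with the admin
# client certificate (paths follow this document's environment).
curl -sSL --cacert /etc/kubernetes/cert/ca.pem \
     --cert /home/k8s/admin.pem --key /home/k8s/admin-key.pem \
     https://172.16.10.100:6443/apis/metrics.k8s.io/v1beta1/nodes
```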

2. Access directly with kubectl:

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes

kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/<node-name>

kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespace/<namespace-name>/pods/<pod-name>

kubectl get --raw "/apis/metrics.k8s.io/v1beta1" | jq .

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .

Note: the usage fields returned by /apis/metrics.k8s.io/v1beta1/nodes and /apis/metrics.k8s.io/v1beta1/pods include both CPU and Memory.
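Once the metrics API responds, the same data can also be viewed through kubectl's built-in convenience commands. Shown as a sketch only; whether these succeed depends on the metrics-server wiring above and was not verified in this environment:

```shell
# Human-readable views over the metrics.k8s.io API (cluster-dependent):
kubectl top nodes
kubectl top pods -n kube-system
```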

Note: the metrics queries above failed with an error when I executed them, and I have not found the answer yet. The main error message is:

[k8s@kube-server metrics-server]$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .

Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "aggregator" cannot list nodes.metrics.k8s.io at the cluster scope.
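One possible workaround for the Forbidden error, sketched below but not tested here, is to explicitly bind the front-proxy user aggregator (the CN of the proxy client certificate) to the system:metrics-server ClusterRole that the addon created. The binding name aggregator-metrics-reader is invented for this example:

```shell
# Untested workaround sketch: grant the "aggregator" user read access to
# the metrics API by binding it to the system:metrics-server ClusterRole.
# The binding name "aggregator-metrics-reader" is made up for this example.
cat > aggregator-metrics-reader.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aggregator-metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: aggregator
EOF
# Review the manifest, then apply it:
# kubectl create -f aggregator-metrics-reader.yaml
```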

9.5 EFK plugin

The directory corresponding to EFK: kubernetes/cluster/addons/fluentd-elasticsearch

[k8s@kube-server addons]$ pwd

/home/k8s/kubernetes/cluster/addons

[k8s@kube-server addons]$ cd fluentd-elasticsearch/

[k8s@kube-server fluentd-elasticsearch]$ ls

es-image es-statefulset.yaml fluentd-es-ds.yaml kibana-deployment.yaml OWNERS README.md

es-service.yaml fluentd-es-configmap.yaml fluentd-es-image kibana-service.yaml podsecuritypolicies

[k8s@kube-server fluentd-elasticsearch]$

Modify the definition files

$ cp es-statefulset.yaml{,.orig}

$ diff es-statefulset.yaml{,.orig}

76c76
< - image: netonline/elasticsearch:v5.6.4
---
> - image: k8s.gcr.io/elasticsearch:v5.6.4

$ cp fluentd-es-ds.yaml{,.orig}

$ diff fluentd-es-ds.yaml{,.orig}

79c79
< image: netonline/fluentd-elasticsearch:v2.0.4
---
> image: k8s.gcr.io/fluentd-elasticsearch:v2.0.4

Label the Nodes

The fluentd-es DaemonSet is only scheduled onto Nodes carrying the label beta.kubernetes.io/fluentd-ds-ready=true, so this label must be set on every Node where you want fluentd to run.

[k8s@kube-server fluentd-elasticsearch]$ kubectl get nodes

NAME STATUS ROLES AGE VERSION

kube-node1 Ready <none> 3d v1.10.4

kube-node2 Ready <none> 3d v1.10.4

kube-node3 Ready <none> 3d v1.10.4

[k8s@kube-server fluentd-elasticsearch]$ kubectl label nodes kube-node3 beta.kubernetes.io/fluentd-ds-ready=true

node "kube-node3" labeled

[k8s@kube-server fluentd-elasticsearch]$
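To run fluentd on all nodes rather than only kube-node3, the same label can be applied in a loop. The sketch below only prints the commands for review (the node names are the ones from this environment); drop the echo to actually run them:

```shell
# Print one "kubectl label" command per node so the list can be reviewed
# first; remove the echo to execute them for real.
for n in kube-node1 kube-node2 kube-node3; do
  echo kubectl label nodes "$n" beta.kubernetes.io/fluentd-ds-ready=true
done
```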

Apply the definition files

[k8s@kube-server fluentd-elasticsearch]$ kubectl create -f .

service "elasticsearch-logging" created

serviceaccount "elasticsearch-logging" created

clusterrole.rbac.authorization.k8s.io "elasticsearch-logging" created

clusterrolebinding.rbac.authorization.k8s.io "elasticsearch-logging" created

statefulset.apps "elasticsearch-logging" created

configmap "fluentd-es-config-v0.1.4" created

serviceaccount "fluentd-es" created

clusterrole.rbac.authorization.k8s.io "fluentd-es" created

clusterrolebinding.rbac.authorization.k8s.io "fluentd-es" created

daemonset.apps "fluentd-es-v2.0.4" created

deployment.apps "kibana-logging" created

service "kibana-logging" created

[k8s@kube-server fluentd-elasticsearch]$

Check the results

kubectl get pods -n kube-system -o wide|grep -E 'elasticsearch|fluentd|kibana'

kubectl get service -n kube-system|grep -E 'elasticsearch|kibana'

The kibana Pod takes a fairly long time on its first start (anywhere up to 20 minutes) to optimize and cache the status pages; you can tail the Pod's log to watch the progress:

kubectl logs kibana-logging-7445dc9757-jbzvd -n kube-system -f

Note: the kibana dashboard can only be viewed after the Kibana pod has finished starting; before that, connections are refused.
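Instead of retrying by hand, you can poll until the pod reports Ready. A sketch, assuming the addon's default k8s-app=kibana-logging label and the kube-system namespace:

```shell
# Wait until the kibana-logging pod's container reports Ready
# (cluster-dependent; label and namespace follow the addon defaults).
until kubectl -n kube-system get pods -l k8s-app=kibana-logging \
      -o jsonpath='{.items[0].status.containerStatuses[0].ready}' 2>/dev/null | grep -q true; do
  sleep 10
done
```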

Access kibana

Access through kube-apiserver:

kubectl cluster-info|grep -E 'Elasticsearch|Kibana'

Access through kubectl proxy:

Create the proxy:

$ kubectl proxy --address='172.16.10.100' --port=8086 --accept-hosts='^*$'

Open this URL in a browser: http://172.16.10.100:8086/api/v1/namespaces/kube-system/services/kibana-logging/proxy

On the Settings -> Indices page, create an index (roughly the equivalent of a database in mysql): check "Index contains time-based events", keep the default logstash-* pattern, and click Create.

A few minutes after the Index is created, the logs aggregated in ElasticSearch start showing up under the Discover menu.
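Before creating the index, you can confirm that Elasticsearch itself is healthy through the same proxy mechanism. A sketch, assuming the kubectl proxy started in the previous step is still listening on 172.16.10.100:8086 and the addon's standard elasticsearch-logging service name:

```shell
# Query Elasticsearch cluster health through the kubectl proxy
# (cluster-dependent; service name follows the addon defaults).
curl -s http://172.16.10.100:8086/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health
```

A "green" or "yellow" status means ES is accepting writes and Kibana should find the logstash-* indices.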

Note: my test environment was built on a very ordinary PC; after bringing up this EFK stack, disk IO simply could not keep up and all sorts of services stopped responding, so in the end I deleted EFK again by hand.


This article is reposted from CSDN: k8s v1.10 deployment notes.
