Private cloud-native Kubernetes deployment

Download the source package:
wget https://dl.k8s.io/v1.12.7/kubernetes-server-linux-amd64.tar.gz

Installation path conventions:
/opt/kubernetes/bin   #binary installation directory
/opt/kubernetes/cfg   #configuration file directory
/opt/kubernetes/log   #log directory
/opt/kubernetes/ssl   #certificate directory

1. Environment planning

Component versions:
Kubernetes 1.12.7
Docker 18.09.0-ce
Etcd 3.3.10
Flanneld 0.10.0
keepalived 1.2.20
haproxy 1.6.8

Add-ons: core-dns, dashboard
Image registries: docker registry, harbor

Main configuration strategy:

Components on the Kubernetes master nodes:

kube-apiserver:
  three-node high availability via keepalived and haproxy;
  insecure port 8080 and anonymous access disabled;
  https requests served on secure port 6443;
  strict authentication and authorization (x509, token, RBAC);
  bootstrap token authentication enabled, supporting kubelet TLS bootstrapping;
  https (encrypted) communication with kubelet and etcd.

kube-controller-manager:
  three-node high availability;
  insecure port disabled; https served on secure port 10252;
  uses a kubeconfig to access the apiserver's secure port;
  automatically approves kubelet certificate signing requests (CSRs) and rotates certificates on expiry;
  each controller uses its own ServiceAccount to access the apiserver.

kube-scheduler:
  three-node high availability;
  uses a kubeconfig to access the apiserver's secure port.

#Note: these three components are currently deployed on the same machines, since kube-scheduler, kube-controller-manager and kube-apiserver are tightly coupled. Only one kube-scheduler and one kube-controller-manager process can be active at a time; when several run, a leader is chosen by election.

Components on the Kubernetes worker nodes:

kubelet:
  bootstrap tokens are created dynamically with kubeadm rather than configured statically in the apiserver;
  client and server certificates are generated via the TLS bootstrap mechanism and rotated automatically on expiry;
  main parameters are set in a KubeletConfiguration-type JSON file;
  read-only port disabled; https requests served on secure port 10250 with authentication and authorization — anonymous and unauthorized access is rejected;
  uses a kubeconfig to access the apiserver's secure port;
  runs on every worker node, receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run and logs;
  registers node information with kube-apiserver on startup; the built-in cadvisor collects and monitors node resource usage.

kube-proxy:
  uses a kubeconfig to access the apiserver's secure port;
  main parameters are set in a KubeProxyConfiguration-type JSON file;
  uses the ipvs proxy mode;
  runs on all worker nodes, watches Service and Endpoint changes in the apiserver, and creates routing rules for service load balancing.

Cluster add-ons:
  DNS: coredns (better features and performance);
  Dashboard: with login authentication;
  Metrics: heapster / metrics-server, accessing the kubelet secure port over https;
  Logging: Elasticsearch, Fluentd, Kibana;
  Registry: docker-registry, harbor.

Server allocation:
Role      IP                Components
master01  192.168.200.101   kube-apiserver kube-controller-manager kube-scheduler
  node01                    kubelet kube-proxy docker flannel etcd haproxy keepalived
master02  192.168.200.102   kube-apiserver kube-controller-manager kube-scheduler
  node02                    kubelet kube-proxy docker flannel etcd haproxy keepalived
master03  192.168.200.103   kube-apiserver kube-controller-manager kube-scheduler
  node03                    kubelet kube-proxy docker flannel etcd

Certificates used by each component:
etcd: ca.pem, etcd-key.pem, etcd.pem

Preparation:
#Set the hostname in /etc/hostname; the hostname for each address is
192.168.200.101 k8s-master1
192.168.200.102 k8s-master2
192.168.200.103 k8s-master3
#Local host resolution
cat /etc/hosts
192.168.200.101 etcd1
192.168.200.102 etcd2
192.168.200.103 etcd3
192.168.200.101 k8s-master1
192.168.200.102 k8s-master2
192.168.200.103 k8s-master3
192.168.200.101 k8s-node1
192.168.200.102 k8s-node2
192.168.200.103 k8s-node3
#Mutual SSH trust between hosts (generate the key on k8s-master1)
ssh-keygen -t rsa
ssh-copy-id k8s-master2
ssh-copy-id k8s-master3
#Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
iptables -P FORWARD ACCEPT
#Disable SELinux
setenforce 0
grep SELINUX /etc/selinux/config
#Disable swap
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
#Load kernel modules
modprobe br_netfilter
modprobe ip_vs
#Set kernel parameters
#tcp_tw_recycle conflicts with the NAT used by Kubernetes and must be disabled, otherwise services become unreachable;
#disable the unused IPv6 stack to avoid triggering a docker bug
cd /etc/sysctl.d/
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf
#Check whether the kernel and its modules are suitable for running docker (Linux)
curl https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh > check-config.sh
sh check-config.sh

2. Install docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.200.101:5000"]
}
EOF
systemctl start docker
systemctl enable docker
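A malformed /etc/docker/daemon.json prevents dockerd from starting at all, so it is worth validating the JSON before restarting Docker. A minimal sketch (the /tmp staging path is illustrative, and python3 is assumed to be available as the validator — any JSON checker works):

```shell
# Stage the daemon.json from this section, validate it, and only then
# install it. A JSON syntax error here would stop dockerd from starting.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["192.168.200.101:5000"]
}
EOF
if python3 -m json.tool /tmp/daemon.json > /dev/null; then
    echo "daemon.json: OK"
    # cp /tmp/daemon.json /etc/docker/daemon.json && systemctl restart docker
else
    echo "daemon.json: INVALID JSON" >&2
fi
```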
3. Self-signed TLS certificates

The Kubernetes components use TLS certificates to encrypt their communication. This document uses CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) certificate and key files. The CA certificate is self-signed and is used to sign all other TLS certificates created later.

#The steps below are performed on k8s-master1. The certificates only need to be created once; when a new node is later added to the cluster, simply copy the certificates under /opt/kubernetes/ssl to the new node.
#Install cfssl
curl -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -o /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x /usr/local/bin/cfssl*
#Create the CA configuration directory
mkdir -p /opt/kubernetes/ssl
#Create a temporary directory for CA certificates
mkdir /root/ssl
cd /root/ssl
#TLS certificate creation
#Generated with a script: cat certificate.sh
#Create the CA configuration file
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
#Create the CA certificate signing request
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
#=======================================================================================================================
#Create the etcd certificate signing request file
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.200.101",
    "192.168.200.102",
    "192.168.200.103"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
#=======================================================================================================================
#Create the flannel certificate signing request file
cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
#=======================================================================================================================
#Create the kubectl (admin) certificate signing request
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#=======================================================================================================================
#Create the kube-apiserver certificate signing request
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.200.101",
    "192.168.200.102",
    "192.168.200.103",
    "192.168.200.200",
    "10.10.10.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
#=======================================================================================================================
#Create the kube-controller-manager certificate signing request
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.200.101",
    "192.168.200.102",
    "192.168.200.103"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
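The *-csr.json files above differ only in CN, O, and the hosts list; everything else is boilerplate. A small helper (hypothetical — not part of the original certificate.sh) can generate them and cut the repetition:

```shell
# make_csr CN ORG [HOST...] — emit a cfssl CSR JSON on stdout.
# Hypothetical helper: CN, O and hosts are the only fields that vary
# between the csr.json files in this section.
make_csr() {
    cn=$1; org=$2; shift 2
    hosts=""
    for h in "$@"; do hosts="${hosts:+$hosts, }\"$h\""; done
    cat <<EOF
{
  "CN": "$cn",
  "hosts": [ $hosts ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "$org", "OU": "4Paradigm" }
  ]
}
EOF
}

make_csr etcd k8s 127.0.0.1 192.168.200.101 192.168.200.102 192.168.200.103 > etcd-csr.json
make_csr admin system:masters > admin-csr.json
```

Each generated file can then be fed to cfssl gencert exactly as in the commands above.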
#=======================================================================================================================
#Create the kube-scheduler certificate signing request
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.200.101",
    "192.168.200.102",
    "192.168.200.103"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
#=======================================================================================================================
#Create the kube-proxy certificate
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Script notes:
=======================================================================================================================
ca-config.json: can define multiple profiles with different expiry times and usage scenarios; a specific profile is selected when signing a certificate later;
  signing: the certificate can be used to sign other certificates (CA=TRUE in the generated ca.pem);
  server auth: a client can use this CA to verify certificates presented by servers;
  client auth: a server can use this CA to verify certificates presented by clients.
ca-csr.json:
  CN: Common Name — kube-apiserver extracts this field from a certificate as the request's User Name; browsers use it to verify whether a site is legitimate;
  O: Organization — kube-apiserver extracts this field as the request user's Group;
  kube-apiserver uses the extracted User and Group as the identity for RBAC authorization.
etcd-csr.json: the hosts field lists the etcd node IPs or domain names authorized to use this certificate; all three etcd cluster node IPs are listed.
flanneld-csr.json: this certificate is only used by flanneld as a client certificate, so the hosts field is empty.
admin-csr.json: kubectl reads the kube-apiserver address, certificate and user name from ~/.kube/config by default; without that configuration, kubectl commands may fail. It only needs to be set up once and can then be copied to the other masters.
  O is system:masters; when kube-apiserver receives this certificate it sets the request's Group to system:masters;
  the predefined ClusterRoleBinding cluster-admin binds Group system:masters to Role cluster-admin, which grants access to all APIs;
  this certificate is only used by kubectl as a client certificate, so the hosts field is empty.
kubernetes-csr.json: the hosts field lists the IPs and domain names authorized to use the certificate — here the VIP, the apiserver node IPs, and the kubernetes service IP and domain names;
  the last character of a domain name must not be "." (e.g. kubernetes.default.svc.cluster.local. is invalid), or parsing fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.";
  when using a domain other than cluster.local, e.g. bqding.com, change the last two names in the list to kubernetes.default.svc.bqding and kubernetes.default.svc.bqding.com.
kube-controller-manager-csr.json: the cluster contains 3 kube-controller-manager nodes; after startup a leader is produced by competitive election and the other nodes block. When the leader becomes unavailable, the remaining nodes elect a new leader, keeping the service available.
  For secure communication, kube-controller-manager uses this certificate in two cases: 1. when talking to kube-apiserver's secure port; 2. when serving prometheus-format metrics on its own secure port (https, 10252);
  the hosts list contains all kube-controller-manager node IPs;
  CN and O are system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.
kube-scheduler-csr.json: the cluster contains 3 kube-scheduler nodes; after startup a leader is produced by competitive election and the other nodes block. When the leader becomes unavailable, the remaining nodes elect a new leader, keeping the service available.
  For secure communication, kube-scheduler uses this certificate in two cases: 1. when talking to kube-apiserver's secure port; 2. when serving prometheus-format metrics on its own secure port (https, 10251);
  the hosts list contains all kube-scheduler node IPs;
  CN and O are system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
kube-proxy-csr.json:
  CN: sets the certificate's User to system:kube-proxy;
  the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants the proxy-related kube-apiserver API calls;
  this certificate is only used by kube-proxy as a client certificate, so the hosts field is empty.
==========================================================================================================================
Run the script:
sh certificate.sh
On success it generates the following files in /root/ssl:
admin.csr admin-csr.json admin-key.pem admin.pem
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
certificate.sh
etcd.csr etcd-csr.json etcd-key.pem etcd.pem
flanneld.csr flanneld-csr.json flanneld-key.pem flanneld.pem
kube-controller-manager.csr kube-controller-manager-csr.json kube-controller-manager-key.pem kube-controller-manager.pem
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
kube-scheduler.csr kube-scheduler-csr.json kube-scheduler-key.pem kube-scheduler.pem
kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem

Copy the certificates to the server of each component; per-component requirements:
etcd                     ca.pem, ca-key.pem, etcd-key.pem, etcd.pem
flannel                  ca.pem, ca-key.pem, flanneld-key.pem, flanneld.pem
kube-apiserver           ca.pem, ca-key.pem, kubernetes-key.pem, kubernetes.pem
kube-controller-manager  ca.pem, ca-key.pem, kube-controller-manager-key.pem, kube-controller-manager.pem
kube-scheduler           ca.pem, ca-key.pem, kube-scheduler-key.pem, kube-scheduler.pem
kubelet                  ca.pem, ca-key.pem
kube-proxy               ca.pem, ca-key.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl                  ca.pem, ca-key.pem, admin.pem, admin-key.pem
In this document every component runs on all three servers, so all certificates are needed everywhere:
mv ./*.pem /opt/kubernetes/ssl/
salt-cp "192.168.200.102 192.168.200.103" /opt/kubernetes/ssl/* /opt/kubernetes/ssl/

4. Deploy the etcd cluster (192.168.200.101/102/103)
Create the kubernetes directories:
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
Upload the etcd release etcd-v3.3.10-linux-amd64.tar.gz, then:
tar xf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64
Move the etcd binaries into the kubernetes working bin directory:
cp etcd etcdctl /opt/kubernetes/bin/
Generate the configuration and start etcd with a script: cat etcd.sh
#!/bin/bash
ETCD_NAME=${1:-"etcd01"}
ETCD_IP=${2:-"127.0.0.1"}
ETCD_CLUSTER=${3:-"etcd01=http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \\
--name=\${ETCD_NAME} \\
--data-dir=\${ETCD_DATA_DIR} \\
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \\
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \\
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \\
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \\
--initial-cluster-state=new \\
--cert-file=/opt/kubernetes/ssl/etcd.pem \\
--key-file=/opt/kubernetes/ssl/etcd-key.pem \\
--peer-cert-file=/opt/kubernetes/ssl/etcd.pem \\
--peer-key-file=/opt/kubernetes/ssl/etcd-key.pem \\
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

#Start etcd on each of the three nodes
chmod +x etcd.sh
./etcd.sh etcd01 192.168.200.101 etcd01=https://192.168.200.101:2380,etcd02=https://192.168.200.102:2380,etcd03=https://192.168.200.103:2380
./etcd.sh etcd02 192.168.200.102 etcd01=https://192.168.200.101:2380,etcd02=https://192.168.200.102:2380,etcd03=https://192.168.200.103:2380
./etcd.sh etcd03 192.168.200.103 etcd01=https://192.168.200.101:2380,etcd02=https://192.168.200.102:2380,etcd03=https://192.168.200.103:2380
#Check the etcd cluster health
/opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health

5. Deploy the flanneld cluster (192.168.200.101/102/103)
#Background
Overlay network: a virtual network layered on top of the base network; its hosts are connected through virtual links.
VXLAN: encapsulates the original packet in UDP, wraps it with the underlay network's IP/MAC as the outer header, transmits it over Ethernet, and at the destination the tunnel endpoint decapsulates it and delivers the data to the target address.
Flannel: one kind of overlay network — it likewise wraps the original packet inside another network packet for routing, forwarding and communication; it currently supports UDP, VXLAN, AWS VPC, GCE routing and other forwarding backends.
Other mainstream options for multi-host container networking: tunnel solutions (Weave, Open vSwitch) and routing solutions (Calico).
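The flannel network definition is passed to etcdctl below as raw JSON on the command line, where a quoting mistake is easy to make. A small sketch that keeps the config in a variable and validates it before writing (python3 as the validator is an assumption; any JSON checker works):

```shell
# Keep the flannel network config in a variable and check it is valid
# JSON before pushing it into etcd with the etcdctl set command below.
FLANNEL_CONFIG='{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
if echo "$FLANNEL_CONFIG" | python3 -m json.tool > /dev/null; then
    echo "flannel config: OK"
    # /opt/kubernetes/bin/etcdctl ... set /coreos.com/network/config "$FLANNEL_CONFIG"
else
    echo "flannel config: INVALID JSON" >&2
fi
```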
#Write the allocated subnet range into etcd for flanneld to use (run on one node only)
/opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem --key-file=/opt/kubernetes/ssl/etcd-key.pem --endpoints="https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
#Unpack the binary release
# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar xf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/
#Configure flannel / systemd management
#Configured with a script: cat flanneld.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/flanneld.pem \
-etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker

#Notes:
The mk-docker-opts.sh script writes the Pod subnet allocated to flanneld into /run/flannel/docker; when docker starts later, it uses the parameter values in this file to configure the docker0 bridge.
flanneld uses the interface of the system default route to communicate with other nodes; on machines with multiple network interfaces (e.g. internal plus public), use the -iface=enpxx option to choose the communication interface.
/run/flannel/subnet.env holds the subnet information flannel allocated to docker.
#Start flannel (the same command on all three nodes)
./flanneld.sh https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379
#Verify the network
/opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/flanneld.pem --key-file=/opt/kubernetes/ssl/flanneld-key.pem --endpoints="https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379" ls /coreos.com/network/subnets

6. Create the node configuration files
On the master, in the temporary certificate directory /root/ssl, use the following script to create the configuration files the nodes need: cat kubeconfig.sh
# Create the TLS Bootstrapping token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
#----------------------
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://192.168.200.101:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Note: the kubectl binary must be present when this script runs; upload the kubectl binary to /usr/bin first.
chmod +x kubeconfig.sh
./kubeconfig.sh
The generated node configuration files:
token.csv             //used for kubelet to kube-apiserver communication
bootstrap.kubeconfig  //used to generate the kubelet configuration file
kube-proxy.kubeconfig //kube-proxy configuration file
Distribute the node configuration files to the nodes:
salt-cp "192.168.200.101 192.168.200.102 192.168.200.103" ./*.kubeconfig /opt/kubernetes/cfg/

7. Deploy the master components
#Download the k8s binary release
https://github.com/kubernetes/kubernetes/
Move the binaries into the working bin directory:
salt-cp "192.168.200.101 192.168.200.102 192.168.200.103" kubectl kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/
Move the token file into the configuration directory:
salt-cp "192.168.200.101 192.168.200.102 192.168.200.103" /root/ssl/token.csv /opt/kubernetes/cfg/
#Use the scripts apiserver.sh, scheduler.sh and controller-manager.sh below to deploy kube-apiserver / kube-scheduler / kube-controller-manager on each of the three masters.
#Deploy kube-apiserver: cat apiserver.sh
#!/bin/bash
MASTER_ADDRESS=${1:-"192.168.200.101"}
ETCD_SERVERS=${2:-"http://192.168.200.101:2379"}
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--insecure-bind-address=127.0.0.1 \\
--bind-address=${MASTER_ADDRESS} \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

# Configuration parameter notes:
--experimental-encryption-provider-config: enables the encryption feature;
--authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes and rejects unauthorized requests;
--enable-admission-plugins: enables ServiceAccount and NodeRestriction;
--service-account-key-file: the public key used to sign ServiceAccount tokens; pairs with kube-controller-manager's --service-account-private-key-file, which specifies the private key;
--tls-*-file: the certificate, private key and CA files used by the apiserver;
--client-ca-file: used to verify the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
--kubelet-client-certificate / --kubelet-client-key: if specified, the apiserver accesses the kubelet APIs over https; RBAC rules must be defined for the certificate's user (the kubernetes*.pem certificates above use the user "kubernetes"), otherwise kubelet API calls report unauthorized;
--bind-address: must not be 127.0.0.1, or the secure port 6443 is unreachable from outside;
--insecure-port=0: closes the insecure port (8080);
--service-cluster-ip-range: the Service cluster IP range;
--service-node-port-range: the NodePort port range;
--runtime-config=api/all=true: enables APIs of all versions, e.g. autoscaling/v2alpha1;
--enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
--apiserver-count=3: sets the cluster run mode; per the original notes, multiple kube-apiservers elect one working node via leader election while the others block.

#Deploy kube-controller-manager
#View the script: cat controller-manager.sh
#!/bin/bash
MASTER_ADDRESS=${1:-"127.0.0.1"}
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.10.10.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

# Configuration parameter notes:
--address: listen on 127.0.0.1;
--kubeconfig: path of the kubeconfig file kube-controller-manager uses to connect to and authenticate with kube-apiserver;
--cluster-signing-*-file: sign the certificates created via TLS Bootstrap;
--experimental-cluster-signing-duration: validity period of TLS Bootstrap certificates;
--root-ca-file: the CA certificate placed into container ServiceAccounts, used to validate kube-apiserver's certificate;
--service-account-private-key-file: the private key used to sign ServiceAccount tokens; must pair with the public key given to kube-apiserver via --service-account-key-file;
--service-cluster-ip-range: the Service cluster IP range; must match the same parameter on kube-apiserver;
--leader-elect=true: cluster run mode with leader election; the elected leader does the work while the other nodes block;
--feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
--controllers=*,bootstrapsigner,tokencleaner: the list of controllers to enable; tokencleaner automatically cleans up expired bootstrap tokens;
--horizontal-pod-autoscaler-*: custom-metrics parameters, supporting autoscaling/v2alpha1;
--tls-cert-file / --tls-private-key-file: server certificate and key used when serving metrics over https;
--use-service-account-credentials=true: each controller uses its own ServiceAccount credentials to access the apiserver.

#Deploy kube-scheduler
#View the script: cat scheduler.sh
#!/bin/bash
MASTER_ADDRESS=${1:-"127.0.0.1"}
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

Start the components — run on each of the three masters respectively:
./apiserver.sh 192.168.200.101 https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379
./scheduler.sh
./controller-manager.sh
./apiserver.sh 192.168.200.102 https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379
./scheduler.sh
./controller-manager.sh
./apiserver.sh 192.168.200.103 https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379
./scheduler.sh
./controller-manager.sh
Check the component status:
kubectl get cs

8. Deploy the node components
mv kubelet kube-proxy /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/*
On the nodes, start kubelet with a script: cat kubelet.sh
#!/bin/bash
NODE_ADDRESS=${1:-"192.168.200.102"}
DNS_SERVER_IP=${2:-"10.10.10.2"}
cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--allow-privileged=true \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
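Once several kubelets request certificates, picking the Pending CSRs out of kubectl get csr by hand (as done in the signing step below) gets tedious. A sketch of the filter, demonstrated here against a hard-coded sample of the output so it can run without a cluster:

```shell
# Extract the names of Pending CSRs. The sample output is hard-coded
# for illustration; against a live cluster, pipe `kubectl get csr`
# through the same awk filter instead.
sample='NAME AGE REQUESTOR CONDITION
node-csr-QmkBSqwpZJnC5CJyowdOwYi_SvD2Q5h_e9l-axZRf3s 27s kubelet-bootstrap Pending
node-csr-piPDu1XYXFMdWSKyucooft7bc-L5dfvgCiKjigjXgKI 5m kubelet-bootstrap Pending'
pending=$(echo "$sample" | awk '$NF == "Pending" {print $1}')
echo "$pending"
# Live-cluster equivalent (approves everything Pending — review first):
# kubectl get csr | awk '$NF == "Pending" {print $1}' | xargs -n1 kubectl certificate approve
```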
Before starting kubelet, create the kubelet-bootstrap user on a master:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Otherwise startup fails with: kubelet: error: failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
The reason: kubelet-bootstrap has no permission to create certificates, so the user must be given that permission by binding it to the role.
Start on node01: ./kubelet.sh 192.168.200.101
Start on node02: ./kubelet.sh 192.168.200.102
Start on node03: ./kubelet.sh 192.168.200.103
On a master, check the signing status:
kubectl get csr
The requests are waiting to be signed:
NAME                                                  AGE   REQUESTOR          CONDITION
node-csr-QmkBSqwpZJnC5CJyowdOwYi_SvD2Q5h_e9l-axZRf3s  27s   kubelet-bootstrap  Pending
node-csr-piPDu1XYXFMdWSKyucooft7bc-L5dfvgCiKjigjXgKI  5m    kubelet-bootstrap  Pending
node-csr-uhPEX9cxes5aTt2Ajkjt2Imhl2KgeAh12iLCFjTAisM  21s   kubelet-bootstrap  Pending
Sign them on the master:
kubectl certificate approve node-csr-QmkBSqwpZJnC5CJyowdOwYi_SvD2Q5h_e9l-axZRf3s
kubectl certificate approve node-csr-piPDu1XYXFMdWSKyucooft7bc-L5dfvgCiKjigjXgKI
kubectl certificate approve node-csr-uhPEX9cxes5aTt2Ajkjt2Imhl2KgeAh12iLCFjTAisM
Check again:
kubectl get csr
The requests now show as issued:
NAME                                                  AGE    REQUESTOR          CONDITION
node-csr-QmkBSqwpZJnC5CJyowdOwYi_SvD2Q5h_e9l-axZRf3s  5m     kubelet-bootstrap  Approved,Issued
node-csr-piPDu1XYXFMdWSKyucooft7bc-L5dfvgCiKjigjXgKI  10m    kubelet-bootstrap  Approved,Issued
node-csr-uhPEX9cxes5aTt2Ajkjt2Imhl2KgeAh12iLCFjTAisM  3m11s  kubelet-bootstrap  Approved,Issued
kubectl get node
The nodes show as ready:
NAME             STATUS  ROLES   AGE  VERSION
192.168.200.101  Ready   <none>  1m   v1.12.7
192.168.200.102  Ready   <none>  1m   v1.12.7
192.168.200.103  Ready   <none>  2m   v1.12.7

On the nodes, start kube-proxy with a script: cat proxy.sh
#!/bin/bash
NODE_ADDRESS=${1:-"192.168.200.101"}
cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=${NODE_ADDRESS} \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

Start kube-proxy:
on node1: ./proxy.sh 192.168.200.101
on node2: ./proxy.sh 192.168.200.102
on node3: ./proxy.sh 192.168.200.103

9. Install keepalived and haproxy with salt
The salt-ha.tar.gz directory layout:
└── haproxy //execution directory
    ├── change_haproxy.sh //entry script
    ├── files //keepalived and haproxy configuration directory
    ├── haproxy.init //haproxy init script
    ├── haproxy-outside.cfg //haproxy configuration file
    ├── haproxy.service //haproxy systemd service file
    ├── keepalived.init //keepalived init script
    ├── keepalived.sysconfig //keepalived configuration file
    ├── limits.conf //system limits configuration
    ├── master-keepalived.conf //keepalived MASTER configuration
    ├── notify.sh //keepalived script that checks haproxy liveness
    ├── rsyslog //keepalived log configuration
    ├── rsyslog.conf //keepalived log configuration
    ├── slave-keepalived.conf //keepalived BACKUP configuration
    └── sysctl.conf //kernel parameter configuration
├── haproxy-outside-install.sls //sls file installing haproxy (pulls in keepalived-install.sls and pkg.sls to install keepalived)
├── keepalived-install.sls //sls file installing keepalived
├── pkg.sls //sls file installing dependency packages
└── soft //source package directory
    ├── haproxy-1.8.19.tar.gz
    └── keepalived-2.0.4.tar.gz
Note: in keepalived-install.sls, adjust the master/backup hostnames and the VIP, and make sure the ROUTEID does not clash. In haproxy-outside.cfg, adjust the port and the backend IPs.
sh change_haproxy.sh

10. Point the node components (kubelet, kube-proxy) at VIP:port
First configure kubectl, the command-line management tool for the Kubernetes cluster, used to manage cluster services remotely. kubectl reads the kube-apiserver address, certificate and user name from ~/.kube/config by default; without that configuration, kubectl commands may fail. ~/.kube/config only needs to be created once and can then be copied to the other masters.
#The kubectl certificates (admin.pem / admin-key.pem) were created above, and the kubectl binary was already copied to /usr/local/bin.
#Create ~/.kube/config with the following commands:
# Set the apiserver address and root certificate for the cluster entry "kubernetes"
kubectl config set-cluster kubernetes --server=https://192.168.200.200:8443 --certificate-authority=/opt/kubernetes/ssl/ca.pem
# Set the certificate credentials for the user entry "cluster-admin"
kubectl config set-credentials cluster-admin --certificate-authority=/opt/kubernetes/ssl/ca.pem --client-key=/opt/kubernetes/ssl/admin-key.pem --client-certificate=/opt/kubernetes/ssl/admin.pem
# Set the context "default" with that cluster and user
kubectl config set-context default --cluster=kubernetes --user=cluster-admin
# Use "default" as the default context
kubectl config use-context default
Note: the server address is the cluster's VIP.
Then edit bootstrap.kubeconfig, kubelet.kubeconfig and kube-proxy.kubeconfig under /opt/kubernetes/cfg/ so that each contains:
server: https://192.168.200.200:8443

11. Deploy the Web UI (Dashboard)
Using the Kubernetes manifests dashboard-rbac.yaml, dashboard-deployment.yaml and dashboard-service.yaml:
# kubectl create -f dashboard-rbac.yaml
# kubectl create -f dashboard-deployment.yaml
# kubectl create -f dashboard-service.yaml
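The server: edits in step 10 are a one-line substitution per kubeconfig file, which can be scripted. A hedged sketch using sed, demonstrated on a scratch copy (on a node, apply the same substitution to the three files under /opt/kubernetes/cfg):

```shell
# Rewrite the apiserver address in a kubeconfig to the VIP:8443.
# /tmp/demo.kubeconfig is a scratch fragment for illustration; on a
# node the targets are bootstrap.kubeconfig, kubelet.kubeconfig and
# kube-proxy.kubeconfig in /opt/kubernetes/cfg.
VIP_URL="https://192.168.200.200:8443"
cat > /tmp/demo.kubeconfig <<'EOF'
clusters:
- cluster:
    server: https://192.168.200.101:6443
  name: kubernetes
EOF
sed -i "s#server: .*#server: ${VIP_URL}#" /tmp/demo.kubeconfig
grep 'server:' /tmp/demo.kubeconfig
```

Restart kubelet and kube-proxy afterwards so the new address takes effect.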