Three ways to deploy Kubernetes
Minikube
- Minikube is a tool that quickly spins up a single-node Kubernetes cluster locally; it is intended only for trying out Kubernetes or for day-to-day development
Kubeadm
- Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster
Binary packages
- Download the release binaries from the official site and deploy each component by hand to assemble a Kubernetes cluster
Architecture of a single-master binary deployment
Master node:
The master is made up of four main components: apiserver, scheduler, controller-manager, and etcd.
- apiserver
The apiserver exposes the RESTful Kubernetes API and is the unified entry point for management commands: every create, delete, update, or query on a resource goes through the apiserver before being handed to etcd. As shown in the architecture diagram, kubectl (the client tool shipped with Kubernetes, which internally calls the Kubernetes API) talks directly to the apiserver.
- scheduler
The scheduler assigns Pods to suitable Nodes. Viewed as a black box, its input is a Pod plus a list of Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with built-in scheduling algorithms and also exposes an interface, so users can define their own scheduling algorithm to match their needs.
- controller-manager
If the apiserver does the front-office work, the controller-manager handles the back office. Every resource type has a corresponding controller, and the controller-manager manages all of these controllers. For example, when we create a Pod through the apiserver, the apiserver's job is done once the Pod object is created; after that, keeping the Pod in its desired state is the controllers' job.
- etcd
etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what backs the RESTful API.
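The scheduler's input/output contract described above can be sketched as a tiny black box. The pod name and first-fit policy here are invented for illustration; the real scheduler scores nodes against resource requests, affinity rules, and more:

```shell
#!/bin/bash
# Toy "black box" scheduler: input is a pod plus a node list,
# output is a binding of that pod to exactly one node.
# The pick-the-first-node policy is a made-up stand-in for real scoring.
pod="nginx-pod"                         # hypothetical pod name
nodes="192.168.7.102 192.168.7.103"     # the two nodes used later in this guide
for n in $nodes; do
    binding="$pod -> $n"                # bind to the first candidate and stop
    break
done
echo "$binding"
```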
Node nodes:
Each Node runs three main components: kubelet, kube-proxy, and the Docker engine.
- kube-proxy
kube-proxy implements service discovery and reverse proxying in Kubernetes. It supports TCP and UDP connection forwarding and, by default, distributes client traffic across the set of backend Pods behind a Service using a Round Robin algorithm. For service discovery, kube-proxy uses etcd's watch mechanism to track changes to Service and Endpoint objects in the cluster and maintains a Service-to-Endpoint mapping, so changes to backend Pod IPs stay invisible to clients.
- kubelet
The kubelet is the master's agent on each Node and the most important component there. It maintains and manages all containers on its Node, except containers not created through Kubernetes, which it does not manage. In essence, it drives each Pod's actual running state to match its desired state.
- Docker engine
The container runtime.
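The Round Robin forwarding kube-proxy performs can be illustrated with a minimal sketch. The backend pod IPs are hypothetical, and real kube-proxy programs iptables/ipvs rules rather than running a loop like this:

```shell
#!/bin/bash
# Minimal round-robin selector over a set of backend pod IPs.
backends="172.17.57.2 172.17.24.2 172.17.24.3"   # hypothetical pod IPs
count=0
next_backend() {
    n=$(echo "$backends" | wc -w)                # number of backends
    idx=$((count % n))                           # cycle through them
    count=$((count + 1))
    echo "$backends" | cut -d' ' -f$((idx + 1))  # pick the (idx+1)-th IP
}
next_backend   # first request  -> 172.17.57.2
next_backend   # second request -> 172.17.24.2
next_backend   # third request  -> 172.17.24.3
next_backend   # fourth request wraps around -> 172.17.57.2
```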
Kubernetes network types
- Overlay Network: a virtual networking technique layered on top of the underlay network, in which hosts are connected by virtual links
- VXLAN: encapsulates the original packet in UDP, wraps it with an outer header built from the underlay network's IP/MAC, and transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates it and delivers the data to the target address
- Flannel: one kind of overlay network; it likewise encapsulates the source packet inside another network packet for routing and delivery, and currently supports UDP, VXLAN, AWS VPC, GCE routing, and other forwarding backends
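The UDP encapsulation described above costs header space: outer Ethernet (14 bytes) + outer IP (20) + UDP (8) + VXLAN header (8) comes to 50 bytes, which is why docker0 and the container interfaces later in this guide show mtu 1450 on a standard 1500-byte link. A quick check of the arithmetic:

```shell
#!/bin/bash
# VXLAN encapsulation overhead on a standard Ethernet link.
physical_mtu=1500
overhead=$((14 + 20 + 8 + 8))             # outer Ethernet + IP + UDP + VXLAN
overlay_mtu=$((physical_mtu - overhead))
echo "$overlay_mtu"                        # 1450
```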
Lab deployment (the process is fairly involved; following along is recommended)
- Packages and scripts used in the lab:
Link: https://pan.baidu.com/s/1vcVJSdpbl52nWzA1aKk4pQ
Extraction code: fnx7
1. Lab environment planning
2. Self-signed SSL certificates
| Component | Certificates used |
| --- | --- |
| etcd | ca.pem, server.pem, server-key.pem |
| flannel | ca.pem, server.pem, server-key.pem |
| kube-apiserver | ca.pem, server.pem, server-key.pem |
| kubelet | ca.pem, ca-key.pem |
| kube-proxy | ca.pem, kube-proxy.pem, kube-proxy-key.pem |
| kubectl | ca.pem, admin.pem, admin-key.pem |
3. Deploy the etcd cluster (on the master node)
(1) Create the working directory and the certificate directory
[root@localhost ~]# mkdir k8s
[root@localhost ~]# cd k8s/
[root@localhost k8s]# mkdir etcd-cert
(2) Create a download script for cfssl (the certificate generation toolkit)
[root@localhost k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
#Run the script
[root@localhost k8s]# bash cfssl.sh
[root@localhost k8s]# ls /usr/local/bin/
cfssl cfssl-certinfo cfssljson
//cfssl: generates certificates
//cfssljson: generates certificates from a JSON input file
//cfssl-certinfo: displays certificate information
(3) Generate the certificates
#Define the CA config
[root@localhost k8s]# cd etcd-cert/
[root@localhost etcd-cert]# cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
#Define the CA certificate signing request
[root@localhost etcd-cert]# cat > ca-csr.json <<EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
#Generate the CA certificate, producing ca-key.pem and ca.pem
[root@localhost etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#List the three etcd node IPs for peer-to-peer communication verification:
#192.168.7.100 (master), 192.168.7.102 (node1), 192.168.7.103 (node2)
[root@localhost etcd-cert]# cat > server-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"192.168.7.100",
"192.168.7.102",
"192.168.7.103"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
#Generate the etcd server certificate, producing server-key.pem and server.pem
[root@localhost etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
(4) Download the etcd binary release
(5) Upload the downloaded package into the k8s directory and unpack it
[root@localhost k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@localhost k8s]# ls etcd-v3.3.10-linux-amd64
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
(6) Create etcd's working directory, with subdirectories for config files, binaries, and certificates
[root@localhost k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@localhost k8s]# ls /opt/etcd/
bin cfg ssl
(7) Install the etcd executables
[root@localhost k8s]# cd etcd-v3.3.10-linux-amd64/
[root@localhost etcd-v3.3.10-linux-amd64]# mv etcd etcdctl /opt/etcd/bin/
[root@localhost etcd-v3.3.10-linux-amd64]# ls /opt/etcd/bin/
etcd etcdctl
(8) Copy the certificates into etcd's working directory
[root@localhost k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
[root@localhost k8s]# ls /opt/etcd/ssl/
ca-key.pem ca.pem server-key.pem server.pem
(9) Create and run the etcd.sh script to generate the config file. Since only this one node exists so far and the others cannot join yet, the script will hang waiting for the other nodes
[root@localhost k8s]# vim etcd.sh
#!/bin/bash
#Example invocation of this script:
#example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
#Generate etcd's systemd service unit file
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
#Make the script executable, then start the service
[root@localhost k8s]# chmod +x etcd.sh
#Only the etcd01 node exists right now, so it cannot reach etcd02/etcd03; the resulting error is expected and can be ignored for now
[root@localhost k8s]# ./etcd.sh etcd01 192.168.7.100 etcd02=https://192.168.7.102:2380,etcd03=https://192.168.7.103:2380
#In a new session window, check whether the etcd process is running
[root@localhost k8s]# ps -ef | grep etcd
root 12331 1 2 20:52 ? 00:00:01 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.7.100:2380 --listen-client-urls=https://192.168.7.100:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.7.100:2379 --initial-advertise-peer-urls=https://192.168.7.100:2380 --initial-cluster=etcd01=https://192.168.7.100:2380,etcd02=https://192.168.7.102:2380,etcd03=https://192.168.7.103:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root 12342 10089 0 20:52 pts/0 00:00:00 grep --color=auto etcd
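As a standalone sketch of how etcd.sh turns its three arguments into the ETCD_INITIAL_CLUSTER value (note the script hardcodes the `etcd01=` prefix, which is why the first node must be named etcd01):

```shell
#!/bin/bash
# Mirrors the argument handling inside etcd.sh, runnable on its own.
set -- etcd01 192.168.7.100 etcd02=https://192.168.7.102:2380,etcd03=https://192.168.7.103:2380
ETCD_NAME=$1       # node name
ETCD_IP=$2         # this node's IP
ETCD_CLUSTER=$3    # the remaining cluster members
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
echo "$ETCD_INITIAL_CLUSTER"
```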
(10) Copy etcd's working directory and its systemd unit to the other nodes, then edit each node's /opt/etcd/cfg/etcd config accordingly
#On the master node
[root@localhost k8s]# scp -r /opt/etcd/ [email protected]:/opt
[root@localhost k8s]# scp -r /opt/etcd/ [email protected]:/opt
[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
#Edit the config file on node1 and node2
[root@localhost ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02" //change the node name; it must be unique
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.7.102:2380" //change to this host's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.7.102:2379" //change to this host's IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.7.102:2380" //change to this host's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.7.102:2379" //change to this host's IP
#The lines below are fixed; do not change them
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.7.100:2380,etcd02=https://192.168.7.102:2380,etcd03=https://192.168.7.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#Start etcd
[root@localhost ~]# systemctl start etcd
[root@localhost ~]# systemctl enable etcd
[root@localhost ~]# systemctl status etcd
(11) Check the etcd cluster status
#From the certificate directory
[root@localhost ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.7.100:2379,https://192.168.7.102:2379,https://192.168.7.103:2379" cluster-health
member 57b92743cdbef0be is healthy: got healthy result from https://192.168.7.100:2379
member 823cb89d12a9ab55 is healthy: got healthy result from https://192.168.7.103:2379
member a99d699d2f1c604c is healthy: got healthy result from https://192.168.7.102:2379
cluster is healthy
4. Install Docker on both nodes
(1) Install the dependency packages, add the Docker repo, and install Docker
[root@localhost ~]# yum -y install yum-utils device-mapper-persistent-data lvm2
[root@localhost ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@localhost ~]# yum -y install docker-ce
#Start docker
[root@localhost ~]# systemctl restart docker
[root@localhost ~]# systemctl enable docker
(2) Configure a registry mirror (you can request an accelerator address from Alibaba Cloud)
[root@localhost ~]# tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://xxx.mirror.aliyuncs.com"]
}
EOF
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
(3) Network tuning
[root@localhost ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward=1
[root@localhost ~]# sysctl -p
[root@localhost ~]# service network restart
[root@localhost ~]# systemctl restart docker
5. Install the flannel network component (on the nodes)
(1) On the master, write the overlay network configuration into etcd
[root@localhost ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.7.100:2379,https://192.168.7.102:2379,https://192.168.7.103:2379" set /coreos.com/network/config '{"Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{"Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
#Verify the value that was written
[root@localhost ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.7.100:2379,https://192.168.7.102:2379,https://192.168.7.103:2379" get /coreos.com/network/config
{"Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
(2) Upload the flannel package to every node and unpack it
[root@localhost ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
(3) Create the k8s working directory
[root@localhost ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
#Install the executables
[root@localhost ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
(4) Create and run the flannel.sh script
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
#Write the flannel config file
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
#Write flannel's systemd service unit
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
[root@localhost ~]# bash flannel.sh https://192.168.7.100:2379,https://192.168.7.102:2379,https://192.168.7.103:2379
(5) Configure Docker to use flannel's network
[root@localhost ~]# vim /usr/lib/systemd/system/docker.service
#Change the following lines
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
#Check the generated subnet file
[root@localhost ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.57.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
//note: bip sets the container subnet assigned at startup
DOCKER_NETWORK_OPTIONS=" --bip=172.17.57.1/24 --ip-masq=false --mtu=1450"
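What mk-docker-opts.sh does with this file can be sketched as follows: source the per-option variables and join them into a single DOCKER_NETWORK_OPTIONS value. The temp file below reproduces the subnet.env contents shown above so the sketch runs standalone:

```shell
#!/bin/bash
# Reproduce the subnet.env shown above in a temp file, then combine the
# DOCKER_OPT_* values into DOCKER_NETWORK_OPTIONS the way mk-docker-opts.sh does.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
DOCKER_OPT_BIP="--bip=172.17.57.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
EOF
. "$env_file"                           # source the per-option variables
DOCKER_NETWORK_OPTIONS=" ${DOCKER_OPT_BIP} ${DOCKER_OPT_IPMASQ} ${DOCKER_OPT_MTU}"
echo "$DOCKER_NETWORK_OPTIONS"
rm -f "$env_file"
```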
#Restart the docker service
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
#Check the interfaces: docker0's gateway IP has changed
[root@localhost ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.57.1 netmask 255.255.255.0 broadcast 172.17.57.255
inet6 fe80::42:7cff:feff:d613 prefixlen 64 scopeid 0x20<link>
ether 02:42:7c:ff:d6:13 txqueuelen 0 (Ethernet)
RX packets 4996 bytes 208971 (204.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9953 bytes 7571774 (7.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
(6) Create containers to test connectivity between the two nodes
#Start a container and check its network
[root@localhost ~]# docker run -it centos:7 /bin/bash
[root@433ee230aaeb /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.57.2 netmask 255.255.255.0 broadcast 172.17.57.255
ether 02:42:ac:11:39:02 txqueuelen 0 (Ethernet)
RX packets 14 bytes 1076 (1.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 14 bytes 1204 (1.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
#Ping a container IP on the other node; if it responds, flannel is working
[root@433ee230aaeb /]# ping 172.17.24.2
PING 172.17.24.2 (172.17.24.2) 56(84) bytes of data.
64 bytes from 172.17.24.2: icmp_seq=1 ttl=62 time=0.509 ms
64 bytes from 172.17.24.2: icmp_seq=2 ttl=62 time=0.691 ms
6. Deploy the master components
(1) Upload the master scripts archive and unpack it
[root@localhost k8s]# unzip master.zip
(2) Create the apiserver certificate directory
[root@localhost k8s]# mkdir k8s-cert
(3) Create the certificate generation script and run it
[root@localhost k8s]# cd k8s-cert/
[root@localhost k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#-----------------------
# To make a later move to a multi-master setup easier, the certificate also
# covers the VIP and the load balancer master/backup IPs:
# 192.168.7.100 (master1), 192.168.7.101 (master2), 192.168.7.99 (VIP),
# 192.168.7.104 (lb master), 192.168.7.105 (lb backup)
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.7.100",
"192.168.7.101",
"192.168.7.99",
"192.168.7.104",
"192.168.7.105",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
#-----------------------
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#-----------------------
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
#Run the script
[root@localhost k8s-cert]# bash k8s-cert.sh
[root@localhost k8s-cert]# ls *.pem
admin-key.pem ca-key.pem kube-proxy-key.pem server-key.pem
admin.pem ca.pem kube-proxy.pem server.pem
(4) Create the k8s working directory on the master and copy the apiserver certificates into it
[root@localhost k8s-cert]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@localhost k8s-cert]# cp *.pem /opt/kubernetes/ssl/
[root@localhost k8s-cert]# ls /opt/kubernetes/ssl/
admin-key.pem ca-key.pem kube-proxy-key.pem server-key.pem
admin.pem ca.pem kube-proxy.pem server.pem
(5) Upload the kubernetes server package to /root/k8s and unpack it
[root@localhost k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
(6) Copy the key command binaries from the unpacked archive into the k8s working directory
[root@localhost k8s]# cd /root/k8s/kubernetes/server/bin/
[root@localhost bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
(7) Create the bootstrap user role
#First generate a random serial number to use as the bootstrap token
[root@localhost bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
5fd75f08ee0f22c9d2ae64dcd402b298
#Create the token.csv file
[root@localhost bin]# vim /opt/kubernetes/cfg/token.csv
5fd75f08ee0f22c9d2ae64dcd402b298,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
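The token.csv line has four comma-separated fields: the token, the user name, the user ID, and the quoted group list. A small sketch pulling the entry above apart:

```shell
#!/bin/bash
# Split the token.csv entry into its fields: token, user, uid, "groups".
line='5fd75f08ee0f22c9d2ae64dcd402b298,kubelet-bootstrap,10001,"system:kubelet-bootstrap"'
IFS=, read -r token user uid groups <<EOF
$line
EOF
echo "token:  $token"
echo "user:   $user"
echo "uid:    $uid"
echo "groups: $groups"
```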
(8) With the binaries, token, and certificates ready, start the apiserver
[root@localhost k8s]# bash apiserver.sh 192.168.7.100 https://192.168.7.100:2379,https://192.168.7.102:2379,https://192.168.7.103:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
#Check that the service and its ports are up
[root@localhost k8s]# ps aux | grep apiserver
root 15472 39.1 8.6 421080 333804 ? Ssl 19:37 0:10 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.7.100:2379,https://192.168.7.102:2379,https://192.168.7.103:2379 --bind-address=192.168.7.100 --secure-port=6443 --advertise-address=192.168.7.100 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 15495 0.0 0.0 112724 988 pts/1 S+ 19:38 0:00 grep --color=auto apiserver
[root@localhost k8s]# netstat -natp | grep 6443
tcp 0 0 192.168.7.100:6443 0.0.0.0:* LISTEN 15472/kube-apiserve
tcp 0 0 192.168.7.100:44136 192.168.7.100:6443 ESTABLISHED 15472/kube-apiserve
tcp 0 0 192.168.7.100:6443 192.168.7.100:44136 ESTABLISHED 15472/kube-apiserve
[root@localhost k8s]# netstat -natp | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 15472/kube-apiserve
(9) Start the scheduler service
[root@localhost k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
#Check the service
[root@localhost k8s]# ps aux | grep kube
(10) Start the controller-manager
[root@localhost k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
#Check the service
[root@localhost k8s]# ps aux | grep kube
(11) Check master component status; only ok or true results count as healthy
[root@localhost k8s]# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
7. Install the node components
(1) Upload the node scripts archive and unpack it
[root@localhost ~]# unzip node.zip
Archive: node.zip
inflating: proxy.sh //script that installs and starts kube-proxy
inflating: kubelet.sh //script that installs and starts kubelet
(2) Copy the kubelet and kube-proxy binaries from the k8s archive unpacked on the master to the nodes
[root@localhost bin]# scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
[root@localhost bin]# scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
(3) On the master, create the /root/k8s/kubeconfig working directory, add the script, run it, and copy the generated files to the nodes
- A kubeconfig holds the configuration used to access the cluster
[root@localhost k8s]# mkdir kubeconfig
[root@localhost k8s]# cd kubeconfig/
[root@localhost kubeconfig]# vim kubeconfig
APISERVER=$1
SSL_DIR=$2
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client credential parameters (the token must match /opt/kubernetes/cfg/token.csv)
kubectl config set-credentials kubelet-bootstrap \
--token=5fd75f08ee0f22c9d2ae64dcd402b298 \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=$SSL_DIR/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=$SSL_DIR/kube-proxy.pem \
--client-key=$SSL_DIR/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
#Set the PATH environment variable
[root@localhost kubeconfig]# vim /etc/profile
//append on the last line
export PATH=$PATH:/opt/kubernetes/bin/
[root@localhost kubeconfig]# source /etc/profile
#Generate the config files
[root@localhost kubeconfig]# bash kubeconfig 192.168.7.100 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@localhost kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
#Copy the config files to the nodes
[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/
[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/
#Create the bootstrap cluster role binding so kubelets can connect to the apiserver and request certificate signing
[root@localhost kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
(4) Install and start kubelet on node01
#Install kubelet
[root@localhost ~]# bash kubelet.sh 192.168.7.102
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
#Check that the service started
[root@localhost ~]# ps aux | grep kubelet
[root@localhost ~]# systemctl status kubelet.service
(5) On the master, node1's certificate request is now visible
[root@localhost kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-bPGome_z3ZBCFpug_FyVVoOXCYFuID6MmCO5ymtDQpQ 2m1s kubelet-bootstrap Pending
//Pending: waiting for the cluster to issue this node's certificate
#Approve the certificate on the master
[root@localhost kubeconfig]# kubectl certificate approve node-csr-bPGome_z3ZBCFpug_FyVVoOXCYFuID6MmCO5ymtDQpQ
[root@localhost kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-bPGome_z3ZBCFpug_FyVVoOXCYFuID6MmCO5ymtDQpQ 6m12s kubelet-bootstrap Approved,Issued
//Approved,Issued: the node has been admitted to the cluster
#The cluster now shows node1 joined successfully
[root@localhost kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.7.102 Ready <none> 2m9s v1.12.3
(6) On node1, start the kube-proxy service
[root@localhost ~]# bash proxy.sh 192.168.7.102
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
#Check that the service is running
[root@localhost ~]# systemctl status kube-proxy.service
(7) Deploy node2
#Copy node1's /opt/kubernetes directory to node2
[root@localhost ~]# scp -r /opt/kubernetes/ [email protected]:/opt
#Copy node1's kubelet and kube-proxy service unit files to node2
[root@localhost ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service [email protected]:/usr/lib/systemd/system/
(8) On node2, adjust the config files copied from node1
#First delete the copied certificates; each node must request its own
[root@localhost ~]# cd /opt/kubernetes/ssl/
[root@localhost ssl]# rm -rf *
#Edit the three config files: kubelet, kubelet.config, and kube-proxy
[root@localhost ssl]# cd /opt/kubernetes/cfg/
[root@localhost cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.7.103 \ //change to this host's IP
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
[root@localhost cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.7.103 //change to this host's IP
[root@localhost cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.7.103 \ //change to this host's IP
#Start the services
[root@localhost cfg]# systemctl start kubelet.service
[root@localhost cfg]# systemctl enable kubelet.service
[root@localhost cfg]# systemctl start kube-proxy.service
[root@localhost cfg]# systemctl enable kube-proxy.service
(9) On the master, approve node2's request and issue its certificate
#List the requests
[root@localhost kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-UFj47uNOLQwNmsXwAhVZPg4dtjPGIUL8FZwQaDhTYBI 2m56s kubelet-bootstrap Pending
node-csr-bPGome_z3ZBCFpug_FyVVoOXCYFuID6MmCO5ymtDQpQ 30m kubelet-bootstrap Approved,Issued
#Approve the Pending request
[root@localhost kubeconfig]# kubectl certificate approve node-csr-UFj47uNOLQwNmsXwAhVZPg4dtjPGIUL8FZwQaDhTYBI
[root@localhost kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-UFj47uNOLQwNmsXwAhVZPg4dtjPGIUL8FZwQaDhTYBI 5m23s kubelet-bootstrap Approved,Issued
node-csr-bPGome_z3ZBCFpug_FyVVoOXCYFuID6MmCO5ymtDQpQ 32m kubelet-bootstrap Approved,Issued
(10) Check the cluster from the master; when all nodes show Ready, the single-master k8s deployment is complete
[root@localhost kubeconfig]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.7.102 Ready <none> 26m v1.12.3
192.168.7.103 Ready <none> 24s v1.12.3