Kubernetes high-availability setup
Tags (space-separated): kubernetes series
1: Kubernetes high availability with kubeadm
2: System initialization
2.1 Host names
192.168.100.11 node01.flyfish
192.168.100.12 node02.flyfish
192.168.100.13 node03.flyfish
192.168.100.14 node04.flyfish
192.168.100.15 node05.flyfish
192.168.100.16 node06.flyfish
192.168.100.17 node07.flyfish
----
node01.flyfish / node02.flyfish / node03.flyfish are the master nodes
node04.flyfish / node05.flyfish / node06.flyfish are the worker nodes
node07.flyfish is the test node
The keepalived cluster VIP is 192.168.100.100
2.2 Stop firewalld and clear the iptables and SELinux rules
Run on every node:
systemctl stop firewalld && systemctl disable firewalld && yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
Disable SELinux and swap:
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
2.3 Install dependency packages
Install on all nodes:
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
2.4 Tune kernel parameters for Kubernetes
Run on all nodes:
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# Forbid swap usage; only fall back to it when the system OOMs
vm.swappiness=0
# Do not check whether enough physical memory is available
vm.overcommit_memory=1
# Do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
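A small pre-flight check (a sketch, not part of the original steps): every non-comment line in the conf file must be a plain key=value pair, or `sysctl -p` complains at load time — which is exactly why the comments above sit on their own lines. The /tmp copy is only for demonstration; on a real node point `conf` at /etc/sysctl.d/kubernetes.conf.

```shell
# Pre-flight sketch: validate the sysctl conf before loading it.
# Demonstrated on a throwaway copy in /tmp.
conf=$(mktemp)
printf '%s\n' 'net.ipv4.ip_forward=1' '# a comment' 'vm.swappiness=0' > "$conf"
bad=0
while IFS= read -r line; do
  case "$line" in
    ''|'#'*) ;;                          # skip blank lines and comments
    *=*)     ;;                          # key=value: fine
    *)       echo "malformed: $line"; bad=1 ;;
  esac
done < "$conf"
[ "$bad" -eq 0 ] && echo "conf OK"
```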
2.5 Set the system time zone
# Set the time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart the services that depend on the system time
systemctl restart rsyslog && systemctl restart crond
Stop services the cluster does not need:
systemctl stop postfix && systemctl disable postfix
2.6 Configure rsyslogd and systemd journald
On all nodes:
mkdir /var/log/journal # directory for persistent logs
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Cap total disk usage at 10G
SystemMaxUse=10G
# Cap a single log file at 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
2.7 Upgrade the system kernel to 4.4
The 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable. Install the ELRepo repository first:
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installing, check that the kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Boot from the new kernel by default
grub2-set-default "CentOS Linux (4.4.182-1.el7.elrepo.x86_64) 7 (Core)"
reboot
# After rebooting, install the kernel source packages
yum --enablerepo=elrepo-kernel install kernel-lt-devel-$(uname -r) kernel-lt-headers-$(uname -r)
2.8 Prerequisites for enabling IPVS in kube-proxy
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
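Beyond eyeballing the `lsmod` output, a quick pass/fail check can be scripted (a sketch; the module list is the authoritative part, and `lsmod` output varies by host):

```shell
# Check that every module named in ipvs.modules is now loaded.
mods="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"
missing=""
for m in $mods; do
  lsmod 2>/dev/null | grep -q "^${m}[[:space:]]" || missing="$missing $m"
done
if [ -z "$missing" ]; then
  echo "all ipvs modules loaded"
else
  echo "not loaded:$missing"
fi
```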
3: Install Docker
3.1 Install Docker
Run on every node:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum update -y && yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y
Reboot the machine: reboot
Check the kernel version: uname -r
If the old kernel still boots, run again: grub2-set-default "CentOS Linux (4.4.182-1.el7.elrepo.x86_64) 7 (Core)" && reboot
If that still does not work, edit /etc/grub2.cfg (vim /etc/grub2.cfg) and comment out the 3.10 menu entries.
Make sure the running kernel version is 4.4.
service docker start
chkconfig docker on
## Create the /etc/docker directory
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://node04.flyfish"],
  "registry-mirrors": ["https://registry.docker-cn.com","http://hub-mirror.c.163.com"]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart the docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
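docker will refuse to start on a malformed daemon.json, so validating the JSON before restarting is cheap insurance. A sketch using python3 (jq, installed back in step 2.3, works equally well: `jq empty FILE`); the /tmp copy is only for demonstration — on a real node point `f` at /etc/docker/daemon.json:

```shell
# Sketch: validate daemon.json before restarting docker.
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file"
}
EOF
if python3 -c 'import json,sys; json.load(open(sys.argv[1]))' "$f"; then
  echo "daemon.json OK"
else
  echo "daemon.json is not valid JSON" >&2
fi
```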
Install the command completion helper:
yum -y install bash-completion
source /etc/profile.d/bash_completion.sh
Registry mirrors
Docker Hub's servers are overseas, so image pulls can be slow; a registry mirror speeds them up. The main options are Docker's official China registry mirror, the Aliyun accelerator, and the DaoCloud accelerator; this guide uses the Aliyun accelerator.
Log in to the Aliyun container registry console:
The login address is https://cr.console.aliyun.com; register an Aliyun account first if you do not have one.
mkdir /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://dfmo7maf.mirror.aliyuncs.com"]
}
EOF
Cgroup driver:
Edit daemon.json
Edit daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]:
cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://dfmo7maf.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Changing the cgroup driver clears this warning:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
Reload docker:
systemctl daemon-reload
systemctl restart docker
4: Install keepalived
Run this part on every control-plane node.
Install keepalived:
yum install -y keepalived
keepalived configuration
node01.flyfish configuration:
cat /etc/keepalived/keepalived.conf
---
! Configuration File for keepalived
global_defs {
    router_id node01.flyfish
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.100
    }
}
---
node02.flyfish configuration:
cat /etc/keepalived/keepalived.conf
---
! Configuration File for keepalived
global_defs {
    router_id node02.flyfish
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.100
    }
}
---
node03.flyfish configuration:
cat /etc/keepalived/keepalived.conf
---
! Configuration File for keepalived
global_defs {
    router_id node03.flyfish
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.100
    }
}
---
Start keepalived on every control-plane node and enable it at boot:
service keepalived start
systemctl enable keepalived
The VIP comes up on node01.flyfish.
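Note that as configured above, the VIP only moves when keepalived itself (or the whole node) dies; kube-apiserver can be down while the VIP stays put. A common hardening step — not part of the original setup, shown here as a sketch — is a vrrp_script health check that probes the local apiserver (the interval/fall values are arbitrary choices):

```
vrrp_script check_apiserver {
    # probe the local kube-apiserver health endpoint
    script "/usr/bin/curl -sfk https://127.0.0.1:6443/healthz"
    interval 3
    fall 3
}

vrrp_instance VI_1 {
    ...                       # existing settings from above, unchanged
    track_script {
        check_apiserver
    }
}
```

With `fall 3`, three consecutive failed probes put the instance into FAULT state and release the VIP to a backup node.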
5: Install Kubernetes
5.1 Install kubeadm (master/worker setup)
Run on all control-plane and worker nodes.
cat >> /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum list kubelet --showduplicates | sort -r
This guide installs kubelet 1.16.4, which supports Docker 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09.
yum -y install kubeadm-1.16.4 kubectl-1.16.4 kubelet-1.16.4
---
kubelet: runs on every node in the cluster; the agent that starts Pods, containers, and other objects
kubeadm: the command-line tool that initializes and bootstraps the cluster
kubectl: the command line for talking to the cluster; with kubectl you deploy and manage applications, inspect resources, and create, delete, and update components
---
Start kubelet:
systemctl enable kubelet && systemctl start kubelet
kubectl command completion:
echo "source <(kubectl completion bash)" >> ~/.bash_profile
source .bash_profile
5.2 Pull the images
Image pull script:
Almost all of the Kubernetes components and Docker images are hosted on Google's own servers, so pulling them directly can run into network problems. The workaround here is to pull the images from an Aliyun mirror repository and then retag them with the default names. This guide pulls the images by running the image.sh script.
Create the script:
vim image.sh
---
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/loong576
version=v1.16.4
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
docker pull $url/$imagename
docker tag $url/$imagename k8s.gcr.io/$imagename
docker rmi -f $url/$imagename
done
---
./image.sh
docker images
Initialize on node01.flyfish
cat kubeadm-config.yaml
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.4
apiServer:
  certSANs:    # list every kube-apiserver hostname, IP, and the VIP
  - node01.flyfish
  - node02.flyfish
  - node03.flyfish
  - node04.flyfish
  - node05.flyfish
  - node06.flyfish
  - 192.168.100.11
  - 192.168.100.12
  - 192.168.100.13
  - 192.168.100.14
  - 192.168.100.15
  - 192.168.100.16
  - 192.168.100.100
controlPlaneEndpoint: "192.168.100.100:6443"
networking:
  podSubnet: "10.244.0.0/16"
---
Initialize the control-plane node:
kubeadm init --config=kubeadm-config.yaml
---
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 192.168.100.100:6443 --token 3j4th7.4va6qsj7at7ky2qs \
--discovery-token-ca-cert-hash sha256:13d17c476688e4e78837b9cac94efa7edf689bf530a2120e2b81bf13b588fff9 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.100.100:6443 --token 3j4th7.4va6qsj7at7ky2qs \
--discovery-token-ca-cert-hash sha256:13d17c476688e4e78837b9cac94efa7edf689bf530a2120e2b81bf13b588fff9
---
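The `--discovery-token-ca-cert-hash` in the join commands above is just the sha256 of the cluster CA's public key in DER form, so it can be recomputed at any time. The sketch below shows the derivation on a throwaway self-signed CA; on a real control-plane node, point `CA` at /etc/kubernetes/pki/ca.crt:

```shell
# Recompute the discovery-token-ca-cert-hash the way kubeadm does.
# Demonstrated on a throwaway CA generated in /tmp.
CA=/tmp/demo-ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out "$CA" -days 1 -subj "/CN=demo" 2>/dev/null
hash=$(openssl x509 -pubkey -in "$CA" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```

Also worth knowing: bootstrap tokens expire (24h by default), so on a live control plane `kubeadm token create --print-join-command` prints a fresh, complete join command whenever the one above has gone stale.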
If initialization fails, run kubeadm reset and initialize again:
kubeadm reset
rm -rf $HOME/.kube/config
Load the environment variable:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source .bash_profile
Everything in this guide runs as root; for a non-root user, run instead:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Install the flannel network:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
kubectl get pod -n kube-system
5.3 Join the remaining control-plane nodes
Distribute the certificates
Run the cert-main-master.sh script on node01.flyfish:
vim cert-main-master.sh
---
#!/bin/bash
USER=root # customizable
CONTROL_PLANE_IPS="192.168.100.12 192.168.100.13"
for host in ${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
# Quote this line if you are using external etcd
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
---
./cert-main-master.sh
Log in to node02.flyfish:
cd /root
mkdir -p /etc/kubernetes/pki
mv *.crt *.key *.pub /etc/kubernetes/pki/
cd /etc/kubernetes/pki
mkdir etcd
mv etcd-* etcd
cd etcd
mv etcd-ca.key ca.key
mv etcd-ca.crt ca.crt
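The same mv choreography repeats on every joining control-plane node, so it can be scripted. A sketch, demonstrated on dummy files in /tmp (on a real node the paths would be src=/root and dst=/etc/kubernetes/pki):

```shell
# Reorganize the scp'd certificates into the pki layout kubeadm expects.
src=$(mktemp -d); dst=$(mktemp -d)
# Dummy stand-ins for the files cert-main-master.sh copies over:
touch "$src"/ca.crt "$src"/ca.key "$src"/sa.key "$src"/sa.pub \
      "$src"/front-proxy-ca.crt "$src"/front-proxy-ca.key \
      "$src"/etcd-ca.crt "$src"/etcd-ca.key
mkdir -p "$dst/etcd"
# etcd certs first, renamed back to ca.*, then everything else:
mv "$src"/etcd-ca.crt "$dst/etcd/ca.crt"
mv "$src"/etcd-ca.key "$dst/etcd/ca.key"
mv "$src"/*.crt "$src"/*.key "$src"/*.pub "$dst"/
ls "$dst" "$dst/etcd"
```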
Join node02.flyfish to the cluster:
kubeadm join 192.168.100.100:6443 --token 3j4th7.4va6qsj7at7ky2qs \
--discovery-token-ca-cert-hash sha256:13d17c476688e4e78837b9cac94efa7edf689bf530a2120e2b81bf13b588fff9 \
--control-plane
Log in to node03.flyfish:
cd /root
mkdir -p /etc/kubernetes/pki
mv *.crt *.key *.pub /etc/kubernetes/pki/
cd /etc/kubernetes/pki
mkdir etcd
mv etcd-* etcd
cd etcd
mv etcd-ca.key ca.key
mv etcd-ca.crt ca.crt
Join node03.flyfish to the cluster:
kubeadm join 192.168.100.100:6443 --token 3j4th7.4va6qsj7at7ky2qs \
--discovery-token-ca-cert-hash sha256:13d17c476688e4e78837b9cac94efa7edf689bf530a2120e2b81bf13b588fff9 \
--control-plane
Load the environment variables on node02.flyfish and node03.flyfish:
rsync -avrzP [email protected]:/etc/kubernetes/admin.conf /etc/kubernetes/
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source .bash_profile
Check the nodes:
kubectl get node
kubectl get pod -o wide -n kube-system
5.4 Join the worker nodes
Join node04.flyfish:
kubeadm join 192.168.100.100:6443 --token 3j4th7.4va6qsj7at7ky2qs \
--discovery-token-ca-cert-hash sha256:13d17c476688e4e78837b9cac94efa7edf689bf530a2120e2b81bf13b588fff9
Join node05.flyfish:
kubeadm join 192.168.100.100:6443 --token 3j4th7.4va6qsj7at7ky2qs \
--discovery-token-ca-cert-hash sha256:13d17c476688e4e78837b9cac94efa7edf689bf530a2120e2b81bf13b588fff9
Join node06.flyfish:
kubeadm join 192.168.100.100:6443 --token 3j4th7.4va6qsj7at7ky2qs \
--discovery-token-ca-cert-hash sha256:13d17c476688e4e78837b9cac94efa7edf689bf530a2120e2b81bf13b588fff9
kubectl get node
kubectl get pods -o wide -n kube-system
5.5 Test from node07.flyfish
Log in to node07.flyfish
Set up the Kubernetes yum repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl-1.16.4
Command completion:
yum install -y bash-completion
source /etc/profile.d/bash_completion.sh
Copy admin.conf:
mkdir -p /etc/kubernetes
scp [email protected]:/etc/kubernetes/admin.conf /etc/kubernetes/
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source .bash_profile
Verify:
kubectl get nodes
kubectl get pod -n kube-system
5.6 Deploy the Dashboard
Note: run the following on node07.flyfish
1. Create the Dashboard yaml file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
sed -i 's/kubernetesui/registry.cn-hangzhou.aliyuncs.com\/loong576/g' recommended.yaml
sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml
Add an administrator account
vim recommended.yaml
Append at the end:
---
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
---
Deploy the Dashboard:
kubectl apply -f recommended.yaml
After creation, check that the related services are running:
kubectl get all -n kubernetes-dashboard
kubectl get svc -n kubernetes-dashboard
netstat -ntlp|grep 30001
Open the Dashboard address in a browser:
https://192.168.100.11:30001
Authorization token:
kubectl describe secrets -n kubernetes-dashboard dashboard-admin
----
Create a new pod
----
vim nginx.yaml
apiVersion: apps/v1          # the manifest follows the apps/v1 Kubernetes API
kind: Deployment             # resource type: Deployment
metadata:                    # metadata for this resource
  name: nginx-master         # Deployment name
spec:                        # Deployment spec
  selector:
    matchLabels:
      app: nginx
  replicas: 3                # run 3 replicas
  template:                  # Pod template
    metadata:                # Pod metadata
      labels:                # labels
        app: nginx           # label key "app", value "nginx"
    spec:                    # Pod spec
      containers:
      - name: nginx          # container name
        image: nginx:latest  # image used to create the container
----
kubectl apply -f nginx.yaml
kubectl get pod
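The Deployment alone gives the three replicas no stable entry point. A minimal NodePort Service sketch that matches the labels above (the name nginx-svc and port 30080 are arbitrary choices, not from the original setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc            # hypothetical name
spec:
  type: NodePort
  selector:
    app: nginx               # matches the Pod label above
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080          # arbitrary port in the NodePort range
```

After `kubectl apply -f` on a file with this content, the nginx Pods should be reachable on port 30080 of any node, the same way the Dashboard was exposed on 30001.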