CentOS 7 Kubernetes Cluster Deployment


Host (VM) information:

[root@k8s-master ~]# cat /etc/redhat-release 
CentOS Linux release 7.7.1908 (Core)
Node name    IP
k8s-master   192.168.1.86
k8s-node1    192.168.1.87

Notes:

1. Any Kubernetes version may be chosen; 1.16.2 is used here as the example.
2. Except for cluster initialization, which runs on the master only, every deployment step is executed on all nodes.

1. CentOS 7 configuration


Disable the firewall, disable SELinux, and update the yum repos.

# Disable the firewall (stop it now and keep it disabled at boot)
systemctl stop firewalld.service
systemctl disable firewalld.service

# Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
## or edit /etc/selinux/config by hand and set:
##     SELINUX=disabled
# Reboot the server
# Run getenforce and confirm SELinux reports Disabled

# Install wget
yum install -y wget
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Update from the new repos
yum upgrade

2. Hosts configuration


cat /etc/hosts
#k8s nodes
192.168.1.86	k8s-master
192.168.1.87	k8s-node1

cat /etc/hostname
 ## node name: k8s-master or k8s-node1

# Reboot
reboot
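
Editing /etc/hostname by hand works; as an alternative sketch, hostnamectl sets the name in one step (run the matching line on each node):

## set the hostname without editing the file by hand
hostnamectl set-hostname k8s-master   ## on 192.168.1.86
hostnamectl set-hostname k8s-node1    ## on 192.168.1.87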

3. Create /etc/sysctl.d/k8s.conf


# Adjust kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply with sysctl -p /etc/sysctl.d/k8s.conf (or sysctl --system)
sysctl -p /etc/sysctl.d/k8s.conf

# If you see errors like these:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
# Fix:
# install bridge-utils, then load the bridge and br_netfilter modules
yum install -y bridge-utils.x86_64
modprobe bridge
modprobe br_netfilter

# Disable swap (required by kubelet)
swapoff -a
echo "vm.swappiness=0" >> /etc/sysctl.d/k8s.conf
# Apply the change
sysctl -p /etc/sysctl.d/k8s.conf
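
Note that swapoff -a only lasts until the next reboot. A minimal sketch to make it permanent, assuming swap is mounted through /etc/fstab:

## comment out the swap entry so it does not come back after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab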

4. Configure the Kubernetes package repo


# Configure the Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
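
An optional sanity check that the repo registered (assumes the Aliyun mirror is reachable):

## the kubernetes repo should appear in the output
yum repolist | grep -i kubernetes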

5. Install Docker


## Sync the clock first, otherwise Docker will not run!
    # 1. Install the ntp/ntpdate tools
    sudo yum -y install ntp ntpdate
    # 2. Sync the system time against a network time source
    sudo ntpdate cn.pool.ntp.org
    # 3. Write the system time to the hardware clock
    sudo hwclock --systohc
    # 4. Check the system time
    timedatectl

# Install Docker
yum install -y docker-io
# Start Docker and enable it at boot
systemctl enable docker && systemctl start docker
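
An optional check that the daemon came up:

## both the Client and Server sections should print version info
docker version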

6. Install Kubernetes (version of your choice)


Version 1.16.2 is pinned here.

# List the available package versions
yum list --showduplicates | grep 'kubeadm\|kubectl\|kubelet'
# Install the pinned versions
yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2

# Start the service and enable it at boot
systemctl start kubelet && systemctl enable kubelet
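
Optionally confirm the pinned versions landed:

## all three should report 1.16.2
kubeadm version -o short
kubelet --version
kubectl version --client --short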

7. Adjust configuration


# Kubernetes configuration
# Run the following in /usr/bin
## make kubelet, kubeadm and kubectl executable
cd /usr/bin && chmod a+x kubelet  kubeadm  kubectl
export KUBECONFIG=/etc/kubernetes/admin.conf
iptables -P FORWARD ACCEPT

# Docker configuration
## edit /lib/systemd/system/docker.service and add the line below under [Service]
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT
## restart Docker
systemctl daemon-reload
systemctl restart docker
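
A quick way to confirm the FORWARD policy stuck after the restart:

## expect the first line to read: Chain FORWARD (policy ACCEPT)
iptables -L FORWARD | head -1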

8. Pull and tag the images


The default registry, k8s.gcr.io, is blocked in mainland China, so pull the images from a domestic mirror instead.
Run kubeadm config images list to see the required images and versions, then pull them from the Aliyun registry.

[root@k8s-master bin]# kubeadm config   images  list
W0108 19:53:17.464386   10103 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0108 19:53:17.464460   10103 version.go:102] falling back to the local client version: v1.16.2
k8s.gcr.io/kube-apiserver:v1.16.2
k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2

Pull each image with docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/<image>:<version>

## versions matching the list above
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.16.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.16.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.16.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.16.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.15-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.2

Tag the images back to their k8s.gcr.io names:

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.16.2 k8s.gcr.io/kube-proxy:v1.16.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
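
For convenience, a small loop sketch that pulls and re-tags all seven images in one pass (same images and versions as above):

for img in kube-apiserver:v1.16.2 kube-controller-manager:v1.16.2 kube-scheduler:v1.16.2 \
           kube-proxy:v1.16.2 pause:3.1 etcd:3.3.15-0 coredns:1.6.2; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
  docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
done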

9. Initialize the cluster with kubeadm init (master only)


Full parameter reference: https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/

--apiserver-advertise-address string
The IP address the API server advertises it is listening on. If unset, the default network interface is used.
--image-repository string     Default: "k8s.gcr.io"
The container registry to pull control-plane images from.
--kubernetes-version string     Default: "stable-1"
The specific Kubernetes version for the control plane.
--service-cidr string     Default: "10.96.0.0/12"
An alternative IP address range for the service virtual IPs.
--pod-network-cidr string
The IP address range for the pod network. If set, the control plane automatically allocates CIDRs for every node.
## Deploy the Kubernetes master
## Run on 192.168.1.86 (the master)
## k8s.gcr.io is unreachable from mainland China, so point at the Aliyun mirror instead

kubeadm init \
--apiserver-advertise-address=192.168.1.86 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.16.2 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

On success, the output ends with:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.86:6443 --token pwwmps.9cds2s34wlpiyznv \
    --discovery-token-ca-cert-hash sha256:a3220f1d2384fe5230cad2302a4ac1f233b03ea24c19c165adb5824f9c358336

Then run the following commands on the master:

# once init has finished, run these
## on the master
mkdir -p $HOME/.kube  
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config  
sudo chown $(id -u):$(id -g) $HOME/.kube/config 
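
Optionally verify that kubectl can now reach the control plane:

## should print the running control-plane endpoints
kubectl cluster-info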

## Install the flannel network add-on
## on the master
kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

## If the flannel manifest cannot be downloaded, see "Initialization problems" below

## List the nodes
kubectl get node
## Check cluster component status
kubectl get cs
# Nodes may show NotReady for a while; inspect the pods on the master with
kubectl get pod --all-namespaces -o wide

After a successful init the master's status may stay NotReady for a while; give it time.
If initialization fails, the fixes in https://www.jianshu.com/p/f53650a85131 may help.
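
One way to follow progress is to watch the kube-system pods until they all reach Running:

## -w streams updates; Ctrl-C to stop
kubectl get pods -n kube-system -w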

Initialization problems


  1. flannel add-on not installed yet
[root@k8s-master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-58cc8c89f4-dwg8r             0/1     Pending   0          24m   <none>         <none>       <none>           <none>
kube-system   coredns-58cc8c89f4-jx7cw             0/1     Pending   0          24m   <none>         <none>       <none>           <none>
  2. The flannel manifest yml cannot be downloaded from the official source

    Reference: https://blog.csdn.net/fuck487/article/details/102783300

[root@k8s-master ~]# kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
## Fix: create the file yourself, or transfer a local kube-flannel.yml via ftp
vi $HOME/kube-flannel.yml
##
	## paste the contents of kube-flannel.yml here
##

## Apply it
[root@k8s-master ~]# kubectl apply -f ./kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

10. Supplementary commands


## Then run this command on both the master and the nodes
[root@k8s-master bin]# modprobe ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
modprobe: ERROR: could not insert 'ip_vs': Unknown symbol in module, or unknown parameter (see dmesg)
## if it fails, drop ip_vs and load the rest
[root@k8s-master bin]# modprobe ip_vs_rr ip_vs_wrr ip_vs_sh
modprobe: ERROR: could not insert 'ip_vs_rr': Unknown symbol in module, or unknown parameter (see dmesg)
## then run the full command again
[root@k8s-master bin]# modprobe ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh

## Verify the kernel now has the IPVS modules loaded
[root@k8s-master bin]# lsmod|grep ip_vs
ip_vs                 145497  0 
nf_conntrack          139224  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
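
To keep these modules loaded across reboots, a sketch using systemd's modules-load mechanism (standard on CentOS 7):

cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF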

11. Add a node


Get the kubeadm join command

# Generate the join command on the master with kubeadm token create --print-join-command
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.86:6443 --token a1qmdh.d79exiuqbzdr616o     --discovery-token-ca-cert-hash sha256:a3220f1d2384fe5230cad2302a4ac1f233b03ea24c19c165adb5824f9c358336

Run the join command on the node to add it:

[root@k8s-node1 bin]# kubeadm join 192.168.1.86:6443 --token otjfah.zta4yo0bexibbj52     --discovery-token-ca-cert-hash sha256:60535ebe96b6a4cceab70d551f2b2b507a3641c3dc421469320b915e01377e5c
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

12. Remove a node


On the master:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   3h9m   v1.16.2
k8s-node1    Ready    <none>   116s   v1.16.2
[root@k8s-master ~]# kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
node/k8s-node1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-gmq2b, kube-system/kube-proxy-q9ppx
node/k8s-node1 drained
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS                     ROLES    AGE     VERSION
k8s-master   Ready                      master   3h10m   v1.16.2
k8s-node1    Ready,SchedulingDisabled   <none>   2m43s   v1.16.2
[root@k8s-master ~]# kubectl delete node k8s-node1
node "k8s-node1" deleted
[root@k8s-master ~]# 

On the node being removed (k8s-node1):

[root@k8s-node1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y	## confirm with y
[preflight] Running pre-flight checks
W0109 13:39:15.848313   79539 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
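
Following the hints in the reset output, a hedged cleanup sketch for the reset node (flushing iptables removes all rules, not just Kubernetes ones; ipvsadm applies only if IPVS was in use):

## flush iptables rules left behind by kube-proxy
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
## clear IPVS tables (requires the ipvsadm package)
ipvsadm --clear
## remove the stale kubeconfig
rm -f $HOME/.kube/config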

Appendix: query commands


## List the nodes; run on the master
kubectl get nodes

## Check cluster status; run on the master
kubectl get cs

# Nodes may show NotReady; inspect the pods on the master with
kubectl get pod --all-namespaces -o wide