Hands-on: Installing and Deploying a Kubernetes Cluster with kubeadm, Step by Step (V1.18.0)

First of all, thanks to Wang Yuehui (my pal Hui-ge) for the technical support, hehe!

Prerequisites: VMware (with three Linux servers running CentOS 7.5 or later already up), plus a remote connection tool such as PuTTY or Xshell.

If you have not set up the virtual machines yet, or are not sure how, see:

https://blog.csdn.net/ma726518972/article/details/106250012

Note: each server needs at least 2 CPU cores (kubeadm's preflight checks require this on the master); if yours has fewer, change it in the virtual machine settings.

All hosts are configured as follows:

IP address        Hostname    Role     K8s version   Install method
192.168.218.131   k8smaster   master   V1.18.0       kubeadm
192.168.218.132   k8snode1    node1    V1.18.0       kubeadm
192.168.218.133   k8snode2    node2    V1.18.0       kubeadm

It is recommended to finish configuring the master server first, then the node servers.

Step 1: Environment configuration

1.1 Set the hostname (run on every server; each server gets its own name)

hostnamectl set-hostname k8smaster
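The command above names the master; the two node servers each need their own hostname so that it matches the hosts table above:

hostnamectl set-hostname k8snode1   # run on 192.168.218.132
hostnamectl set-hostname k8snode2   # run on 192.168.218.133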

1.2 Configure hosts resolution (run on all servers)

cp /etc/hosts /etc/hosts.bak`date +%F`
cat >> /etc/hosts <<'EOF'
192.168.218.131  k8smaster
192.168.218.132  k8snode1
192.168.218.133  k8snode2
EOF
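A quick sanity check (optional, not part of the original steps): confirm the names resolve from each machine.

ping -c 1 k8smaster
ping -c 1 k8snode1
ping -c 1 k8snode2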

1.3 Disable the firewall (run on all servers)

# Stop and disable firewalld (so the master and nodes can communicate freely over the network)

systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld

1.4 Disable SELinux (run on all servers)

# Disable SELinux (so containers can access the host's filesystem)

setenforce 0
sed -i.bak`date +%F` 's|SELINUX=.*|SELINUX=disabled|g' /etc/selinux/config
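Optionally verify the change; getenforce should now report Permissive, and Disabled after the next reboot.

getenforce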

1.5 Disable the swap partition (run on all servers)

# Disable swap (with swap enabled, the kubelet's QoS policies may not work correctly)

swapoff -a && sed -i.bak "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
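Optionally confirm that swap is really off:

free -m          # the Swap line should show 0 total
swapon --show    # should print nothing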

1.6 Switch to domestic yum mirrors (run on all servers)

# Configure domestic mirror repositories (Tencent mirrors for CentOS base/EPEL, the Aliyun mirror for the Kubernetes repo)
# The order of the commands below matters; do not change it
# Install wget first

yum install -y wget
mkdir -p /etc/yum.repos.d/bak`date +%F` && yes|mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak`date +%F`
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache fast
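Optionally confirm that the new repositories are active:

yum repolist | egrep -i 'kubernetes|epel|base'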

1.7 Install Docker (run on all servers)

## Install required packages.

yum install -y yum-utils device-mapper-persistent-data lvm2

## Add Docker repository.

yum-config-manager --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.

yum update -y && yum install -y \
  containerd.io-1.2.10 \
  docker-ce-19.03.4 \
  docker-ce-cli-19.03.4

## Create /etc/docker directory.

 mkdir -p /etc/docker

# Setup daemon.

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker

systemctl daemon-reload
systemctl restart docker
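The steps above only restart Docker. As an optional addition (not in the original write-up), enable it at boot and confirm that it picked up the systemd cgroup driver configured in daemon.json:

systemctl enable docker
docker info | grep -i 'cgroup driver'    # expect: Cgroup Driver: systemd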

Step 2: Install kubeadm

2.1 Install kubeadm and related tools (run on all servers)

# (Optional) list the kubelet/kubeadm/kubectl packages available in the repo
yum list | egrep 'kubelet|kubeadm|kubectl'

# Note: kubectl is just a standalone binary, i.e. /usr/bin/kubectl

yum install -y kubelet kubeadm kubectl
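The command above installs whatever the newest packages in the repo are. Since the cluster is initialized as v1.18.2 below, you can instead pin the packages to that version so they match exactly (a sketch; adjust the version to your own target):

yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2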

# Start kubelet and enable it at boot

systemctl start kubelet && systemctl enable kubelet

# Check kubelet's status

systemctl status kubelet

Running systemctl status kubelet shows that the kubelet service failed to start with exit code 255:
kubelet.service: main process exited, code=exited, status=255/n/a

After some digging, running journalctl -xefu kubelet to inspect the systemd journal reveals the real error:
unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory

This error resolves itself once kubeadm init generates the CA certificates, so it can be ignored for now.
Looking back at the official Kubernetes documentation, this is in fact stated clearly:

The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do. This crashloop is expected and normal, please proceed with the next step and the kubelet will start running normally.

In short, the kubelet keeps restarting until kubeadm init is run.

Step 3: Create the Kubernetes cluster

3.1 Initialize the master node (on the master server)

# Note 1: change the version to match what you actually installed; I installed K8s 1.18.2, so I use --kubernetes-version=v1.18.2
# Note 2: I will use Flannel for the pod network, so I set --pod-network-cidr=10.244.0.0/16

kubeadm init --kubernetes-version=v1.18.2 \
--pod-network-cidr=10.244.0.0/16 \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers

# This takes roughly a minute
# Output like the following indicates success: .................... (intermediate log lines omitted; the final lines are shown below) ...........................

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.218.131:6443 --token yptahz.1rm8nbkg1frzbhlh \
    --discovery-token-ca-cert-hash sha256:dc7ebc35051b0ee8c6dcb8f12f2fc8b61766cef8960210c839ea51e722e6a26c

#  The kubeadm join line above is the command that node servers use to join the cluster; save it. The token is only valid for 24 hours; if it expires, generate a new one on the master with kubeadm token create
#  The configuration file generated during this process is at /var/lib/kubelet/config.yaml
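If you lose the join command or the token has expired, the master can also print a fresh, complete join command (token plus CA cert hash) in one step:

kubeadm token create --print-join-command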

3.2 Set up kubectl by copying the admin kubeconfig as instructed above (on the master server)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Print the file to inspect its contents

[root@localhost ~]# cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EVXlNVEV6TVRFMU1sb1hEVE13TURVeE9URXpNVEUxTWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS0ZNCk05aTVoZERRWTJUR01hd1lYeVpHM1BIOHpHRzFEVmVGQkhNbEx4SW9FemFVZzgyc0U2NGtlN0E3dk1HWXdhY0IKTHJoRTBxb21kTDJ0eXhBME1VMHNSaFVkUHc1Yi90L0I3VElUcFBYVGRlbEFXbDV1N25GWlVmd29vbDMxQk1TUQpkTHNrdmc4RUJiSHVVZ0lJbGdpUnRSY2IvRlZleGl4NjhRZ2JXZjBhWFFRYWYrSkhqSlpTb0ZDc3VoYXpMTTZVCm8zREs2eHloN2REWFY5UFZ0ZEZpYU1iNDBGdnZWS3AvK09zeCtnbmFkSWlWQ1NIbitWTmM4ZjNCd0YvdVJCcXkKdFljNVpneDhQR1FTdzNHUVg5VFVqcGJwdktOUmgwUERYK0ljWTFZdjRvelNLR3hSdW5wODdLQWhUb2pkd0RXbgpvV25hNks0TlBjNU0zSnVDODdNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFFWXZLWm9XZnlqZVNEaHpqTUhBSWRrU3VUbWIKNzZhcUxCVUtPOFJUZ3E2Z0RYSVp4aEx2aWhPR3Z4RUFMbFFhT0pvcWJiYm05cXV1Z053WTR3alVud3Q0TUY0KwpBTXd1T2QrTFNJdU1TUDBmMnZ5aU1JZTVGR2thVW9PYWlQN3ppRHQzM2JSVmltZFMvczUxY1JlM0RNdWF5VjNUCngxa09qTkFPNDQrZDF4Q2tEVUZJMlk1QzNYdUFXYTNMQ3F2Z3IrQWtPV3VKR21DcWVXMjVBVlYvY1hlU1E1d2EKNUs5a25OQjgrd2NpWTl6ZjQ2dWZ1YytVZkZLa3JFSlkwMHRjQ3A3SWdtcDZ3a1VCZU1xWFJ2UG9rR1Z1QjJQbApubU9VeCt3ekdzMXBYSmdaekZ5ZDFKa3ZGMjlmaDhTZEZ1ZkxpckQ2K2w2T2JlSHVwakcrbjZFckNmRT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.218.131:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJSU1HQ2ZUbU50TnN3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBMU1qRXhNekV4TlRKYUZ3MHlNVEExTWpFeE16RXhOVFJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXFmTEV6UEpqd29GZFZXZGIKL2lIOVRmcUVzRWNHQjVvYW9TcVhsWGxSUTFNWHBVRkNYbG02K25KTHZSZHhDSVJDY0RCODFJVmp5QUZOT01jTApXb1hLcFVyOTJxN1hzZGhUb2VFSlZTZk92WkFQemVKeFN3ZjZCUnpna3FuamtmTzJkTzNMcngrSm9PSGFWR3FVCnd3eGJ5Mk5OTWx1aFFPZE1VZ25NMmZWb2xUMkxZQldVajgyZDBML1ErcG16VmxzUnozMHB6cXFKRGlKb1pxVWEKTmhHQjljaUNNQmhURUF2bVlyV3d1OUpaWmxLRUlsZTZEL1FIRlkzaC9vSnRLeVB1NTh4ZjRQTTRDNnJYcnRPegpYTE9GclFRZThWY1ZFKzlpT2t6bVliaUhJK3loSHJXNCs1aSt1c0JxekV5YWVjQXRWcDMzWTFnYUVWVzF3TGxaCnJ1b3BiUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFIV1R0WHdqdlFtVzNuV2k1NkRoaWY4ZTVpQmkrb0Y0YlQ2Qgpzc3FDWmVlc0V3WVpudXBGeHVMNGVuQldtWG1idi96c3NVLzRDMzdjS01ZWUlCbU5CZFUxdW5uYXVqTTFQQVJSCkVjbXQ1QlgzVjEyVm9BOUlaVXQ4YnFkL2pSYXdvSzU2OHM0eGF4TW5UeGh3cSt0TXpJV3piL0JlZTBHdGRyRE0KS3dKT2V2cFZvcEdXTjlCZW1GZUFXMU5Odlg5L3NtenF0enNoR3ZvYVBzbGgyMzBKNkRKK3B5ZC9HR3k1TzlPcQpocld3c2VRWVU2MUdteHdmdnd4RmxNanVEdlFaUUJ2SFF1Q2RpaXJFUjRzNjZvNkJHNGwzYis2WWt5UWo0WkdiClNWek1ZT0luY1E0MmNPR21IUTZaL0thdjdmc1FQS1VnSXRvdTcrVWxVMTJsYWdlc0hEbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBcWZMRXpQSmp3b0ZkVldkYi9pSDlUZnFFc0VjR0I1b2FvU3FYbFhsUlExTVhwVUZDClhsbTYrbkpMdlJkeENJUkNjREI4MUlWanlBRk5PTWNMV29YS3BVcjkycTdYc2RoVG9lRUpWU2ZPdlpBUHplSngKU3dmNkJSemdrcW5qa2ZPMmRPM0xyeCtKb09IYVZHcVV3d3hieTJOTk1sdWhRT2RNVWduTTJmVm9sVDJMWUJXVQpqODJkMEwvUStwbXpWbHNSejMwcHpxcUpEaUpvWnFVYU5oR0I5Y2lDTUJoVEVBdm1Zcld3dTlKWlpsS0VJbGU2CkQvUUhGWTNoL29KdEt5UHU1OHhmNFBNNEM2clhydE96WExPRnJRUWU4VmNWRSs5aU9rem1ZYmlISSt5aEhyVzQKKzVpK3VzQnF6RXlhZWNBdFZwMzNZMWdhRVZXMXdMbFpydW9wYlFJREFRQUJBb0lCQUFReGNqdWdTMmZVSzBwZApKMzdvdGNoRHd4eGFWRUxCd2FCeVhaVVpqakM4RHh4THRPaUJERVQ3cHZTK2JGS0tlTjB0eFJhMVI5WDZlajVKCll2VlQwY0VzVFlFa3lUdWhHOGNsdDBZN21qVkJKYkt0d0ovYVRZZnN3M202NlZ1RGlOL3ZzaFBiRWxrKzJWVTEKMy8vRUFVdk9ZbXc0cUl6aWFCYXFHVHpUZWtZY1dVMDJPaTN1NjdMcFdDVEkxYVVoMGxrWC9qaVNsUlRocTRrWQoySm1DdmNaMlpxTmNqZFEwZVJML2phR1I1SFg3Q3ZmRXBNVGVOd29xbEVBUTdwQjJZdWlML1Z6L1N6M25IL1QwClUvb20wTXBaZWxlMEZuVWw0cGxEZjY3OXI4VHFpYXdyYUIwQXV1aVM5aXBLY2dkRVZrdnBQL0M1dlZHSTYvM1MKYVJiS3ROMENnWUVBd0RjS3ZHdlhlQi9mUTN3amZvYWE3REtCbnhvQXRRZCtENkp2NWlPY21rNjIyRDlvS1ZtbApCbGxLQzZDWjk3N28xdEVreEtOWm5EaGhRZ0Uzay9oS3RBR2puK25JRGFaWGVOdzBpZTc0TWF1ZWRYeVhNckk0CllRYUErRVhvU0tpbXZWcmRqUUFMbTcybXlSSWM5VXpIUnlESTQ4SUJqWWVkSTJBTlM1Z2hoeU1DZ1lFQTRsZ2oKS2M2Y3dZQ1ZWODNJVGlYTk5oZDNkSE9nT2NBNjEweWVEcytYc0lhV0FHdGFwQ0ZnZjR2dTh2cXVvOW5XSWdjQwpiYW93L2dvSjR5d2dLVDY2N3NJOHRrOW42L3pISDN1by9RWVEvTDhFK254d1lodGVFZlhnL3FlTXJpaVZIUjlXCjhJeHYzS1c2cnF0ZHFrbVI1MGQ2T2NuckMwNmhYOEdvUlA1RzNpOENnWUVBbVhsWmNTa0tXamZZcEtHeUZZeVUKbHBPZE85UWZUR3czRTNTM3RDSXJJR3BKUkZFY2NpZkp4RS8yOTJHOGpqdzQzWTBRdHBGWE00MHcydXJ0M1pBYQoxYStaWGsza0ZrSURCZFdOZmJUNUoyL0lqalowNDEyNTluNmk2NW1sNXA0Q3hKNlExOHg1ZUZqdG13NkRZTGwxClJDM0JPVm5tczRMY3pTb2NjNGQ4L2RFQ2dZRUF1UFpUVGMrMFU0QXpDangwU2tBajBPY2VTOEJORjhSWmtTVGcKS0xSRmZoQ05OYXlFdG9rNzVSN0IxamM2VFZVdTRrR2VIMldyZ1gxTWxTS3k2V0dFdXFWcG5ZV0lJOVUrRnlFagplQmpqK3RaU1NDczJYMFdEK3VOVnlHTzgxM2o4V1g4SnVhclpvcEtmMmlyWmNOV0w4Rlo5c0Ftc0ZHSmVCdlVtCi83SlcwU3NDZ1lFQXBkLzhFWjlCMjh1MDBTOVZQM1BPZndqRWtudlNJcmdMQmFMVVB2aG5BS0JsU1hZdUZIb1IKWmdvV0p6QWVsMk42ZGwyTGdHNU1GYmtVM2ZNNW41TC9RbHYyZ0NSRGlDTDFzS3lWR1dPM29RM2FMSkRmNzFwMQpTZW9iU2NMR09UeWU2MFR1SWVpSmZFTU85b25yc3c2a3B4TUE2U2ZzaTk5NDUwTlBsbWY3ZzhvPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
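At this point kubectl on the master should be able to reach the API server; an optional quick check:

kubectl cluster-info
kubectl get nodes    # the master will likely show NotReady until the pod network (next step) is installed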

3.3 Install Flannel (on the master server)

You only need to apply the Flannel manifest on the master node; it creates a DaemonSet, so every node in the cluster automatically gets its own flannel pod (one pod per node).

# If the download is flaky, retry a few times, or download the file in a browser and copy it to the server

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

#  The images referenced in kube-flannel.yml live on quay.io, which cannot be pulled from mainland China, so pull them from a domestic mirror such as quay.azk8s.cn or quay-mirror.qiniu.com instead.

grep 'quay.io/coreos' kube-flannel.yml
[root@localhost ~]# grep 'quay.io/coreos' kube-flannel.yml
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-s390x
        image: quay.io/coreos/flannel:v0.12.0-s390x
# Replace quay.io with a domestic mirror (quay.azk8s.cn is used here; quay-mirror.qiniu.com works the same way)
sed -i.bak`date +%F` 's@quay.io/coreos@quay.azk8s.cn/coreos@g' kube-flannel.yml
kubectl apply -f kube-flannel.yml

Flannel is deployed into the kube-system namespace by default; check the pods with:

kubectl -n kube-system get pods
[root@localhost ~]# kubectl -n kube-system get pods
NAME                                READY   STATUS                  RESTARTS   AGE
coredns-546565776c-pp6gr            0/1     Pending                 0          29m
coredns-546565776c-r644s            0/1     Pending                 0          29m
etcd-k8smaster                      1/1     Running                 0          29m
kube-apiserver-k8smaster            1/1     Running                 0          29m
kube-controller-manager-k8smaster   1/1     Running                 0          29m
kube-flannel-ds-amd64-hml7j         0/1     Init:ImagePullBackOff   0          15m
kube-proxy-gfzgb                    1/1     Running                 0          29m
kube-scheduler-k8smaster            1/1     Running                 0          29m
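In the listing above the flannel pod is stuck in Init:ImagePullBackOff, which usually just means the image has not finished pulling. If it stays that way, inspect the pod events or pre-pull the image manually on each node (the pod name is taken from the listing above; the image name assumes the mirror substitution from the sed command earlier):

kubectl -n kube-system describe pod kube-flannel-ds-amd64-hml7j
docker pull quay.azk8s.cn/coreos/flannel:v0.12.0-amd64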

3.4 Add the worker nodes (run on the node servers only, not on the master)

Run the following commands to join every node server to the Kubernetes cluster.

# Use the join command that was printed when the master was initialized successfully

kubeadm token create  # run this on the master only if the original token has expired, to generate a new one; otherwise skip it

# Run the following on every node server; it is exactly the join command from the master's init log above, so you can copy it directly

# 192.168.218.131 here is the master node's IP

kubeadm join 192.168.218.131:6443 --token yptahz.1rm8nbkg1frzbhlh \
    --discovery-token-ca-cert-hash sha256:dc7ebc35051b0ee8c6dcb8f12f2fc8b61766cef8960210c839ea51e722e6a26c
----------------------- If the output ends with the line below, the node joined successfully --------------------------------

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Step 4: View all nodes

Run on the master server:

 kubectl get nodes
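Freshly joined nodes may show NotReady for a minute or two while their flannel pods start. An optional wider view:

kubectl get nodes -o wide
kubectl -n kube-system get pods -o wide    # one flannel pod per node should reach Running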

Congratulations! You have successfully built a Kubernetes cluster. Give this post a like before you go!

(If you run into problems, leave a comment; I will reply when I can.)
