Building a Kubernetes Cluster on Three CentOS 7 Virtual Machines (Including Docker Installation): A Detailed Illustrated Tutorial

Docker Installation

First, prepare three virtual machines. For how to install them, see my earlier post Linux (1): CentOS7/RedHat7 VMware Workstation 12 installation steps.

The IP addresses of the three VMs are as follows:

VM IP              VM name
192.168.189.145    k8s-master
192.168.189.144    k8s-node1
192.168.189.146    k8s-node2

Write the corresponding entries into /etc/hosts on each of the three machines (see the sketch below). Also give each VM at least 2 CPU cores; otherwise a later step (kubeadm's preflight check) will fail with a minimum-requirements error.
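For reference, the /etc/hosts entries, built from the table above, would look like this on all three machines:

192.168.189.145 k8s-master
192.168.189.144 k8s-node1
192.168.189.146 k8s-node2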

This guide installs Kubernetes 1.15, for which Docker CE 18.09 is the recommended version.

The Docker installation below follows this official guide.

Remove Old Docker Versions

# Run on both the master node and the worker nodes
sudo yum remove -y docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

Since this was a fresh install, the remove-old-versions step was not actually performed. Also, the Docker installation steps here are notes from about a month ago; pick out the important parts as needed, but the "Configure Docker Startup Parameters" step must not be skipped.

All of the following commands must be run as root.

Set Up the Docker Repository

Before installing Docker for the first time on a new host, you need to set up the Docker repository. Afterwards, you can install and update Docker from it.

Install the Required Packages

yum-utils provides the yum-config-manager utility, and the devicemapper storage driver requires device-mapper-persistent-data and lvm2:

yum install -y yum-utils device-mapper-persistent-data lvm2


Set up the stable repository:

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo


(Optional) Enable the nightly or test repositories.

These repositories are included in the docker.repo file above but are disabled by default. You can enable them alongside the stable repository.

The following command enables the nightly repository:

yum-config-manager --enable docker-ce-nightly

To enable the test channel, run:

yum-config-manager --enable docker-ce-test

You can disable the nightly or test repository by running yum-config-manager with the --disable flag; to re-enable it, use --enable. The following command disables the nightly repository:

yum-config-manager --disable docker-ce-nightly

A description of the nightly and test channels can be found here.

Install Docker Engine - Community

Install the latest version of Docker Engine - Community, or skip to the next step to install a specific version:

yum install docker-ce docker-ce-cli containerd.io

To install a specific version, first list the available versions in the repo, then select and install one.

List and sort the versions available in your repo. This example sorts the results by version number, highest to lowest, and the output is truncated:

[root@ tanqiwei]# yum list docker-ce --showduplicates | sort -r
Loaded plugins: fastestmirror, langpacks
Available Packages
 * updates: mirrors.cn99.com
Loading mirror speeds from cached hostfile
 * extras: mirrors.cn99.com
docker-ce.x86_64    3:19.03.1-3.el7                            docker-ce-test   
docker-ce.x86_64    3:19.03.1-3.el7                            docker-ce-stable 
docker-ce.x86_64    3:19.03.0-3.el7                            docker-ce-test   
docker-ce.x86_64    3:19.03.0-3.el7                            docker-ce-stable 
docker-ce.x86_64    3:19.03.0-2.3.rc3.el7                      docker-ce-test   
docker-ce.x86_64    3:19.03.0-2.2.rc2.el7                      docker-ce-test   
docker-ce.x86_64    3:19.03.0-1.5.beta5.el7                    docker-ce-test   
docker-ce.x86_64    3:19.03.0-1.4.beta4.el7                    docker-ce-test   
docker-ce.x86_64    3:19.03.0-1.3.beta3.el7                    docker-ce-test   
docker-ce.x86_64    3:19.03.0-1.2.beta2.el7                    docker-ce-test   
docker-ce.x86_64    3:19.03.0-1.1.beta1.el7                    docker-ce-test   
docker-ce.x86_64    3:18.09.8-3.el7                            docker-ce-test   
docker-ce.x86_64    3:18.09.8-3.el7                            docker-ce-stable 
docker-ce.x86_64    3:18.09.7-3.el7                            docker-ce-test   
docker-ce.x86_64    3:18.09.7-3.el7                            docker-ce-stable 
docker-ce.x86_64    3:18.09.7-2.1.rc1.el7                      docker-ce-test   
...
 * base: mirrors.cn99.com

The list returned depends on which repositories are enabled, and is specific to your version of CentOS (indicated by the .el7 suffix in this example).

Install a specific version by its fully qualified package name: the package name (docker-ce) plus the version string (second column), starting at the first colon (:) and up to the first hyphen, separated by a hyphen (-). For example: docker-ce-18.09.1.

Install Docker 18.09.7:

yum install -y docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io

The download started fast and then slowed to a crawl, but it eventually finished installing.


Docker is installed but not yet started. The docker group has been created, but no users have been added to it.

Start Docker

Start Docker with:

systemctl start docker

Verify the Installation

First check that the service started successfully:

service docker status


Verify that Docker Engine - Community is installed correctly by running the hello-world image:

docker run hello-world

This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.


Docker Engine - Community is now installed and running. If you execute commands as a non-root user, you currently need sudo to run Docker commands (see the optional step below).
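Optionally, to run Docker without sudo, add your regular user to the docker group. This is Docker's standard post-install step; note that you must log out and back in for it to take effect:

# Add your regular user to the docker group; tanqiwei is the user
# seen in the prompts later in this post - replace it with your own
usermod -aG docker tanqiwei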

Check that the version is correct:

docker version


Set Docker to start at boot:

systemctl enable docker && systemctl restart docker && service docker status


Configure Docker Startup Parameters

Run the following on each of the three VMs:

vim /etc/docker/daemon.json

and write:

{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Reload systemd and restart Docker so the new options take effect (the systemd cgroup driver is the one recommended for kubeadm clusters, and must match the kubelet's):

systemctl  daemon-reload
systemctl  restart docker

Test pulling an image from Docker Hub:

docker pull kubeguide/redis-master


K8s Cluster Setup

CentOS 7 enables the firewall service (firewalld) by default, and there is heavy network traffic between the Kubernetes Master and the worker Nodes. The secure approach is to open, in the firewall, the specific ports the components use to communicate with each other.

Since this is a hands-on test in a trusted internal network, we can simply turn the firewall service off (a port-based alternative is sketched after the commands):

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
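Alternatively, if you would rather keep firewalld running, you can open just the ports Kubernetes needs. A sketch, based on the ports documented for this Kubernetes version; adjust to your setup:

# On the master
firewall-cmd --permanent --add-port=6443/tcp         # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp    # etcd client/peer
firewall-cmd --permanent --add-port=10250-10252/tcp  # kubelet, scheduler, controller-manager
# On the workers
firewall-cmd --permanent --add-port=10250/tcp        # kubelet
firewall-cmd --permanent --add-port=30000-32767/tcp  # NodePort services
firewall-cmd --reload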


It is recommended to disable SELinux on the hosts so that containers can read the host filesystem. Either run the commands below, or edit /etc/selinux/config, change SELINUX=enforcing to SELINUX=disabled, and reboot.

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Now reboot the machines. After they come back up, remember to check that Docker is running (it was set to start at boot earlier).
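A minimal sequence, on each machine:

reboot
# after the machine comes back up:
systemctl status docker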

Disable Swap

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
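To confirm swap is off, check free; the Swap line should show all zeros:

free -h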

Configure iptables to Handle Bridged IPv4/IPv6 Traffic

vim /etc/sysctl.d/k8s.conf

and write:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Apply the configuration:

sysctl --system


Install the kubeadm Toolset

Add the Kubernetes yum repository:

vim /etc/yum.repos.d/kubernetes.repo

and write:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Install kubelet, kubeadm, and kubectl 1.15.0:

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0


Then enable kubelet at boot:

systemctl enable kubelet

Cluster Initialization

kubeadm init --apiserver-advertise-address=192.168.189.145 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

Note that --apiserver-advertise-address in the command above is the master node's IP address; run this command on the master node.
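If you prefer, you can pre-pull the control-plane images before running init (this is the 'kubeadm config images pull' action the preflight output mentions), using the same repository and version flags:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0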

The output is:

[root@k8s-master tanqiwei]# kubeadm init --apiserver-advertise-address=192.168.189.145 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.189.145]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.189.145 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.189.145 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 47.005142 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4vlw30.paeiwou9nmcslgjb
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.189.145:6443 --token 4vlw30.paeiwou9nmcslgjb \
    --discovery-token-ca-cert-hash sha256:7d468fa1c5b477ae33689abc26bb0aef47293fd29348cf5a54070559f21751cb 

Make a note of the token and the discovery token CA cert hash in this output.
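If you lose them, don't panic: bootstrap tokens expire after 24 hours by default, and you can generate a fresh join command on the master at any time with:

kubeadm token create --print-join-command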

Next, as a regular user, run the commands given in the init output:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Also note that a pod network must be deployed to the cluster. There are several options, which you can see here. We choose flannel:

https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml

The same file is also available at:

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The two files are identical; if github.com is difficult to reach, use the second URL.

Deploy flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


Join Nodes to the Cluster

Then, on each of the other two nodes, run the following as root:

kubeadm join 192.168.189.145:6443 --token 4vlw30.paeiwou9nmcslgjb  --discovery-token-ca-cert-hash sha256:7d468fa1c5b477ae33689abc26bb0aef47293fd29348cf5a54070559f21751cb 


Verify the Cluster

On the master node, run:

kubectl get nodes

All three nodes are listed, which proves they joined successfully. (If a node shows NotReady, see the check below.)
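If a node shows NotReady, give flannel a minute or two to start, and check the system pods with:

kubectl get pods -n kube-system -o wide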

Install the Kubernetes Dashboard

The dashboard's GitHub repository is here. The current version is v1.10.1; there is already a 2.0.0 beta, but we will stay on the stable release for now. All of the following commands are run on the master node.

First, download the manifest:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Edit the file and change the value of image to the following (the default image is hosted on a registry that is hard to pull from inside China, so a mirrored copy is used):

lizhenliang/kubernetes-dashboard-amd64:v1.10.1
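If you prefer to make the change from the command line, a sed one-liner works. This assumes the manifest's default image line is k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1; check the file first and adjust if it differs:

# Swap the default dashboard image for the mirrored copy
sed -i 's#k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1#lizhenliang/kubernetes-dashboard-amd64:v1.10.1#' kubernetes-dashboard.yaml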

Then apply it:

kubectl apply -f kubernetes-dashboard.yaml

Then check the dashboard pod and service:

kubectl get pod -A -o wide |grep dash
kubectl get svc -A -o wide |grep dash


Check the pod details:

kubectl -n kube-system describe pod

This produces a lot of output:

[root@k8s-master Documents]# kubectl -n kube-system describe pod
Name:                 coredns-bccdc95cf-qklvg
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 k8s-master/192.168.189.145
Start Time:           Fri, 27 Sep 2019 13:50:19 +0800
Labels:               k8s-app=kube-dns
                      pod-template-hash=bccdc95cf
Annotations:          <none>
Status:               Running
IP:                   10.244.0.5
Controlled By:        ReplicaSet/coredns-bccdc95cf
Containers:
.....
Events:
  Type    Reason     Age    From                Message
  ----    ------     ----   ----                -------
  Normal  Scheduled  2m41s  default-scheduler   Successfully assigned kube-system/kubernetes-dashboard-79ddd5-hhff6 to k8s-node2
  Normal  Pulling    2m40s  kubelet, k8s-node2  Pulling image "lizhenliang/kubernetes-dashboard-amd64:v1.10.1"
  Normal  Pulled     2m11s  kubelet, k8s-node2  Successfully pulled image "lizhenliang/kubernetes-dashboard-amd64:v1.10.1"
  Normal  Created    2m11s  kubelet, k8s-node2  Created container kubernetes-dashboard
  Normal  Started    2m11s  kubelet, k8s-node2  Started container kubernetes-dashboard

Because the middle of the output is very long, it is replaced with dots above.

Check the containers Docker is running:

docker ps


Create a Login Account

Because no user has been created yet, you cannot log in to the dashboard. User creation follows the steps here.

Run the following two commands to create an admin service account and bind it to the cluster-admin role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

Next, retrieve the login token with the following command:

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

The output looks like this:

[root@k8s-master Documents]# kubectl get pod -A -o wide |grep dash
kube-system   kubernetes-dashboard-79ddd5-hhff6    1/1     Running   0          7m27s   10.244.2.8        k8s-node2    <none>           <none>
[root@k8s-master Documents]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-9v4f2
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 7a49b752-5b4c-47b7-91a9-5f2da7ee62a7

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tOXY0ZjIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2E0OWI3NTItNWI0Yy00N2I3LTkxYTktNWYyZGE3ZWU2MmE3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.JjZnMwBia9XXZf8YXqpmP0EgZgJBCm48FUscls1v_5dACQ1yTfTb7eC9iaV3C4oAr23_qSi8gslbVp4TYDMxa_g-7jk1qEM5KIPtxpEMRbiY7X3yr2PZZLCyPn8LFc6WEASeUkCrPVVCYEw_lk45nnnseS-WG3FA4o9DM3Yba9Z7I7WpzINYl55mWY3m2uqL2l_Rl-CGQzFWLxUw-DDIAuz-IFtD4YF23zDGH7l9yNcbsFOmNmfRTt0jPEraCUdqOcmh0DqgrfX8iTRhCQ2gC4oLe23vuqZV_q18QagtpTEzR54Cca28uDnYC1zCEy-25Y3z4pSzP73EYvKd6oxgag


Copy the token value, then start a local proxy on the master:

kubectl proxy

Open the login URL in a browser and sign in with the token:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/


At this point, everything is installed successfully.
