Deploying a single-master Kubernetes cluster with kubeadm

Kubeadm is the officially recommended tool for deploying Kubernetes. It lowers the barrier to deployment and improves efficiency by providing two commands, kubeadm init and kubeadm join, as best practices for quickly creating a Kubernetes cluster. kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design it cares only about bootstrapping the cluster, not about preparing the node environment. Likewise, installing the various optional add-ons is outside its scope.

I. Components and machine environment

OS: CentOS 7.6 x86_64

Container runtime: Docker CE 19.03

Kubernetes: 1.17.0

IP address        Hostname  Role    CPU   Memory
192.168.100.150   master    master  >=2c  >=2G
192.168.100.156   node01    node    >=2c  >=2G
192.168.100.157   node02    node    >=2c  >=2G

1. Edit /etc/hosts on the master and on each node so that hostnames resolve

192.168.100.150 master
192.168.100.156 node01
192.168.100.157 node02

2. Synchronize host time

$ systemctl enable chronyd.service
$ systemctl start chronyd.service
$ systemctl status chronyd.service

3. Disable the firewall and SELinux

$ systemctl stop firewalld && systemctl disable firewalld
$ setenforce 0                    # takes effect immediately, until reboot
$ vim /etc/selinux/config         # make the change permanent
SELINUX=disabled

4. Disable swap

$ swapoff -a 
$ sed -i 's/.*swap.*/#&/' /etc/fstab
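A quick way to see what that sed rule does, using a scratch copy (/tmp/fstab.demo is a hypothetical sample file, not your real /etc/fstab):

```shell
# Comment out any fstab line mentioning swap, exactly as above,
# but applied to a throwaway sample file.
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab.demo
sed -i 's/.*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

The `&` in the replacement re-inserts the matched line, so the entry is preserved but commented out rather than deleted.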

II. Deploying the Kubernetes cluster

1. Install Docker

Per the official installation guide:

$ wget https://download.docker.com/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum install -y docker-ce
$ systemctl start docker && systemctl enable docker

Configure a registry mirror to speed up image pulls:

$ vim /etc/docker/daemon.json
{
  "registry-mirrors": [ "https://registry.docker-cn.com" ]
}
$ systemctl daemon-reload && systemctl restart docker
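A malformed daemon.json will leave docker unable to start, so it is worth validating the JSON before restarting. A minimal sketch, shown on a scratch copy (/tmp/daemon.demo.json) so it can be tried on any machine:

```shell
# Write the mirror configuration to a throwaway file and syntax-check it.
cat > /tmp/daemon.demo.json <<'EOF'
{
  "registry-mirrors": [ "https://registry.docker-cn.com" ]
}
EOF
# json.tool exits non-zero (and prints the error) on invalid JSON.
python3 -m json.tool /tmp/daemon.demo.json
```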

2. Configure kernel parameters

Pass bridged IPv4 traffic to iptables chains:

$ cat > /etc/sysctl.d/k8s.conf <<EOF 
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
EOF 

$ sysctl --system

3. Configure a Kubernetes yum mirror (Aliyun)

For network reasons, hosts in mainland China cannot reach Google's repositories directly, so configure the Aliyun yum mirror instead:

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4. Install kubectl, kubeadm, and kubelet

[root@master ~]# yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
[root@node01 ~]# yum install -y kubelet-1.17.0 kubeadm-1.17.0

The kubelet communicates with the rest of the cluster and manages the lifecycle of the Pods and containers on its own node.

Tip: if yum reports that the packages cannot be found, run yum makecache to refresh the repository metadata.

$ systemctl daemon-reload
$ systemctl enable kubelet     # enable kubelet at boot on both the master and the nodes

5. Initialize the cluster: run kubeadm init on the master

[root@master ~]# kubeadm init --kubernetes-version=1.17.0 \
--apiserver-advertise-address=192.168.100.156 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

// Part of the output after the command completes:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:  

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config  
	sudo chown $(id -u):$(id -g) $HOME/.kube/config  
	
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:  

	https://kubernetes.io/docs/concepts/cluster-administration/addons/  

Then you can join any number of worker nodes by running the following on each as root:  

kubeadm join 192.168.100.156:6443 --token cxins6.pxbyomo4pp1mnrao \   
	--discovery-token-ca-cert-hash sha256:35876ef6f2e5fe7eb5c7bb709dbd5e09d0e9e7d3adf41cbe708eec4fb586c8d6
  • --kubernetes-version: the version of the Kubernetes components to deploy; it should match the installed kubelet version

  • --pod-network-cidr: the Pod network address range, in CIDR notation; with the flannel network plugin the default is 10.244.0.0/16

  • --service-cidr: the Service network address range, in CIDR notation; the default is 10.96.0.0/12

  • --apiserver-advertise-address: the IP address the API server advertises to the other components, normally the master node's IP; 0.0.0.0 means pick one of the node's available addresses
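If the join command printed above is lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA certificate. On a real master the certificate is /etc/kubernetes/pki/ca.crt; the sketch below generates a throwaway self-signed certificate (/tmp/demo-ca.crt, a hypothetical stand-in) so the pipeline can be exercised on any machine:

```shell
# Create a throwaway CA cert purely for demonstration; on the master
# you would use /etc/kubernetes/pki/ca.crt instead.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=kubernetes" 2>/dev/null

# kubeadm expects the SHA-256 digest of the DER-encoded public key:
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}'
```

The resulting 64-character hex string is what goes after `sha256:` in the join command.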

Using systemd as Docker's cgroup driver keeps nodes more stable under resource pressure, so change the cgroup driver to systemd on every node.

# Create or edit /etc/docker/daemon.json:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# Restart docker:
$ systemctl restart docker
# Verify:
$ docker info | grep Cgroup
Cgroup Driver: systemd

6. Configure the kubectl tool

[root@master ~]# mkdir -p /root/.kube
[root@master ~]# sudo cp /etc/kubernetes/admin.conf /root/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

A STATUS of "Healthy" above means the component is healthy; otherwise investigate the error. If the problem cannot be resolved, run "kubeadm reset" to reset the cluster and then initialize it again.

[root@master ~]# kubectl get nodes 
NAME     STATUS    ROLES   AGE  VERSION 
master   NotReady  master  10m  v1.17.0

The master is "NotReady" at this point because no network plugin has been installed in the cluster yet; it will become Ready once the network is deployed. Next, deploy flannel.

7. Deploy the flannel network

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Now check the cluster state:

$ kubectl get nodes 
NAME    STATUS  ROLES   AGE  VERSION 
master  Ready   master  17m  v1.17.0

The cluster is in the Ready state; worker nodes can now join it.

8. Join the worker nodes to the cluster

[root@node01 ~]# kubeadm join 192.168.100.156:6443 --token 2dt1wp.oudskargctjss991 \
--discovery-token-ca-cert-hash sha256:15aa0537c14d50df4fc9f45b6bdff0c30f8ef7114463a12e022e33619936266c  

// Part of the output:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Wait a moment after the join completes, then check the cluster state on the master. At this point, a minimal cluster containing the core components is up and running!

$ kubectl get nodes
NAME       STATUS   ROLES   AGE    VERSION 
master     Ready    master  34m    v1.17.0 
node01     Ready    <none>  6m14s  v1.17.0
node02     Ready    <none>  6m8s   v1.17.0

III. Installing other add-on components

1. Check the cluster version

$ kubectl version --short
Client Version: v1.17.0
Server Version: v1.17.0

2. Install the dashboard to manage the cluster from a UI

Download the dashboard yaml file:

$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Modify parts of the file:

$ sed -i 's/k8s.gcr.io/loveone/g' kubernetes-dashboard.yaml
$ sed -i '/targetPort:/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' kubernetes-dashboard.yaml
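The second sed appends a fixed nodePort and switches the Service to type NodePort after every `targetPort:` line. Its effect can be seen on a minimal Service snippet (/tmp/svc.demo.yaml is a scratch file, not the real dashboard manifest):

```shell
# A tiny stand-in for the dashboard Service spec.
cat > /tmp/svc.demo.yaml <<'EOF'
spec:
  ports:
    - port: 443
      targetPort: 8443
EOF
# Same append rule as in the tutorial (GNU sed: `a\` appends text,
# `\n` starts a new appended line, `\ ` keeps the leading indentation).
sed -i '/targetPort:/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' /tmp/svc.demo.yaml
cat /tmp/svc.demo.yaml
```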

Deploy the dashboard:

[root@master ~]# kubectl create -f kubernetes-dashboard.yaml 
secret/kubernetes-dashboard-certs created 
serviceaccount/kubernetes-dashboard created 
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

After creation, check that the services are running:

[root@master ~]# kubectl get deployment kubernetes-dashboard -n kube-system
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1/1     1            1           89s

[root@master ~]# kubectl get services -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   61m
kubernetes-dashboard   NodePort    10.102.234.209   <none>        443:30001/TCP            16m
[root@master ~]# netstat -ntlp|grep 30001
tcp6       0      0 :::30001                :::*                    LISTEN      17306/kube-proxy

Open the dashboard in Firefox at: https://192.168.100.156:30001

Other browsers such as Chrome will refuse the connection with a security warning!

Get the token used to log in to the dashboard:

[root@master ~]# kubectl create serviceaccount  dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@master ~]# kubectl create clusterrolebinding  dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-9hglw
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 30efdd50-92bd-11e9-91e3-000c296bd9bc
 
Type:  kubernetes.io/service-account-token
 
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tOWhnbHciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzBlZmRkNTAtOTJiZC0xMWU5LTkxZTMtMDAwYzI5NmJkOWJjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.Bg9FOIr6RkepjCFav8tbkbTALGEX7bZJMNOYMOrYhFPhnhCs1RSxop7pCGBtdjug_Zpsb9UJ1WNWTsCInUlMYtSHkbaqVLZQEdIgD6jGb177CxIZBcCuxmxxQm0JMJdYjc6Y_1wYSTJGHtmWOHa70pUEcKo9I0LonTUfHCZh5PgS3JrwiTrsqe1RGyz3Jz4p9EIVPfcxmKCowSuapinOTezAWK2XAUhk2h5utXgag6RRnrPcHtlncZzW5fMTSfdAZv5xlaI64AM__qiwOTqyK-14xkda5nbk9DGhN5UwhkHzyvU6ApGT7A9Tr3j3QkMov9gEyVIDbSbBaSj8xBt36Q
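The awk filter in the describe command above extracts the secret's NAME column from `kubectl get secret` output. Demonstrated here on canned sample output (/tmp/secrets.demo.txt holds made-up data in the same column layout):

```shell
# Sample of what `kubectl -n kube-system get secret` prints.
cat > /tmp/secrets.demo.txt <<'EOF'
NAME                          TYPE                                  DATA   AGE
dashboard-admin-token-9hglw   kubernetes.io/service-account-token   3      5m
default-token-abc12           kubernetes.io/service-account-token   3      60m
EOF
# Print column 1 of every line matching "dashboard-admin".
awk '/dashboard-admin/{print $1}' /tmp/secrets.demo.txt
```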

(Screenshot: dashboard login page — k8sdashboardlogin.png)

(Screenshot: dashboard web UI — k8sdashboardweb.png)

3. Resetting the cluster to its initial state

$ kubeadm reset

IV. Verifying cluster functionality

1. Test DNS

$ kubectl apply -f dns-test-busybox.yaml

$ kubectl exec -ti busybox -- nslookup kubernetes.default

dns-test-busybox.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox:1.28          # note: the busybox version matters; newer images break nslookup
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

2. Deploy a sample Nginx application

[root@master ~]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created

nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  ports:
  - port: 88
    targetPort: 80
  selector:
    app: nginx
  type: NodePort