I. Concepts
Kubernetes takes its name from the Greek word for "helmsman" or "pilot". "K8s" is the abbreviation obtained by replacing the eight letters "ubernete" with "8".
Kubernetes is an open-source container orchestration engine from Google that supports automated deployment, large-scale scaling, and management of containerized applications. When an application is deployed to production, multiple instances of it are usually run so that requests can be load-balanced across them.
In Kubernetes we can create multiple containers, each running one application instance, and then rely on the built-in load-balancing policies to manage, discover, and access this group of instances, with none of these details requiring complex manual configuration by operators.
Features:
- Portable: supports public, private, hybrid, and multi-cloud environments
- Extensible: modular, pluggable, hookable, and composable
- Automated: automatic deployment, restarts, replication, and scaling
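As a sketch of the idea above, a Deployment with several replicas plus a Service that load-balances across them could look like the following. All names and the image are hypothetical, chosen only for illustration; this manifest is not part of the cluster built below:

```
# Hypothetical example: three instances of one app, load-balanced by a Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # assumed name, for illustration only
spec:
  replicas: 3               # three instances to balance load across
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx        # stand-in image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app           # the Service spreads traffic over all matching pods
  ports:
  - port: 80
    targetPort: 80
```

Kubernetes itself keeps the three replicas running and the Service distributes requests among them, which is exactly the manual work the text says operators no longer have to do.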
II. Building a K8s Cluster
1. Clean up the lab environment (identical steps on server1, server2, and server3)
- Remove the services from the earlier Swarm deployment
[root@server1 ~]# docker service ls
[root@server1 ~]# docker service rm web
[root@server1 ~]# docker stack rm portainer
- Clean up volumes
[root@server1 ~]# docker volume ls
[root@server1 ~]# docker volume prune
- Remove containers
[root@server1 ~]# docker rm -f `docker ps -aq`
- Remove networks
[root@server1 ~]# docker network ls
[root@server1 ~]# docker network rm we_net1
- Leave the Swarm cluster
[root@server1 ~]# docker swarm leave -f
- Disable the swap partition
[root@server1 ~]# swapoff -a
[root@server1 ~]# vim /etc/fstab
[root@server2 ~]# swapoff -a
[root@server2 ~]# vim /etc/fstab
[root@server3 ~]# swapoff -a
[root@server3 ~]# vim /etc/fstab
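Note that swapoff -a only disables swap until the next reboot; the vim step is there to comment out the swap entry in /etc/fstab so swap stays off permanently. A non-interactive sketch of that edit, demonstrated here on a sample copy rather than the real /etc/fstab:

```shell
# Demo on a sample file; on a real node you would run the sed line against /etc/fstab.
printf '/dev/mapper/rhel-root / xfs defaults 0 0\n/dev/mapper/rhel-swap swap swap defaults 0 0\n' > sample.fstab
sed -i '/\bswap\b/s/^/#/' sample.fstab   # comment out any line mentioning swap
grep swap sample.fstab                   # the swap line should now start with '#'
```

The `\b` word boundary keeps the pattern from matching unrelated lines; double-check the result with grep before rebooting.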
2. Install the packages on server1, server2, and server3
[root@server1 ~]# ls
cri-tools-1.13.0-0.x86_64.rpm kubectl-1.15.0-0.x86_64.rpm kubernetes-cni-0.7.5-0.x86_64.rpm
kubeadm-1.15.0-0.x86_64.rpm kubelet-1.15.0-0.x86_64.rpm
[root@server1 ~]# yum install * -y
[root@server1 ~]# scp * server2:
[root@server1 ~]# scp * server3:
[root@server2 ~]# ls
cri-tools-1.13.0-0.x86_64.rpm kubectl-1.15.0-0.x86_64.rpm kubernetes-cni-0.7.5-0.x86_64.rpm
kubeadm-1.15.0-0.x86_64.rpm kubelet-1.15.0-0.x86_64.rpm
[root@server2 ~]# yum install *
[root@server3 ~]# ls
cri-tools-1.13.0-0.x86_64.rpm kubectl-1.15.0-0.x86_64.rpm kubernetes-cni-0.7.5-0.x86_64.rpm
kubeadm-1.15.0-0.x86_64.rpm kubelet-1.15.0-0.x86_64.rpm
[root@server3 ~]# yum install * -y
3. Load the required images on server1
[root@server1 ~]# for i in *.tar; do docker load -i $i ;done
4. Copy the images to server2 and server3 and load them there
[root@server1 ~]# scp *.tar server2:
[root@server1 ~]# scp *.tar server3:
[root@server2 ~]# for i in *.tar; do docker load -i $i ;done
[root@server3 ~]# for i in *.tar; do docker load -i $i ;done
5. On every node, write the following into /etc/sysctl.d/k8s.conf and verify it
[root@server1 ~]# vim /etc/sysctl.d/k8s.conf
[root@server1 ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@server1 ~]# scp /etc/sysctl.d/k8s.conf server2:/etc/sysctl.d/
k8s.conf 100% 79 0.1KB/s 00:00
[root@server1 ~]# scp /etc/sysctl.d/k8s.conf server3:/etc/sysctl.d/
k8s.conf
[root@server1 ~]# sysctl --system
6. Start and enable the kubelet service on server1, server2, and server3 (checking the status at this point will still show it as not running)
[root@server1 ~]# systemctl start kubelet
[root@server1 ~]# systemctl enable kubelet
7. Initialize the control-plane (master) node
[root@server1 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.25.31.1
Note: if the first attempt at initializing the cluster fails, run "kubeadm reset" to reset the node, and only then run the init command again to initialize the cluster.
8. Create the user kubeadm and grant it full sudo privileges
[root@server1 k8s]# useradd kubeadm
[root@server1 k8s]# vim /etc/sudoers
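The step above does not show what is actually added to /etc/sudoers. A line like the following is an assumption about the intended edit; it gives the kubeadm user full sudo rights without a password prompt:

```
kubeadm  ALL=(ALL)  NOPASSWD: ALL
```

Editing the file with visudo instead of plain vim is safer, since visudo syntax-checks the file before saving it.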
9. As the kubeadm user, create the config directory and copy the admin kubeconfig into it
[root@server1 ~]# su - kubeadm
[kubeadm@server1 ~]$ mkdir -p $HOME/.kube
[kubeadm@server1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[kubeadm@server1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[kubeadm@server1 ~]$ exit
logout
10. Configure kubectl command completion on server1:
[kubeadm@server1 ~]$ echo "source <(kubectl completion bash)" >> .bashrc
[kubeadm@server1 ~]$ logout
[root@server1 ~]# su - kubeadm
11. On server1, copy kube-flannel.yml into /home/kubeadm/. The file originally sits under /root/k8s, which the unprivileged kubeadm user cannot access.
[root@server1 ~]# cp kube-flannel.yml /home/kubeadm/
[root@server1 ~]# su - kubeadm
Last login: Fri Aug 9 14:25:33 CST 2019 on pts/0
[kubeadm@server1 ~]$ ls
kube-flannel.yml
[kubeadm@server1 ~]$ kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
12. Load the flannel image on all nodes
[root@server1 ~]# docker load -i flannel.tar
cd7100a72410: Loading layer 4.403MB/4.403MB
3b6c03b8ad66: Loading layer 4.385MB/4.385MB
93b0fa7f0802: Loading layer 158.2kB/158.2kB
4165b2148f36: Loading layer 36.33MB/36.33MB
b883fd48bb96: Loading layer 5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.10.0-amd64
[root@server1 ~]# scp flannel.tar server2:
flannel.tar 100% 43MB 43.2MB/s 00:00
[root@server1 ~]# scp flannel.tar server3:
flannel.tar 100% 43MB 43.2MB/s 00:00
server2:
[root@server2 ~]# docker load -i flannel.tar
cd7100a72410: Loading layer 4.403MB/4.403MB
3b6c03b8ad66: Loading layer 4.385MB/4.385MB
93b0fa7f0802: Loading layer 158.2kB/158.2kB
4165b2148f36: Loading layer 36.33MB/36.33MB
b883fd48bb96: Loading layer 5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.10.0-amd64
server3:
[root@server3 ~]# docker load -i flannel.tar
cd7100a72410: Loading layer 4.403MB/4.403MB
3b6c03b8ad66: Loading layer 4.385MB/4.385MB
93b0fa7f0802: Loading layer 158.2kB/158.2kB
4165b2148f36: Loading layer 36.33MB/36.33MB
b883fd48bb96: Loading layer 5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.10.0-amd64
13. On server2 and server3, load the ipvs kernel modules (this only lasts until the next reboot)
[root@server2 ~]# modprobe ip_vs_wrr
[root@server2 ~]# modprobe ip_vs_rr
[root@server2 ~]# modprobe ip_vs_sh
[root@server2 ~]# modprobe ip_vs
[root@server3 ~]# modprobe ip_vs_wrr
[root@server3 ~]# modprobe ip_vs_rr
[root@server3 ~]# modprobe ip_vs_sh
[root@server3 ~]# modprobe ip_vs
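Since modprobe only takes effect until reboot, the modules can be made persistent by listing them in a modules-load file. A sketch of such a file (the filename ipvs.conf is an assumption; any .conf name under /etc/modules-load.d/ is picked up by systemd at boot):

```
# /etc/modules-load.d/ipvs.conf - load the ipvs modules at boot
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
```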
14. Join server2 and server3 to the cluster
[root@server3 ~]# kubeadm join 172.25.31.1:6443 --token pu2qxd.7i5itunef7xta8wg \
> --discovery-token-ca-cert-hash sha256:d2999b6e117e399234cf47884a183229aabb834dc700ecacf42449c9d36a0074
[root@server2 ~]# kubeadm join 172.25.31.1:6443 --token pu2qxd.7i5itunef7xta8wg \
> --discovery-token-ca-cert-hash sha256:d2999b6e117e399234cf47884a183229aabb834dc700ecacf42449c9d36a0074
15. Check that all nodes are Ready and that the pods (across all namespaces) are Running
[kubeadm@server1 ~]$ kubectl get nodes
[kubeadm@server1 ~]$ kubectl get pods --all-namespaces
16. Check the status of the master
[kubeadm@server1 ~]$ kubectl get cs
[kubeadm@server1 ~]$ kubectl get node
[kubeadm@server1 ~]$ kubectl get all -n kube-system
Note: the master node requires at least 2 CPUs.
III. Deploying the Dashboard UI
1. Load the kubernetes-dashboard.tar image on all nodes
server1:
[root@server1 ~]# docker load -i kubernetes-dashboard.tar
server2:
[root@server2 ~]# docker load -i kubernetes-dashboard.tar
server3:
[root@server3 ~]# docker load -i kubernetes-dashboard.tar
2. Obtain (or write) kubernetes-dashboard.yaml and copy it to /home/kubeadm/
[root@server1 ~]# cp kubernetes-dashboard.yaml /home/kubeadm/
[root@server1 ~]# cd /home/kubeadm/
[root@server1 kubeadm]# ls
kube-flannel.yml kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
3. Switch to the kubeadm user and create the Dashboard resources
[root@server1 kubeadm]# su - kubeadm
Last login: Fri Aug 9 14:57:20 CST 2019 on pts/0
[kubeadm@server1 ~]$ ls
kube-flannel.yml kubernetes-dashboard.yaml
[kubeadm@server1 ~]$ vim kubernetes-dashboard.yaml
[kubeadm@server1 ~]$ kubectl create -f kubernetes-dashboard.yaml
4. Change the service type
[kubeadm@server1 ~]$ kubectl edit service kubernetes-dashboard -n kube-system
service/kubernetes-dashboard edited
## in the editor, change the type field to: NodePort
5. Get the NodePort
[kubeadm@server1 ~]$ kubectl describe svc kubernetes-dashboard -n kube-system
[kubeadm@server1 ~]$ kubectl get pods -n kube-system
6. Create a user for administering the UI
[kubeadm@server1 ~]$ vim dashboard-admin.yaml
[kubeadm@server1 ~]$ kubectl create -f dashboard-admin.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
File contents:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
7. Get the token
[kubeadm@server1 ~]$ kubectl get secrets -n kube-system | grep admin
admin-user-token-zgt2v kubernetes.io/service-account-token 3 3m54s
[kubeadm@server1 ~]$ kubectl describe secrets admin-user-token-zgt2v -n kube-system
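The token shown by kubectl describe is stored base64-encoded under .data.token in the secret. A sketch of extracting it non-interactively: the kubectl line needs the live cluster (and the secret name found above), so it is shown as a comment, while the decode step itself is demonstrated on a stand-in value:

```shell
# On the cluster (secret name taken from 'kubectl get secrets' above):
#   kubectl -n kube-system get secret admin-user-token-zgt2v -o jsonpath='{.data.token}' | base64 -d
# The decoding itself is plain base64, shown here on a stand-in value:
printf 'ZXhhbXBsZS10b2tlbg==' | base64 -d   # prints "example-token"
```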
Test:
In a browser, open: https://172.25.31.1:31912 (31912 being the NodePort obtained above)
Note:
- Make sure the images have been loaded on every node
- Every node must have a gateway and be able to reach the internet (network access is needed at the start)