Introduction:
Kubernetes cluster topology and installation environment

Master node: 172.19.2.50
Master node components: kubectl, kube-proxy, kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler, calico-node, calico-policy-controller, calico-etcd

Worker nodes: 172.19.2.51, 172.19.2.140
Worker node components: kubernetes-dashboard, calico-node, kube-proxy

Final kubernetes-dashboard address: http://172.19.2.50:30099/

kubectl: the command-line interface for operating the Kubernetes cluster
kube-proxy: routes traffic to exposed container ports; for NodePort services the Kubernetes master allocates a port from a pre-defined range (default: 30000-32767)
kube-dns: provides DNS for services and pods inside the cluster
etcd: a distributed, consistent key-value store used for shared configuration and service discovery
kube-apiserver: the entry point to the cluster; whether you drive it with kubectl or the remote API directly, every request goes through the apiserver
kube-controller-manager: carries the bulk of the master's work, managing nodes, pods, replication, services, namespaces, and so on
kube-scheduler: places each pod on a worker node (minion) according to the scheduling algorithm; this step is also called binding (bind)
calico: a BGP-based virtual networking tool, used here for container-to-container networking
kubernetes-dashboard: the official visual tool for managing a Kubernetes cluster
References:
https://mritd.me/2016/10/29/set-up-kubernetes-cluster-by-kubeadm/
http://www.cnblogs.com/liangDream/p/7358847.html
1. Install kubeadm on all nodes
Clean up any Kubernetes files left over on the system:

rm -rf /etc/kubernetes /var/lib/kubelet /var/lib/etcd
rm -rf $HOME/.kube
Configure the Aliyun Kubernetes repository:

vim /etc/yum.repos.d/kubernetes.repo

[kube]
name=Kube
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
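The same repo file can also be written non-interactively, which is convenient when preparing several nodes at once. This is only a sketch: for safety it writes a local kubernetes.repo; on a real node target /etc/yum.repos.d/ instead.

```shell
# Write the repo definition without opening an editor. For illustration this
# targets ./kubernetes.repo; on a real node write to /etc/yum.repos.d/.
cat > kubernetes.repo <<'EOF'
[kube]
name=Kube
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
grep baseurl kubernetes.repo
```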
Install kubeadm:
yum install kubeadm
After installation the following packages must all be present:

rpm -qa | grep kube
kubernetes-cni-0.5.1-0.x86_64
kubelet-1.7.5-0.x86_64
kubectl-1.7.5-0.x86_64
kubeadm-1.7.5-0.x86_64
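A quick way to confirm all four packages landed is to check each name against the `rpm -qa` output. In this sketch the output is stubbed with the versions listed above so it runs anywhere; on a real node replace the here-doc with `installed=$(rpm -qa)`.

```shell
# Stubbed rpm -qa output (the versions from this install); on a real node
# use: installed=$(rpm -qa)
installed=$(cat <<'EOF'
kubernetes-cni-0.5.1-0.x86_64
kubelet-1.7.5-0.x86_64
kubectl-1.7.5-0.x86_64
kubeadm-1.7.5-0.x86_64
EOF
)
for pkg in kubernetes-cni kubelet kubectl kubeadm; do
  if echo "$installed" | grep -q "^$pkg-"; then
    echo "$pkg OK"
  else
    echo "$pkg MISSING"
  fi
done
```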
Docker here is docker-ce.
Add the docker-ce repository:

vim /etc/yum.repos.d/docker-ce.repo

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://download.docker.com/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://download.docker.com/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://download.docker.com/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
Install docker-ce:
yum install docker-ce
2. Deploy the images to all nodes
The images below are required. Because they are hosted on Google's registries they cannot be pulled directly without a proxy; one workaround is to build the images on Docker Hub first and then pull them into a local registry.
gcr.io/google_containers/etcd-amd64:3.0.17
gcr.io/google_containers/kube-apiserver-amd64:v1.7.6
gcr.io/google_containers/kube-controller-manager-amd64:v1.7.6
gcr.io/google_containers/kube-scheduler-amd64:v1.7.6
quay.io/coreos/etcd:v3.1.10
quay.io/calico/node:v2.4.1
quay.io/calico/cni:v1.10.0
quay.io/calico/kube-policy-controller:v0.7.0
gcr.io/google_containers/pause-amd64:3.0
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
gcr.io/google_containers/kube-proxy-amd64:v1.7.6
See these references for the image build process:
https://mritd.me/2016/10/29/set-up-kubernetes-cluster-by-kubeadm/
http://www.cnblogs.com/liangDream/p/7358847.html
Copy the existing private registry certificate to the local machine:

mkdir -pv /etc/docker/certs.d/172.19.2.139/
vim /etc/docker/certs.d/172.19.2.139/ca.crt

-----BEGIN CERTIFICATE-----
MIIDvjCCAqagAwIBAgIUQzFZBuFh7EZLOzWUYZ10QokL+BUwDQYJKoZIhvcNAQEL
BQAwZTELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0Jl
aUppbmcxDDAKBgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwpr
dWJlcm5ldGVzMB4XDTE3MDcwNDA4NTMwMFoXDTIyMDcwMzA4NTMwMFowZTELMAkG
A1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0JlaUppbmcxDDAK
BgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwprdWJlcm5ldGVz
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyWgHFV6Cnbgxcs7X7ujj
APnnMmotzNnnTRhygJLCMpCZUaWYrdBkFE4T/HGpbYi1R5AykSPA7FCffFHpJIf8
Gs5DAZHmpY/uRsLSrqeP7/D8sYlyCpggVUeQJviV/a8L7PkCyGq9DSiU/MUBg4CV
Dw07OT46vFJH0lzTaZJNSz7E5QsekLyzRb61tZiBN0CJvSOxXy7wvdqK0610OEFM
T6AN8WfafTH4qmKWulFBJN1LjHTSYfTZzCL6kfTSG1M3kqG0W4B2o2+TkNLVmC9n
gEKdeh/yQmQWfraRkuWiCorJZGxte27xpjgu7u62sRyCm92xQRNgp5RiGHxP913+
HQIDAQABo2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIBAjAd
BgNVHQ4EFgQUDFiYOhMMWkuq93iNBoC1Udr9wLIwHwYDVR0jBBgwFoAUDFiYOhMM
Wkuq93iNBoC1Udr9wLIwDQYJKoZIhvcNAQELBQADggEBADTAW0FPhfrJQ6oT/WBe
iWTv6kCaFoSuWrIHiB9fzlOTUsicrYn6iBf+XzcuReZ6qBILghYGPWPpOmnap1dt
8UVl0Shdj+hyMbHzxR0XzX12Ya78Lxe1GFg+63XbxNwOURssd9DalJixKcyj2BW6
F6JG1aBQhdgGSBhsCDvG1zawqgZX/h4VWG55Kv752PYBrQOtUH8CS93NfeB5Q7bE
FOuyvGVd1iO40JQLoFIkZuyxNh0okGjfmT66dia7g+bC0v1SCMiE/UJ9uvHvfPYe
qLkSRjIHH7FH1lQ/AKqjl9qrpZe7lHplskQ/jynEWHcb60QRcAWPyd94OPrpLrTU
64g=
-----END CERTIFICATE-----
Log out of the public Docker registry and log in to the private one:

docker logout
docker login 172.19.2.139
Username: admin
Password: Cmcc@1ot
Pull the images from the private registry:

docker pull 172.19.2.139/xsllqs/etcd-amd64:3.0.17
docker pull 172.19.2.139/xsllqs/kube-scheduler-amd64:v1.7.6
docker pull 172.19.2.139/xsllqs/kube-apiserver-amd64:v1.7.6
docker pull 172.19.2.139/xsllqs/kube-controller-manager-amd64:v1.7.6
docker pull 172.19.2.139/xsllqs/etcd:v3.1.10
docker pull 172.19.2.139/xsllqs/node:v2.4.1
docker pull 172.19.2.139/xsllqs/cni:v1.10.0
docker pull 172.19.2.139/xsllqs/kube-policy-controller:v0.7.0
docker pull 172.19.2.139/xsllqs/pause-amd64:3.0
docker pull 172.19.2.139/xsllqs/k8s-dns-kube-dns-amd64:1.14.4
docker pull 172.19.2.139/xsllqs/k8s-dns-dnsmasq-nanny-amd64:1.14.4
docker pull 172.19.2.139/xsllqs/kubernetes-dashboard-amd64:v1.6.3
docker pull 172.19.2.139/xsllqs/k8s-dns-sidecar-amd64:1.14.4
docker pull 172.19.2.139/xsllqs/kube-proxy-amd64:v1.7.6
Retag the images back to their upstream names:

docker tag 172.19.2.139/xsllqs/etcd-amd64:3.0.17 gcr.io/google_containers/etcd-amd64:3.0.17
docker tag 172.19.2.139/xsllqs/kube-scheduler-amd64:v1.7.6 gcr.io/google_containers/kube-scheduler-amd64:v1.7.6
docker tag 172.19.2.139/xsllqs/kube-apiserver-amd64:v1.7.6 gcr.io/google_containers/kube-apiserver-amd64:v1.7.6
docker tag 172.19.2.139/xsllqs/kube-controller-manager-amd64:v1.7.6 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.6
docker tag 172.19.2.139/xsllqs/etcd:v3.1.10 quay.io/coreos/etcd:v3.1.10
docker tag 172.19.2.139/xsllqs/node:v2.4.1 quay.io/calico/node:v2.4.1
docker tag 172.19.2.139/xsllqs/cni:v1.10.0 quay.io/calico/cni:v1.10.0
docker tag 172.19.2.139/xsllqs/kube-policy-controller:v0.7.0 quay.io/calico/kube-policy-controller:v0.7.0
docker tag 172.19.2.139/xsllqs/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker tag 172.19.2.139/xsllqs/k8s-dns-kube-dns-amd64:1.14.4 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
docker tag 172.19.2.139/xsllqs/k8s-dns-dnsmasq-nanny-amd64:1.14.4 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
docker tag 172.19.2.139/xsllqs/kubernetes-dashboard-amd64:v1.6.3 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
docker tag 172.19.2.139/xsllqs/k8s-dns-sidecar-amd64:1.14.4 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
docker tag 172.19.2.139/xsllqs/kube-proxy-amd64:v1.7.6 gcr.io/google_containers/kube-proxy-amd64:v1.7.6
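Rather than typing fourteen `docker tag` lines by hand, the private-to-upstream mapping can be generated. This sketch only echoes the commands (pipe the output to `sh` to actually apply them); the registry path is the one used in this document, and only a few of the images are shown.

```shell
# Emit (not run) the docker tag commands; pipe the output to `sh` to apply.
REGISTRY=172.19.2.139/xsllqs
retag() {
  # $1 = image:tag in the private registry, $2 = upstream name to restore
  echo "docker tag $REGISTRY/$1 $2"
}
retag etcd-amd64:3.0.17       gcr.io/google_containers/etcd-amd64:3.0.17
retag kube-proxy-amd64:v1.7.6 gcr.io/google_containers/kube-proxy-amd64:v1.7.6
retag node:v2.4.1             quay.io/calico/node:v2.4.1
```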
3. Start kubelet on all nodes
Add all the host names to /etc/hosts.
Starting kubelet directly fails with this error:

journalctl -xe
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
So the following needs to be changed:

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
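The edit can also be done with sed. For safety this sketch works on a local stand-in for 10-kubeadm.conf (stubbed with the systemd setting); on a real node point sed at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and run `systemctl daemon-reload` afterwards.

```shell
# Local stand-in for /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
cat > 10-kubeadm.conf <<'EOF'
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
EOF
# Flip the kubelet cgroup driver to match docker's (cgroupfs)
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' 10-kubeadm.conf
grep cgroup-driver 10-kubeadm.conf
```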
If the following file exists, change it too; if not, ignore it:

vim /etc/systemd/system/kubelet.service.d/99-kubelet-droplet.conf
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0 --cgroup-driver=cgroupfs"
Start it:

systemctl enable kubelet
systemctl start kubelet
kubelet may fail to start, complaining that its configuration file is missing. This can be ignored, because kubeadm supplies the configuration file and starts kubelet itself; it is advisable to start kubelet only after a kubeadm init attempt has failed.
4. Deploy the master node with kubeadm
Run the following on the master node:

kubeadm reset
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
kubeadm init --kubernetes-version=v1.7.6
The output will include the following:

[apiclient] All control plane components are healthy after 30.001912 seconds
[token] Using token: cab485.49b7c0358a06ad35
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token cab485.49b7c0358a06ad35 172.19.2.50:6443
Keep the following command safe; it is needed later when adding new nodes:
kubeadm join --token cab485.49b7c0358a06ad35 172.19.2.50:6443
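If the init output was saved to a file, the join command can be recovered later instead of being retyped from memory. `init.log` is a hypothetical file name, stubbed here with the relevant line; on the master you could instead run `kubeadm init ... | tee init.log`.

```shell
# Stub of saved `kubeadm init` output (hypothetical init.log)
cat > init.log <<'EOF'
You can now join any number of machines by running the following on each node as root:
  kubeadm join --token cab485.49b7c0358a06ad35 172.19.2.50:6443
EOF
# Extract just the join command
grep -o 'kubeadm join --token [^ ]\+ [0-9.:]\+' init.log
```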
Run the commands as instructed:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Deploy the calico network (preferably deploy kubernetes-dashboard first, then calico):

cd /home/lvqingshan/
kubectl apply -f http://docs.projectcalico.org/v2.4/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
Check the pods across all namespaces:
kubectl get pods --all-namespaces
5. Install kubernetes-dashboard on the master node
Download the yaml file for kubernetes-dashboard:

cd /home/lvqingshan/
wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml -O kubernetes-dashboard.yaml
Edit the yaml file to pin the port the service exposes externally:

vim kubernetes-dashboard.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort        # added here: set the service type
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30099     # added here: pin the port exposed on the host
  selector:
    k8s-app: kubernetes-dashboard
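The nodePort chosen here must fall inside the apiserver's service-node-port-range, which defaults to 30000-32767 (as noted in the introduction). A quick sanity check:

```shell
# Returns success when the port is inside the default NodePort range.
in_nodeport_range() { [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]; }

PORT=30099
if in_nodeport_range "$PORT"; then
  echo "nodePort $PORT is within the default range"
else
  echo "nodePort $PORT is outside 30000-32767; pick another port"
fi
```

For 30099 this prints "nodePort 30099 is within the default range".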
Create the dashboard:
kubectl create -f kubernetes-dashboard.yaml
6. Join the worker nodes to the cluster
After making the same changes to the files under /etc/systemd/system/kubelet.service.d/ on each worker node, run:

systemctl enable kubelet
systemctl start kubelet
kubeadm join --token cab485.49b7c0358a06ad35 172.19.2.50:6443
Check the dashboard's NodePort:
kubectl describe svc kubernetes-dashboard --namespace=kube-system
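For scripting, the NodePort can be extracted from the describe output. This sketch stubs the relevant lines (the field layout is assumed from kubectl 1.7-era output); on the master, replace the here-doc with the real `kubectl describe svc` call.

```shell
# Stub of the relevant lines from:
#   kubectl describe svc kubernetes-dashboard --namespace=kube-system
describe=$(cat <<'EOF'
Type:                   NodePort
Port:                   <unset> 80/TCP
NodePort:               <unset> 30099/TCP
EOF
)
# Print just the port number (third field of the NodePort line, before "/")
echo "$describe" | awk '/^NodePort:/ {print $3}' | cut -d/ -f1
```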
Open in a browser:
http://172.19.2.50:30099/