Offline deployment of Kubernetes 1.9

Environment
Master: 192.168.20.93
Node1: 192.168.20.94
Node2: 192.168.20.95

Installing with kubeadm
kubeadm is the deployment tool officially recommended by Kubernetes. It runs the Kubernetes components as pods on the master and node machines and automates certificate generation and related setup.
By default kubeadm pulls its images from Google's registry, which is currently unreachable from mainland China. The images have therefore been downloaded in advance; they only need to be imported on each node from the offline package.


Installation
---- On all nodes: ----

Download:
Link: https://pan.baidu.com/s/1pMdK0Td  Password: zjja

[root@master ~]# md5sum k8s_images.tar.bz2 
b60ad6a638eda472b8ddcfa9006315ee  k8s_images.tar.bz2

Unpack the downloaded offline package:

[root@master ~]# tar -jxvf k8s_images.tar.bz2 
tar (child): bzip2: Cannot exec: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now

Cause: the bzip2 tool is not installed. Install it, then extract again:

[root@master ~]# yum -y install bzip2 
[root@master ~]# tar -jxvf k8s_images.tar.bz2 

Install and start docker, and disable selinux and firewalld. Then
configure the kernel bridge parameters so kubeadm does not warn about routing:

echo "
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
" >> /etc/sysctl.conf
sysctl -p

Import the images:

[root@master /]# cd k8s_images/docker_images/
[root@master docker_images]# for i in `ls`;do docker load < $i ;done

Install the kubelet, kubeadm and kubectl packages:

[root@master docker_images]# cd ..
rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm
rpm -ivh kubernetes-cni-0.6.0-0.x86_64.rpm  kubelet-1.9.0-0.x86_64.rpm  kubectl-1.9.0-0.x86_64.rpm
rpm -ivh kubeadm-1.9.0-0.x86_64.rpm



On the master node:
Initialize the master:

[root@master k8s_images]# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16

Kubernetes supports several network plugins, such as flannel, weave and calico. flannel is used here, so the --pod-network-cidr flag must be set. 10.244.0.0/16 is the default subnet configured in kube-flannel.yml; to use a different subnet, simply make the --pod-network-cidr value passed to kubeadm init and the Network value in kube-flannel.yml identical.
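As a sketch, keeping the two values in sync could look like this (10.10.0.0/16 is an arbitrary example subnet, and kube-flannel.yml is assumed to be in the current directory):

```shell
# Example subnet; any private range not already used on the LAN works.
POD_CIDR="10.10.0.0/16"
# Rewrite the default Network value inside the flannel manifest.
sed -i "s#10.244.0.0/16#${POD_CIDR}#" kube-flannel.yml
# Pass the same subnet to kubeadm so the two settings stay consistent.
kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=${POD_CIDR}
```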



Part of the installation may fail:
The cause is that the kubelet's default cgroup driver differs from docker's: docker defaults to cgroupfs, while the kubelet RPM defaults to systemd. The two drivers must be identical, so edit the --cgroup-driver value in "Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"" so that it matches the driver docker is actually using:
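A quick way to see whether the two drivers disagree (a sketch; the config path is the one installed by the kubeadm RPMs, as shown below):

```shell
# Driver docker is running with (cgroupfs or systemd).
docker info 2>/dev/null | grep -i 'cgroup driver'
# Driver the kubelet is configured with.
grep -- '--cgroup-driver' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```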

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS

Reload systemd and restart the kubelet:

systemctl daemon-reload && systemctl restart kubelet

At this point remember to reset the environment
by running:

kubeadm reset

Then run the init again:

kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16


When it finishes, save the kubeadm join xxx line that is printed; it is needed later when the nodes join the cluster.
If it is lost, it can be recovered on the master with kubeadm token list.
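For example, both pieces of the join command can be regenerated on the master; the hash pipeline below follows the recipe given in the kubeadm documentation:

```shell
# List existing tokens (create a fresh one with "kubeadm token create"
# if the original has expired).
kubeadm token list
# Recompute the --discovery-token-ca-cert-hash value from the cluster CA.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```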

At this point even root cannot yet control the cluster with kubectl; set up the environment as the init output suggests.
For a non-root user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

For root:

export KUBECONFIG=/etc/kubernetes/admin.conf

It can also be added directly to ~/.bash_profile:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

Then source it:

source ~/.bash_profile

Test with kubectl version:

[root@master k8s_images]# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:40:06Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Install the pod network. flannel, calico, weave or macvlan can be used; flannel is used here.
Download the manifest:

wget https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

Or use the copy shipped in the offline package.
To change the pod subnet, keep kubeadm init's --pod-network-cidr in sync with the value here:
vim kube-flannel.yml
Change the Network entry:

"Network": "10.244.0.0/16",

Apply it:

kubectl create -f kube-flannel.yml



On the node machines
Run the kubeadm join command saved from kubeadm init earlier:

[root@node2 ~]# kubeadm join --token d508f9.bf00f1b8182fdc3f 192.168.20.93:6443 --discovery-token-ca-cert-hash sha256:3477ff532256a3ffe1915b3a504cd75a10989a49848cc0321cba0277830c2ac3

If the join fails repeatedly, check /var/log/messages.
After the join succeeds,
verify from the master:

[root@master k8s_images]# kubectl get nodes
NAME               STATUS    ROLES     AGE       VERSION
master.flora.com   Ready     master    19h       v1.9.1
node1.flora.com    Ready     <none>    19h       v1.9.0
node2.flora.com    Ready     <none>    19h       v1.9.0

Kubernetes runs a flannel pod and a kube-proxy pod on every node:

[root@master k8s_images]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
default       nginx-deployment-d5655dd9d-gc7c9           1/1       Running   0          17h
default       nginx-deployment-d5655dd9d-pjq5k           1/1       Running   0          17h
kube-system   etcd-master.flora.com                      1/1       Running   3          19h
kube-system   kube-apiserver-master.flora.com            1/1       Running   13         19h
kube-system   kube-controller-manager-master.flora.com   1/1       Running   9          19h
kube-system   kube-dns-6f4fd4bdf-ds2lf                   3/3       Running   23         19h
kube-system   kube-flannel-ds-5lhmm                      1/1       Running   0          19h
kube-system   kube-flannel-ds-cdhmr                      1/1       Running   1          19h
kube-system   kube-flannel-ds-l5w9b                      1/1       Running   0          19h
kube-system   kube-proxy-9794w                           1/1       Running   0          19h
kube-system   kube-proxy-986n2                           1/1       Running   0          19h
kube-system   kube-proxy-gmncl                           1/1       Running   1          19h
kube-system   kube-scheduler-master.flora.com            1/1       Running   8          19h
[root@master k8s_images]# kubectl get pods -n kube-system -o wide
NAME                                       READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-master.flora.com                      1/1       Running   3          19h       192.168.20.93   master.flora.com
kube-apiserver-master.flora.com            1/1       Running   13         19h       192.168.20.93   master.flora.com
kube-controller-manager-master.flora.com   1/1       Running   9          19h       192.168.20.93   master.flora.com
kube-dns-6f4fd4bdf-ds2lf                   3/3       Running   23         19h       10.244.0.4      master.flora.com
kube-flannel-ds-5lhmm                      1/1       Running   0          19h       192.168.20.94   node1.flora.com
kube-flannel-ds-cdhmr                      1/1       Running   1          19h       192.168.20.93   master.flora.com
kube-flannel-ds-l5w9b                      1/1       Running   0          19h       192.168.20.95   node2.flora.com
kube-proxy-9794w                           1/1       Running   0          19h       192.168.20.94   node1.flora.com
kube-proxy-986n2                           1/1       Running   0          19h       192.168.20.95   node2.flora.com
kube-proxy-gmncl                           1/1       Running   1          19h       192.168.20.93   master.flora.com
kube-scheduler-master.flora.com            1/1       Running   8          19h       192.168.20.93   master.flora.com
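The nginx-deployment pods in the default namespace above came from a smoke test of the cluster. A minimal manifest along these lines reproduces it (the name and replica count are chosen to match the listing; the image tag is an assumption):

```shell
# Write a two-replica nginx Deployment manifest and create it.
cat > nginx-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
EOF
kubectl create -f nginx-deployment.yaml
```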