ssh -p 9000 root@127.0.0.1
Host entries for /etc/hosts on every node:

192.168.56.101 master-node
192.168.56.102 work-node1
192.168.56.103 work-node2
Because kubeadm pulls its images from k8s.gcr.io by default, which is unreachable from mainland China, point it at the Aliyun mirror with `--image-repository`.
kubeadm init --kubernetes-version=1.18.0 \
  --apiserver-advertise-address=192.168.56.101 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.10.0.0/16 \
  --pod-network-cidr=192.168.0.0/16
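Worth noting: `--pod-network-cidr=192.168.0.0/16` matches calico's default IP pool, which is why the calico manifest works later without edits. Once init finishes, a quick sanity check might look like this (the `cluster-info` guard just skips the commands where no cluster is reachable):

```shell
# After init completes on the master, confirm the control plane is up.
if kubectl cluster-info >/dev/null 2>&1; then
  kubectl get nodes                 # master shows NotReady until a CNI is installed
  kubectl -n kube-system get pods   # apiserver, scheduler, etcd, coredns, ...
fi
```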
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.101:6443 --token oxbgj6.ucnfimi2ncnq2w8g \
  --discovery-token-ca-cert-hash sha256:559382fa6170629e0f069bac59d69b41993bf729dcd0a52d3c5ba6f2df72cb77
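One caveat: bootstrap tokens expire after 24 hours by default, so if a worker joins later the token above will be rejected. A fresh join command can be printed on the master; the file check is just a guard so the snippet is a no-op anywhere else:

```shell
# Print a fresh "kubeadm join ..." line (run on the master as root).
if [ -f /etc/kubernetes/admin.conf ]; then
  kubeadm token create --print-join-command
fi
```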
For the network add-on, calico performs a bit better than flannel, with a smaller encapsulation header.
The install is fairly slow, so give it some time.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
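Since the install takes a while, progress can be watched until the calico pods are Running. A sketch, assuming the kubeconfig set up above (the `cluster-info` guard only skips the commands where no cluster is reachable):

```shell
# Watch the calico pods come up in kube-system.
if kubectl cluster-info >/dev/null 2>&1; then
  kubectl -n kube-system get pods -o wide
  kubectl -n kube-system rollout status daemonset/calico-node
fi
```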
Problem
k8s network: stat /var/lib/calico/nodename: no such file or directory
After handling the above, I still saw the network: stat /var/lib/calico/nodename: no such file or directory error. It turned out to be leftover calico configuration: deleting the related calico files fixed it.
Delete the /var/lib/calico directory and the calico files under /etc/cni/net.d/.
After deleting those, delete the existing (broken) pods so they are recreated.
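As concrete commands, the cleanup might look like this (run as root on the affected node; the paths are the ones from the error message):

```shell
# Remove leftover calico node state and CNI config.
rm -rf /var/lib/calico
rm -f /etc/cni/net.d/*calico*
```

Afterwards, restarting kubelet (`systemctl restart kubelet`) and deleting the stuck pods, e.g. `kubectl -n kube-system delete pod -l k8s-app=calico-node`, lets them be recreated cleanly.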
Problem
Worker nodes report:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
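This error appears because kubectl falls back to localhost:8080 when it has no kubeconfig, and a worker node has no /etc/kubernetes/admin.conf of its own, so the file must be fetched from the master first. A sketch under assumptions: `master-node` is the name from the /etc/hosts entries at the top, and the scp line is just one way to copy the file (the ping guard makes the snippet a no-op where the master is unreachable):

```shell
# On a worker: fetch admin.conf from the master, then install it for kubectl.
KUBE_MASTER=${KUBE_MASTER:-master-node}   # hostname from /etc/hosts (assumption)
if ping -c1 -W1 "$KUBE_MASTER" >/dev/null 2>&1; then
  scp "root@$KUBE_MASTER:/etc/kubernetes/admin.conf" /tmp/admin.conf
  mkdir -p "$HOME/.kube"
  cp /tmp/admin.conf "$HOME/.kube/config"
  chown "$(id -u):$(id -g)" "$HOME/.kube/config"
fi
```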
Official docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://blog.csdn.net/hanbing6174/article/details/90092800
https://www.kubernetes.org.cn/7189.html
https://blog.csdn.net/fire_work/article/details/106193304
https://www.cnblogs.com/ssgeek/p/13194687.html
https://edgedef.com/2018/06/16/build-k8s-cluster-via-kubeadm-on-vbox-vms/