| Hostname | IP | Notes |
| --- | --- | --- |
| k8s_master | 192.168.234.130 | Master & etcd |
| k8s_node1 | 192.168.234.131 | Node 1 |
| k8s_node2 | 192.168.234.132 | Node 2 |
Kubernetes is Google's open-source system for managing container clusters at scale. This guide uses the Kubernetes packages shipped with CentOS 7, the distributed key-value store etcd, and flannel to give Docker containers cross-host connectivity.
(The cluster nodes need their clocks kept in sync via NTP; since these are Alibaba Cloud machines, time synchronization is already enabled by default.)
Step 1: Install the components
Master node:
systemctl stop firewalld && systemctl disable firewalld
yum install -y kubernetes etcd docker flannel registry
Worker nodes:
systemctl stop firewalld && systemctl disable firewalld
yum install -y kubernetes docker etcd flannel
Step 2: Configuration

| Node | Services |
| --- | --- |
| Master | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, docker, flanneld, registry |
| Node | etcd, flanneld, docker, kube-proxy, kubelet |
0. Preparation
Configure /etc/hosts on the master and the worker nodes (set each node's own hostname analogously, e.g. hostnamectl set-hostname k8s_node1), and make sure the firewall is disabled:
hostnamectl set-hostname k8s_master
vi /etc/hosts
192.168.234.130 registry
192.168.234.130 etcd
192.168.234.130 k8s_master
192.168.234.131 k8s_node1
192.168.234.132 k8s_node2
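The hosts entries above can be applied with a small idempotent script, so re-running it on any machine never duplicates lines. In this sketch, HOSTS_FILE is a temporary stand-in; point it at /etc/hosts on a real node.

```shell
# Sketch: append each cluster entry to the hosts file only if it is missing,
# so the script is safe to re-run. HOSTS_FILE is a temp stand-in for /etc/hosts.
HOSTS_FILE="$(mktemp)"

add_hosts() {
  while read -r entry; do
    grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
  done <<'EOF'
192.168.234.130 registry
192.168.234.130 etcd
192.168.234.130 k8s_master
192.168.234.131 k8s_node1
192.168.234.132 k8s_node2
EOF
}

add_hosts
add_hosts   # second run adds nothing: every entry already exists
```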
1. Master configuration
1.1 Configure Docker
vim /etc/sysconfig/docker
Add OPTIONS='--insecure-registry registry:5000' so that images can be pulled from the local registry.
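This edit (needed on the master and again on every node in step 2.1) can also be scripted; DOCKER_SYSCONF below is a temp stand-in for /etc/sysconfig/docker.

```shell
# Sketch: add the insecure-registry OPTIONS line only if it is not already
# present. DOCKER_SYSCONF is a temp stand-in for /etc/sysconfig/docker.
DOCKER_SYSCONF="$(mktemp)"
line="OPTIONS='--insecure-registry registry:5000'"
grep -qF "$line" "$DOCKER_SYSCONF" || echo "$line" >> "$DOCKER_SYSCONF"
```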
1.2 Configure etcd
vi /etc/etcd/etcd.conf
#[member]
ETCD_NAME="master"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#[Clustering]
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
Test:
systemctl start etcd && systemctl enable etcd
etcdctl set testdir/testkey 0
etcdctl get testdir/testkey   # returns 0
etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
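Before starting the Kubernetes services it helps to wait until etcd actually reports healthy. A minimal polling helper is sketched below; `fake_etcdctl` is a demo stub that is not part of the real setup. On the master, call the helper with the real command instead: `etcdctl -C http://etcd:2379 cluster-health`.

```shell
# Sketch: poll a health command until it prints "cluster is healthy",
# giving up after a few tries.
wait_healthy() {
  local check="$1" tries=0
  until "$check" 2>/dev/null | grep -q "cluster is healthy"; do
    tries=$((tries + 1))
    [ "$tries" -ge 3 ] && return 1
    sleep 1
  done
}

# Demo stub standing in for `etcdctl -C http://etcd:2379 cluster-health`.
fake_etcdctl() {
  echo "member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379"
  echo "cluster is healthy"
}

wait_healthy fake_etcdctl && echo "etcd is ready"
```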
1.3 Configure the apiserver
vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
1.4 Configure config
vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://k8s_master:8080"
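The master and the nodes use an identical /etc/kubernetes/config (compare step 2.4), so the file can be generated once and copied. In this sketch, OUT is a temp stand-in for /etc/kubernetes/config.

```shell
# Sketch: generate the shared /etc/kubernetes/config from the master hostname.
# OUT is a temp stand-in; write to /etc/kubernetes/config on real machines.
OUT="$(mktemp)"
MASTER_HOST="k8s_master"

cat > "$OUT" <<EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://${MASTER_HOST}:8080"
EOF
```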
Start the services:
systemctl enable kube-apiserver && systemctl start kube-apiserver
systemctl enable kube-controller-manager && systemctl start kube-controller-manager
systemctl enable kube-scheduler && systemctl start kube-scheduler
2. Node (slave) configuration
2.1 Configure Docker
vim /etc/sysconfig/docker
Add OPTIONS='--insecure-registry registry:5000' so that images can be pulled from the local registry.
2.2 Configure etcd
vi /etc/etcd/etcd.conf
#[member]
ETCD_NAME="master"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#[Clustering]
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
2.3 Configure the kubelet
vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=k8s_node1" (use each node's own hostname)
KUBELET_API_SERVER="--api-servers=http://k8s_master:8080" (the master's hostname)
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
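Since the kubelet file differs between nodes only in --hostname-override, it can be generated per node. The values below are taken from the configuration above; /tmp/kubelet.conf is a stand-in output path for the demo (the real target is /etc/kubernetes/kubelet).

```shell
# Sketch: produce the per-node kubelet config from the node name.
gen_kubelet_conf() {
  local node="$1"
  cat <<EOF
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=${node}"
KUBELET_API_SERVER="--api-servers=http://k8s_master:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
EOF
}

# e.g. on node1:
gen_kubelet_conf k8s_node1 > /tmp/kubelet.conf
```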
2.4 Configure config
vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://k8s_master:8080"
Start the services:
systemctl enable kubelet && systemctl start kubelet
systemctl enable kube-proxy && systemctl start kube-proxy
3. Check the cluster status (this can be run from any node):
kubectl -s http://k8s_master:8080 get node
NAME STATUS AGE
k8s_node1 Ready 1m
k8s_node2 Ready 1m
If both nodes show Ready, the cluster is up.
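For scripting, the `get node` output can be checked automatically: fail unless every node's STATUS column reads Ready. The heredoc below stands in for live output of `kubectl -s http://k8s_master:8080 get node`.

```shell
# Sketch: exit non-zero if any node in `kubectl get node` output is not Ready.
all_ready() {
  awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}

all_ready <<'EOF' && echo "all nodes Ready"
NAME         STATUS    AGE
k8s_node1    Ready     1m
k8s_node2    Ready     1m
EOF
```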
4. Create the network: flannel configuration
Change the configuration on both the master and the nodes:
vi /etc/sysconfig/flanneld
FLANNEL_ETCD="http://etcd:2379"
FLANNEL_ETCD_KEY="/k8s/network"
Set the flanneld key in etcd.
Use etcdctl set to modify and etcdctl get to query; whether you create or modify the key, its path must match FLANNEL_ETCD_KEY.
Add the network:
systemctl enable etcd.service
systemctl start etcd.service
etcdctl mk /k8s/network/config '{"Network":"10.254.0.0/16"}'   # create the key (in this setup the subnet must match the apiserver's service-cluster-ip-range)
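Per the note above, the flannel Network value should agree with the apiserver's --service-cluster-ip-range. A quick consistency check can extract both values and compare them; the heredocs below stand in for the real inputs (/etc/kubernetes/apiserver and the /k8s/network/config key fetched from etcd).

```shell
# Sketch: compare the apiserver service CIDR with the flannel Network CIDR.
apiserver_cidr=$(sed -n 's/.*--service-cluster-ip-range=\([0-9./]*\).*/\1/p' <<'EOF'
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
EOF
)

flannel_cidr=$(sed -n 's/.*"Network" *: *"\([0-9./]*\)".*/\1/p' <<'EOF'
{"Network":"10.254.0.0/16"}
EOF
)

[ "$apiserver_cidr" = "$flannel_cidr" ] && echo "CIDRs match: $apiserver_cidr"
```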
5. Once the network is up, restart all services on the master and the nodes.
master:
systemctl enable flanneld && systemctl start flanneld
systemctl restart docker
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
node:
systemctl enable flanneld && systemctl start flanneld
systemctl restart docker
systemctl restart kubelet
systemctl restart kube-proxy
Check that all nodes are healthy:
kubectl -s k8s_master:8080 get nodes
kubectl get nodes
Browse to http://kube-apiserver:port:
http://192.168.234.130:8080/ lists all available API paths
http://192.168.234.130:8080/healthz/ping is the health check
6. Deploy the dashboard (optional):
wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
To remove and recreate it:
kubectl delete -f kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
kubectl get namespace