Installing Kubernetes 1.10.0 with kubeadm

  1. Environment preparation
    #System environment
    lsb_release -a
    Distributor ID: CentOS
    Description:    CentOS Linux release 7.3.1611 (Core)
    Release:    7.3.1611
    Codename:   Core
    Note: if you see "-bash: lsb_release: command not found", run yum install -y redhat-lsb
    #Check SELinux status
    getenforce
    Disabled
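    If getenforce reports Enforcing instead, the following is a common way to disable SELinux (a sketch; the config path assumes a standard CentOS 7 layout):
    #Switch to permissive mode for the current boot
    setenforce 0
    #Disable SELinux permanently (takes effect after a reboot)
    sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config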
    #Disable the firewall
    systemctl disable firewalld
    systemctl stop firewalld
    systemctl status firewalld
  2. Docker installation
    The Docker packages needed here can be downloaded via the 17.03 link.
    #Install Docker
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum install -y docker-ce-selinux-17.03.1.ce-1.el7.centos.noarch.rpm
    yum install -y docker-ce-17.03.1.ce-1.el7.centos.x86_64.rpm
    #Start Docker
    systemctl enable docker
    systemctl start docker
    systemctl status docker
    #Docker version
    docker --version
    Docker version 17.03.1-ce, build c6d412e
  3. Base image preparation
    The images this Kubernetes walkthrough depends on are listed below:
    k8s.gcr.io/kube-apiserver-amd64:v1.10.0
    k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
    k8s.gcr.io/kube-scheduler-amd64:v1.10.0
    k8s.gcr.io/kube-proxy-amd64:v1.10.0
    k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
    k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
    k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
    k8s.gcr.io/pause-amd64:3.1
    quay.io/coreos/flannel:v0.9.1
    quay.io/calico/node:v2.6.2
    quay.io/calico/cni:v1.11.0
    k8s.gcr.io/etcd-amd64:3.1.12
    k8s.gcr.io/heapster-amd64:v1.5.3
    k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
    k8s.gcr.io/heapster-grafana-amd64:v4.4.3
    k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
    k8s.gcr.io/kubernetes-dashboard-init-amd64:v1.0.1

    The required images can be downloaded via the images link.
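    If a node cannot pull from k8s.gcr.io directly, one workaround is to pull the images on a machine that does have access and transfer them as tar archives (a sketch; the file name below is illustrative, repeat for each image in the list above):
    #On a machine with registry access
    docker pull k8s.gcr.io/kube-apiserver-amd64:v1.10.0
    docker save -o kube-apiserver-amd64_v1.10.0.tar k8s.gcr.io/kube-apiserver-amd64:v1.10.0
    #Copy the tar file to the target node, then load it
    docker load -i kube-apiserver-amd64_v1.10.0.tar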

  4. System configuration
    Per the limitations section of the official documentation, configure each node as follows:
    #Create the /etc/sysctl.d/k8s.conf file:
    touch /etc/sysctl.d/k8s.conf
    #Add the following content:
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    #Run
    sysctl -p /etc/sysctl.d/k8s.conf
    to make the changes take effect.
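    The same file can also be written in one step with a heredoc, matching the style used for the yum repo below (a sketch):
    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sysctl -p /etc/sysctl.d/k8s.conf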
  5. Install kubeadm and kubelet
    Install kubeadm and kubelet on every node.
    #Configure the yum repo (reaching this repo requires getting around the network block; if you cannot, skip this for now! I will set up a mirror repo later.)
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
     https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
    #Rebuild the yum cache
    yum clean all
    yum makecache
    #Test the repo URL
    curl  http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
    #Check the latest available versions of kubeadm, kubelet, and kubectl
    yum list kubeadm  --showduplicates |sort -r
    kubeadm.x86_64                        1.10.0-0                        kubernetes
    yum list kubelet  --showduplicates |sort -r
    kubelet.x86_64                        1.10.0-0                        kubernetes
    yum list kubectl  --showduplicates |sort -r
    kubectl.x86_64                        1.10.0-0                        kubernetes
    Then install them directly:
    yum install -y kubelet kubeadm kubectl kubernetes-cni
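    Since this walkthrough targets 1.10.0, you may prefer to pin the versions explicitly rather than install whatever is latest (a sketch; the versions follow the yum list output above):
    yum install -y kubelet-1.10.0-0 kubeadm-1.10.0-0 kubectl-1.10.0-0 kubernetes-cni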
    #Enable and start kubelet.service
    systemctl enable kubelet.service
    systemctl start kubelet.service
    systemctl status kubelet.service
  6. Preparation before initializing the master
    #Notes:
    (1) Installing the kubeadm RPMs creates the /etc/kubernetes directory, and kubeadm init checks whether these directories already exist and aborts initialization if they do, so clean them up first.

    #Cleanup command
    kubeadm reset

    See the tear down section of the official documentation.

    (2) Make sure kubelet is started before initializing.

    #Start kubelet as follows:
    systemctl enable kubelet
    systemctl start kubelet

    (3) Install the ebtables package
    On newer versions, running init directly reports an "ebtables not found in system path" error, so install this package before initializing.

    #Install ebtables
    yum install -y ebtables

    (4) Modify the kubelet configuration file

    There are two possible cgroup drivers for kubelet and Docker: cgroupfs and systemd. Make sure both applications use the same driver.
    #Check kubelet's driver
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    The file contains the following line:
    Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
    #Check Docker's driver
    docker info
    Server Version: 17.03.1-ce
    ...
    Cgroup Driver: cgroupfs
    ...
    #Change kubelet's driver to match
    Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

    (5) Disable system swap
    Starting with Kubernetes 1.8, system swap must be disabled; otherwise kubelet will not start under the default configuration. This restriction can be lifted with the kubelet startup parameter --fail-swap-on=false.

    #Disable swap
    swapoff -a
    #Edit /etc/fstab and comment out the swap auto-mount entry (see the sketch at the end of this step), then confirm with free -m that swap is off.
    #To adjust swappiness, add the following line to /etc/sysctl.d/k8s.conf:
    vm.swappiness=0
    #Run
    sysctl -p /etc/sysctl.d/k8s.conf
    to make the change take effect.

    Note: because the test host used here also runs other services, disabling swap could affect them, so instead the kubelet startup parameter --fail-swap-on=false is used to remove the restriction.

    #Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add:
    Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
    #Reload to pick up the configuration change:
    systemctl daemon-reload
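    To comment out the swap entry in /etc/fstab as mentioned in the swapoff step (a sketch; the exact fstab line varies by system, so review the file afterwards):
    #Comment out any line whose fields include "swap"
    sed -i '/\sswap\s/ s/^/#/' /etc/fstab
    #Confirm swap is off
    free -m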
  7. Initialize the cluster with kubeadm init
    #To initialize the cluster with kubeadm, run the following on the master node:
    kubeadm init \
    --kubernetes-version=v1.10.0 \
    --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=10.0.0.39

    #Note
    We use flannel as the pod network plugin, so the command above specifies --pod-network-cidr=10.244.0.0/16
    #Warning seen during execution

    [WARNING FileExisting-crictl]: crictl not found in system path
    Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl

    #If the following appears during execution:

    [preflight] Some fatal errors occurred:
    [ERROR Swap]: running with swap on is not supported. Please disable swap
    add the --ignore-preflight-errors=Swap parameter to ignore this error and run again.
    That is:
    kubeadm init \
    --kubernetes-version=v1.10.0 \
    --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=10.0.0.39 \
    --ignore-preflight-errors=Swap

    #Full initialization output

    kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=10.0.0.39
    [init] Using Kubernetes version: v1.10.0
    [init] Using Authorization modes: [Node RBAC]
    [preflight] Running pre-flight checks.
    [WARNING FileExisting-crictl]: crictl not found in system path
    Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
    [preflight] Starting the kubelet service
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [swarm2 kubernetes
    kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] an
    d IPs [10.96.0.1 10.0.0.39]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs
    [127.0.0.1]
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS names [swarm2] and IPs
    [10.0.0.39]
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [certificates] Generated apiserver-etcd-client certificate and key.
    [certificates] Generated sa key and public key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.co
    nf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver to
    "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager 
    to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler 
    to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance
    to "/etc/kubernetes/manifests/etcd.yaml"
    [init] Waiting for the kubelet to boot up the control plane as Static Pods 
    from directory "/etc/kubernetes/manifests".
    [init] This might take a minute or longer if the control plane images have to be pulled.
    [apiclient] All control plane components are healthy after 22.002127 seconds
    [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in 
    the "kube-system" Namespace
    [markmaster] Will mark node swarm2 as master by adding a label and a taint
    [markmaster] Master swarm2 tainted and labelled with key/value: 
    node-role.kubernetes.io/master=""
    [bootstraptoken] Using token: 4g0p8w.w5p29ukwvitim2ti
    [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post 
    CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] Configured RBAC rules to allow the csrapprover controller
    automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node 
    client certificates in the cluster
    [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" 
    namespace
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy
    Your Kubernetes master has initialized successfully!
    To start using your cluster, you need to run the following as a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
    You can now join any number of machines by running the following on each node
    as root:
    kubeadm join 10.0.0.39:6443 --token 4g0p8w.w5p29ukwvitim2ti --discovery-token-ca-cert-hash sha256:21d0adbfcb409dca97e655641573b2ee51c77a212f194e20a307cb459e5f77c8

    #Key points in the output

    [certificates] generates the various certificates.
    [kubeconfig] then generates the related kubeconfig files; this is the same as what we did
    in the Kubernetes 1.6 HA cluster deployment, with nothing new apparent here.
    [bootstraptoken] generates a token; record it, as it is needed later when adding nodes
    to the cluster with kubeadm join.
    The following commands configure kubectl access to the cluster for a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    Finally, the command for joining nodes to the cluster is printed:
    kubeadm join 10.0.0.39:6443 --token 4g0p8w.w5p29ukwvitim2ti --discovery-token-ca-cert-hash sha256:21d0adbfcb409dca97e655641573b2ee51c77a212f194e20a307cb459e5f77c8
    Be sure to save this command, since it is not printed again later!
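    If it does get lost, it can usually be reconstructed (a sketch, assuming a kubeadm version where kubeadm token create supports --print-join-command; note that bootstrap tokens expire after 24 hours by default, so a fresh one may be needed anyway):
    #Create a new bootstrap token and print a ready-to-use join command
    kubeadm token create --print-join-command
    #Or compute the discovery CA cert hash manually from the master's CA certificate
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'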
  8. Verify the master node
    #Check cluster status
    kubectl get cs
    error: the server doesn't have a resource type "cs"
    kubectl get nodes
    Unable to connect to the server: x509: certificate signed by unknown authority
    (possibly because of "crypto/rsa: verification error" while trying to verify 
    candidate authority certificate "kubernetes")

    #Fix:

    mkdir -p /root/.kube/
    cp -i /etc/kubernetes/admin.conf /root/.kube/config
    chown root:root /root/.kube/config

    #Check cluster status again

    kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    controller-manager   Healthy   ok
    scheduler            Healthy   ok
    etcd-0               Healthy   {"health": "true"}

    #List cluster nodes

    kubectl get nodes
    NAME      STATUS    ROLES     AGE       VERSION
    swarm2    Ready     master    1d        v1.10.0

    #Command-line verification

    Command:
    curl --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt --key /etc/kubernetes/pki/apiserver-kubelet-client.key https://10.0.0.39:6443
  9. Inspect the current network
    After kubeadm init succeeds, the core Kubernetes components on the master node are all up, and they mostly run as containers.
    docker ps
    CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS               NAMES
    c2eb8ca152a2        k8s.gcr.io/pause-amd64:3.1   "/pause"                 20 minutes ago      Up 20 minutes                           k8s_POD_kube-flannel-ds-nhjx5_kube-system_a3016d6d-51dd-11e8-a243-0017fa00e437_0
    eb98616f7458        6f7f2dc7fab5                 "/sidecar --v=2 --..."   25 minutes ago      Up 25 minutes                           k8s_sidecar_kube-dns-86f4d74b45-ztrbs_kube-system_17bb3bf9-50e1-11e8-921b-0017fa00e437_4
    912942318fed        c2ce1ffb51ed                 "/dnsmasq-nanny -v..."   25 minutes ago      Up 25 minutes                           k8s_dnsmasq_kube-dns-86f4d74b45-ztrbs_kube-system_17bb3bf9-50e1-11e8-921b-0017fa00e437_4
    d5b193a4d60e        80cc5ea4b547                 "/kube-dns --domai..."   25 minutes ago      Up 25 minutes                           k8s_kubedns_kube-dns-86f4d74b45-ztrbs_kube-system_17bb3bf9-50e1-11e8-921b-0017fa00e437_4
    c120c761b764        9df3c00f55e6                 "kube-apiserver --..."   25 minutes ago      Up 25 minutes                           k8s_kube-apiserver_kube-apiserver-swarm2_kube-system_659a6e4a0a2629e2c62563857da54a7f_3
    81533f3ea2a7        k8s.gcr.io/pause-amd64:3.1   "/pause"                 25 minutes ago      Up 25 minutes                           k8s_POD_kube-dns-86f4d74b45-ztrbs_kube-system_17bb3bf9-50e1-11e8-921b-0017fa00e437_4
    867ce7e1d7f8        ceecd7155649                 "kube-scheduler --..."   25 minutes ago      Up 25 minutes                           k8s_kube-scheduler_kube-scheduler-swarm2_kube-system_0ede54c0e24ebcdc8ec84ec2aa830bfc_4
    01474f2a8879        6e6237849607                 "/usr/local/bin/ku..."   25 minutes ago      Up 25 minutes                           k8s_kube-proxy_kube-proxy-2tghd_kube-system_17c7aa1b-50e1-11e8-921b-0017fa00e437_4
    be4c6d21dc4d        52920ad46f5b                 "etcd --peer-key-f..."   25 minutes ago      Up 25 minutes                           k8s_etcd_etcd-swarm2_kube-system_11d7cb74cd31e890a93f59d783573f27_3
    912413c032e1        8401bb3ff261                 "kube-controller-m..."   25 minutes ago      Up 25 minutes                           k8s_kube-controller-manager_kube-controller-manager-swarm2_kube-system_c9074b2decba8a970cedd0fa9c4dd366_3
    6609d479936e        k8s.gcr.io/pause-amd64:3.1   "/pause"                 25 minutes ago      Up 25 minutes                           k8s_POD_kube-apiserver-swarm2_kube-system_659a6e4a0a2629e2c62563857da54a7f_3
    ddfb3e0b37f5        k8s.gcr.io/pause-amd64:3.1   "/pause"                 25 minutes ago      Up 25 minutes                           k8s_POD_kube-scheduler-swarm2_kube-system_0ede54c0e24ebcdc8ec84ec2aa830bfc_4
    67c0c0001635        k8s.gcr.io/pause-amd64:3.1   "/pause"                 25 minutes ago      Up 25 minutes                           k8s_POD_kube-proxy-2tghd_kube-system_17c7aa1b-50e1-11e8-921b-0017fa00e437_4
    5dac6220b6dc        k8s.gcr.io/pause-amd64:3.1   "/pause"                 25 minutes ago      Up 25 minutes                           k8s_POD_etcd-swarm2_kube-system_11d7cb74cd31e890a93f59d783573f27_5
    7c88b16b245d        k8s.gcr.io/pause-amd64:3.1   "/pause"                 25 minutes ago      Up 25 minutes                           k8s_POD_kube-controller-manager-swarm2_kube-system_c9074b2decba8a970cedd0fa9c4dd366_4

    However, these core components do not run on the pod network (indeed, the pod network has not been created yet); they use the host network instead. Take the kube-apiserver pod as an example:

    kubectl get pods -n kube-system
    NAME                             READY     STATUS    RESTARTS   AGE
    kube-apiserver-swarm2            1/1       Running   3          1d

    #Find the kube-apiserver container ID

    docker ps |grep apiserver
    c120c761b764        9df3c00f55e6                 "kube-apiserver --..."   33 minutes ago      Up 33 minutes                           k8s_kube-apiserver_kube-apiserver-swarm2_kube-system_659a6e4a0a2629e2c62563857da54a7f_3

    #Inspect the network mode of the corresponding pause container

    docker inspect c120c761b764
    "NetworkMode": "host",
  10. Install the flannel pod network
    #Create a working directory for kube-flannel.yml
    mkdir /etc/kubernetes/manifests/my.conf
    #Download the kube-flannel.yml file
    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    #Edit the file, changing
    image: quay.io/coreos/flannel:v0.10.0-amd64
    to:
    image: quay.io/coreos/flannel:v0.9.1
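    This edit can also be made non-interactively (a sketch; run it in the directory containing kube-flannel.yml):
    sed -i 's#quay.io/coreos/flannel:v0.10.0-amd64#quay.io/coreos/flannel:v0.9.1#g' kube-flannel.yml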
    #Deploy flannel
    cd /etc/kubernetes/manifests/my.conf
    kubectl create -f  kube-flannel.yml
  11. Check the status of all pods on the master
    kubectl get pod --all-namespaces -o wide
    NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE       IP           NODE
    kube-system   etcd-swarm2                      1/1       Running   3          1d        10.0.0.39    swarm2
    kube-system   kube-apiserver-swarm2            1/1       Running   3          1d        10.0.0.39    swarm2
    kube-system   kube-controller-manager-swarm2   1/1       Running   3          1d        10.0.0.39    swarm2
    kube-system   kube-dns-86f4d74b45-ztrbs        3/3       Running   12         1d        10.244.0.6   swarm2
    kube-system   kube-flannel-ds-nhjx5            1/1       Running   0          36m       10.0.0.39    swarm2
    kube-system   kube-proxy-2tghd                 1/1       Running   4          1d        10.0.0.39    swarm2
    kube-system   kube-scheduler-swarm2            1/1       Running   4          1d        10.0.0.39    swarm2
    Or use:
    kubectl get pods -n kube-system
    NAME                             READY     STATUS    RESTARTS   AGE
    etcd-swarm2                      1/1       Running   3          1d
    kube-apiserver-swarm2            1/1       Running   3          1d
    kube-controller-manager-swarm2   1/1       Running   3          1d
    kube-dns-86f4d74b45-ztrbs        3/3       Running   12         1d
    kube-flannel-ds-nhjx5            1/1       Running   0          37m
    kube-proxy-2tghd                 1/1       Running   4          1d
    kube-scheduler-swarm2            1/1       Running   4          1d
  12. Add/remove nodes in the Kubernetes cluster
    For these operations, see the Node operations reference; a brief sketch is given below.
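    The outline is roughly as follows (a sketch; <node-name>, <token>, and <hash> are placeholders, with the real values coming from the kubeadm init output above):
    #On the new node: install docker, kubelet, and kubeadm as in the earlier sections, then join
    kubeadm join 10.0.0.39:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
    #On the master: drain and remove a node
    kubectl drain <node-name> --delete-local-data --force --ignore-daemonsets
    kubectl delete node <node-name>
    #On the removed node: clear its kubeadm state
    kubeadm reset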