Installing Kubernetes with RKE

Author: Zhang Shoufu
Date: 2019-02-13
Personal blog: www.zhangshoufu.com
QQ group: 895291458

Cluster node overview

We need four machines here, all running CentOS 7.5:

10.0.0.99 MKE.kuber.com
10.0.0.100 master.kuber.com
10.0.0.101 node101.kuber.com
10.0.0.102 node102.kuber.com

Pre-installation tuning (run on all machines)

sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # disable SELinux

systemctl stop firewalld.service && systemctl disable firewalld.service # disable the firewall

echo 'LANG="en_US.UTF-8"' >> /etc/profile;source /etc/profile # set the system language

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime # set the timezone (if needed)

# add hosts entries
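Based on the node list above, the entries to append on every machine would be:

cat >> /etc/hosts <<EOF
10.0.0.99 MKE.kuber.com
10.0.0.100 master.kuber.com
10.0.0.101 node101.kuber.com
10.0.0.102 node102.kuber.com
EOF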

# performance tuning
cat >> /etc/sysctl.conf<<EOF
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.neigh.default.gc_thresh1=4096
net.ipv4.neigh.default.gc_thresh2=6144
net.ipv4.neigh.default.gc_thresh3=8192
EOF
sysctl -p

Configure forwarding-related kernel parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system

Configure the Kubernetes repo (run on all machines)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Configure the Docker repo and install Docker (run on all machines)

yum -y install  yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y device-mapper-persistent-data lvm2
sudo yum makecache fast
yum -y remove container-selinux.noarch
yum install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm  -y
yum install docker-ce-17.03.0.ce -y  # install 17.03, otherwise problems will occur
systemctl start docker && systemctl enable docker

Configure a Docker registry mirror

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ll9gv5j9.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
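Changes to daemon.json only take effect after the Docker daemon restarts:

systemctl restart docker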

Configuring registry mirror addresses

Multiple mirrors can be configured, written as a JSON array, and each address must include its protocol scheme. Edit /etc/docker/daemon.json and add the following:

{
  "registry-mirrors": ["https://z34wtdhg.mirror.aliyuncs.com","https://$IP:$PORT"]
}

Configure a private registry (optional)

By default Docker trusts only TLS-encrypted (https) registry addresses, so non-https registries cannot be logged into or pulled from. insecure-registries literally means "insecure registries": adding this parameter tells Docker to trust non-https registries. Multiple insecure-registries addresses can be set, written as a JSON array, and the addresses must not include a protocol scheme (http). Edit /etc/docker/daemon.json and add the following:

{
  "insecure-registries": ["harbor.httpshop.com","bh-harbor.suixingpay.com"]
}

Configure the Docker storage driver (optional)

There are many storage drivers, e.g. overlay, overlay2, and devicemapper. The first two are OverlayFS drivers, a next-generation union filesystem similar to AUFS but faster and more stable. The newer overlay2 is recommended here.
Requirements:
overlay2: Linux kernel 4.0 or higher, or RHEL/CentOS with kernel 3.10.0-514 or later
Supported backing filesystems: ext4 (RHEL 7.1 only) or xfs (RHEL 7.2 and later), with d_type=true enabled
Edit /etc/docker/daemon.json and add the following:

{
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}

Configure the logging driver (optional)

Containers produce a large volume of logs at runtime, which can easily fill the disk. Configure the logging driver to limit log file size and count. The settings below cap each log file at 100 MB and keep at most 3 files:

{
  "log-driver": "json-file",
  "log-opts": {
  "max-size": "100m",
  "max-file": "3"
  }
}

A sample daemon.json

{
  "registry-mirrors": ["https://z34wtdhg.mirror.aliyuncs.com"],
  "insecure-registries": ["harbor.httpshop.com","bh-harbor.suixingpay.com"],
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}

Create the docker user (on all nodes). This step is especially important: every service we start later must run as the docker user.

[root@RKE ~]# grep ^docker /etc/group  # if the docker group already exists, skip creating it
docker:x:994:
useradd -g docker docker
echo "1" | passwd --stdin docker

Distribute SSH keys from the RKE host

ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
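RKE connects over SSH as the docker user and runs Docker on each node, so it is worth verifying both in one pass, for example:

for ip in 10.0.0.100 10.0.0.101 10.0.0.102; do ssh docker@$ip docker ps; done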

Install nginx so we can access the cluster from outside (load-balances multiple masters; install on MKE)

The nginx configuration file is as follows:

[docker@MKE ~]$  cat /etc/nginx/nginx.conf
worker_processes auto;
pid /run/nginx.pid;

events {
    use epoll;
    worker_connections 65536;
    accept_mutex off;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$upstream_addr" "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" "$request_time"';
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   900;
    # keepalive_timeout   0;
    keepalive_requests  100;
    types_hash_max_size 2048;

    server {
        listen         80;
        return 301 https://$host$request_uri;
    }
}

stream {
    upstream rancher_servers {
        least_conn;
        server 10.0.0.100:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen     443;
        proxy_pass rancher_servers;
    }
}

Start nginx as a Docker container:

docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf \
nginx:1.14

Installing Kubernetes with RKE (run on the MKE machine)

Download RKE: wget https://github.com/rancher/rke/releases/download/v0.1.11/rke_linux-amd64 (on machines without unrestricted internet access the download may fail; fetch it elsewhere and copy it over)
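The release asset is a bare binary, so give it the executable bit before the ./rke_linux-amd64 invocations below (--version serves as a quick smoke test):

chmod +x rke_linux-amd64
./rke_linux-amd64 --version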

Write the cluster YAML file (switch to the docker user first):

nodes:
  - address: 10.0.0.100
    user: docker
    ssh_key_path: ~/.ssh/id_rsa
    role: [controlplane,worker,etcd]
  - address: 10.0.0.101
    user: docker
    role: [worker,etcd]
  - address: 10.0.0.102
    user: docker
    role: [worker,etcd]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
  • address: the address of the cluster node
  • user: the user RKE uses to run installation commands
  • ssh_key_path: path to the private key (needed only if the key does not use the default name)
  • role: the roles this node takes on
    For the remaining options, see https://www.cnrancher.com/docs/rke/v0.1.x/cn/example-yamls/cluster/ ; then bring the cluster up as shown below.
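Save the file as cluster.yml, then create the cluster with the binary downloaded earlier (run as the docker user):

./rke_linux-amd64 up --config cluster.yml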

Install kubectl and check the cluster

yum -y install kubectl
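RKE writes the cluster credentials to kube_config_cluster.yml in the directory it ran from (it shows up in the ls output later in this post); point kubectl at it:

export KUBECONFIG=/home/docker/kube_config_cluster.yml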

Check the cluster nodes:

[docker@MKE ~]$ kubectl get nodes
NAME         STATUS   ROLES                      AGE   VERSION
10.0.0.100   Ready    controlplane,etcd,worker   2h    v1.11.3
10.0.0.101   Ready    etcd,worker                2h    v1.11.3
10.0.0.102   Ready    etcd,worker                2h    v1.11.3

Check pod status:

[docker@MKE ~]$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-797c5bc547-j7577     1/1     Running     0          2h
ingress-nginx   nginx-ingress-controller-69s9g            1/1     Running     0          2h
ingress-nginx   nginx-ingress-controller-8gw74            1/1     Running     0          2h
ingress-nginx   nginx-ingress-controller-xgzzw            1/1     Running     0          2h
kube-system     canal-5nf7c                               3/3     Running     0          2h
kube-system     canal-nzgx4                               3/3     Running     0          2h
kube-system     canal-t5m9d                               3/3     Running     0          2h
kube-system     kube-dns-7588d5b5f5-s5f99                 3/3     Running     0          2h
kube-system     kube-dns-autoscaler-5db9bbb766-62rxm      1/1     Running     0          2h
kube-system     metrics-server-97bc649d5-9h2g4            1/1     Running     0          2h
kube-system     rke-ingress-controller-deploy-job-rwzgq   0/1     Completed   0          2h
kube-system     rke-kubedns-addon-deploy-job-mvmzj        0/1     Completed   0          2h
kube-system     rke-metrics-addon-deploy-job-52gp4        0/1     Completed   0          2h
kube-system     rke-network-plugin-deploy-job-jckhc       0/1     Completed   0          2h

Pods whose STATUS is Completed are run-once Jobs; READY for these pods should show 0/1.

Configure kubectl command completion

yum -y install bash-completion.noarch
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Install the Rancher dashboard with Helm

Create RBAC (Role-Based Access Control) resources for Helm

# create a tiller service account in the kube-system namespace
kubectl -n kube-system create serviceaccount tiller

# bind the tiller service account to the cluster-admin cluster role
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
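A quick check that both objects exist:

kubectl -n kube-system get serviceaccount tiller
kubectl get clusterrolebinding tiller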

Install Helm from the binary release

Download the release from: https://github.com/helm/helm/releases

[docker@MKE ~]$ tar xf helm-v2.12.2-linux-amd64.tar.gz
[root@MKE ~]# cp -a -t /usr/local/bin/ /home/docker/linux-amd64/helm /home/docker/linux-amd64/tiller
[root@MKE ~]# su - docker

Add the Helm chart repository

helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

Install Tiller for Rancher

The version used by default is v2.12.3:

helm init --service-account tiller --tiller-image \
registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.12.3 \
--stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

Upgrade Tiller (optional)
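Helm v2 upgrades the in-cluster Tiller with the same helm init command plus the --upgrade flag; a minimal sketch reusing the mirror image from above:

helm init --upgrade --service-account tiller --tiller-image \
registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.12.3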

Install the certificate manager

helm install stable/cert-manager \
--name cert-manager \
--namespace kube-system

If it errors, append --set createCustomResource=true to the command.
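That is, the full retried command becomes:

helm install stable/cert-manager \
--name cert-manager \
--namespace kube-system \
--set createCustomResource=true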

Choose an SSL configuration method and install Rancher server

helm install rancher-stable/rancher \
--name rancher \
--namespace cattle-system \
--set hostname=rancher.zsf.com
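The rollout takes a few minutes; a standard way to wait for it to finish (the chart deploys a deployment named rancher in cattle-system):

kubectl -n cattle-system rollout status deploy/rancher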

Update the hosts file and test access from a browser

Add the corresponding domain mapping to the hosts file, since our domain is a made-up one:

cat /etc/hosts
10.0.0.99 rancher.zsf.com

Log in from a browser

When logging in, note that you must use the https protocol. How long startup takes depends on your machine's specs.

Backup and restore

Cluster backup (for beginners, it is strongly recommended to take a snapshot once the cluster is up and running)

Note:

  • Requires RKE v0.1.7 or later

Create a snapshot manually:

Before upgrading Rancher or restoring it to a previous snapshot, manually snapshot the data so it can be recovered if something goes wrong.

Run the following command on the RKE machine:

./rke_linux-amd64 etcd snapshot-save --name <SNAPSHOT.db> --config rancher-cluster.yml

SNAPSHOT.db: the name to save the etcd snapshot under
rancher-cluster.yml: the config file used when the cluster was created; omit it if you used the default cluster.yml
RKE takes a snapshot on each etcd node and saves it under /opt/rke/etcd-snapshots on every etcd node.
Test:

[docker@MKE ~]$ pwd
/home/docker
[docker@MKE ~]$ ls
cluster.yml  kube_config_cluster.yml  linux-amd64  rke_linux-amd64
[docker@MKE ~]$ ./rke_linux-amd64 etcd snapshot-save --name initialization_status_20190213 --config cluster.yml
INFO[0000] Starting saving snapshot on etcd hosts
INFO[0000] [dialer] Setup tunnel for host [10.0.0.100]
INFO[0000] [dialer] Setup tunnel for host [10.0.0.101]
INFO[0000] [dialer] Setup tunnel for host [10.0.0.102]
INFO[0000] [etcd] Saving snapshot [initialization_status_20190213] on host [10.0.0.100]
INFO[0000] [etcd] Successfully started [etcd-snapshot-once] container on host [10.0.0.100]
INFO[0000] [etcd] Saving snapshot [initialization_status_20190213] on host [10.0.0.101]
INFO[0001] [etcd] Successfully started [etcd-snapshot-once] container on host [10.0.0.101]
INFO[0001] [etcd] Saving snapshot [initialization_status_20190213] on host [10.0.0.102]
INFO[0001] [etcd] Successfully started [etcd-snapshot-once] container on host [10.0.0.102]
INFO[0002] [certificates] Successfully started [rke-bundle-cert] container on host [10.0.0.100]
INFO[0002] [certificates] Successfully started [rke-bundle-cert] container on host [10.0.0.102]
INFO[0002] [certificates] Successfully started [rke-bundle-cert] container on host [10.0.0.101]
INFO[0002] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.0.0.101]
INFO[0002] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.0.0.100]
INFO[0002] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.0.0.102]
INFO[0002] Finished saving snapshot [initialization_status_20190213] on all etcd hosts

Check on a node:

[docker@master etcd-snapshots]$ ll -d /opt/rke/etcd-snapshots/initialization_status_20190213
-rw-r--r-- 1 root root 9052192 Feb 13 10:25 /opt/rke/etcd-snapshots/initialization_status_20190213

Scheduled automatic snapshots

The scheduled snapshot service ships with RKE but is disabled by default. It can be enabled by adding etcd-snapshot configuration to rancher-cluster.yml.
Add the following to cluster.yml:

services:
  etcd:
    snapshot: true  # enable snapshots; default is false
    creation: 6h0s  # interval between snapshots; defaults to 5 minutes if omitted
    retention: 24h  # snapshot retention; snapshots older than this are deleted

Run ./rke_linux-amd64 up --config cluster.yml
Result:
RKE will take snapshots on each etcd node on schedule and save them under /opt/rke/etcd-snapshots/ on every etcd node.

For HA cluster recovery, see the linked guide.
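For reference, a restore uses the same binary's etcd snapshot-restore subcommand; a minimal sketch with the snapshot taken above (see the full HA restore guide for the complete procedure):

./rke_linux-amd64 etcd snapshot-restore --name initialization_status_20190213 --config cluster.yml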
