Rancher 2.4.4 - High-Availability (HA) Cluster Deployment - Offline Installation

Planning:

The virtual machines used here have dual NICs, so each host has two IPs. The 10.0.0.x addresses can mostly be ignored, except that the load balancer uses its 10.0.0.x address as the public entry point.

IP1          IP2            Description
10.0.0.20    172.16.1.20    nginx load balancer, Harbor registry
10.0.0.21    172.16.1.21    rancher1
10.0.0.22    172.16.1.22    rancher2
10.0.0.23    172.16.1.23    rancher3

Recommended architecture (per the official docs)

  • Rancher's DNS name should resolve to a layer-4 load balancer.
  • The load balancer should forward TCP/80 and TCP/443 traffic to all three nodes of the Kubernetes cluster.
  • The Ingress controller redirects HTTP to HTTPS and terminates SSL/TLS on port TCP/443.
  • The Ingress controller forwards traffic to port TCP/80 on the Pods of the Rancher deployment.


Prerequisites

1. Host OS tuning

echo "
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
net.ipv4.conf.all.forwarding=1
net.ipv4.neigh.default.gc_thresh1=4096
net.ipv4.neigh.default.gc_thresh2=6144
net.ipv4.neigh.default.gc_thresh3=8192
net.ipv4.neigh.default.gc_interval=60
net.ipv4.neigh.default.gc_stale_time=120

# Reference: https://github.com/prometheus/node_exporter#disabled-by-default
kernel.perf_event_paranoid=-1

#sysctls for k8s node config
net.ipv4.tcp_slow_start_after_idle=0
net.core.rmem_max=16777216
fs.inotify.max_user_watches=524288
kernel.softlockup_all_cpu_backtrace=1

kernel.softlockup_panic=0

kernel.watchdog_thresh=30
fs.file-max=2097152
fs.inotify.max_user_instances=8192
fs.inotify.max_queued_events=16384
vm.max_map_count=262144
fs.may_detach_mounts=1
net.core.netdev_max_backlog=16384
net.ipv4.tcp_wmem=4096 12582912 16777216
net.core.wmem_max=16777216
net.core.somaxconn=32768
net.ipv4.ip_forward=1
net.ipv4.tcp_max_syn_backlog=8096
net.ipv4.tcp_rmem=4096 12582912 16777216

net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1

kernel.yama.ptrace_scope=0
vm.swappiness=0

# Controls whether the PID is appended as an extension to core dump file names.
kernel.core_uses_pid=1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route=0
net.ipv4.conf.all.accept_source_route=0

# Promote secondary addresses when the primary address is removed
net.ipv4.conf.default.promote_secondaries=1
net.ipv4.conf.all.promote_secondaries=1

# Enable hard and soft link protection
fs.protected_hardlinks=1
fs.protected_symlinks=1

# Reverse path / source route validation
# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2

# see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets=5000
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_synack_retries=2
kernel.sysrq=1

" >> /etc/sysctl.conf

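The settings take effect only after they are reloaded. A minimal follow-up, run on each host, to apply them without rebooting:

# Reload kernel parameters from /etc/sysctl.conf
sysctl -p
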
2. Install Docker on all machines

# 1) Install required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2

# 2) Add the Docker repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# 3) Refresh the cache and install Docker CE
 yum makecache fast
 yum -y install docker-ce
 
# 4) Configure an Aliyun registry mirror
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://7kmehv9e.mirror.aliyuncs.com"]
}
EOF

# 5) Start Docker and enable it at boot
 systemctl start  docker
 systemctl enable  docker
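
An optional sanity check that Docker is running and that the registry mirror from daemon.json was picked up:

# Confirm the service is active and the Aliyun mirror is configured
systemctl is-active docker
docker info | grep -A1 "Registry Mirrors"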

3. Install helm on the NGINX machine

[root@rancher0 ~]# wget https://docs.rancher.cn/download/helm/helm-v3.0.3-linux-amd64.tar.gz \
&&   tar xf helm-v3.0.3-linux-amd64.tar.gz  \
&&   cd linux-amd64 \
&&   mv helm  /usr/sbin/

Deployment:

Part 1: Install the load balancer

1. Configure the official Nginx yum repository

# Load balancing needs the stream module, so this install uses the latest nginx (1.18) from the official repo

vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1

2. Install the latest NGINX

yum -y install nginx

3. Configure nginx

[root@nginx ~]# cd /etc/nginx && rm -rf conf.d && mv nginx.conf nginx.conf.bak
[root@nginx ~]# vim /etc/nginx/nginx.conf
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server 172.16.1.21:80 max_fails=3 fail_timeout=5s;
        server 172.16.1.22:80 max_fails=3 fail_timeout=5s;
        server 172.16.1.23:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server 172.16.1.21:443 max_fails=3 fail_timeout=5s;
        server 172.16.1.22:443 max_fails=3 fail_timeout=5s;
        server 172.16.1.23:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen     443;
        proxy_pass rancher_servers_https;
    }

}

4. Start nginx

systemctl start nginx && systemctl enable nginx
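
Optionally confirm that the configuration parses and that the stream proxies are listening on ports 80 and 443:

# Validate the config and check the listeners
nginx -t
ss -lntp | grep nginx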

Part 2: Install the Harbor image registry

1. Download the package

wget https://docs.rancher.cn/download/harbor/harbor-online-installer-v2.0.0.tgz
tar xf harbor-online-installer-v2.0.0.tgz
mv harbor /opt/

2. Configure

## The https (443) section is commented out for this install; SSL is not used
[root@nginx harbor]# vim harbor.yml  
[root@nginx harbor]# grep "^\s*[^# \t].*$"  harbor.yml               
hostname: 10.0.0.20
http:
  port: 8080
harbor_admin_password: Harbor12345
database:
  password: root123
  max_idle_conns: 50
  max_open_conns: 100
data_volume: /data
clair:
  updaters_interval: 12
  ignore_unfixed: false
  skip_update: false
  insecure: false
jobservice:
  max_job_workers: 10
notification:
  webhook_job_max_retry: 10
chart:
  absolute_url: disabled
log:
  level: info
  local:
    rotate_count: 50
    rotate_size: 200M
    location: /var/log/harbor
_version: 2.0.0
proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - clair
    - trivy

3. Install

# 1. The install script calls docker-compose, so install docker-compose first
[root@nginx harbor]# yum install docker-compose -y

# 2. Run the Harbor install script
[root@nginx harbor]# sh install.sh 

When the installation finishes, a docker-compose.yml file is generated in the script directory, and the docker-compose command can be used to manage Harbor's lifecycle.
You can then log in at http://10.0.0.20:8080 with the default username admin and password Harbor12345.
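
For reference, a minimal lifecycle sketch using docker-compose from the install directory (/opt/harbor, per step 1):

cd /opt/harbor
docker-compose ps      # show the state of the Harbor containers
docker-compose stop    # stop Harbor
docker-compose start   # start Harbor again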

Part 3: Sync images to the private registry

1. Find the assets required for the Rancher version in use

https://github.com/rancher/rancher/releases
Open the release matching your version and download the following files:

Release file              Description
rancher-images.txt        List of the images needed to install Rancher, provision clusters, and run the Rancher tools.
rancher-save-images.sh    Script that pulls every image listed in rancher-images.txt from DockerHub and saves them as rancher-images.tar.gz.
rancher-load-images.sh    Script that loads the images from rancher-images.tar.gz and pushes them to your private registry.

The cleaned-up image list used for this offline install is rancher_images.txt; you can try the install with that file. (The list downloaded from GitHub following the official steps appears to be missing some images.)

2. Collect the cert-manager images

During an HA install, if you choose to use Rancher's default self-signed TLS certificate, you must also add the cert-manager images to rancher-images.txt. Skip this step if you are using your own certificate.

2.1 Fetch the latest cert-manager Helm chart, render the template, and extract the image details:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm fetch jetstack/cert-manager --version v0.12.0
helm template ./cert-manager-v0.12.0.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt   
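
To see what was appended, list the tail of the file (the exact image names depend on the chart version, so treat this as a spot check):

# Show the most recently appended cert-manager images
tail -n 5 ./rancher-images.txt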

2.2 Sort and de-duplicate the image list to remove duplicate entries:

 sort -u rancher-images.txt -o rancher-images.txt

2.3 Save the images on your workstation

# 1. Make rancher-save-images.sh executable:
chmod +x rancher-save-images.sh

# 2. Run rancher-save-images.sh with --image-list ./rancher-images.txt to create a tarball of all required images
./rancher-save-images.sh --image-list ./rancher-images.txt

Result:
Docker pulls all of the images needed for the offline install. This takes several minutes. When it finishes, a tarball named rancher-images.tar.gz is written to the current directory. Verify that the file exists.

2.4 Push the images to the private registry

The files rancher-images.txt and rancher-images.tar.gz should sit in the same directory on the workstation as the rancher-load-images.sh script.

First configure /etc/docker/daemon.json, otherwise the push fails with: Error response from daemon: Get https://10.0.0.20:8080/v2/: http: server gave HTTP response to HTTPS client

vim /etc/docker/daemon.json
{
  "insecure-registries": ["10.0.0.20:8080"],
  "registry-mirrors": ["https://7kmehv9e.mirror.aliyuncs.com"]
}

After editing, restart Docker: systemctl restart docker

# 1. Log in to the private registry:
[root@nginx images]# docker login 10.0.0.20:8080 
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# 2. Make rancher-load-images.sh executable:
chmod +x rancher-load-images.sh

# 3. Use rancher-load-images.sh to extract the images from rancher-images.tar.gz, re-tag them according to rancher-images.txt, and push them to your private registry
./rancher-load-images.sh --image-list ./rancher-images.txt --registry 10.0.0.20:8080
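
As an optional spot check that the push worked, you can list Harbor projects and their repository counts through the Harbor v2.0 API (the credentials below are the defaults from harbor.yml):

# List projects to confirm the push created repositories
curl -s -u admin:Harbor12345 http://10.0.0.20:8080/api/v2.0/projects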

Part 4: Install Kubernetes with RKE

1. Install rke, kubectl, and helm (only needed on rancher1)

[rancher@rancher1 ~]$  su - root 

# 1. Download the rke binary and move it to /usr/bin
[root@rancher1 ~]# wget https://github.com/rancher/rke/releases/download/v1.1.2/rke_linux-amd64 \
&& chmod +x rke_linux-amd64 \
&& mv rke_linux-amd64 /usr/bin/rke

# 2. Install kubectl
[root@rancher1 ~]# wget https://docs.rancher.cn/download/kubernetes/linux-amd64-v1.18.3-kubectl \
&&  chmod +x linux-amd64-v1.18.3-kubectl \
&&  mv linux-amd64-v1.18.3-kubectl /usr/bin/kubectl

# 3. Install helm
[root@rancher1 ~]# wget https://docs.rancher.cn/download/helm/helm-v3.0.3-linux-amd64.tar.gz \
&&   tar xf helm-v3.0.3-linux-amd64.tar.gz  \
&&   cd linux-amd64 \
&&   mv helm  /usr/sbin/ \
&& cd \
&& rm -rf helm-v3.0.3-linux-amd64.tar.gz  linux-amd64
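
A quick check that all three binaries are on the PATH:

rke --version
kubectl version --client
helm version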

2. Create the rancher user on the hosts and set up passwordless SSH

# 1. On every machine, create the rancher user (as a member of the docker group)
groupadd docker
useradd rancher -G docker
echo "123456" | passwd --stdin rancher

# 2. On every machine, grant sudo privileges
[root@rancher1 ~]# vim /etc/sudoers +100
rancher ALL=(ALL)       NOPASSWD: ALL

# 3. The following only needs to be run on rancher1
su - rancher
ssh-keygen

ssh-copy-id [email protected]
ssh-copy-id [email protected]
ssh-copy-id [email protected]

3. Create the RKE cluster configuration file

[rancher@rancher1 opt]$ cd ~
[rancher@rancher1 ~]$ vim rancher-cluster.yml
nodes:
  - address: 10.0.0.21
    internal_address: 172.16.1.21
    user: rancher
    role: ["controlplane", "etcd", "worker"]
    ssh_key_path: /home/rancher/.ssh/id_rsa
  - address: 10.0.0.22
    internal_address: 172.16.1.22
    user: rancher
    role: ["controlplane", "etcd", "worker"]
    ssh_key_path: /home/rancher/.ssh/id_rsa
  - address: 10.0.0.23
    internal_address: 172.16.1.23 # node internal IP
    user: rancher
    role: ["controlplane", "etcd", "worker"]
    ssh_key_path: /home/rancher/.ssh/id_rsa

private_registries:
  - url: 10.0.0.20:8080
    user: admin
    password: Harbor12345
    is_default: true

Common RKE node options

Option             Required   Description
address            yes        Public DNS name or IP address
user               yes        A user that can run docker commands
role               yes        List of Kubernetes roles assigned to the node
internal_address   no         Private DNS name or IP address for intra-cluster traffic
ssh_key_path       no         Path to the SSH private key used to authenticate to the node (defaults to ~/.ssh/id_rsa)

4. With rancher-cluster.yml in place, bring up the Kubernetes cluster

# 1. Before starting, configure /etc/docker/daemon.json on every host to add the private registry address
vim /etc/docker/daemon.json 

{
  "insecure-registries": ["10.0.0.20:8080"],
  "registry-mirrors": ["https://7kmehv9e.mirror.aliyuncs.com"]
}

systemctl restart docker

# 2. Bring up the Kubernetes cluster
[root@rancher1 ~]# su - rancher
rke up --config ./rancher-cluster.yml

If RKE reports that the image rancher/hyperkube:v1.17.5-rancher1 is missing, pull it from the public registry and push it into the private registry.
When the run completes you should see INFO[0220] Finished building Kubernetes cluster successfully, and two new files appear in the working directory: kube_config_rancher-cluster.yml (the kubeconfig) and rancher-cluster.rkestate (the cluster state file).

5. Test the cluster and check its status

[rancher@rancher1 ~]$  mkdir -p /home/rancher/.kube     
[rancher@rancher1 ~]$ cp kube_config_rancher-cluster.yml  $HOME/.kube/config
[rancher@rancher1 ~]$ kubectl get nodes
NAME        STATUS   ROLES                      AGE     VERSION
10.0.0.21   Ready    controlplane,etcd,worker   3m39s   v1.17.5
10.0.0.22   Ready    controlplane,etcd,worker   3m40s   v1.17.5
10.0.0.23   Ready    controlplane,etcd,worker   3m39s   v1.17.5
Check that the cluster Pods are healthy:
  • Pods are in the Running or Completed state.
  • For Pods with STATUS Running, READY should show all of their containers running (for example, 3/3).
  • Pods with STATUS Completed are run-once jobs; for these, READY should be 0/1.
[rancher@rancher1 ~]$  kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-74858d6d44-fr9g2     1/1     Running     0          3m42s
ingress-nginx   nginx-ingress-controller-dq29w            1/1     Running     0          3m42s
ingress-nginx   nginx-ingress-controller-mwnfx            1/1     Running     0          3m42s
ingress-nginx   nginx-ingress-controller-zzl5v            1/1     Running     0          3m42s
kube-system     canal-44lzq                               2/2     Running     0          4m17s
kube-system     canal-c6drc                               2/2     Running     0          4m17s
kube-system     canal-mz9bh                               2/2     Running     0          4m17s
kube-system     coredns-7c7966fdb8-b4445                  1/1     Running     0          3m5s
kube-system     coredns-7c7966fdb8-sjgtl                  1/1     Running     0          4m2s
kube-system     coredns-autoscaler-57879bf9b8-krxqx       1/1     Running     0          4m1s
kube-system     metrics-server-59db96dbdd-fwlc8           1/1     Running     0          3m52s
kube-system     rke-coredns-addon-deploy-job-vsz7d        0/1     Completed   0          4m7s
kube-system     rke-ingress-controller-deploy-job-7g8pt   0/1     Completed   0          3m46s
kube-system     rke-metrics-addon-deploy-job-hkwlj        0/1     Completed   0          3m57s
kube-system     rke-network-plugin-deploy-job-hvfnb       0/1     Completed   0          4m19s

Part 5: Install Rancher

1. Add the Helm chart repository

This step only needs to run on a host with internet access; the goal is to obtain the chart .tgz file.

# 1. Add the repository with helm repo add. Different URLs serve different Rancher release channels; replace <CHART_REPO> in the URL with latest, stable, or alpha.

[rancher@rancher1 ~]$  helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
"rancher-stable" has been added to your repositories

# 2. Fetch the latest Rancher chart; the .tgz file (rancher-2.4.4.tgz here) is downloaded to the current directory.

[rancher@rancher1 ~]$ helm fetch rancher-stable/rancher      

# 3. Copy the .tgz file to the rancher user's home directory on the air-gapped rancher1 host

2. (Rancher default self-signed certificate only) Fetch the latest cert-manager chart on an internet-connected host

# 1. On a system with internet access, add the cert-manager repository.
helm repo add jetstack https://charts.jetstack.io
helm repo update

# 2. Fetch the latest cert-manager chart from the Helm chart repository (https://hub.helm.sh/charts/jetstack/cert-manager).
helm fetch jetstack/cert-manager --version v0.12.0

Copy the resulting cert-manager-v0.12.0.tgz file to the air-gapped rancher1 host:

[rancher@rancher1 ~]$ scp [email protected]:/root/install/cert-manager-v0.12.0.tgz .

3. Render the chart template with the desired parameters

[rancher@rancher1 ~]$ helm template cert-manager ./cert-manager-v0.12.0.tgz --output-dir . \
>     --namespace cert-manager \
>     --set image.repository=10.0.0.20:8080/quay.io/jetstack/cert-manager-controller \
>     --set webhook.image.repository=10.0.0.20:8080/quay.io/jetstack/cert-manager-webhook \
>     --set cainjector.image.repository=10.0.0.20:8080/quay.io/jetstack/cert-manager-cainjector

This produces a cert-manager directory containing the rendered YAML files:

[rancher@rancher1 ~]$ tree -L 3 cert-manager
cert-manager
└── templates
    ├── cainjector-deployment.yaml
    ├── cainjector-rbac.yaml
    ├── cainjector-serviceaccount.yaml
    ├── deployment.yaml
    ├── rbac.yaml
    ├── serviceaccount.yaml
    ├── service.yaml
    ├── webhook-deployment.yaml
    ├── webhook-mutating-webhook.yaml
    ├── webhook-rbac.yaml
    ├── webhook-serviceaccount.yaml
    ├── webhook-service.yaml
    └── webhook-validating-webhook.yaml

4. Download the CRD file required by cert-manager

curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml

# The download may fail; a copy is also available at:
http://img.ljcccc.com/Ranchercert-manager-crd.yaml.txt

5. Render the Rancher template

Declare your chosen options. Rancher must be configured to use the private registry when it provisions Kubernetes clusters or launches Rancher tools.

[rancher@rancher1 ~]$ helm template rancher ./rancher-2.4.4.tgz --output-dir . \
> --namespace cattle-system \
> --set hostname=rancher.com \
> --set certmanager.version=v0.12.0 \
> --set rancherImage=10.0.0.20:8080/rancher/rancher \
> --set systemDefaultRegistry=10.0.0.20:8080 \
> --set useBundledSystemChart=true

# The command prints the following output
wrote ./rancher/templates/serviceAccount.yaml
wrote ./rancher/templates/clusterRoleBinding.yaml
wrote ./rancher/templates/service.yaml
wrote ./rancher/templates/deployment.yaml
wrote ./rancher/templates/ingress.yaml
wrote ./rancher/templates/issuer-rancher.yaml

6. Install cert-manager

(Only required when using Rancher's default self-signed certificate)

# 1. Create a namespace for cert-manager.
[rancher@rancher1 ~]$ kubectl create namespace cert-manager
namespace/cert-manager created

# 2. Create the cert-manager CRDs
[rancher@rancher1 ~]$ kubectl apply -f cert-manager/cert-manager-crd.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
# 3. Launch cert-manager.
[rancher@rancher1 ~]$ kubectl apply -R -f ./cert-manager
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io unchanged
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io unchanged
deployment.apps/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
serviceaccount/cert-manager-cainjector created
deployment.apps/cert-manager created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
service/cert-manager created
serviceaccount/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:webhook-requester created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:auth-delegator created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:webhook-authentication-reader created
service/cert-manager-webhook created
serviceaccount/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
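
Before continuing, you can verify that the cert-manager Pods come up:

kubectl get pods --namespace cert-manager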

7. Install Rancher

[rancher@rancher1 ~]$ kubectl create namespace cattle-system
namespace/cattle-system created

[rancher@rancher1 ~]$ kubectl -n cattle-system apply -R -f ./rancher
clusterrolebinding.rbac.authorization.k8s.io/rancher created
deployment.apps/rancher created
ingress.extensions/rancher created
service/rancher created
serviceaccount/rancher created
issuer.cert-manager.io/rancher created
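
You can wait for the Rancher deployment to finish rolling out before continuing:

kubectl -n cattle-system rollout status deploy/rancher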

8. Check the status after Rancher has been created

[rancher@rancher1 templates]$ kubectl get pod --all-namespaces
NAMESPACE       NAME                                       READY   STATUS      RESTARTS   AGE
cattle-system   rancher-657756bbb6-fgjgz                   1/1     Running     0          3m49s
cattle-system   rancher-657756bbb6-l4gcs                   1/1     Running     0          3m49s
cattle-system   rancher-657756bbb6-m4x5v                   1/1     Running     3          3m49s
cert-manager    cert-manager-6b89685c5f-zfw9v              1/1     Running     0          4m42s
cert-manager    cert-manager-cainjector-64bdfd6596-ck2k8   1/1     Running     0          4m43s
cert-manager    cert-manager-webhook-7c49498d4f-98j98      1/1     Running     0          4m40s
ingress-nginx   default-http-backend-74858d6d44-fr9g2      1/1     Running     0          39m
ingress-nginx   nginx-ingress-controller-dq29w             1/1     Running     0          39m
ingress-nginx   nginx-ingress-controller-mwnfx             1/1     Running     0          39m
ingress-nginx   nginx-ingress-controller-zzl5v             1/1     Running     0          39m
kube-system     canal-44lzq                                2/2     Running     0          40m
kube-system     canal-c6drc                                2/2     Running     0          40m
kube-system     canal-mz9bh                                2/2     Running     0          40m
kube-system     coredns-7c7966fdb8-b4445                   1/1     Running     0          39m
kube-system     coredns-7c7966fdb8-sjgtl                   1/1     Running     0          40m
kube-system     coredns-autoscaler-57879bf9b8-krxqx        1/1     Running     0          40m
kube-system     metrics-server-59db96dbdd-fwlc8            1/1     Running     0          40m
kube-system     rke-coredns-addon-deploy-job-vsz7d         0/1     Completed   0          40m
kube-system     rke-ingress-controller-deploy-job-7g8pt    0/1     Completed   0          39m
kube-system     rke-metrics-addon-deploy-job-hkwlj         0/1     Completed   0          40m
kube-system     rke-network-plugin-deploy-job-hvfnb        0/1     Completed   0          40m

After adding a hosts entry on your local machine so the hostname resolves, browse to: https://rancher.com
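
An example hosts entry (Linux/macOS /etc/hosts, or C:\Windows\System32\drivers\etc\hosts on Windows) pointing the Rancher hostname at the nginx load balancer:

10.0.0.20  rancher.com
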
Set the admin password.

Click Save.

After logging in and checking the cluster status, one Pod shows a problem.

Logging in to a host confirms that one Pod is in an error state.

Because the hostname is mapped via the hosts file, a host alias (/etc/hosts) must be added for the Agent Pod:

[rancher@rancher1 templates]$ kubectl -n cattle-system patch  deployments cattle-cluster-agent --patch '{
    "spec": {
        "template": {
            "spec": {
                "hostAliases": [
                    {
                        "hostnames":
                        [
                            "rancher.com"
                        ],
                            "ip": "10.0.0.20"
                    }
                ]
            }
        }
    }
}'

After the patch, the alias can be seen in the Pod's configuration:

[rancher@rancher1 templates]$ kubectl edit pod cattle-cluster-agent-5598c6557c-9ttkw   -n cattle-system 

The cluster status is now healthy.

Reference: the official Rancher 2.x documentation (Rancher2.docs)
