Kubernetes--k8s--double_happy

Introduction

k8s is built to manage containers.

k8s official site

k8s :
	1. Automated deployment
	2. Container management
	3. Scaling

If you used Docker alone, what would be inconvenient?
	1. Each container is one process; starting them one by one is far too tedious.

For convenience? Use k8s.

We use k8s version 1.13.

k8s documentation

Deployment

Official deployment guide

The kubeadm tool:
	used to deploy k8s

kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl (the client): the command line util to talk to your cluster.


Note:
	kubelet is a background daemon service.
	kubeadm is the program that bootstraps the initialization.

Environment preparation

container01  : docker 、harbor
container02  : docker
container03  : docker

To deploy the k8s cluster, verify first:
	1. All three machines can log in to the Harbor service on the first machine.

That is:
	all three machines can log in to our private Harbor registry without any problem.

[root@container01 ~]# docker login -u admin -p Harbor12345 172.21.230.89
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@container01 ~]# 

[root@container02 ~]# docker login -u admin -p Harbor12345 172.21.230.89
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@container02 ~]# 

[root@container03 ~]# docker login -u admin -p Harbor12345 172.21.230.89
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@container03 ~]# 

Do this on all three machines:

1. SELinux must be set to disabled:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

2. Turn off swap (this is required by the official docs):
swapoff -a 
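
Note that swapoff -a only lasts until the next reboot. To keep swap off permanently, you would typically also comment out the swap entry in /etc/fstab; a minimal sketch, assuming a standard fstab:

# comment out the swap line so it stays off after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab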

3. Set the kernel parameters so bridged traffic passes through iptables:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Note:
	the cat <<EOF > usage simply means:
		write the lines above,
			net.bridge.bridge-nf-call-ip6tables = 1
			net.bridge.bridge-nf-call-iptables = 1
		into the k8s.conf file.
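
If sysctl rejects the two net.bridge keys, the br_netfilter kernel module is probably not loaded yet; a quick check-and-load sketch:

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables    # should print "... = 1"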

Configure the yum repository for k8s

Do this on all three machines:

1.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

2. Install the k8s packages via yum (kubelet, kubeadm, kubectl, plus kubernetes-cni):

yum install -y kubelet-1.13.2 kubeadm-1.13.2 kubectl-1.13.2 kubernetes-cni-0.6.0-0 

systemctl enable kubelet

Only enable it here; do not start kubelet yourself (kubeadm will bring it up during init).
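
A quick check that the pinned 1.13.2 versions were actually installed (a sketch):

kubeadm version
kubectl version --client
rpm -q kubelet kubeadm kubectl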

3. Download the images on all three nodes

Reference image repository:
https://hub.docker.com/u/hackeruncle

Download each image, then re-tag it as the corresponding k8s.gcr.io image:
docker pull hackeruncle/pause:3.1
docker tag  hackeruncle/pause:3.1 k8s.gcr.io/pause:3.1

docker pull hackeruncle/etcd:3.2.24
docker tag hackeruncle/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24

docker pull hackeruncle/coredns:1.2.6
docker tag hackeruncle/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

docker pull hackeruncle/kube-scheduler:v1.13.2
docker tag hackeruncle/kube-scheduler:v1.13.2 k8s.gcr.io/kube-scheduler:v1.13.2

docker pull hackeruncle/kube-controller-manager:v1.13.2
docker tag hackeruncle/kube-controller-manager:v1.13.2 k8s.gcr.io/kube-controller-manager:v1.13.2

docker pull hackeruncle/kube-proxy:v1.13.2
docker tag hackeruncle/kube-proxy:v1.13.2 k8s.gcr.io/kube-proxy:v1.13.2

docker pull hackeruncle/kube-apiserver:v1.13.2
docker tag hackeruncle/kube-apiserver:v1.13.2 k8s.gcr.io/kube-apiserver:v1.13.2
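
The seven pull/tag pairs above can also be written as one loop (a sketch, assuming the same hackeruncle mirror tags):

for img in pause:3.1 etcd:3.2.24 coredns:1.2.6 \
           kube-scheduler:v1.13.2 kube-controller-manager:v1.13.2 \
           kube-proxy:v1.13.2 kube-apiserver:v1.13.2; do
    docker pull "hackeruncle/${img}"
    docker tag  "hackeruncle/${img}" "k8s.gcr.io/${img}"
done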


[root@container02 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-apiserver            v1.13.2             177db4b8e93a        13 months ago       181MB
k8s.gcr.io/kube-controller-manager   v1.13.2             b9027a78d94c        13 months ago       146MB
k8s.gcr.io/kube-proxy                v1.13.2             01cfa56edcfc        13 months ago       80.3MB
k8s.gcr.io/kube-scheduler            v1.13.2             3193be46e0b3        13 months ago       79.6MB
k8s.gcr.io/coredns                   1.2.6               f59dcacceff4        15 months ago       40MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        16 months ago       220MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
[root@container02 ~]# 




Note:
This step does not push the images to Harbor, so every node has to run it itself.
Since all of my nodes can reach the external network, I skipped pushing to Harbor to save time; it is simple to do if you need it.

Delete the originals (once re-tagged, there is no need to keep both copies):
docker rmi hackeruncle/pause:3.1
docker rmi hackeruncle/etcd:3.2.24
docker rmi hackeruncle/coredns:1.2.6
docker rmi hackeruncle/kube-scheduler:v1.13.2
docker rmi hackeruncle/kube-controller-manager:v1.13.2
docker rmi hackeruncle/kube-proxy:v1.13.2
docker rmi hackeruncle/kube-apiserver:v1.13.2
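
The same loop works for the cleanup (sketch):

for img in pause:3.1 etcd:3.2.24 coredns:1.2.6 \
           kube-scheduler:v1.13.2 kube-controller-manager:v1.13.2 \
           kube-proxy:v1.13.2 kube-apiserver:v1.13.2; do
    docker rmi "hackeruncle/${img}"
done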

1. etcd: the storage component (it holds the cluster state).

k8s initialization

1.
kubeadm init \
--kubernetes-version=v1.13.2 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--ignore-preflight-errors=Swap

Note:
pod-network-cidr: a pod is the smallest deployable unit, roughly equivalent to a container.
	10.244.0.0/16 means pod IPs will all start with 10.244.
service-cidr: the virtual IP range handed out to Services.
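
Once the cluster is up, both ranges can be seen in action (a quick sketch; run on the master):

kubectl get pods -n kube-system -o wide    # coredns pods get IPs from 10.244.0.0/16 (host-network pods show the node IP)
kubectl get svc --all-namespaces           # ClusterIPs fall inside 10.96.0.0/12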
			

[root@container01 ~]# kubeadm init \
> --kubernetes-version=v1.13.2 \
> --pod-network-cidr=10.244.0.0/16 \
> --service-cidr=10.96.0.0/12 \
> --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [container01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.21.230.89]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [container01 localhost] and IPs [172.21.230.89 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [container01 localhost] and IPs [172.21.230.89 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s


You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.21.230.89:6443 --token 40ps0a.2gbhl4ofkxoy01ko --discovery-token-ca-cert-hash sha256:9b45f651e5e7deb3d5e2558850fc76886006513c59f4391efbc0b48471c55974


2. This step sets up authentication (the admin kubeconfig):
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

[root@container01 ~]# mkdir -p $HOME/.kube
[root@container01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@container01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@container01 ~]# ll $HOME/.kube/config
-rw------- 1 root root 5453 Feb 10 13:28 /root/.kube/config
[root@container01 ~]# 
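
Before moving on, a quick sanity check that kubectl can now reach the cluster (sketch):

kubectl cluster-info
kubectl get nodes    # the master shows up (NotReady until the pod network exists)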

3. Create the pod network.
Prerequisite: download a kube-flannel.yml first.
  There are plenty of copies online; see the sketch just below.
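
For example (a sketch; this was the commonly referenced location at the time and may have moved since):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml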
[root@container01 ~]# ll
total 20
drwxr-xr-x 2 root root  4096 Feb  7 10:55 harbor
-rw-r--r-- 1 root root 11289 Feb 10 13:32 kube-flannel.yml
drwxr-xr-x 2 root root  4096 Feb  7 10:45 mysql5.7
[root@container01 ~]# 

[root@container01 ~]# kubectl apply -f kube-flannel.yml   
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@container01 ~]# 
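
To watch the flannel pods come up on each node (a sketch):

kubectl get pods -n kube-system | grep flannel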


4. Have the other machines join (machine 01 cannot join itself; it is already the master).


[root@container02 ~]# kubeadm join 172.21.230.89:6443 --token 40ps0a.2gbhl4ofkxoy01ko --discovery-token-ca-cert-hash sha256:9b45f651e5e7deb3d5e2558850fc76886006513c59f4391efbc0b48471c55974
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.06
[discovery] Trying to connect to API Server "172.21.230.89:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.21.230.89:6443"
[discovery] Requesting info from "https://172.21.230.89:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.21.230.89:6443"
[discovery] Successfully established connection with API Server "172.21.230.89:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "container02" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@container02 ~]# 

Check whether it really joined the k8s cluster.
Look on container01:

[root@container01 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
container01   Ready    master   13m   v1.13.2
container02   Ready    <none>   85s   v1.13.2
[root@container01 ~]# 



[root@container03 ~]# kubeadm join 172.21.230.89:6443 --token 40ps0a.2gbhl4ofkxoy01ko --discovery-token-ca-cert-hash sha256:9b45f651e5e7deb3d5e2558850fc76886006513c59f4391efbc0b48471c55974
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.06
[discovery] Trying to connect to API Server "172.21.230.89:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.21.230.89:6443"
[discovery] Requesting info from "https://172.21.230.89:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.21.230.89:6443"
[discovery] Successfully established connection with API Server "172.21.230.89:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "container03" as an annotation

This node has joined the cluster


Check again whether container03 has joined:

[root@container01 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
container01   Ready    master   15m     v1.13.2
container02   Ready    <none>   2m41s   v1.13.2
container03   Ready    <none>   38s     v1.13.2
[root@container01 ~]# 
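
Side note: the bootstrap token in the join command is only valid for 24 hours by default. If it has expired by the time another node wants to join, a fresh join command can be generated on the master (sketch):

kubeadm token create --print-join-command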

k8s is a standard master/worker architecture.
At this point:
	the command-line deployment is done, but the dashboard is not yet deployed.

Deploying the dashboard

1. Create the certificates
[root@container01 ~]# mkdir certs
[root@container01 ~]# ll
total 28
drwxr-xr-x 2 root root  4096 Feb 10 13:50 certs
drwxr-xr-x 2 root root  4096 Feb  7 10:55 harbor
drwxr-xr-x 2 root root  4096 Feb 10 13:49 k8s_dashboard
-rw-r--r-- 1 root root 11289 Feb 10 13:32 kube-flannel.yml
drwxr-xr-x 2 root root  4096 Feb  7 10:45 mysql5.7
[root@container01 ~]#

openssl req -nodes -newkey rsa:2048  \
-subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=kubernetes-dashboard" \
-keyout dashborad.key \
-out dashborad.csr


[root@container01 certs]# openssl req -nodes -newkey rsa:2048  \
> -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=kubernetes-dashboard" \
> -keyout dashborad.key \
> -out dashborad.csr
Generating a 2048 bit RSA private key
...................................................+++
.+++
writing new private key to 'dashborad.key'
-----
[root@container01 certs]# 


openssl  x509 -req -sha256 -days 365 \
-in dashborad.csr \
-signkey dashborad.key \
-out dashborad.crt


[root@container01 certs]# openssl  x509 -req -sha256 -days 365 \
> -in dashborad.csr \
> -signkey dashborad.key \
> -out dashborad.crt
Signature ok
subject=/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=kubernetes-dashboard
Getting Private key
[root@container01 certs]# 


[root@container01 certs]# ll
total 12
-rw-r--r-- 1 root root 1241 Feb 10 13:54 dashborad.crt
-rw-r--r-- 1 root root 1021 Feb 10 13:52 dashborad.csr
-rw-r--r-- 1 root root 1708 Feb 10 13:52 dashborad.key
[root@container01 certs]# 


2. Create the secret

kubectl create secret generic \
kubernetes-dashboard-certs --from-file=certs -n kube-system

Note:
	secret generic creates a plain (Opaque) secret.
	--from-file points at the directory whose files become the secret's data.
	-n is the namespace.

[root@container01 ~]# kubectl create secret generic \
> kubernetes-dashboard-certs --from-file=certs -n kube-system
secret/kubernetes-dashboard-certs created
[root@container01 ~]# 
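
To confirm the secret picked up the three certificate files (sketch):

kubectl describe secret kubernetes-dashboard-certs -n kube-system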

3. Create the dashboard.
Prepare the config files first. (The AlreadyExists error for kubernetes-dashboard-certs below is expected: we already created that secret by hand in step 2.)

[root@container01 ~]# cd k8s_dashboard/
[root@container01 k8s_dashboard]# ll
total 0
-rw-r--r-- 1 root root 0 Feb 10 13:49 admin-token.yaml
-rw-r--r-- 1 root root 0 Feb 10 13:49 kubernetes-dashboard.yaml
[root@container01 k8s_dashboard]# 

[root@container01 k8s_dashboard]#  kubectl create -f  kubernetes-dashboard.yaml
secret/kubernetes-dashboard-csrf created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
[root@container01 k8s_dashboard]# 


[root@container01 k8s_dashboard]# kubectl create -f admin-token.yaml
clusterrolebinding.rbac.authorization.k8s.io/admin created
serviceaccount/admin created
[root@container01 k8s_dashboard]# 

Note:

Opening the dashboard in a browser requires credentials.
We use a token.
Tokens are rotated periodically.

Viewing the token:
	Keep in mind: everything in k8s is a resource. The resource we use below is the secret.

[root@container01 k8s_dashboard]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-w7fd8   kubernetes.io/service-account-token   3      65m
[root@container01 k8s_dashboard]# 

This is the default secret in the default namespace, not the one in kube-system.

1. Get the resource name

[root@container01 k8s_dashboard]# kubectl get secret -n kube-system |grep admin | awk '{print $1}'
admin-token-q8fcz
[root@container01 k8s_dashboard]# 

2. Use the resource name to view the token
[root@container01 k8s_dashboard]# kubectl describe secret/admin-token-q8fcz  -n kube-system
Name:         admin-token-q8fcz
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: b70353ea-4bce-11ea-adad-00163e041ece

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1xOGZjeiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI3MDM1M2VhLTRiY2UtMTFlYS1hZGFkLTAwMTYzZTA0MWVjZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.mxih3qYSd3T3t-DfPgetnxa7Xvrzlx8vPfxep2rbsbQwVRskclcYkz0DQ2DR57my8cT3vD24ph9idwsgNaaOW8MGHe376S9sHGZ2_98mdCuYKemjLygZENGRikmUU4VUuB2DXAykHm3R72GNM07NwPG3YFKbn2Vo4hG1XgFLOWd3B77jYxhR3ZQKbfyPj7lCZh3jgEHigsvg6oDNF4tvy8VBvfQRimphCVYQ_FqRXOBV6HJJUdJwvIhfWXifU8kEoIQPayx1aFx2U-Q6lzsgBA_pn_objmMwJ3Y_sA4cJSHBdegVotFwEt57a6cz4tHH2m52qLLHbSngpboRFQK1MQ
ca.crt:     1025 bytes
namespace:  11 bytes
[root@container01 k8s_dashboard]# 
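
The two steps can also be combined into one line (a sketch, using the admin ServiceAccount created above):

kubectl describe secret -n kube-system \
    $(kubectl get secret -n kube-system | grep admin | awk '{print $1}')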





Note:
 kubectl get all -n kube-system

get all: lists all (core) resources.

The command above therefore lists:
	all resources in the kube-system namespace.

[root@container01 k8s_dashboard]# kubectl get all -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
pod/coredns-86c58d9df4-844m4               1/1     Running   0          71m
pod/coredns-86c58d9df4-s9b9d               1/1     Running   0          71m
pod/etcd-container01                       1/1     Running   0          70m
pod/kube-apiserver-container01             1/1     Running   0          70m
pod/kube-controller-manager-container01    1/1     Running   0          70m
pod/kube-flannel-ds-amd64-cn68f            1/1     Running   0          63m
pod/kube-flannel-ds-amd64-mkdp2            1/1     Running   0          56m
pod/kube-flannel-ds-amd64-thq8n            1/1     Running   0          58m
pod/kube-proxy-hvwsb                       1/1     Running   0          56m
pod/kube-proxy-w7rwk                       1/1     Running   0          58m
pod/kube-proxy-xkbds                       1/1     Running   0          71m
pod/kube-scheduler-container01             1/1     Running   0          70m
pod/kubernetes-dashboard-cb55bd5bd-dffq8   1/1     Running   0          10m

NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
service/kube-dns               ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP   71m
service/kubernetes-dashboard   NodePort    10.98.92.211   <none>        443:32004/TCP   10m

NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
daemonset.apps/kube-flannel-ds-amd64     3         3         3       3            3           beta.kubernetes.io/arch=amd64     63m
daemonset.apps/kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       63m
daemonset.apps/kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     63m
daemonset.apps/kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   63m
daemonset.apps/kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     63m
daemonset.apps/kube-proxy                3         3         3       3            3           <none>                            71m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns                2/2     2            2           71m
deployment.apps/kubernetes-dashboard   1/1     1            1           10m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-86c58d9df4               2         2         2       71m
replicaset.apps/kubernetes-dashboard-cb55bd5bd   1         1         1       10m
[root@container01 k8s_dashboard]# 



In other words:
	the components under the kube-system namespace
	are the ones that keep k8s itself running.

service/kubernetes-dashboard   NodePort    10.98.92.211   <none>        443:32004/TCP   10m 

That is, 443:32004 reads service port first, node port second; this is the opposite order from Docker's -p mapping. See the comment sketch below.
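
Spelled out as comments (a sketch of the two conventions):

# k8s NodePort column reads  <service port>:<node port>
#   443:32004  ->  https://<any-node-ip>:32004 lands on the service's port 443
# Docker's -p flag reads the other way, <host port>:<container port>:
#   docker run -p 32004:443 ...   # host 32004 -> container 443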



A more detailed command:

 kubectl get all -n kube-system -o wide

Adding -o wide shows extra columns (node, pod IP, and so on).


Accessing the dashboard:
https://<node-ip>:<node-port>

In my case that is https://172.21.230.89:32004 (the NodePort shown above).
Chrome works for me; if the page refuses to open (the certificate is self-signed), try Firefox.


So from the dashboard you can edit configuration and view logs.

But:
!!! Before you edit anything, always make a copy first.
Otherwise, if something goes wrong, you will have nothing to fall back on!!

