Theory: Kubernetes pod resource management + adding a Harbor registry to the cluster



Pod characteristics:

The smallest deployable unit

A collection of one or more containers

Containers in a pod share a network namespace (see the sketch below)

A pod's lifecycle is short-lived
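As a quick illustration of the shared network namespace (a minimal sketch, not from the original walkthrough; the pod and container names are made up), one container can reach another in the same pod over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo        # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.14
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # wget against 127.0.0.1 reaches the nginx container, because both
    # containers share the pod's network namespace
    command: [ "sh", "-c", "sleep 5; wget -qO- http://127.0.0.1:80; sleep 3600" ]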

1: Pod container types

1.1 infrastructure container (base container)

It maintains the network namespace for the entire pod.

Check the container-related settings in the kubelet configuration:

[root@node01 ~]# cat /k8s/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.247.143 \
--kubeconfig=/k8s/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/cfg/bootstrap.kubeconfig \
--config=/k8s/cfg/kubelet.config \
--cert-dir=/k8s/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Every time a pod is created, the image specified by --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 is used to start the infrastructure container. One is created per pod, and it is transparent to the user.

[root@node01 ~]# docker ps | grep registry	#this is the infrastructure container
56ad95a6c12c        registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0   "/pause"                 6 hours ago         Up 6 hours                              k8s_POD_nginx-deployment-78cdb5b557-f2hx2_default_acbc8ca0-9289-11ea-a668-000c29db840b_0
[root@node01 ~]# 

After a node joins the cluster, this infrastructure container is created; it is used to manage the pod.

1.2 initContainers (init containers)

These start before the application containers. Containers in a pod used to be started purely in parallel; this was later improved.

1.3 container (application/business containers)

Started in parallel

Official docs: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

A pod can have multiple containers running applications, but it can also have one or more init containers, which run before the application containers are started.

Init containers are exactly like regular containers, except:

  • Init containers always run to completion.
  • Each init container must complete successfully before the next one starts.

If a pod's init container fails, Kubernetes restarts the pod repeatedly until the init container succeeds. However, if the pod has a restartPolicy of Never, Kubernetes does not restart the pod.

To specify init containers for a pod, add the initContainers field to the pod spec as an array of objects of type Container, alongside the app containers array. The status of the init containers is returned in the .status.initContainerStatuses field as an array of container statuses (similar to the .status.containerStatuses field).
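A minimal sketch of such a spec (names are illustrative, not from the original post): the init container must finish before the app container is started.

apiVersion: v1
kind: Pod
metadata:
  name: init-demo                 # illustrative name
spec:
  initContainers:                 # run to completion, one by one, before the app containers
  - name: init-step
    image: busybox
    command: [ "sh", "-c", "echo init finished" ]
  containers:                     # the app (business) containers, started in parallel afterwards
  - name: app
    image: nginx:1.14
    ports:
    - containerPort: 80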

The kubelet manages both (1) the infrastructure container and (2) the init containers.

Operations engineers mainly deal with the application containers.

The app containers in the YAML are the application containers.

Resources can be created in two ways: apply and create.

apply is a superset of create: it creates new resources, and it can also update existing ones.
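A quick sketch of the difference (the same behavior shows up with pod1.yaml later in this post):

kubectl create -f pod1.yaml     # fails with AlreadyExists if the resource already exists
kubectl apply -f pod1.yaml      # creates the resource if absent, otherwise updates it to match the file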

2: Image pull policy (imagePullPolicy)

IfNotPresent: the default; the image is pulled only if it is not already present on the host

Always: the image is re-pulled every time the pod is created

Never: the pod never actively pulls this image

Official docs: https://kubernetes.io/docs/concepts/containers/images

Example:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF

2.1 Demo: use kubectl edit to inspect a container's default image pull policy

For reference, the kubectl edit help text:

Edit a resource from the default editor.

The edit command allows you to directly edit any API resource you can retrieve via the command line tools. It will open the editor defined by your KUBE_EDITOR or EDITOR environment variables, or fall back to 'vi' for Linux or 'notepad' for Windows. You can edit multiple objects, although changes are applied one at a time. The command accepts filenames as well as command line arguments, although the files you point to must be previously saved versions of resources.

Editing is done with the API version used to fetch the resource. To edit using a specific API version, fully-qualify the resource, version, and group.

The default format is YAML. To edit in JSON, specify "-o json".

The flag --windows-line-endings can be used to force Windows line endings; otherwise the default for your operating system will be used.

In the event an error occurs while updating, a temporary file will be created on disk that contains your unapplied changes. The most common error when updating a resource is another editor changing the resource on the server. When this occurs, you will have to apply your changes to the newer version of the resource, or update your temporary saved copy to include the latest resource version.

Examples:

# Edit the service named 'docker-registry':

kubectl edit svc/docker-registry

# Use an alternative editor

KUBE_EDITOR="nano" kubectl edit svc/docker-registry

# Edit the job 'myjob' in JSON using the v1 API format:

kubectl edit job.v1.batch/myjob -o json

# Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation:

kubectl edit deployment/mydeployment -o yaml --save-config

Options:

--allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.

-f, --filename=[]: Filename, directory, or URL to files to use to edit the resource

--include-uninitialized=false: If true, the kubectl command applies to uninitialized objects. If explicitly set to false, this flag overrides other flags that make kubectl commands apply to uninitialized objects, e.g. "--all". Objects with empty metadata initializers are regarded as initialized.

-o, --output='': Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.

--output-patch=false: Output the patch if the resource is edited.

--record=false: Record the current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to only updating the existing annotation value if one already exists.

-R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.

--save-config=false: If true, the configuration of the current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.

--template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

--validate=true: If true, use a schema to validate the input before sending it.

--windows-line-endings=false: Defaults to the line ending native to your platform.

Usage:

kubectl edit (RESOURCE/NAME | -f FILENAME) [options]

[root@master1 ~]# kubectl edit deploy/nginx-deployment	#edit opens the resource in an editor
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2020-05-10T06:44:19Z
  generation: 1
  labels:
    app: nginx
  name: nginx-deployment
  namespace: default
  resourceVersion: "520771"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment
  uid: acb9ae71-9289-11ea-a668-000c29db840b
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.4
        imagePullPolicy: IfNotPresent		#default: pull the image only if it is not present on the host
        name: nginx1
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always			#restart policy is Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: 2020-05-10T06:44:21Z
    lastUpdateTime: 2020-05-10T06:44:21Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2020-05-10T06:44:19Z
    lastUpdateTime: 2020-05-10T06:44:21Z
    message: ReplicaSet "nginx-deployment-78cdb5b557" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
  #quit with :q
Edit cancelled, no changes made.

2.2 Write a YAML file to test

[root@master1 ~]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: nginx-03
    image: nginx
    imagePullPolicy: Always
    command: [ "echo","SUCCESS" ]
[root@master1 ~]# kubectl create -f pod1.yaml 
pod/pod1 created
[root@master1 ~]# kubectl get pods -w
NAME                                READY   STATUS              RESTARTS   AGE
nginx-6c94d899fd-xsxct              1/1     Running             0          2d9h
nginx-deployment-78cdb5b557-6z2sf   1/1     Running             0          9h
nginx-deployment-78cdb5b557-9pdf8   1/1     Running             0          9h
nginx-deployment-78cdb5b557-f2hx2   1/1     Running             0          9h
pod1                                0/1     ContainerCreating   0          10s
pod1   0/1   Completed   0     17s
pod1   0/1   Completed   1     33s
pod1   0/1   CrashLoopBackOff   1     34s
pod1   0/1   Completed   2     64s
pod1   0/1   CrashLoopBackOff   2     77s
pod1   0/1   Completed   3     104s

The pod1 resource keeps cycling between Completed and CrashLoopBackOff; the creation has effectively failed.
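To confirm what is happening, you could inspect the pod's events and logs (a diagnostic sketch, not part of the original steps):

kubectl describe pod pod1       # the Events section shows the repeated restarts
kubectl logs pod1               # should just print SUCCESS: echo ran, exited, and the container was restarted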

2.3 The failure is caused by the command conflicting with the container's startup

command: [ "echo","SUCCESS" ] overrides the image's default startup command; echo exits immediately, so the container completes and, because the restart policy is Always, it keeps getting restarted. Remove command: [ "echo","SUCCESS" ] and also change the image version.

^C[root@master1 ~]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: nginx-03
    image: nginx:1.14
    imagePullPolicy: Always
#    command: [ "echo","SUCCESS" ]
[root@master1 ~]# kubectl create -f pod1.yaml 	#pod1 already exists; delete it first, then recreate
Error from server (AlreadyExists): error when creating "pod1.yaml": pods "pod1" already exists
[root@master1 ~]# kubectl delete pod/pod1
pod "pod1" deleted
[root@master1 ~]# kubectl create -f pod1.yaml 
pod/pod1 created
[root@master1 ~]# kubectl get pods -w
NAME                                READY   STATUS              RESTARTS   AGE
nginx-6c94d899fd-xsxct              1/1     Running             0          2d9h
nginx-deployment-78cdb5b557-6z2sf   1/1     Running             0          9h
nginx-deployment-78cdb5b557-9pdf8   1/1     Running             0          9h
nginx-deployment-78cdb5b557-f2hx2   1/1     Running             0          9h
pod1                                0/1     ContainerCreating   0          3s
pod1   1/1   Running   0     20s

This time it runs successfully.

Note: you can also delete the corresponding resource by pointing at the YAML file that defined it.

[root@master1 ~]# kubectl delete -f pod1.yaml 
pod "pod1" deleted

Creation can also be done with apply.

^C[root@master1 ~]# kubectl delete -f pod1.yaml 
pod "pod1" deleted
[root@master1 ~]# kubectl apply -f pod1.yaml 
pod/pod1 created
[root@master1 ~]# kubectl get pods -w
NAME                                READY   STATUS              RESTARTS   AGE
nginx-6c94d899fd-xsxct              1/1     Running             0          2d10h
nginx-deployment-78cdb5b557-6z2sf   1/1     Running             0          9h
nginx-deployment-78cdb5b557-9pdf8   1/1     Running             0          9h
nginx-deployment-78cdb5b557-f2hx2   1/1     Running             0          9h
pod1                                0/1     ContainerCreating   0          3s
pod1   1/1   Running   0     11s

2.4 Check which node the pod was assigned to

^C[root@master1 ~]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP            NODE              NOMINATED NODE
nginx-6c94d899fd-xsxct              1/1     Running   0          2d9h   172.17.42.6   192.168.247.144   <none>
nginx-deployment-78cdb5b557-6z2sf   1/1     Running   0          9h     172.17.42.3   192.168.247.144   <none>
nginx-deployment-78cdb5b557-9pdf8   1/1     Running   0          9h     172.17.42.4   192.168.247.144   <none>
nginx-deployment-78cdb5b557-f2hx2   1/1     Running   0          9h     172.17.45.3   192.168.247.143   <none>
pod1                                1/1     Running   0          59s    172.17.45.4   192.168.247.143   <none>

2.5 Verify with curl from the node

[root@node01 ~]# curl -I 172.17.45.4
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Sun, 10 May 2020 16:00:13 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 04 Dec 2018 14:44:49 GMT
Connection: keep-alive
ETag: "5c0692e1-264"
Accept-Ranges: bytes

The infrastructure container is requested from the apiserver after the kubelet starts, and is then created together with the pod.

3: Deploy a Harbor private registry and add it to the cluster

The detailed steps for building a Harbor registry are covered in an earlier post:

https://blog.csdn.net/Lfwthotpt/article/details/105729801

so they are not repeated in detail here.

3.1 Base environment: the server needs Python, Docker, and docker-compose installed

[root@localhost ~]# hostnamectl set-hostname harbor
[root@localhost ~]# su
[root@harbor ~]# mkdir /abc
[root@harbor ~]# mount.cifs //192.168.0.88/linuxs /abc
Password for root@//192.168.0.88/linuxs:  
[root@harbor ~]# cp /abc/docker-compose .
[root@harbor ~]# cp /abc/harbor-offline-installer-v1.2.2.tgz .
[root@harbor ~]# mv docker-compose /usr/bin/
[root@harbor ~]# docker-compose -v
docker-compose version 1.23.1, build b02f1306
[root@harbor ~]# setenforce 0
[root@harbor ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@harbor ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@harbor ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@harbor ~]# yum install -y docker-ce
[root@harbor ~]# systemctl start docker
[root@harbor ~]# systemctl enable docker
[root@harbor docker]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fk2yrsh1.mirror.aliyuncs.com"]
}
EOF
[root@harbor docker]# systemctl daemon-reload
[root@harbor docker]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf 
[root@harbor docker]# sysctl -p
net.ipv4.ip_forward = 1
[root@harbor docker]# systemctl restart network
[root@harbor docker]# systemctl restart docker

3.2 Deploy the Harbor service

Harbor is deployed as a set of Docker containers, so it can run on any Linux distribution that supports Docker.

First extract the package:

[root@harbor ~]# tar xf harbor-offline-installer-v1.2.2.tgz -C /usr/local/
[root@harbor docker]# cd /usr/local/harbor/
[root@harbor harbor]# ls
common                     docker-compose.yml     harbor.v1.2.2.tar.gz  NOTICE
docker-compose.clair.yml   harbor_1_1_0_template  install.sh            prepare
docker-compose.notary.yml  harbor.cfg             LICENSE               upgrade
[root@harbor harbor]# vim harbor.cfg 
hostname = 192.168.247.147
[root@harbor harbor]# sh install.sh 
Note: docker version: 19.03.8
Note: docker-compose version: 1.23.1
[Step 1]: loading Harbor images ...
[Step 2]: preparing environment ...
[Step 3]: checking existing instance of Harbor ...
[Step 4]: starting Harbor ...
✔ ----Harbor has been installed and started successfully.----
Now you should be able to visit the admin portal at http://192.168.247.147. 
For more details, please visit https://github.com/vmware/harbor 

The Harbor registry is now up and running.

3.3 Log in to the web UI

Default account: admin, password: Harbor12345

3.4 Create a project to hold the images dedicated to this application

For example, call it gsydianshang.

At this point the project contains no images.


4: Connect Harbor to Docker on the Kubernetes nodes

Configure the nodes to connect to the private registry. Note: remember to add the comma after the existing registry-mirrors entry.

4.1 Using one node as an example; do the same on the other nodes

[root@node01 ~]# vim /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://fk2yrsh1.mirror.aliyuncs.com"],
  "insecure-registries":["192.168.247.147"]
}
[root@node01 ~]# systemctl restart docker

Normally, docker pull nginx pulls from the public Docker Hub registry by default.

docker pull 192.168.247.147/gsydianshang/nginx pulls from the gsydianshang project in the Harbor registry.

4.2 While we are at it, take a look at the containers

[root@node01 ~]# docker ps -a
CONTAINER ID        IMAGE                                                                 COMMAND                  CREATED             STATUS                        PORTS               NAMES
59b1c2158e1a        nginx                                                                 "nginx -g 'daemon of…"   16 seconds ago      Up 16 seconds                                     k8s_nginx-03_pod1_default_15b4611a-92d7-11ea-a668-000c29db840b_1
05d9f33ab362        bc26f1ed35cf                                                          "nginx -g 'daemon of…"   19 seconds ago      Up 19 seconds                                     k8s_nginx1_nginx-deployment-78cdb5b557-f2hx2_default_acbc8ca0-9289-11ea-a668-000c29db840b_1
3faf494b46a0        registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0   "/pause"                 20 seconds ago      Up 19 seconds                                     k8s_POD_pod1_default_15b4611a-92d7-11ea-a668-000c29db840b_1
4c89eb5f1dcb        registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0   "/pause"                 20 seconds ago      Up 19 seconds                                     k8s_POD_nginx-deployment-78cdb5b557-f2hx2_default_acbc8ca0-9289-11ea-a668-000c29db840b_1
0dbb8f579312        nginx                                                                 "nginx -g 'daemon of…"   33 hours ago        Exited (0) 32 seconds ago                         k8s_nginx-03_pod1_default_15b4611a-92d7-11ea-a668-000c29db840b_0
f8261311ef62        registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0   "/pause"                 33 hours ago        Exited (0) 32 seconds ago                         k8s_POD_pod1_default_15b4611a-92d7-11ea-a668-000c29db840b_0
f36bb109b1df        bc26f1ed35cf                                                          "nginx -g 'daemon of…"   42 hours ago        Exited (0) 32 seconds ago                         k8s_nginx1_nginx-deployment-78cdb5b557-f2hx2_default_acbc8ca0-9289-11ea-a668-000c29db840b_0
56ad95a6c12c        registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0   "/pause"                 42 hours ago        Exited (0) 32 seconds ago                         k8s_POD_nginx-deployment-78cdb5b557-f2hx2_default_acbc8ca0-9289-11ea-a668-000c29db840b_0
39f034a2f24e        centos:7                                                              "/bin/bash"              12 days ago         Exited (137) 22 seconds ago                       beautiful_jennings

Four application containers exited normally because Docker was restarted, but four new Up containers appeared. To keep the pods running, Kubernetes automatically creates new containers according to the ReplicaSet.

So restarting Docker does not interrupt the workload, because Kubernetes restarts the containers automatically.

5: Push an image to Harbor

Note: when pulling images from Harbor to create resources, make sure the nodes are logged in to Harbor.

5.1 Log in on both nodes

[root@node01 ~]# docker login 192.168.247.147
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

5.2 First pull a tomcat image from the public registry for testing

[root@node01 ~]# docker pull tomcat
Using default tag: latest
Digest: sha256:cae591b6f798359b0ba2bdd9cc248e695ac6e14d20722c5ff82a9a138719896f
Status: Downloaded newer image for tomcat:latest
docker.io/library/tomcat:latest
[root@node01 ~]# docker images | grep tomcat
tomcat                                                            latest              927899a31456        2 weeks ago         647MB

5.3 Tag the image and push it

[root@node01 ~]# docker tag tomcat 192.168.247.147/gsydianshang/tomcat
[root@node01 ~]# docker push 192.168.247.147/gsydianshang/tomcat
The push refers to repository [192.168.247.147/gsydianshang/tomcat]

5.4 Refresh the web UI and check

The push succeeded.

5.5 Check the local images

[root@node01 ~]# docker images | grep tomcat
192.168.247.147/gsydianshang/tomcat                               latest              927899a31456        2 weeks ago         647MB
tomcat                                                            latest              927899a31456        2 weeks ago         647MB

5.6 Delete the locally tagged copy, then test pulling it back from Harbor

[root@node01 ~]# docker rmi 192.168.247.147/gsydianshang/tomcat:latest 
Untagged: 192.168.247.147/gsydianshang/tomcat:latest
Untagged: 192.168.247.147/gsydianshang/tomcat@sha256:8672b0039fe1f37d3d35c11f65aefad5388fd46e260980b95304605397bb4942
[root@node01 ~]# docker images | grep tomcat
tomcat                                                            latest              927899a31456        2 weeks ago         647MB

5.7 The pull fails with an access-denied error

It looks as if the registry credentials are missing.

[root@node01 ~]# docker pull 192.168.247.147/gsydiansahng/tomcat
Using default tag: latest
Error response from daemon: pull access denied for 192.168.247.147/gsydiansahng/tomcat, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
#after logging in again, it turned out the project name was simply misspelled
[root@node01 ~]# docker pull 192.168.247.147/gsydianshang/tomcat:latest
latest: Pulling from gsydianshang/tomcat
Digest: sha256:8672b0039fe1f37d3d35c11f65aefad5388fd46e260980b95304605397bb4942
Status: Downloaded newer image for 192.168.247.147/gsydianshang/tomcat:latest
192.168.247.147/gsydianshang/tomcat:latest

Check:

[root@node01 ~]# docker images | grep tomcat
tomcat                                                            latest              927899a31456        2 weeks ago         647MB
192.168.247.147/gsydianshang/tomcat                               latest              927899a31456        2 weeks ago         647MB

The pull count in the web UI is now 1.

6: That was pulling with docker directly; next, test pulling via a Kubernetes YAML file

6.1 First test the usual kubectl run

[root@master1 ~]# kubectl run tomcat --image=tomcat
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/tomcat created
[root@master1 ~]# kubectl get pods | grep tomcat
tomcat-7c67d9584b-h5gzj             1/1     Running   0          22s
[root@master1 ~]# kubectl get pods/tomcat-7c67d9584b-h5gzj  --export -o yaml 	#showing only the image-related part
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: tomcat
spec:
  containers:
  - image: tomcat	#image: tomcat refers to docker.io/tomcat by default, but if the image already exists locally it is used first
    imagePullPolicy: Always
    name: tomcat
    resources: {}
[root@master1 ~]# kubectl describe pod tomcat-7c67d9584b-h5gzj
Events:
  Type    Reason     Age    From                      Message
  ----    ------     ----   ----                      -------
  Normal  Scheduled  2m12s  default-scheduler         Successfully assigned default/tomcat-7c67d9584b-2jlc2 to 192.168.247.143
  Normal  Pulling    2m11s  kubelet, 192.168.247.143  pulling image "tomcat"
  Normal  Pulled     2m7s   kubelet, 192.168.247.143  Successfully pulled image "tomcat"
  Normal  Created    2m7s   kubelet, 192.168.247.143  Created container
  Normal  Started    2m7s   kubelet, 192.168.247.143  Started container

[root@master1 ~]# kubectl delete deploy tomcat
deployment.extensions "tomcat" deleted

docker.io means the image was pulled from the official Docker registry,

or from the Aliyun registry mirror configured earlier.

6.2 Here you can write a YAML file by hand to test

[root@master1 ~]# vim tomcat-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      containers:
      - name: my-tomcat
        image: docker.io/tomcat:8.0.52
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31111
  selector:
    app: my-tomcat
[root@master1 ~]# ls
anaconda-ks.cfg  initial-setup-ks.cfg  pod1.yaml               下載  圖片  桌面  視頻
gsy              k8s                   tomcat-deployment.yaml  公共  文檔  模板  音樂
[root@master1 ~]# kubectl create -f tomcat-deployment.yaml 
deployment.extensions/my-tomcat created
service/my-tomcat created

6.3 Check whether the tomcat pods have been created

[root@master1 ~]# kubectl get pods | grep tomcat
my-tomcat-57667b9d9-hcshh           0/1     ContainerCreating   0          51s
my-tomcat-57667b9d9-k8tj2           0/1     ContainerCreating   0          51s
tomcat-7c67d9584b-2jlc2             1/1     Running             0          12m

6.4 View the pod's detailed information

[root@master1 ~]# kubectl describe pod my-tomcat-57667b9d9-hcshh 
Events:
  Type    Reason     Age   From                      Message
  ----    ------     ----  ----                      -------
  Normal  Scheduled  2m6s  default-scheduler         Successfully assigned default/my-tomcat-57667b9d9-hcshh to 192.168.247.143
  Normal  Pulling    2m6s  kubelet, 192.168.247.143  pulling image "docker.io/tomcat:8.0.52"
  Normal  Pulled     82s   kubelet, 192.168.247.143  Successfully pulled image "docker.io/tomcat:8.0.52"
  Normal  Created    71s   kubelet, 192.168.247.143  Created container
  Normal  Started    71s   kubelet, 192.168.247.143  Started container

The image is pulled from docker.io.

A pod has no restart operation of its own; it can only be deleted and recreated.

7: Add the Harbor parameters to the YAML file

7.1 View the Harbor login credentials

On a node:

base64 encodes the content with Base64

-w 0 prints the output without line wrapping

[root@node01 ~]# ls -a
.                .bash_logout   .config  .esd_auth             .local   .viminfo  文檔  音樂
..               .bash_profile  .cshrc   .ICEauthority         .pki     下載      桌面
anaconda-ks.cfg  .bashrc        .dbus    initial-setup-ks.cfg  .ssh     公共      模板
.bash_history    .cache         .docker  k8s                   .tcshrc  圖片      視頻
[root@node01 ~]# cat .docker/config.json | base64 -w 0
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjI0Ny4xNDciOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuOCAobGludXgpIgoJfQp9

This credential string should be the same on both nodes, because both are logged in as admin.

Using this Harbor registry credential, you can write a YAML file that pulls images from the Harbor registry.
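A quick way to double-check the string (a verification sketch, not part of the original steps) is to encode and immediately decode it; the round trip should print the original config.json:

base64 -w 0 .docker/config.json | base64 -d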

7.2 First create a Secret resource for logging in to Harbor securely

[root@master1 ~]# vim registry-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjI0Ny4xNDciOiB7CgkJCSJhdXRoIjogIllXUnRhVzQ2U0dGeVltOXlNVEl6TkRVPSIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuOCAobGludXgpIgoJfQp9
type: kubernetes.io/dockerconfigjson
[root@master1 ~]# kubectl create -f registry-pull-secret.yaml 
secret/registry-pull-secret created
[root@master1 ~]# kubectl get secret
NAME                   TYPE                                  DATA   AGE
default-token-qm9rm    kubernetes.io/service-account-token   3      11d
registry-pull-secret   kubernetes.io/dockerconfigjson        1      24s
[root@master1 ~]# kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-dmlzw        kubernetes.io/service-account-token   3      3d22h
default-token-w9vck                kubernetes.io/service-account-token   3      11d
kubernetes-dashboard-certs         Opaque                                11     3d22h
kubernetes-dashboard-key-holder    Opaque                                2      3d23h
kubernetes-dashboard-token-7dhnw   kubernetes.io/service-account-token   3      3d23h
[root@master1 ~]# kubectl get secret -n kube-public
NAME                  TYPE                                  DATA   AGE
default-token-k8kx8   kubernetes.io/service-account-token   3      11d
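As a side note, the same kind of Secret can also be generated by kubectl itself instead of hand-writing the YAML; a sketch using the credentials from this walkthrough (admin / Harbor12345):

kubectl create secret docker-registry registry-pull-secret \
  --docker-server=192.168.247.147 \
  --docker-username=admin \
  --docker-password=Harbor12345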

7.3 To keep the verification environment clean, first delete the local tomcat images

[root@node02 ~]# docker images | grep tomcat
tomcat                                                            latest              927899a31456        2 weeks ago         647MB
tomcat                                                            8.0.52              b4b762737ed4        22 months ago       356MB
#one was created by kubectl, the other pulled manually with docker; delete both, and check on both nodes

7.4 Before deleting an image, first check whether any resources created from it are still running

[root@node02 ~]# docker rmi tomcat:latest
Error response from daemon: conflict: unable to remove repository reference "tomcat:latest" (must force) - container 07d278ce7a99 is using its referenced image 927899a31456
[root@node02 ~]# docker rmi tomcat:latest -f
Untagged: tomcat:latest
Untagged: tomcat@sha256:cae591b6f798359b0ba2bdd9cc248e695ac6e14d20722c5ff82a9a138719896f
[root@node02 ~]# docker rmi tomcat:8.0.52
Error response from daemon: conflict: unable to remove repository reference "tomcat:8.0.52" (must force) - container 98da0f346725 is using its referenced image b4b762737ed4
[root@node02 ~]# docker rmi tomcat:8.0.52 -f
Untagged: tomcat:8.0.52
Untagged: tomcat@sha256:32d451f50c0f9e46011091adb3a726e24512002df66aaeecc3c3fd4ba6981bd4
[root@node02 ~]# docker images | grep tomcat
[root@node02 ~]# 

[root@node01 ~]# docker images | grep tomcat
192.168.247.147/gsydianshang/tomcat                               latest              927899a31456        2 weeks ago         647MB
[root@node01 ~]# docker rmi 192.168.247.147/gsydianshang/tomcat:latest
[root@node01 ~]# docker images | grep tomcat
[root@node01 ~]# 

Resources are still running on these images, so the resources must be deleted first.

7.5 If you force the deletion instead, <none> images are left behind

[root@node02 ~]# docker images | grep none
<none>                                                            <none>              927899a31456        2 weeks ago         647MB
<none>                                                            <none>              b4b762737ed4        22 months ago       356MB
[root@node02 ~]# docker rmi b4b762737ed4 -f
Error response from daemon: conflict: unable to delete b4b762737ed4 (cannot be forced) - image is being used by running container 98da0f346725
[root@node02 ~]# docker ps -a | grep 98da0f346725
98da0f346725        b4b762737ed4                                                          "catalina.sh run"        35 minutes ago      Up 35 minutes                                        k8s_my-tomcat_my-tomcat-57667b9d9-k8tj2_default_cf7c493c-93ee-11ea-a3ae-000c29a14bd3_0

7.6 Delete the my-tomcat resources in Kubernetes, and delete tomcat as well

[root@master1 ~]# kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
my-tomcat          2         2         2            2           37m
nginx              1         1         1            1           9d
nginx-deployment   3         3         3            3           43h
tomcat             1         1         1            1           25m
[root@master1 ~]# kubectl delete deploy/tomcat
deployment.extensions "tomcat" deleted
[root@master1 ~]# kubectl delete deploy/my-tomcat
deployment.extensions "my-tomcat" deleted

7.7 Now the <none> images can be deleted successfully

[root@node02 ~]# docker rmi b4b762737ed4
[root@node02 ~]# docker rmi 927899a31456

7.8 With the Secret resource in place, modify the original tomcat-deployment.yaml

[root@master1 ~]# vim tomcat-deployment.yaml #only the modified part is shown
    spec:
      imagePullSecrets:
      - name: registry-pull-secret		#the imagePullSecrets name must match the Secret name shown by kubectl get secret
      containers:
      - name: my-tomcat
        image: 192.168.247.147/gsydianshang/tomcat
        ports:
        - containerPort: 80
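For reference, a sketch of the full Deployment section of tomcat-deployment.yaml after this change (the Service part from 6.2 stays the same):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      imagePullSecrets:
      - name: registry-pull-secret          # must match the Secret created in 7.2
      containers:
      - name: my-tomcat
        image: 192.168.247.147/gsydianshang/tomcat
        ports:
        - containerPort: 80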

7.9 At this point the Harbor pull count is 1


The original tomcat resources have been deleted:

[root@master1 ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-6c94d899fd-xsxct              1/1     Running   1          3d20h
nginx-deployment-78cdb5b557-6z2sf   1/1     Running   1          43h
nginx-deployment-78cdb5b557-9pdf8   1/1     Running   1          43h
nginx-deployment-78cdb5b557-f2hx2   1/1     Running   1          43h
pod1                                1/1     Running   1          34h

7.10 Create the resources

[root@master1 ~]# kubectl create -f tomcat-deployment.yaml 
deployment.extensions/my-tomcat created
The Service "my-tomcat" is invalid: spec.ports[0].nodePort: Invalid value: 31111: provided port is already allocated
#the Service is rejected: the port is already allocated
[root@master1 ~]# kubectl get svc
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.0.0.1     <none>        443/TCP          11d
my-tomcat       NodePort    10.0.0.61    <none>        8080:31111/TCP   49m
nginx-service   NodePort    10.0.0.131   <none>        80:37651/TCP     43h

The my-tomcat Service created in 6.2 still exists and already holds nodePort 31111, which is why the new Service is rejected; since its selector app=my-tomcat still matches the new pods, it can simply be reused. Check the network:

[root@master1 ~]# kubectl get all -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP            NODE              NOMINATED NODE
pod/my-tomcat-6cbc7c4d65-gpkxr          1/1     Running   0          2m29s   172.17.42.6   192.168.247.144   <none>
pod/my-tomcat-6cbc7c4d65-hdmhc          1/1     Running   0          2m29s   172.17.45.4   192.168.247.143   <none>
pod/nginx-6c94d899fd-xsxct              1/1     Running   1          3d20h   172.17.42.3   192.168.247.144   <none>
pod/nginx-deployment-78cdb5b557-6z2sf   1/1     Running   1          43h     172.17.42.2   192.168.247.144   <none>
pod/nginx-deployment-78cdb5b557-9pdf8   1/1     Running   1          43h     172.17.42.4   192.168.247.144   <none>
pod/nginx-deployment-78cdb5b557-f2hx2   1/1     Running   1          43h     172.17.45.2   192.168.247.143   <none>
pod/pod1                                1/1     Running   1          34h     172.17.45.3   192.168.247.143   <none>

NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE   SELECTOR
service/kubernetes      ClusterIP   10.0.0.1     <none>        443/TCP          11d   <none>
service/my-tomcat       NodePort    10.0.0.61    <none>        8080:31111/TCP   50m   app=my-tomcat
service/nginx-service   NodePort    10.0.0.131   <none>        80:37651/TCP     43h   app=nginx

NAME                               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                SELECTOR
deployment.apps/my-tomcat          2         2         2            2           2m29s   my-tomcat    192.168.247.147/gsydianshang/tomcat   app=my-tomcat
deployment.apps/nginx              1         1         1            1           9d      nginx        nginx:1.14                            run=nginx
deployment.apps/nginx-deployment   3         3         3            3           43h     nginx1       nginx:1.15.4                          app=nginx

NAME                                          DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                SELECTOR
replicaset.apps/my-tomcat-6cbc7c4d65          2         2         2       2m29s   my-tomcat    192.168.247.147/gsydianshang/tomcat   app=my-tomcat,pod-template-hash=6cbc7c4d65
replicaset.apps/nginx-6c94d899fd              1         1         1       3d20h   nginx        nginx:1.14                            pod-template-hash=6c94d899fd,run=nginx
replicaset.apps/nginx-dbddb74b8               0         0         0       9d      nginx        nginx                                 pod-template-hash=dbddb74b8,run=nginx
replicaset.apps/nginx-deployment-78cdb5b557   3         3         3       43h     nginx1       nginx:1.15.4                          app=nginx,pod-template-hash=78cdb5b557

7.11 View the pod description

[root@master1 ~]# kubectl describe pod my-tomcat-6cbc7c4d65-gpkxr
Containers:
  my-tomcat:
    Container ID:   docker://3b25590d6736ebc5322f1bc3e8b750057af4bd9e17566660e7b5ef8d79dd1565
    Image:          192.168.247.147/gsydianshang/tomcat
    Image ID:       docker-pullable://192.168.247.147/gsydianshang/tomcat@sha256:8672b0039fe1f37d3d35c11f65aefad5388fd46e260980b95304605397bb4942
Events:
  Type    Reason     Age    From                      Message
  ----    ------     ----   ----                      -------
  Normal  Scheduled  4m27s  default-scheduler         Successfully assigned default/my-tomcat-6cbc7c4d65-gpkxr to 192.168.247.144
  Normal  Pulling    4m26s  kubelet, 192.168.247.144  pulling image "192.168.247.147/gsydianshang/tomcat"
  Normal  Pulled     3m38s  kubelet, 192.168.247.144  Successfully pulled image "192.168.247.147/gsydianshang/tomcat"
  Normal  Created    3m38s  kubelet, 192.168.247.144  Created container
  Normal  Started    3m37s  kubelet, 192.168.247.144  Started container

The image is pulled from Harbor.

Check the image pull count in the Harbor web UI.

Refresh the page.

7.12 The pull count is now 3

