Deploying a Kubernetes 1.11.1 Cluster with kubeadm

Introduction to Kubernetes
1. Background
  Cloud computing is developing rapidly
    - IaaS
    - PaaS
    - SaaS
  Docker technology is advancing by leaps and bounds
    - Build once, run anywhere
    - Fast, lightweight containers
    - A complete ecosystem
2. What is Kubernetes
  First of all, it is a brand-new, leading distributed-architecture solution built on container technology. Kubernetes (k8s) is Google's open-source container cluster management system (known inside Google as Borg). Building on Docker, it provides containerized applications with deployment, resource scheduling, service discovery, dynamic scaling, and other complete capabilities, which greatly simplifies the management of large container clusters.
  Kubernetes is a complete distributed-system platform. It offers full cluster management capabilities, multi-layered security and admission control, multi-tenancy support, transparent service registration and discovery, a built-in intelligent load balancer, powerful fault detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduler, and fine-grained resource quota management. Kubernetes also ships with comprehensive management tools that cover development, deployment, testing, operations, and monitoring.
In Kubernetes, the Service is the core of the distributed cluster architecture. A Service object has the following key characteristics:
• It has a unique, specified name
• It has a virtual IP (Cluster IP, Service IP, or VIP) and a port number
• It provides some kind of remote service capability
• It is mapped to a group of container applications that provide this capability
  Service processes currently expose their services over sockets, for example Redis, Memcached, MySQL, a web server, or a TCP server implementing some specific business logic. Although a Service is usually backed by several related service processes, each with its own endpoint (IP + port), Kubernetes lets us reach the Service through a single entry. Thanks to the built-in transparent load balancing and failure recovery in Kubernetes, no matter how many backend processes there are, and no matter whether a process is redeployed to another machine after a failure, calls to the service are not affected. More importantly, the Service itself does not change once it has been created, which means we no longer have to worry about service IP addresses changing inside a Kubernetes cluster.
  Containers provide strong isolation, so it makes sense to put the group of processes that back a Service into containers. For this, Kubernetes designed the Pod object: each service process is wrapped into a corresponding Pod and becomes a container running inside that Pod. To associate Services with Pods, Kubernetes attaches a Label to every Pod, for example name=mysql on a Pod running MySQL and name=php on a Pod running PHP, and then defines a Label Selector on the corresponding Service. This neatly solves the Service-to-Pod association problem.
  For cluster management, Kubernetes divides the machines in a cluster into a Master node and a group of worker nodes (Nodes). The Master runs the cluster management processes kube-apiserver, kube-controller-manager, and kube-scheduler, which implement resource management, Pod scheduling, elastic scaling, security control, monitoring, and error correction for the whole cluster, all fully automatically. Nodes are the worker machines that run the actual applications; the smallest unit Kubernetes manages on a Node is the Pod. Nodes run the kubelet and kube-proxy service processes, which are responsible for creating, starting, monitoring, restarting, and destroying Pods, and for implementing a software-based load balancer.
  Kubernetes also solves the two classic problems of traditional IT systems: service scaling and service upgrades. You only need to create a Replication Controller (RC) for the Pods behind the Service you want to scale, and scaling and later upgrades of that Service are taken care of. An RC definition file contains three key pieces of information:
• The definition of the target Pod
• The desired number of replicas of the target Pod (Replicas)
• The Label of the target Pods to monitor
  Once the RC is created, Kubernetes uses the Label defined in the RC to select the matching Pod instances and continuously monitors their status and count. If the number of instances drops below the desired replica count, a new Pod is created from the Pod template in the RC and scheduled onto a suitable Node, until the number of Pod instances reaches the target. The whole process is fully automated. A minimal RC manifest illustrating these three pieces is sketched below.
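  A minimal sketch of such an RC definition (the name, image, and replica count here are illustrative, not taken from this deployment):

apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-rc             #illustrative name
spec:
  replicas: 2                #desired number of Pod replicas
  selector:
    name: mysql              #Label of the target Pods to monitor
  template:                  #definition of the target Pod
    metadata:
      labels:
        name: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7     #illustrative image
        ports:
        - containerPort: 3306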
  
 Advantages of Kubernetes:
    - Container orchestration
    - Lightweight
    - Open source
    - Elastic scaling
    - Load balancing
• Core concepts of Kubernetes
1.Master
  The management node of a k8s cluster. It manages the cluster and provides the entry point for accessing cluster resource data. It hosts the etcd storage service (optional) and runs the API Server, Controller Manager, and Scheduler processes, and it is associated with the worker Nodes. The Kubernetes API Server is the key process that exposes the HTTP REST interface; it is the only entry point for create, delete, update, and query operations on all Kubernetes resources, and it is also the entry process for cluster control. The Kubernetes Controller Manager is the automation control center for all Kubernetes resource objects; the Kubernetes Scheduler is the process responsible for resource scheduling (Pod scheduling).

2.Node
  A Node is a service node in the Kubernetes cluster architecture that runs Pods (also called an agent or minion). Nodes are the operational units of a Kubernetes cluster: they host the Pods assigned to them and are the machines on which Pods run. A Node is associated with the Master management node and has a name, an IP address, and system resource information. It runs the Docker Engine service, the kubelet daemon, and the kube-proxy load balancer.
• Every Node runs the following set of key processes:
• kubelet: responsible for creating, starting, and stopping the containers of the Pods assigned to the Node
• kube-proxy: the key component that implements communication and load balancing for Kubernetes Services
• Docker Engine (Docker): the Docker engine, responsible for creating and managing containers on the local machine
  Nodes can be added to a Kubernetes cluster dynamically at runtime. By default the kubelet registers itself with the Master, which is also the Node management approach recommended by Kubernetes. The kubelet then periodically reports its own information to the Master, such as the operating system, Docker version, CPU and memory, and which Pods are running, so the Master knows the resource usage of every Node and can implement an efficient, balanced scheduling strategy.

3.Pod
  A Pod runs on a Node and is a group of related containers. The containers in a Pod run on the same host and share the same network namespace, IP address, and ports, so they can communicate with each other over localhost. The Pod is the smallest unit that Kubernetes creates, schedules, and manages; it provides a higher level of abstraction than the container and makes deployment and management more flexible. A Pod can contain one container or several related containers.
  There are actually two kinds of Pods: ordinary Pods and static Pods. Static Pods are special: they are not stored in Kubernetes' etcd but in a file on a specific Node, and they only start on that Node. An ordinary Pod, once created, is stored in etcd and then scheduled by the Kubernetes Master onto a specific Node and bound to it; the kubelet on that Node instantiates the Pod as a group of related Docker containers and starts them. By default, when a container in a Pod stops, Kubernetes detects the event and restarts the Pod (restarting all of its containers); if the Node hosting the Pod goes down, all Pods on that Node are rescheduled onto other nodes.

4.Replication Controller
  The Replication Controller manages Pod replicas and guarantees that the specified number of Pod replicas exists in the cluster. If the number of replicas in the cluster is greater than the specified number, the extra containers are stopped; if it is lower, more containers are started until the count matches. The Replication Controller is the core mechanism behind elastic scaling, dynamic expansion, and rolling upgrades.

5.Service
  A Service defines a logical set of Pods and a policy for accessing them; it is an abstraction of a real service. A Service provides a single access entry point together with service proxying and discovery, and it associates multiple Pods that share the same Label, so users do not need to know how the backend Pods are running.
How external systems access a Service
  First you need to understand the three kinds of IP in Kubernetes:
    Node IP: the IP address of a Node
    Pod IP: the IP address of a Pod
    Cluster IP: the IP address of a Service
  The Node IP is the IP address of the physical network interface of a node in the Kubernetes cluster. All servers on this network can communicate with each other directly over it. This also means that when a node outside the Kubernetes cluster accesses a node or a TCP/IP service inside the cluster, it must communicate through a Node IP.
  The Pod IP is the IP address of each Pod. It is allocated by the Docker Engine from the address range of the docker0 bridge and is usually a virtual layer-2 network.
  Finally, the Cluster IP is a virtual IP that behaves more like a fake IP network, for the following reasons:
• A Cluster IP only applies to the Kubernetes Service object, and Kubernetes manages and allocates the IP address
• A Cluster IP cannot be pinged, because there is no "physical network object" to answer
• A Cluster IP only forms a concrete communication endpoint when combined with a Service Port; a Cluster IP on its own has no basis for communication, and these addresses belong to the closed space of the Kubernetes cluster
Inside a Kubernetes cluster, communication between the Node IP network, the Pod IP network, and the Cluster IP network uses a special, programmatically maintained set of routing rules designed by Kubernetes itself.

6.Label
 Every API object in Kubernetes is identified through Labels. A Label is essentially a set of key/value pairs, where both key and value are chosen by the user. Labels can be attached to all kinds of resource objects, such as Node, Pod, Service, and RC; one resource object can carry any number of Labels, and the same Label can be attached to any number of resource objects. Labels are the basis on which the Replication Controller and the Service work: both use Labels to associate with the Pods running on Nodes.
We can attach one or more Labels to a resource object to implement multi-dimensional resource grouping, which makes resource allocation, scheduling, and configuration flexible and convenient.
Some commonly used Labels:
• Release labels: "release":"stable", "release":"canary"...
• Environment labels: "environment":"dev", "environment":"qa", "environment":"production"
• Tier labels: "tier":"frontend", "tier":"backend", "tier":"middleware"
• Partition labels: "partition":"customerA", "partition":"customerB"
• Quality-control labels: "track":"daily", "track":"weekly"
  A Label is just like the tags we are familiar with: defining a Label on a resource object is like attaching a tag to it. You can then use a Label Selector to query and filter the resource objects that carry certain Labels. In this way Kubernetes implements a simple yet general SQL-like object query mechanism.

  The main scenarios in which Label Selectors are used in Kubernetes are:

o   The kube-controller-manager process uses the Label Selector defined on an RC to select the Pod replicas it should monitor, so the replica count is kept at the desired value fully automatically
o   The kube-proxy process uses a Service's Label Selector to select the corresponding Pods and automatically builds the request-forwarding routing table from each Service to its Pods, which implements the Service's intelligent load balancing
o   By defining specific Labels on certain Nodes and using the nodeSelector scheduling policy in a Pod definition, the kube-scheduler process can implement "targeted scheduling" of Pods; a small manifest sketch follows below
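For example, the nodeSelector mechanism mentioned above can be expressed in a Pod manifest like this (a sketch; the label key/value and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-ssd          #illustrative Pod name
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
  nodeSelector:
    disktype: ssd             #only schedule onto Nodes labeled disktype=ssd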

• Kubernetes architecture and components

• Kubernetes components:
  The Kubernetes Master is the control component that schedules and manages the whole system (cluster). It contains the following components:
  1.Kubernetes API Server
    The entry point of the Kubernetes system. It wraps the create/read/update/delete operations on the core objects and exposes them as a RESTful API to external clients and internal components. The REST objects it maintains are persisted in etcd.
  2.Kubernetes Scheduler
    Selects a node for newly created Pods (i.e. assigns machines) and is responsible for the cluster's resource scheduling. It is a separate component and can easily be replaced with another scheduler.
  3.Kubernetes Controller
    Runs the various controllers; many controllers are already provided to keep Kubernetes working normally.
  4. Replication Controller
    Manages and maintains Replication Controllers, associating Replication Controllers with Pods and ensuring that the number of running Pods matches the replica count defined in the Replication Controller.
  5. Node Controller
    Manages and maintains Nodes, periodically checks Node health, and marks Nodes as (failed | healthy).
  6. Namespace Controller
    Manages and maintains Namespaces and periodically cleans up invalid Namespaces, including the API objects under them, such as Pods and Services.
  7. Service Controller
    Manages and maintains Services and provides load balancing and service proxying.
  8.EndPoints Controller
    Manages and maintains Endpoints, associating Services with Pods. It creates Endpoints as the backend of a Service and updates them in real time when the Pods change.
  9. Service Account Controller
    Manages and maintains Service Accounts, creates a default Service Account for each Namespace, and creates a Service Account Secret for each Service Account.
  10. Persistent Volume Controller
    Manages and maintains Persistent Volumes and Persistent Volume Claims, binds a Persistent Volume to each new Persistent Volume Claim, and performs cleanup and reclamation for released Persistent Volumes.
  11. Daemon Set Controller
    Manages and maintains Daemon Sets, creates Daemon Pods, and ensures that Daemon Pods run normally on the specified Nodes.
  12. Deployment Controller
    Manages and maintains Deployments, associating Deployments with Replication Controllers and ensuring that the specified number of Pods is running. When a Deployment is updated, it drives the update of the Replication Controller and the Pods.
  13.Job Controller
    Manages and maintains Jobs, creates one-off task Pods for each Job, and ensures the number of completions specified by the Job is reached.
  14. Pod Autoscaler Controller
    Implements automatic Pod scaling: it periodically fetches monitoring data, matches it against the policy, and performs the scaling action when the conditions are met.

• Kubernetes Nodes are the worker nodes that run and manage the business containers. They contain the following components:
  1.Kubelet
    Responsible for managing containers. The kubelet receives Pod creation requests from the Kubernetes API Server, starts and stops containers, monitors container status, and reports it back to the Kubernetes API Server.
  2.Kubernetes Proxy
    Responsible for creating proxy services for Pods. The Kubernetes Proxy obtains all Service information from the Kubernetes API Server and creates proxy services from it, routing and forwarding requests from Services to Pods and thereby implementing the Kubernetes-level virtual forwarding network.
  3.Docker
    The container runtime that must run on every Node.
Deploying Kubernetes

Environment:

Operating system    IP address        Hostname   Packages
CentOS7.3-x86_64    192.168.200.200   Master     Docker, kubeadm
CentOS7.3-x86_64    192.168.200.201   Minion-1   Docker
CentOS7.3-x86_64    192.168.200.202   Minion-2   Docker

Setting up the base environment

1.1 Install Docker-CE
1. Check the master's system information:
[root@master ~]# hostname
master
[root@master ~]# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core)
[root@master ~]# uname -r
3.10.0-514.el7.x86_64
2. Check the minion's system information:
[root@master ~]# hostname
master
[root@master ~]# cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)
[root@master ~]# uname -r
3.10.0-862.el7.x86_64
3. Install the dependency packages:
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
4. Configure the Aliyun mirror repository:
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
5. Install Docker-CE:
[root@master ~]# yum install docker-ce -y
6. Enable and start Docker-CE:
[root@master ~]# systemctl enable docker
[root@master ~]# systemctl start docker
1.2 Install kubeadm

  1. To install kubeadm, first configure the Aliyun domestic mirror repository by running:
    [root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    EOF
  2. Run the following commands to rebuild the Yum cache:
    [root@master ~]# yum -y install epel-release
    [root@master ~]# yum clean all
    [root@master ~]# yum makecache
  3. Install kubeadm:
    [root@master ~]# yum -y install kubelet kubeadm kubectl kubernetes-cni
  4. Enable and start the kubelet service:
    [root@master ~]# systemctl enable kubelet && systemctl start kubelet
    1.3 Prepare the images kubeadm needs
    [root@master ~]# vim k8s.sh
    #!/bin/bash
    images=(kube-proxy-amd64:v1.11.0 kube-scheduler-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 kube-apiserver-amd64:v1.11.0
    etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9
    k8s-dns-dnsmasq-nanny-amd64:1.14.9 )
    for imageName in ${images[@]} ; do
    docker pull keveon/$imageName
    docker tag keveon/$imageName k8s.gcr.io/$imageName
    docker rmi keveon/$imageName
    done
    #Extra line added by the author; required for v1.11.0
    docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1
    [root@master ~]# sh k8s.sh
    1.4 Disable swap
    [root@master ~]# swapoff -a
    [root@master ~]# vi /etc/fstab
    #
    #/etc/fstab
    #Created by anaconda on Sun May 27 06:47:13 2018
    #
    #Accessible filesystems, by reference, are maintained under '/dev/disk'
    #See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/cl-root / xfs defaults 0 0
    UUID=07d1e156-eba8-452f-9340-49540b1c2bbb /boot xfs defaults 0 0
    #/dev/mapper/cl-swap swap swap defaults 0 0
    It is also possible to leave swap enabled; in that case the swap error must be skipped during initialization, and the kubelet configuration file changed as follows:
    [root@master manifors]# vim /etc/sysconfig/kubelet
    KUBELET_EXTRA_ARGS="--fail-swap-on=false" #do not require swap to be off
    KUBE_PROXY_MODE=ipvs #use IPVS; if this is not set, kube-proxy falls back to iptables
    To use IPVS, the required kernel modules must be installed and loaded in advance; a sketch of how to do this follows below.
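    One common way to prepare the modules is sketched here (the module names assume the default CentOS 7 3.10 kernel; this step is not part of the original commands):
    [root@master ~]# yum install -y ipset ipvsadm
    [root@master ~]# for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $mod; done
    [root@master ~]# lsmod | grep ip_vs   #verify the modules are loaded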

1.5 Disable SELinux
[root@master ~]# setenforce 0
1.6 Configure the forwarding parameters
[root@master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
[root@master ~]# sysctl --system
#The steps above must also be performed on the minion nodes
Installing Kubernetes on the hosts
2.1 Initialize the cluster
To initialize the master, run the following command:
[root@master ~]# kubeadm init --kubernetes-version=v1.11.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors='SystemVerification'
#The operation above prints the following command, which joins a minion to the master; run it on the minions:
kubeadm join 192.168.200.200:6443 --token uyicwj.akb6hgdryfo1dtij --discovery-token-ca-cert-hash sha256:f26b1a713f1b10adb1e22aa129b23ea266bde550a2570e2b460070a080b42e08
2.2 Configure kubectl credentials
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
2.3 Install the Flannel network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After it completes, we can run the following command to view the current node information:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 9m v1.11.3

2.4 Node configuration
1. Run the join command generated by the master initialization above:
[root@minion ~]# kubeadm join 192.168.200.200:6443 --token uyicwj.akb6hgdryfo1dtij --discovery-token-ca-cert-hash sha256:f26b1a713f1b10adb1e22aa129b23ea266bde550a2570e2b460070a080b42e08
#If it returns without errors, the node joined successfully
2. Back on the master, check the nodes again:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 38m v1.11.3
minion Ready <none> 27m v1.11.3
At this point the master and minion configuration is complete, but we have not yet created additional users or granted them the permissions needed to create and manage Pods.

2.5 Create an nginx Pod as a test
1. Create the nginx Pod:
[root@master ~]# kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1 --dry-run=true
deployment.apps/nginx-deploy created (dry run)
#nginx-deploy: the name of the Pod (Deployment)
#--image=nginx:1.14-alpine: which image to use
#--port=80: the port to expose (it is exposed by default as well)
#--replicas=1: the number of Pods to create
#--dry-run=true: run in dry-run mode, similar to a test; nothing is actually created
2. The command below actually creates the Pod; just drop --dry-run=true:
[root@master ~]# kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=1
deployment.apps/nginx-deploy created
3. Check the Deployment:
[root@master ~]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deploy 1 1 1 1 4m
#DESIRED: the desired number of Pods
#CURRENT: the number already created
#UP-TO-DATE: the number that is up to date
#AVAILABLE: the number currently running
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deploy-5b595999-5p496 1/1 Running 0 6m
4. View detailed information about the running Pod:
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-deploy-5b595999-5p496 1/1 Running 0 7m 10.244.2.2 minion-2
5. Verify:
[root@minion-1 ~]# curl 10.244.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/&gt;
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p&gt;

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Note: the address above can only be used inside the cluster, not from outside. Even though Pods inside the cluster are on the same network, they should not talk to each other directly by Pod IP: if a Pod dies, k8s starts a new one, and the new Pod's name and IP may both change.
Using and operating Kubernetes
3.1 Map the nginx port to the outside
1. Expose nginx:
[root@master ~]# kubectl expose deployment nginx-deploy --name nginx --port=80 --target-port=80 --protocol=TCP
service/nginx exposed

kubectl expose: create a Service

deployment nginx-deploy --name nginx: create a Service named nginx from the nginx-deploy controller

#--port=80: the Service port

#--target-port=80: the container port

#--protocol=TCP: the protocol to use; TCP is the default (an equivalent Service manifest is sketched below)
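For reference, the kubectl expose command above corresponds roughly to the following Service manifest (a sketch, not taken from the original deployment; kubectl run labels the Pods with run=nginx-deploy):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP             #the default type; only reachable inside the cluster
  selector:
    run: nginx-deploy         #matches the label on the Pods created by kubectl run
  ports:
  - port: 80                  #Service port
    targetPort: 80            #container port
    protocol: TCP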
2. View the Service information:
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
nginx ClusterIP 10.101.101.195 <none> 80/TCP 3m
3. Test:
[root@master ~]# curl 10.101.101.195
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/&gt;
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p&gt;

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Note: the address above is a dynamically allocated cluster address used for access inside the cluster. Pods can communicate with each other through the cluster address, which solves the problem of Pod addresses changing.
3.2 Create an interactive Pod
1. Create it:
[root@master ~]# kubectl run client --image=busybox --replicas=1 -it --restart=Never

client: the Pod name

--image: the image to use

#--replicas: how many Pods to create
#--restart: the restart policy (Never means the Pod is not restarted)
2. Test that the service can be reached by its name. The DNS shipped with k8s automatically resolves the Service name to the cluster address, so access to the service is not affected even if the Pods are recreated:
/ # wget -O - -q http://nginx:80/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/&gt;
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p&gt;

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
3. Delete the Pod; a new Pod is generated automatically. Test access again:
[root@master ~]# kubectl delete pod nginx-deploy-5b595999-5p496
pod "nginx-deploy-5b595999-5p496" deleted
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
client 1/1 Running 0 19m
nginx-deploy-5b595999-5wxpj 1/1 Running 0 20s

wget -O - -q http://nginx:80/

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/&gt;
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p&gt;

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ #
This shows that access is not affected.
4. Modify an existing Pod:
[root@master ~]# kubectl edit pod myapp-74c94dcb8c-2s2ks
#Please edit the object below. Lines beginning with a '#' will be ignored,
#and an empty file will abort the edit. If an error occurs while saving this file will be
#reopened with the relevant failures.
#
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: 2018-09-26T10:29:35Z
generateName: myapp-74c94dcb8c-
labels:
pod-template-hash: "3075087647"
run: myapp
name: myapp-74c94dcb8c-2s2ks
namespace: default
ownerReferences:

  • apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: myapp-74c94dcb8c
    uid: 01f5a169-c177-11e8-b2c9-000c2929855b
    resourceVersion: "29582"
    selfLink: /api/v1/namespaces/default/pods/myapp-74c94dcb8c-2s2ks
    uid: 1042d000-c177-11e8-b2c9-000c2929855b
    spec:
    containers:
  • image: ikubernetes/myapp:v2
    imagePullPolicy: IfNotPresent
    name: myapp
    resources: {}
    ...... (the rest of the output is omitted)
    #edit: just give the resource name after "edit"; it works the same way for services, and any object can be modified like this
    3.3 Scaling Pods up and down, upgrading, rolling back, and a simple file change to make them externally accessible
    1. Dynamic scale-up and scale-down:
    [root@master ~]# kubectl scale --replicas=5 deployment myapp
    deployment.extensions/myapp scaled
    This scales myapp out to 5 replicas.
    [root@master ~]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    client 1/1 Running 0 49m
    myapp-848b5b879b-cdql2 1/1 Running 0 14m
    myapp-848b5b879b-d2xtr 1/1 Running 0 3m
    myapp-848b5b879b-lg45w 1/1 Running 0 3m
    myapp-848b5b879b-pfxvp 1/1 Running 0 3m
    myapp-848b5b879b-wfp6k 1/1 Running 0 14m
    nginx-deploy-5b595999-5wxpj 1/1 Running 0 29m
    [root@master ~]# kubectl scale --replicas=3 deployment myapp
    [root@master ~]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    client 1/1 Running 0 49m
    myapp-848b5b879b-cdql2 1/1 Running 0 15m
    myapp-848b5b879b-d2xtr 1/1 Running 0 4m
    myapp-848b5b879b-wfp6k 1/1 Running 0 15m
    nginx-deploy-5b595999-5wxpj 1/1 Running 0 30m
    2. Pod upgrade and rollback:
    / # while true;do sleep 1 && wget -O - -q myapp;done
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    All responses are currently version v1; now upgrade to v2:
    [root@master ~]# kubectl set image deployment myapp myapp=ikubernetes/myapp:v2
    #kubectl set image: the command keyword
    #deployment: the controller type, followed by the controller name
    #myapp: the controller name
    #myapp=ikubernetes/myapp:v2: the new image version for the Pods
    3. Observe the result:
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    The responses start switching from v1 to v2. Normally this happens gradually, one Pod at a time; because this deployment had been upgraded before, they all changed at once here.
    4. Rollback:
    [root@master ~]# kubectl rollout undo deployment myapp
    #kubectl rollout: the command keyword
    #undo: rolls back to the previous revision by default; a revision number can be given to roll back to a specific revision, as sketched below
    #deployment: the controller type
    #myapp: the controller name
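    For example, rolling back to an explicitly chosen revision looks like this (the revision number is illustrative; list the revisions first with kubectl rollout history):
    [root@master ~]# kubectl rollout undo deployment myapp --to-revision=1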
    Show the progress of an update or rollback:
    [root@master ~]# kubectl rollout status deployment myapp
    5. Check:
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

6. Check the Service status:
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h
myapp ClusterIP 10.105.254.230 <none> 80/TCP 17h
nginx ClusterIP 10.101.101.195 <none> 80/TCP 18h
7. Modify the myapp Service configuration:
[root@master ~]# kubectl edit svc myapp
#Please edit the object below. Lines beginning with a '#' will be ignored,
#and an empty file will abort the edit. If an error occurs while saving this file will be
#reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2018-09-26T10:19:31Z
labels:
run: myapp
name: myapp
namespace: default
resourceVersion: "19523"
selfLink: /api/v1/namespaces/default/services/myapp
uid: a8a1b74a-c175-11e8-b2c9-000c2929855b
spec:
clusterIP: 10.105.254.230
ports:

  • port: 80
    protocol: TCP
    targetPort: 80
    selector:
    run: myapp
    sessionAffinity: None
    type: NodePort #change ClusterIP to NodePort
    status:
    loadBalancer: {}
    #After saving and exiting, check:
    [root@master ~]# kubectl get svc
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h
    myapp NodePort 10.105.254.230 <none> 80:30108/TCP 17h
    nginx ClusterIP 10.101.101.195 <none> 80/TCP 18h
        An extra port now appears; it is chosen at random. External clients can reach the service through any cluster node's IP on that open port, for example:
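        A quick check from outside the cluster might look like this (the node IP is taken from the environment table above and the port from the output above):
        curl http://192.168.200.201:30108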

3.4 Writing YAML files and operating through them
1. Print a Pod's information in YAML format:
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
client 1/1 Running 0 2h
myapp-848b5b879b-gd4ll 1/1 Running 0 2h
myapp-848b5b879b-jn5xt 1/1 Running 0 2h
myapp-848b5b879b-lhp74 1/1 Running 0 2h
myapp-ser-759b978dcf-d7fvg 1/1 Running 0 2h
myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 2h
nginx-deploy-5b595999-5wxpj 1/1 Running 1 19h
[root@master ~]# kubectl get pod myapp-848b5b879b-gd4ll -o yaml
apiVersion: v1
kind: Pod #the resource type
metadata: #metadata
creationTimestamp: 2018-09-27T03:24:31Z
generateName: myapp-848b5b879b-
labels:
pod-template-hash: "4046164356"
run: myapp
name: myapp-848b5b879b-gd4ll
namespace: default
ownerReferences:

  • apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: myapp-848b5b879b
    uid: 72d6f86d-c174-11e8-b2c9-000c2929855b
    resourceVersion: "39758"
    selfLink: /api/v1/namespaces/default/pods/myapp-848b5b879b-gd4ll
    uid: d96ce307-c204-11e8-b6f4-000c2929855b
    spec: #specification (desired state)
    containers:
  • image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    name: myapp
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    • mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-fnqdb
      readOnly: true
      dnsPolicy: ClusterFirst
      nodeName: minion-2
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      tolerations:
  • effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  • effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
    volumes:
  • name: default-token-fnqdb
    secret:
    defaultMode: 420
    secretName: default-token-fnqdb
    status: #current status
    conditions:
  • lastProbeTime: null
    lastTransitionTime: 2018-09-27T03:24:31Z
    status: "True"
    type: Initialized
  • lastProbeTime: null
    lastTransitionTime: 2018-09-27T03:24:33Z
    status: "True"
    type: Ready
  • lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: ContainersReady
  • lastProbeTime: null
    lastTransitionTime: 2018-09-27T03:24:31Z
    status: "True"
    type: PodScheduled
    containerStatuses:
  • containerID: docker://dcb9cbf45d178e4f3515a68d8a0c90393c517655e46adf5e5c27c6ef9a057952
    image: ikubernetes/myapp:v1
    imageID: docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
    lastState: {}
    name: myapp
    ready: true
    restartCount: 0
    state:
    running:
    startedAt: 2018-09-27T03:24:32Z
    hostIP: 192.168.200.202
    phase: Running
    podIP: 10.244.2.16
    qosClass: BestEffort
    startTime: 2018-09-27T03:24:31Z
    2. Most resource definitions need the following five fields:
    (1) apiVersion: written as group/version
    [root@master ~]# kubectl api-versions #list the groups and versions that can be used
    (2) kind: the resource type
    (3) metadata: metadata
    name: must be unique
    namespace: the namespace; within one namespace the name must be unique
    labels: labels

(4) spec: the desired state defined by the user
(5) status: the current state; this field is maintained by Kubernetes itself and does not need to be edited
3. The following command shows how each field is defined and what it means:
[root@master ~]# kubectl explain pods.apiVersion #get help; kubectl explain is the command
KIND: Pod
VERSION: v1

FIELD: apiVersion <string>

DESCRIPTION:
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
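kubectl explain also works on nested fields, for example:
[root@master ~]# kubectl explain pods.spec.containers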
4. Write a YAML file:
[root@master manifors]# vim myapp.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-daemo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
      [root@master manifors]# kubectl create -f myapp.yaml
      [root@master manifors]# kubectl get pods -w
      NAME READY STATUS RESTARTS AGE
      client 0/1 Error 0 3h
      myapp-848b5b879b-gd4ll 1/1 Running 0 3h
      myapp-848b5b879b-jn5xt 1/1 Running 0 3h
      myapp-848b5b879b-lhp74 1/1 Running 0 3h
      myapp-ser-759b978dcf-d7fvg 1/1 Running 0 3h
      myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 3h
      nginx-deploy-5b595999-5wxpj 1/1 Running 1 20h
      pod-daemo 2/2 Running 0 1m
      3.6 Using labels, nodeName, and adding annotations
      1. Filter Pods that have a given label key:
      [root@master manifors]# kubectl get pods -l app
      NAME READY STATUS RESTARTS AGE
      pod-daemo 2/2 Running 1 1h
      #-l filters on the given label key
      [root@master manifors]# kubectl get pods -l app --show-labels
      NAME READY STATUS RESTARTS AGE LABELS
      pod-daemo 2/2 Running 1 1h app=myapp,tier=frontend
      #--show-labels: show the full label information
      [root@master manifors]# kubectl get pods -L app
      NAME READY STATUS RESTARTS AGE APP
      client 0/1 Error 0 5h
      myapp-848b5b879b-gd4ll 1/1 Running 0 4h
      myapp-848b5b879b-jn5xt 1/1 Running 0 4h
      yapp-848b5b879b-lhp74 1/1 Running 0 4h
      myapp-ser-759b978dcf-d7fvg 1/1 Running 0 5h
      myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 5h
      nginx-deploy-5b595999-5wxpj 1/1 Running 1 22h
      pod-daemo 2/2 Running 1 1h myapp
      #-L shows the value of the given label key as an extra column
      2. Add a label to a Pod:
      [root@master manifors]# kubectl label pod pod-daemo release=canary

      kubectl label: the command keyword

      #pod pod-daemo: the resource type and Pod name
      #release=canary: the label key and value (key=value)
      3. Check:
      [root@master manifors]# kubectl get pods -l app --show-labels
      NAME READY STATUS RESTARTS AGE LABELS
      pod-daemo 2/2 Running 1 1h app=myapp,release=canary,tier=frontend
      4. Modify a label:
      [root@master manifors]# kubectl label pod pod-daemo release=stable --overwrite
      [root@master manifors]# kubectl get pods -l app --show-labels
      NAME READY STATUS RESTARTS AGE LABELS
      pod-daemo 2/2 Running 1 1h app=myapp,release=stable,tier=frontend
      5. Label a Node so that a Pod is only scheduled onto nodes with that label:
      [root@master manifors]# kubectl label node minion-1 dsiktype=ssd
      node/minion-1 labeled
      [root@master manifors]# kubectl get node --show-labels
      NAME STATUS ROLES AGE VERSION LABELS
      master Ready master 1d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
      minion-1 Ready <none> 1d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,dsiktype=ssd,kubernetes.io/hostname=minion-1
      minion-2 Ready <none> 1d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=minion-2
      6. Modify the Pod file:
      [root@master manifors]# vim myapp.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: pod-daemo
        namespace: default
        labels:
          app: myapp
          tier: frontend
      spec:
        containers:
        - name: myapp
          image: ikubernetes/myapp:v1
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
        - name: busybox
          image: busybox:latest
          imagePullPolicy: IfNotPresent
          command:
          - "/bin/sh"
          - "-c"
          - "sleep 3600"
        nodeSelector:
          dsiktype: ssd
      [root@master manifors]# kubectl describe pods pod-daemo
      Name: pod-daemo
      Namespace: default
      Node: minion-1/192.168.200.201
      Start Time: Thu, 27 Sep 2018 14:25:30 +0800
      Labels: app=myapp
      release=stable
      7. Check:
      [root@master manifors]# kubectl describe pod pod-daemo
      Name: pod-daemo
      Namespace: default
      Node: minion-1/192.168.200.201
      Start Time: Thu, 27 Sep 2018 16:46:28 +0800
      Labels: app=myapp
      tier=frontend
      The Pod will always run on minion-1.
      Using nodeName binds the Pod to one specific node, whereas a label can match a whole set of nodes.
      8. Bind the Pod to run on minion-2:
      [root@master manifors]# vim myapp.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-daemo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  nodeName: minion-2
      [root@master manifors]# kubectl create -f myapp.yaml
      pod/pod-daemo created
      [root@master manifors]# kubectl describe pod pod-daemo
      Name: pod-daemo
      Namespace: default
      Node: minion-2/192.168.200.202
      Start Time: Thu, 27 Sep 2018 16:52:44 +0800

9. Adding resource annotations (Annotations):
[root@master manifors]# vim myapp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-daemo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    minion/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  nodeName: minion-2
      [root@master manifors]# kubectl create -f myapp.yaml
      pod/pod-daemo created
      [root@master manifors]# kubectl describe pod pod-daemo
      Name: pod-daemo
      Namespace: default
      Node: minion-2/192.168.200.202
      Start Time: Thu, 27 Sep 2018 17:01:04 +0800
      Labels: app=myapp
      tier=frontend
      Annotations: minion/created-by=cluster admin
      Status: Running
      IP: 10.244.2.18

3.7 Important behaviors in the Pod lifecycle:
1. A brief introduction to probes
Init containers
Container probes:
Liveness: checks whether the container is still alive
Readiness: checks whether the main container is ready to serve requests
Probe types:
(1) exec
(2) httpGet
(3) tcpSocket
2. An exec probe example:
[root@master manifors]# vim liveness-exec.yaml

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
  namespace: default
spec:
  containers:
  - name: liveness-exec-pod
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","touch /tmp/healthy;sleep 30; rm -f /tmp/healthy;sleep 3600"]
    livenessProbe:
      exec:
        command: ["test","-e","/tmp/healthy"] #run a command to check whether the file exists
      initialDelaySeconds: 1 #how long after the container starts before probing
      periodSeconds: 3 #how often to probe, in seconds
    3. An httpGet probe example:
      [root@master manifors]# vim liveness-exec.yaml

apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget
  namespace: default
spec:
  containers:
  - name: liveness-httpget-pod
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http #name of the exposed port
      containerPort: 80 #the exposed port
    livenessProbe:
      httpGet:
        port: http #the named port to probe
        path: /index.html #the page to probe
      initialDelaySeconds: 1 #how long after the container starts before probing
      periodSeconds: 3 #how often to probe, in seconds
      4. A readinessProbe example:
      [root@master manifors]# cat readliness-httpget.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: readliness-httpget
        namespace: default
      spec:
        containers:
        - name: readliness-httpget-pod
          image: ikubernetes/myapp:v1
          imagePullPolicy: IfNotPresent
          ports:
          - name: http
            containerPort: 80
          readinessProbe:
            httpGet:
              port: http
              path: /index.html
            initialDelaySeconds: 1
            periodSeconds: 3
      When the probed page fails, k8s stops sending traffic to the container until the page is healthy again; livenessProbe, by contrast, checks whether the service is alive and restarts the container when it is not.
      Creating Pod controllers
      4.1 Creating controllers
      Pod controllers:
      ReplicationController:
      ReplicaSet:
      Deployment:
      1. Write the YAML file for a ReplicaSet controller:
      [root@master manifors]# vim rs-demo.yaml
      apiVersion: apps/v1
      kind: ReplicaSet
      metadata:
        name: myapp
        namespace: default
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: myapp
        template:
          metadata:
            name: myapp-pod
            labels:
              app: myapp
          spec:
            containers:
            - name: myapp-container
              image: ikubernetes/myapp:v1
              ports:
              - name: http
                containerPort: 80
        2. Write the YAML file for a Deployment:
        [root@master manifors]# cat deploy-daemo.yaml
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: myapp-deploy
          namespace: default
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: myapp
              release: canary
          template:
            metadata:
              labels:
                app: myapp
                release: canary
            spec:
              containers:
              - name: myapp
                image: ikubernetes/myapp:v1
                ports:
                - name: http
                  containerPort: 80
        [root@master manifors]# kubectl apply -f deploy-daemo.yaml
        [root@master manifors]# kubectl get pods
        NAME READY STATUS RESTARTS AGE
        client 0/1 Error 0 2d
        myapp-848b5b879b-gd4ll 1/1 Running 2 2d
        myapp-848b5b879b-jn5xt 1/1 Running 2 2d
        myapp-848b5b879b-lhp74 1/1 Running 2 2d
        myapp-deploy-69b47bc96d-bc6bw 1/1 Running 0 11m
        myapp-deploy-69b47bc96d-tj55r 1/1 Running 0 11m
        myapp-j6n4g 1/1 Running 1 23h
        myapp-ser-759b978dcf-d7fvg 1/1 Running 2 2d
        myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 2d
        nginx-deploy-5b595999-5wxpj 1/1 Running 3 2d
        pod-daemo 2/2 Running 18 1d
        readliness-httpget 1/1 Running 1 1d
        3. To scale up, you can edit the configuration file:
        [root@master manifors]# vim deploy-daemo.yaml
        Change the following:
        spec:
          replicas: 3 #changed from 2 to 3
        [root@master manifors]# kubectl apply -f deploy-daemo.yaml
        #apply can be run repeatedly, whereas create can only be run once; apply re-applies the configuration file
        [root@master manifors]# kubectl get pods
        NAME READY STATUS RESTARTS AGE
        client 0/1 Error 0 2d
        myapp-848b5b879b-gd4ll 1/1 Running 2 2d
        myapp-848b5b879b-jn5xt 1/1 Running 2 2d
        myapp-848b5b879b-lhp74 1/1 Running 2 2d
        myapp-deploy-69b47bc96d-bc6bw 1/1 Running 0 14m
        myapp-deploy-69b47bc96d-ppgvf 1/1 Running 0 1m
        myapp-deploy-69b47bc96d-tj55r 1/1 Running 0 14m
        myapp-j6n4g 1/1 Running 1 23h
        myapp-ser-759b978dcf-d7fvg 1/1 Running 2 2d
        myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 2d
        nginx-deploy-5b595999-5wxpj 1/1 Running 3 2d
        pod-daemo 2/2 Running 18 1d
        readliness-httpget 1/1 Running 1 1d
        4. View detailed information:
        [root@master manifors]# kubectl describe deploy myapp-deploy
        Name: myapp-deploy
        Namespace: default
        CreationTimestamp: Sat, 29 Sep 2018 15:53:52 +0800
        Labels: <none>
        Annotations: deployment.kubernetes.io/revision=1
        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"myapp-deploy","namespace":"default"},"spec":{"replicas":3,"selector":{...
        Selector: app=myapp,release=canary
        Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
        StrategyType: RollingUpdate #the default update strategy is rolling update
        MinReadySeconds: 0
        RollingUpdateStrategy: 25% max unavailable, 25% max surge #at most 25% of the Pods may be unavailable, and at most 25% extra Pods may be created, during an update
        Pod Template:
        Labels: app=myapp
        release=canary
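        If you prefer to set the strategy declaratively in the manifest instead of patching it later (as is done further below), it can be written in the Deployment spec like this (a sketch; the values are illustrative):
        spec:
          strategy:
            type: RollingUpdate
            rollingUpdate:
              maxSurge: 1          #at most one extra Pod above the desired count
              maxUnavailable: 0    #never drop below the desired count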
        4.2 Rolling updates of Pods

Testing a rolling update:

  1. Watch the Pods to see how they change:
    [root@master manifors]# kubectl get pods -l app=myapp -w
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-69b47bc96d-bc6bw 1/1 Running 0 21m
    myapp-deploy-69b47bc96d-ppgvf 1/1 Running 0 8m
    myapp-deploy-69b47bc96d-tj55r 1/1 Running 0 21m
  2. Update the version by editing the configuration file (there are better ways, of course):
    [root@master manifors]# vim deploy-daemo.yaml
    Change the following:
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2 #just change v1 to v2
  3. Apply the change:
    [root@master manifors]# kubectl apply -f deploy-daemo.yaml
  4. Watch whether the Pods change:
    [root@master manifors]# kubectl get pods -l app=myapp -w
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-69b47bc96d-bc6bw 1/1 Running 0 21m
    myapp-deploy-69b47bc96d-ppgvf 1/1 Running 0 8m
    myapp-deploy-69b47bc96d-tj55r 1/1 Running 0 21m
    myapp-deploy-67f6f6b4dc-tvxvw 0/1 Pending 0 0s
    myapp-deploy-67f6f6b4dc-tvxvw 0/1 Pending 0 0s
    myapp-deploy-67f6f6b4dc-tvxvw 0/1 ContainerCreating 0 0s
    myapp-deploy-67f6f6b4dc-tvxvw 1/1 Running 0 1s
    myapp-deploy-69b47bc96d-ppgvf 1/1 Terminating 0 13m
    myapp-deploy-67f6f6b4dc-gsndw 0/1 Pending 0 0s
    myapp-deploy-67f6f6b4dc-gsndw 0/1 Pending 0 0s
    myapp-deploy-67f6f6b4dc-gsndw 0/1 ContainerCreating 0 1s
    myapp-deploy-69b47bc96d-ppgvf 0/1 Terminating 0 13m
    myapp-deploy-67f6f6b4dc-gsndw 1/1 Running 0 2s
    myapp-deploy-69b47bc96d-bc6bw 1/1 Terminating 0 27m
    myapp-deploy-67f6f6b4dc-z7hlj 0/1 Pending 0 0s
    myapp-deploy-67f6f6b4dc-z7hlj 0/1 Pending 0 0s
    myapp-deploy-67f6f6b4dc-z7hlj 0/1 ContainerCreating 0 1s
    myapp-deploy-69b47bc96d-bc6bw 0/1 Terminating 0 27m
    ............
    A stream of Pod events appears.
    The sequence of states above is:
  5. Pending: waiting to be scheduled
  6. ContainerCreating: being created once scheduling is done
  7. Running: created and running
  8. Terminating: an old-version Pod being stopped
    That is the whole update process.
  9. Check the ReplicaSets:
    [root@master manifors]# kubectl get rs -o wide
    NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
    myapp-74c94dcb8c 0 0 0 2d myapp ikubernetes/myapp:v2 pod-template-hash=3075087647,run=myapp
    myapp-848b5b879b 3 3 3 2d myapp ikubernetes/myapp:v1 pod-template-hash=4046164356,run=myapp
    myapp-deploy-67f6f6b4dc 3 3 3 7m myapp ikubernetes/myapp:v2 app=myapp,pod-template-hash=2392926087,release=canary
    myapp-deploy-69b47bc96d 0 0 0 34m myapp ikubernetes/myapp:v1 app=myapp,pod-template-hash=2560367528,release=canary
    As the ReplicaSet list above shows, the v1 template has not been deleted, which makes rolling back easy.
  10. View the retained revision history:
    [root@master manifors]# kubectl rollout history deployment myapp-deploy
    deployments "myapp-deploy"
    REVISION CHANGE-CAUSE
    1 <none>
    2 <none>
  11. Scale up with a patch:
    [root@master manifors]# kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'
    [root@master manifors]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    client 0/1 Error 0 2d
    myapp-848b5b879b-gd4ll 1/1 Running 2 2d
    myapp-848b5b879b-jn5xt 1/1 Running 2 2d
    myapp-848b5b879b-lhp74 1/1 Running 2 2d
    myapp-deploy-67f6f6b4dc-5wbvm 1/1 Running 0 20s
    myapp-deploy-67f6f6b4dc-d2frs 1/1 Running 0 20s
    myapp-deploy-67f6f6b4dc-gsndw 1/1 Running 0 30m
    myapp-deploy-67f6f6b4dc-tvxvw 1/1 Running 0 30m
    myapp-deploy-67f6f6b4dc-z7hlj 1/1 Running 0 30m
    myapp-ser-759b978dcf-d7fvg 1/1 Running 2 2d
    myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 2d
    nginx-deploy-5b595999-5wxpj 1/1 Running 3 2d
    readliness-httpget 1/1 Running 1 1d

  12. Modify the update strategy with a patch:
    [root@master manifors]# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
    [root@master manifors]# kubectl describe deployment myapp-deploy
    Name: myapp-deploy
    Namespace: default
    CreationTimestamp: Sat, 29 Sep 2018 15:53:52 +0800
    Labels: app=myapp
    release=canary
    Annotations: deployment.kubernetes.io/revision=2
    kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"myapp-deploy","namespace":"default"},"spec":{"replicas":3,"selector":{...
    Selector: app=myapp,release=canary
    Replicas: 5 desired | 5 updated | 5 total | 5 available | 0 unavailable
    StrategyType: RollingUpdate
    MinReadySeconds: 0
    RollingUpdateStrategy: 0 max unavailable, 1 max surge

  13. Pausing an update (canary release)
    Update the deployment to version 3 and immediately pause the rollout:
    [root@master ~]# kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy
  14. Watch the status change:
    [root@master ~]# kubectl get pod -w
    NAME READY STATUS RESTARTS AGE
    client 0/1 Error 0 5d
    myapp-848b5b879b-4b2ft 1/1 Running 0 14m
    myapp-848b5b879b-nvqwt 1/1 Running 0 14m
    myapp-848b5b879b-vr9d6 1/1 Running 0 14m
    myapp-deploy-67f6f6b4dc-cfpnt 1/1 Running 0 7m
    myapp-deploy-67f6f6b4dc-gcvd8 1/1 Running 0 7m
    myapp-deploy-67f6f6b4dc-hs6cn 1/1 Running 0 13m
    myapp-deploy-67f6f6b4dc-lt6d2 1/1 Running 0 7m
    myapp-deploy-67f6f6b4dc-ptg2k 1/1 Running 0 7m
    myapp-ser-759b978dcf-d7fvg 1/1 Running 3 5d
    myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 5d
    nginx-deploy-5b595999-5wxpj 1/1 Running 4 6d
    readliness-httpget 1/1 Running 2 4d
    myapp-deploy-6bdcd6755d-jxs5s 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-jxs5s 0/1 Pending 0 1s
    myapp-deploy-6bdcd6755d-jxs5s 0/1 ContainerCreating 0 1s
    myapp-deploy-6bdcd6755d-jxs5s 1/1 Running 0 3s
    A new Pod has been created. Check its version and the versions of the other Pods:
    [root@master ~]# kubectl describe pod myapp-deploy-6bdcd6755d-jxs5s
    Image: ikubernetes/myapp:v3
    The other Pods:
    [root@master ~]# kubectl describe pod myapp-deploy-67f6f6b4dc-cfpnt
    Image: ikubernetes/myapp:v2
    So one new Pod was created and the rollout was paused; this new Pod acts as the canary. If it behaves well, resume the rollout and all the Pods will be updated, as follows.
    Update all the Pods:
    [root@master ~]# kubectl rollout resume deployment myapp-deploy
    Watch the process:
    [root@master ~]# kubectl get pod -w
    myapp-deploy-6bdcd6755d-jxs5s 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-jxs5s 0/1 Pending 0 1s
    myapp-deploy-6bdcd6755d-jxs5s 0/1 ContainerCreating 0 1s
    myapp-deploy-6bdcd6755d-jxs5s 1/1 Running 0 3s
    myapp-server-6ff967596f-nxjlb 0/1 ErrImagePull 0 5d
    myapp-server-6ff967596f-nxjlb 0/1 ImagePullBackOff 0 5d
    myapp-deploy-67f6f6b4dc-lt6d2 1/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-jtjz9 0/1 Pending 0 1s
    myapp-deploy-6bdcd6755d-jtjz9 0/1 Pending 0 1s
    myapp-deploy-6bdcd6755d-jtjz9 0/1 ContainerCreating 0 1s
    myapp-deploy-67f6f6b4dc-lt6d2 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-lt6d2 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-lt6d2 0/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-jtjz9 1/1 Running 0 3s
    myapp-deploy-67f6f6b4dc-cfpnt 1/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-nnprv 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-nnprv 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-nnprv 0/1 ContainerCreating 0 0s
    myapp-deploy-67f6f6b4dc-cfpnt 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-cfpnt 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-cfpnt 0/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-nnprv 1/1 Running 0 3s
    myapp-deploy-67f6f6b4dc-ptg2k 1/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-4f82b 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-4f82b 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-4f82b 0/1 ContainerCreating 0 0s
    myapp-deploy-67f6f6b4dc-ptg2k 0/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-4f82b 1/1 Running 0 2s
    myapp-deploy-67f6f6b4dc-gcvd8 1/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-qpq87 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-qpq87 0/1 Pending 0 0s
    myapp-deploy-6bdcd6755d-qpq87 0/1 ContainerCreating 0 1s
    myapp-deploy-67f6f6b4dc-gcvd8 0/1 Terminating 0 16m
    myapp-deploy-6bdcd6755d-qpq87 1/1 Running 0 3s
    myapp-deploy-67f6f6b4dc-hs6cn 1/1 Terminating 0 22m
    myapp-deploy-67f6f6b4dc-hs6cn 0/1 Terminating 0 22m
    myapp-deploy-67f6f6b4dc-hs6cn 0/1 Terminating 0 22m
    myapp-deploy-67f6f6b4dc-hs6cn 0/1 Terminating 0 22m
    myapp-deploy-67f6f6b4dc-ptg2k 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-ptg2k 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-gcvd8 0/1 Terminating 0 16m
    myapp-deploy-67f6f6b4dc-gcvd8 0/1 Terminating 0 16m
    [root@master ~]# kubectl rollout status deployment myapp-deploy
    deployment "myapp-deploy" successfully rolled out
    The command above can also be used to watch the progress.
    [root@master ~]# kubectl get rs -o wide #three revisions are visible; v3 is the one in use
    myapp-deploy-67f6f6b4dc 0 0 0 3d myapp ikubernetes/myapp:v2 app=myapp,pod-template-hash=2392926087,release=canary
    myapp-deploy-69b47bc96d 0 0 0 3d myapp ikubernetes/myapp:v1 app=myapp,pod-template-hash=2560367528,release=canary
    myapp-deploy-6bdcd6755d 5 5 5 3d myapp ikubernetes/myapp:v3 app=myapp,pod-template-hash=2687823118,release=canary
  15. Roll back to the first revision:
    [root@master ~]# kubectl rollout history deployment myapp-deploy
    deployments "myapp-deploy"
    REVISION CHANGE-CAUSE
    1 <none>
    5 <none>
    6 <none>
    7 <none>
    These are all the stored revisions.
    [root@master ~]# kubectl rollout undo deployment myapp-deploy --to-revision=1
    Check:
    [root@master ~]# kubectl get rs -o wide
    myapp-deploy-67f6f6b4dc 0 0 0 3d myapp ikubernetes/myapp:v2 app=myapp,pod-template-hash=2392926087,release=canary
    myapp-deploy-69b47bc96d 5 5 5 3d myapp ikubernetes/myapp:v1 app=myapp,pod-template-hash=2560367528,release=canary
    myapp-deploy-6bdcd6755d 0 0 0 3d myapp ikubernetes/myapp:v3 app=myapp,pod-template-hash=2687823118,release=canary
    The currently active version is v1.
    The rollback proceeds in exactly the same way as an update.
    [root@master ~]# kubectl rollout history deployment myapp-deploy
    deployments "myapp-deploy"
    REVISION CHANGE-CAUSE
    5 <none>
    6 <none>
    7 <none>
    8 <none>
    Note that revision 1 has now become revision 8.
    4.3 Creating a DaemonSet controller
    1. Create a DaemonSet controller:
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: filebeat-ds
      namespace: default
    spec:
      selector:
        matchLabels:
          app: filebeat
          release: stable
      template:
        metadata:
          labels:
            app: filebeat
            release: stable
        spec:
          containers:
          - name: filebeat
            image: ikubernetes/filebeat:5.6.5-alpine
            env:
            - name: REDIS_HOST
              value: redis.default.svc.cluster.local
            - name: REDIS_LOG_LEVEL
              value: info
  16. Create an nginx Pod that uses the node's network:
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      namespace: default
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
      hostNetwork: true #use the node's network namespace
    Kubernetes Service resources:
  17. Define the Redis Service:
    [root@master manifors]# vim redis-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  clusterIP: 10.97.97.97
  type: ClusterIP
  selector:
    app: redis
    role: ds
  ports:
  - port: 6379
    targetPort: 6379
    4.4 Deploy an Ingress as a proxy:
    1. Download the following files locally (adapt to your situation):
      The files live at: https://github.com/kubernetes/ingress-nginx/tree/master/deploy
      [root@master ingress-nginx]# for n in namespace.yaml configmap.yaml tcp-services-configmap.yaml rbac.yaml udp-services-configmap.yaml with-rbac.yaml ;do wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/$n;done
      [root@master ingress-nginx]# ls
      configmap.yaml rbac.yaml udp-services-configmap.yaml
      namespace.yaml tcp-services-configmap.yaml with-rbac.yaml
    2. Create the namespace:
      [root@master ingress-nginx]# kubectl apply -f namespace.yaml
      Note: it can also be created manually:
      [root@master ingress-nginx]# kubectl create namespace <namespace-name>
    3. Create everything from the YAML files:
      [root@master ingress-nginx]# kubectl apply -f ./
      Note: creating ingress-nginx this way may run into problems; this command can be used instead: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
      Or pull the image locally with the following command:
      [root@minion-2 ~]# docker pull siriuszg/nginx-ingress-controller:0.19.0
      Then retag the image to the name used in the configuration file:
      [root@minion-2 ~]# docker tag siriuszg/nginx-ingress-controller:0.19.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
      Do the same for the defaultbackend image:
      [root@minion-2 ~]# docker pull chenliujin/defaultbackend:1.4
      [root@minion-2 ~]# docker tag chenliujin/defaultbackend:1.4 k8s.gcr.io/defaultbackend-amd64:1.4
  1. Check the Pods in the ingress-nginx namespace:
    [root@master ingress-nginx]# kubectl get pods -n ingress-nginx
    NAME READY STATUS RESTARTS AGE
    nginx-ingress-controller-6bd7c597cb-pv7wn 0/1 ContainerCreating 0 3m
  2. Create a headless service (i.e. a Service without a cluster IP):
    [root@master ~]# mkdir ingress
    [root@master ~]# cd ingress
    [root@master ingress]# vim deploy-demon.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
        [root@master ingress]# kubectl apply -f deploy-demon.yaml
        service/default created
        deployment.apps/myapp-deploy created
        1. Check the Pods and the Service:
          [root@master ingress]# kubectl get svc
          NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
          kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d
          myapp ClusterIP 10.111.83.183 <none> 80/TCP 5s
          redis ClusterIP 10.97.97.97 <none> 6379/TCP 22h
          [root@master ingress]# kubectl get pods
          NAME READY STATUS RESTARTS AGE
          myapp-deploy-69b47bc96d-4nr8w 1/1 Running 0 20s
          myapp-deploy-69b47bc96d-l68hk 1/1 Running 0 20s
          myapp-deploy-69b47bc96d-p44gx 1/1 Running 0 20s
          redis-5d5494cb7-6hrs5 1/1 Running 1 22h
        2. Create the Service for ingress-nginx:
          [root@master ingress-nginx]# vim service-nodeport.yaml
          apiVersion: v1
          kind: Service
          metadata:
            name: ingress-nginx
            namespace: ingress-nginx
          spec:
            type: NodePort
            ports:
            - name: http
              port: 80
              targetPort: 80
              protocol: TCP
              nodePort: 30080
            - name: https
              port: 443
              targetPort: 443
              protocol: TCP
              nodePort: 30443
            selector:
              app.kubernetes.io/name: ingress-nginx
              app.kubernetes.io/part-of: ingress-nginx
        [root@master ingress-nginx]# kubectl apply -f service-nodeport.yaml
        [root@master ingress-nginx]# kubectl get svc -n ingress-nginx
        NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
        default-http-backend ClusterIP 10.110.120.191 <none> 80/TCP 3h
        ingress-nginx NodePort 10.109.113.104 <none> 80:30080/TCP,443:30443/TCP 45s
        Access test:
  1. Create the YAML file that publishes the Service through an Ingress:
    [root@master manifors]# vim ingress-myapp.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-myapp
      namespace: default
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: myapp.zhouhao.com
        http:
          paths:
          - path:
            backend:
              serviceName: myapp
              servicePort: 80

[root@master manifors]# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
ingress-myapp myapp.zhouhao.com 80 1m
[root@master manifors]# kubectl describe ingress
Name: ingress-myapp
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends


myapp.zhouhao.com
myapp:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-myapp","namespace":"default"},"spec":{"rules":[{"host":"myapp.zhouhao.com","http":{"paths":[{"backend":{"serviceName":"myapp","servicePort":80},"path":null}]}}]}}

kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message


Normal CREATE 2m nginx-ingress-controller Ingress default/ingress-myapp

  1. Check that the ingress-nginx configuration has been populated automatically, and verify by resolving the domain name on a host and accessing it.

    Kubernetes storage volumes:
    5.1 Local persistent storage

  2. Mount a local directory into a Pod:
    [root@master volumes]# vim pod-vol-deploy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-deploy
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    zhouhao.com/created-byz: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts: #mount a volume
    - name: html #the name of the volume to mount
      mountPath: /usr/share/nginx/html/ #the mount path
  - name: busybox
    image: busybox:latest
    volumeMounts:
    - name: html
      mountPath: /data/
    command: ['/bin/sh','-c']
    args:
    - "while true;do echo $$(date) >> /data/index.html;done"
  volumes:
  - name: html #define the volume name
    emptyDir: {} #size limit; {} means unlimited
    [root@master volumes]# kubectl apply -f pod-vol-deploy.yaml
    [root@master volumes]# curl 10.244.2.11
    Tue Oct 9 09:06:06 UTC 2018
    Tue Oct 9 09:06:06 UTC 2018
    Tue Oct 9 09:06:06 UTC 2018
    Tue Oct 9 09:06:06 UTC 2018
    Tue Oct 9 09:06:06 UTC 2018
    Tue Oct 9 09:06:06 UTC 2018
    1. Shared storage based on a host path:
      [root@master volumes]# vim pod-hostpath.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html #define the volume name
    hostPath:
      path: /data/pod/volume1 #the directory shared from the host
      type: DirectoryOrCreate #the hostPath type to use
    1. For the differences between hostPath types, see the Kubernetes documentation (common values are DirectoryOrCreate, Directory, FileOrCreate, File, Socket, CharDevice, and BlockDevice).

5.2 Using NFS for persistent storage

  1. Install NFS on all nodes:
    [root@master volumes]# yum install -y nfs-utils
    Note: here the master also acts as the NFS server
  2. Configure the export:
    [root@master volumes]# vim /etc/exports
    /data/volumes 192.168.200.0/24(rw,no_root_squash)
  3. Test from a node that the export can be mounted:
    [root@minion-2 ~]# mount -t nfs 192.168.200.200:/data/volumes /mnt
    [root@minion-2 ~]# df -h
    192.168.200.200:/data/volumes 17G 3.5G 14G 21% /mnt
  4. Write the YAML file:
    [root@master volumes]# vim pod-nfs.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: master #make sure the hostname can be resolved
    1. Create it:
      [root@master volumes]# kubectl apply -f pod-nfs.yaml
    2. Write an index.html test page:
      [root@master volumes]# vim /data/volumes/index.html
      <h1> NFS.zhouhao.com <h1\>
    3. Access the Pod:
      [root@master volumes]# kubectl get pods -o wide
      [root@master volumes]# curl 10.244.2.14
      <h1> NFS.zhouhao.com <h1\>
      5.3 Creating a PVC and PVs
      1. Create the storage directories:
      [root@master volumes]# mkdir v{1,2,3,4,5}
      [root@master volumes]# ls
      pod-deploy.yaml pod-nfs.yaml v1 v3 v5
      pod-hostpath.yaml pod-vol-deploy.yaml v2 v4
      2. Export them via NFS:
      [root@master volumes]# vim /etc/exports
      /data/volumes/v1 192.168.200.0/24(rw,no_root_squash)
      /data/volumes/v2 192.168.200.0/24(rw,no_root_squash)
      /data/volumes/v3 192.168.200.0/24(rw,no_root_squash)
      /data/volumes/v4 192.168.200.0/24(rw,no_root_squash)
      /data/volumes/v5 192.168.200.0/24(rw,no_root_squash)
      [root@master volumes]# exportfs -arv
      exporting 192.168.200.0/24:/data/volumes/v5
      exporting 192.168.200.0/24:/data/volumes/v4
      exporting 192.168.200.0/24:/data/volumes/v3
      exporting 192.168.200.0/24:/data/volumes/v2
      exporting 192.168.200.0/24:/data/volumes/v1
      [root@master volumes]# showmount -e
      Export list for master:
      /data/volumes/v5 192.168.200.0/24
      /data/volumes/v4 192.168.200.0/24
      /data/volumes/v3 192.168.200.0/24
      /data/volumes/v2 192.168.200.0/24
      /data/volumes/v1 192.168.200.0/24
      3. Define the PVs:
      PV access modes:
      #single-node read-write
      • ReadWriteOnce – the volume can be mounted as read-write by a single node
      #multi-node read-only
      • ReadOnlyMany – the volume can be mounted read-only by many nodes
      #multi-node read-write
      • ReadWriteMany – the volume can be mounted as read-write by many nodes
      Abbreviations:
      • RWO - ReadWriteOnce
      • ROX - ReadOnlyMany
      • RWX - ReadWriteMany
      Note: different volume types support different access modes.

4. Write the YAML file:
[root@master volumes]# vim pv-daemon.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: master
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 10Gi
[root@master volumes]# kubectl apply -f pv-daemon.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 2Gi RWO,ROX,RWX Retain Available 19s
pv002 5Gi RWO,ROX,RWX Retain Available 19s
pv003 20Gi RWO,ROX,RWX Retain Available 19s
pv004 10Gi RWO,ROX,RWX Retain Available 19s
pv005 10Gi RWO,RWX Retain Available 19s
5.定義pvc:
[root@master volumes]# vim pvc-daemon.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  resources:
    requests:
      storage: 6Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc

[root@master volumes]# kubectl apply -f pvc-daemon.yaml
persistentvolumeclaim/mypvc unchanged
pod/pod-vol-pvc created
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 2Gi RWO,ROX,RWX Retain Available 21m
pv002 5Gi RWO,ROX,RWX Retain Available 21m
pv003 20Gi RWO,ROX,RWX Retain Available 21m
pv004 10Gi RWO,ROX,RWX Retain Bound default/mypvc 21m
pv005 10Gi RWO,RWX Retain Available 21m
[root@master volumes]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc Bound pv004 10Gi RWO,ROX,RWX 6m
5.4創建configmap:
配置容器化應用的方式:
1.自定義命令行參數:
Args:
2.把配置文件直接放進鏡像;
3.環境變量
(1)cloud Native的應用程序一般可直接通過環境變量加載配置;
(2)通過entrypoint腳本來預處理變量爲配置文件中配置信息;
4.存儲卷
方式一:
1.命令行創建configmap:
[root@master volumes]# kubectl create configmap nginx --from-literal=nginx_port=8080 --from-literal=server_name=myapp.zhouhao.com
configmap/nginx created
[root@master volumes]# kubectl get cm
NAME DATA AGE
nginx 2 8s
[root@master volumes]# kubectl describe cm nginx
Name: nginx
Namespace: default
Labels: <none>
Annotations: <none>

Data

nginx_port:

8080
server_name:

myapp.zhouhao.com
Events: <none>
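The same ConfigMap can also be written declaratively instead of with --from-literal. A minimal sketch of the equivalent manifest (the file name is arbitrary):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
  namespace: default
data:
  nginx_port: "8080"
  server_name: myapp.zhouhao.com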
方式二:
1.創建出一個配置文件:
[root@master configmap]# vim www.conf

server {
server_name myapp.zhouhao.com;
listen 80;
root /data/web/html;
}
2.創建configmap:
[root@master configmap]# kubectl create configmap nginx-www --from-file=./www.conf
configmap/nginx-www created
[root@master configmap]# kubectl get cm
NAME DATA AGE
nginx 2 5m
nginx-www 1 8s
[root@master configmap]# kubectl describe cm nginx-www
Name: nginx-www
Namespace: default
Labels: <none>
Annotations: <none>

Data

www.conf:

server {
server_name myapp.zhouhao.com;
listen 80;
root /data/web/html;
}
方式二是基於文件的
1.將上nginx裏定義的變量應用到下面的pod中:
[root@master configmap]# vim pod-deploy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-1
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    zhouhao.com/created-byz: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    env:
    - name: NGINX_SERVER_PORT
      valueFrom:
        configMapKeyRef:
          name: nginx
          key: nginx_port
    - name: NGINX_SERVER_NAME
      valueFrom:
        configMapKeyRef:
          name: nginx
          key: server_name
      [root@master configmap]# kubectl apply -f pod-deploy.yaml
      pod/pod-cm-1 created
      [root@master configmap]# kubectl get pods
      NAME READY STATUS RESTARTS AGE
      myapp-deploy-67f6f6b4dc-4ngzc 1/1 Running 2 2d
      myapp-deploy-67f6f6b4dc-p4m5b 1/1 Running 2 2d
      myapp-deploy-67f6f6b4dc-p5scb 1/1 Running 2 2d
      pod-cm-1 1/1 Running 0 12s
      pod-vol-hostpath 1/1 Running 1 22h
      pod-vol-nfs 1/1 Running 1 22h
      pod-vol-pvc 1/1 Running 0 5h
      tomcat-deploy-7bc5d6bc58-9vw5t 1/1 Running 2 1d
      tomcat-deploy-7bc5d6bc58-tflzt 1/1 Running 2 1d
      tomcat-deploy-7bc5d6bc58-zfnm2 1/1 Running 2 1d
      [root@master configmap]# kubectl exec -it pod-cm-1 -- /bin/sh
      / # printenv
      MYAPP_SVC_PORT_80_TCP_ADDR=10.98.57.156
      KUBERNETES_PORT=tcp://10.96.0.1:443
      KUBERNETES_SERVICE_PORT=443
      MYAPP_SERVICE_PORT_HTTP=80
      TOMCAT_PORT_8080_TCP=tcp://10.103.236.4:8080
      MYAPP_SVC_PORT_80_TCP_PORT=80
      HOSTNAME=pod-cm-1
      SHLVL=1
      MYAPP_SVC_PORT_80_TCP_PROTO=tcp
      HOME=/root
      MYAPP_SERVICE_HOST=10.110.111.0
      NGINX_SERVER_PORT=8080
      NGINX_SERVER_NAME=myapp.zhouhao.com
      。。。。。。。。
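Note that environment variables populated from a ConfigMap are resolved only when the container starts: editing the ConfigMap afterwards does not change the variables inside a running pod (unlike the volume mount demonstrated next). A quick way to check, as a sketch:

kubectl edit cm nginx                                   # change nginx_port
kubectl exec pod-cm-1 -- printenv NGINX_SERVER_PORT    # still shows the old value until the pod is recreated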
      2.將上述nginx的cm在pod中生成文件:
      [root@master configmap]# vim pod-cm-2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    zhouhao.com/created-byz: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/config.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx
    [root@master configmap]# kubectl apply -f pod-cm-2.yaml
    pod/pod-cm-2 created
    [root@master configmap]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-67f6f6b4dc-4ngzc 1/1 Running 2 2d
    myapp-deploy-67f6f6b4dc-p4m5b 1/1 Running 2 2d
    myapp-deploy-67f6f6b4dc-p5scb 1/1 Running 2 2d
    pod-cm-2 1/1 Running 0 6s
    pod-vol-hostpath 1/1 Running 1 23h
    pod-vol-nfs 1/1 Running 1 22h
    pod-vol-pvc 1/1 Running 0 6h
    tomcat-deploy-7bc5d6bc58-9vw5t 1/1 Running 2 1d
    tomcat-deploy-7bc5d6bc58-tflzt 1/1 Running 2 1d
    tomcat-deploy-7bc5d6bc58-zfnm2 1/1 Running 2 1d
    [root@master configmap]# kubectl exec -it pod-cm-2 -- /bin/sh
    / # cd /etc/nginx/config.d/
    /etc/nginx/config.d # ls
    nginx_port server_name
    /etc/nginx/config.d # cat nginx_port
    8080/etc/nginx/config.d #
    /etc/nginx/config.d # cat server_name
    myapp.zhouhao.com/etc/nginx/config.d #
    3.修改下nginx的cm看pod內容是否改變
    [root@master ~]# kubectl edit cm nginx

#Please edit the object below. Lines beginning with a '#' will be ignored,
#and an empty file will abort the edit. If an error occurs while saving this file will be
#reopened with the relevant failures.
#
apiVersion: v1
data:
nginx_port: "8080" #將8080修改成80
server_name: myapp.zhouhao.com
kind: ConfigMap
metadata:
creationTimestamp: 2018-10-10T07:29:15Z
name: nginx
namespace: default
resourceVersion: "125157"
selfLink: /api/v1/namespaces/default/configmaps/nginx
uid: 30c4a9d7-cc5e-11e8-b4a9-000c2929855b
4.查看:
/etc/nginx/config.d # cat nginx_port
8080/etc/nginx/config.d #
The value looks unchanged; leave the directory and enter it again before checking:
8080/etc/nginx/config.d # cd ../
/etc/nginx # cd config.d/
/etc/nginx/config.d # cat nginx_port
80/etc/nginx/config.d #
Now the value has changed. The update is not instantaneous: the kubelet syncs ConfigMap volumes periodically, so there is a short delay before the new value shows up in the pod.
5.下面以上面nginx-www的cm爲例,創建一個pod用裏面內容做配置:
[root@master configmap]# vim pod-cm-3.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-3
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    zhouhao.com/created-byz: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/conf.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx-www
    [root@master configmap]# kubectl apply -f pod-cm-3.yaml
    pod/pod-cm-3 created
    [root@master configmap]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-67f6f6b4dc-4ngzc 1/1 Running 2 2d
    myapp-deploy-67f6f6b4dc-p4m5b 1/1 Running 2 2d
    myapp-deploy-67f6f6b4dc-p5scb 1/1 Running 2 2d
    pod-cm-3 1/1 Running 0 9s
    pod-vol-hostpath 1/1 Running 1 1d
    pod-vol-nfs 1/1 Running 1 23h
    pod-vol-pvc 1/1 Running 0 6h
    tomcat-deploy-7bc5d6bc58-9vw5t 1/1 Running 2 1d
    tomcat-deploy-7bc5d6bc58-tflzt 1/1 Running 2 1d
    tomcat-deploy-7bc5d6bc58-zfnm2 1/1 Running 2 1d
    [root@master configmap]# kubectl exec -it pod-cm-3 -- /bin/sh
    / # cd /etc/nginx/conf.d/
    /etc/nginx/conf.d # ls
    www.conf
    /etc/nginx/conf.d # nginx -T
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful
    #configuration file /etc/nginx/nginx.conf:

user nginx;
worker_processes 1;
。。。。。。。。。。。。。
#configuration file /etc/nginx/conf.d/www.conf:
server {
server_name myapp.zhouhao.com;
listen 80;
root /data/web/html;
}
6.根據配置裏面的信息,創建站點目錄和測試然後訪問:
/etc/nginx/conf.d # mkdir /data/web/html -p
/etc/nginx/conf.d # vi /data/web/html/index.html
<h1> myapp.zhouhao.com<h1\>
在任意一個節點上做域名解析然後訪問測試
[root@minion-1 ~]# vim /etc/hosts
10.244.1.18 myapp.zhouhao.com
[root@minion-1 ~]# curl myapp.zhouhao.com
<h1> myapp.zhouhao.com<h1\>
7.修改下nginx-www測試
[root@master ~]# kubectl edit cm nginx-www

#Please edit the object below. Lines beginning with a '#' will be ignored,
#and an empty file will abort the edit. If an error occurs while saving this file will be
#reopened with the relevant failures.
#
apiVersion: v1
data:
www.conf: |
server {
server_name myapp.zhouhao.com;
listen 80; #將80改成8080
root /data/web/html;
}
kind: ConfigMap
metadata:
creationTimestamp: 2018-10-10T07:34:56Z
name: nginx-www
namespace: default
resourceVersion: "125678"
selfLink: /api/v1/namespaces/default/configmaps/nginx-www
uid: fbfebc90-cc5e-11e8-b4a9-000c2929855b
/etc/nginx/conf.d # cd ../
/etc/nginx # cd conf.d/
/etc/nginx/conf.d # ls
www.conf
/etc/nginx/conf.d # cat www.conf
server {
server_name myapp.zhouhao.com;
listen 8080;
root /data/web/html;
}
文件雖然改了但是監聽端口沒有改
/etc/nginx/conf.d # netstat -lnpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0: LISTEN 1/nginx: master pro
重新加載服務
/etc/nginx/conf.d # nginx -s relaod
nginx: invalid option: "-s relaod"
/etc/nginx/conf.d # nginx -s reload
2018/10/10 10:09:50 [notice] 19#19: signal process started
/etc/nginx/conf.d # netstat -lnpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8080 0.0.0.0:
LISTEN 1/nginx: master pro
8.訪問測試:
[root@minion-1 ~]# curl myapp.zhouhao.com
curl: (7) Failed connect to myapp.zhouhao.com:80; 拒絕連接
[root@minion-1 ~]# curl myapp.zhouhao.com:8080
<h1> myapp.zhouhao.com<h1\>
5.5創建statefulset控制器:
1.建好PV,便於PVC匹配到對應的PV:
[root@master volumes]# vim pv-daemon.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: master
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: master
  accessModes: ["ReadWriteOnce","ReadWriteMany"]
  capacity:
    storage: 10Gi
[root@master volumes]# kubectl apply -f pv-daemon.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,ROX,RWX Retain Available 10s
pv002 5Gi RWO,ROX,RWX Retain Available 10s
pv003 5Gi RWO,ROX,RWX Retain Available 10s
pv004 10Gi RWO,ROX,RWX Retain Available 10s
pv005 10Gi RWO,RWX Retain Available 10s
2.創建statefulset控制器:
[root@master mandor]# vim statefulSet-daemon-yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - name: web
    port: 80
  clusterIP: None
  selector:
    app: myapp-pod
---   # above is the Service; below is the StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp              # name of the (headless) Service above
  replicas: 3                     # number of pods
  selector:                       # match pod labels
    matchLabels:
      app: myapp-pod
  template:                       # pod template
    metadata:                     # pod metadata
      labels:                     # pod labels
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: web
          containerPort: 80
        volumeMounts:             # mount the volume
        - name: myappdata         # volume name
          mountPath: /usr/share/nginx/html    # mount path inside the container
  volumeClaimTemplates:           # PVC templates
  - metadata:
      name: myappdata             # volume name, must match the volumeMount above
    spec:
      accessModes: [ "ReadWriteOnce" ]   # requested access mode
      resources:
        requests:
          storage: 5Gi            # requested PV size
    [root@master mandor]# kubectl apply -f statefulSet-daemon-yaml
    service/myapp unchanged
    statefulset.apps/myapp created
    [root@master mandor]# kubectl get pod
    NAME READY STATUS RESTARTS AGE
    myapp-0 1/1 Running 0 5m
    myapp-1 1/1 Running 0 5m
    myapp-2 1/1 Running 0 5m
    #注:上面可以看出pod名是有順序的
    [root@master mandor]# kubectl get svc #上面創建的service要是無頭服務
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d
    myapp ClusterIP None <none> 80/TCP 20m
    [root@master mandor]# kubectl get sts
    NAME DESIRED CURRENT AGE
    myapp 3 3 18m
    [root@master mandor]# kubectl get pvc
    NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    myappdata-myapp-0 Bound pv005 10Gi RWO,RWX 19m
    myappdata-myapp-1 Bound pv001 5Gi RWO,ROX,RWX 19m
    myappdata-myapp-2 Bound pv003 5Gi RWO,ROX,RWX 19m
    [root@master mandor]# kubectl get pv
    NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    pv001 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-1 5h
    pv002 5Gi RWO,ROX,RWX Retain Available 5h
    pv003 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-2 5h
    pv004 10Gi RWO,ROX,RWX Retain Available 5h
    pv005 10Gi RWO,RWX Retain Bound default/myappdata-myapp-0 5h
    刪除看看pod殺死的順序:
    1.先監控pod
    [root@master mandor]# kubectl get pods -w
    NAME READY STATUS RESTARTS AGE
    myapp-0 1/1 Running 0 22m
    myapp-1 1/1 Running 0 22m
    myapp-2 1/1 Running 0 22m
    2,刪除pod
    [root@master mandor]# kubectl delete -f statefulSet-daemon-yaml
    service "myapp" deleted
    statefulset.apps "myapp" deleted
    [root@master mandor]# kubectl get pods -w
    NAME READY STATUS RESTARTS AGE
    myapp-0 1/1 Running 0 22m
    myapp-1 1/1 Running 0 22m
    myapp-2 1/1 Running 0 22m
    myapp-0 1/1 Terminating 0 23m
    myapp-1 1/1 Terminating 0 23m
    myapp-2 1/1 Terminating 0 23m
    myapp-0 0/1 Terminating 0 23m
    myapp-1 0/1 Terminating 0 23m
    myapp-2 0/1 Terminating 0 23m
    myapp-0 0/1 Terminating 0 23m
    myapp-0 0/1 Terminating 0 23m
    myapp-2 0/1 Terminating 0 23m
    myapp-2 0/1 Terminating 0 23m
    myapp-1 0/1 Terminating 0 23m
    myapp-1 0/1 Terminating 0 23m
    可以看出殺死pod會從2開始
    看下創建的順序:
    [root@master mandor]# kubectl apply -f statefulSet-daemon-yaml
    service/myapp created
    statefulset.apps/myapp created
    myapp-0 0/1 Pending 0 0s
    myapp-0 0/1 Pending 0 0s
    myapp-0 0/1 ContainerCreating 0 0s
    myapp-0 1/1 Running 0 2s
    myapp-1 0/1 Pending 0 0s
    myapp-1 0/1 Pending 0 0s
    myapp-1 0/1 ContainerCreating 0 0s
    myapp-1 1/1 Running 0 3s
    myapp-2 0/1 Pending 0 0s
    myapp-2 0/1 Pending 0 0s
    myapp-2 0/1 ContainerCreating 0 0s
    myapp-2 1/1 Running 0 1s
    創建會從0開始,
    查看下刪除pod,PVC是否存在:
    [root@master mandor]# kubectl delete -f statefulSet-daemon-yaml
    service "myapp" deleted
    statefulset.apps "myapp" deleted
    [root@master mandor]# kubectl get pvc
    NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    myappdata-myapp-0 Bound pv005 10Gi RWO,RWX 29m
    myappdata-myapp-1 Bound pv001 5Gi RWO,ROX,RWX 29m
    myappdata-myapp-2 Bound pv003 5Gi RWO,ROX,RWX 29m
    會發現PVC依然存在,PVC名關聯着pod名,所以對應的pod啓動數據依然存在。
    5.6 Pod名稱解析
    1.在k8s中每個pod名都可以被解析出來:
    [root@master mandor]# kubectl exec -it myapp-1 -- /bin/sh
    / # nslookup myapp-0.myapp.default.svc.cluster.local
    nslookup: can't resolve '(null)': Name does not resolve

Name: myapp-0.myapp.default.svc.cluster.local
Address 1: 10.244.2.33 myapp-0.myapp.default.svc.cluster.local
/ # nslookup myapp-1.myapp.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name: myapp-1.myapp.default.svc.cluster.local
Address 1: 10.244.1.28 myapp-1.myapp.default.svc.cluster.local
/ # nslookup myapp-2.myapp.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name: myapp-2.myapp.default.svc.cluster.local
Address 1: 10.244.2.34 myapp-2.myapp.default.svc.cluster.local
會發現都能解析出來pod的IP,
Pod名解析格式:
myapp-1.myapp.default.svc.cluster.local
pod名. Service名.命名空間名.後綴
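Besides the per-pod names, the headless Service name itself also resolves, returning one address per ready pod, which is handy for peer discovery. A sketch, run from inside any pod in the cluster:

nslookup myapp.default.svc.cluster.local    # returns an A record for every ready pod behind the headless Service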
5.7 Pod擴容pvc會自動創建匹配pv
1.進行擴容:
[root@master mandor]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 13m
myapp-1 1/1 Running 0 13m
myapp-2 1/1 Running 0 13m
[root@master mandor]# kubectl scale sts myapp --replicas=5
statefulset.apps/myapp scaled

kubectl patch sts myapp -p '{"spec":{"replicas":5}}'    # patching the replica count works too, with the same effect

[root@master mandor]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 13m
myapp-1 1/1 Running 0 13m
myapp-2 1/1 Running 0 13m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 2s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
2.會發現會擴出3和4
[root@master mandor]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myappdata-myapp-0 Bound pv005 10Gi RWO,RWX 46m
myappdata-myapp-1 Bound pv001 5Gi RWO,ROX,RWX 46m
myappdata-myapp-2 Bound pv003 5Gi RWO,ROX,RWX 46m
myappdata-myapp-3 Bound pv002 5Gi RWO,ROX,RWX 59s
myappdata-myapp-4 Bound pv004 10Gi RWO,ROX,RWX 57s
[root@master mandor]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-1 6h
pv002 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-3 6h
pv003 5Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-2 6h
pv004 10Gi RWO,ROX,RWX Retain Bound default/myappdata-myapp-4 6h
pv005 10Gi RWO,RWX Retain Bound default/myappdata-myapp-0 6h
pvc也會自動被創建,PV也會自動被匹配
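The reverse is not automatic: scaling the StatefulSet back down does not delete the PVCs, so the data is kept and reattached if you scale up again. To actually reclaim the space, the PVCs have to be removed by hand; a sketch:

kubectl scale sts myapp --replicas=3
kubectl delete pvc myappdata-myapp-3 myappdata-myapp-4    # only if the data is no longer needed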
5.8 Pod分區更新
An sts supports partitioned updates. The "partition" is the ordinal at the end of the pod name (for myapp-1 the ordinal is 1). When a partition value is set, only pods whose ordinal is greater than or equal to that value are updated: setting 4 updates ordinals >= 4, and setting 0 updates everything. The partition can also be set declaratively, as in the sketch below; the steps that follow demonstrate it with kubectl patch.
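A minimal sketch of the relevant StatefulSet spec fragment (only the updateStrategy part is shown):

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 4    # only pods with ordinal >= 4 are updated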
1.查看下默認更新策略:
[root@master mandor]# kubectl describe sts myapp
Name: myapp
Namespace: default
CreationTimestamp: Thu, 11 Oct 2018 16:58:31 +0800
Selector: app=myapp-pod
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"replicas":3,"selector":{"match...
Replicas: 5 desired | 5 total
Update Strategy: RollingUpdate #默認滾動更新,沒設置分區
Pods Status: 5 Running / 0 Waiting / 0 Succeeded / 0 Failed
。。。。。。
2.定義分區:
[root@master mandor]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
statefulset.apps/myapp patched
上述是打補丁的方式,注意引號
[root@master mandor]# kubectl describe sts myapp
Name: myapp
Namespace: default
CreationTimestamp: Thu, 11 Oct 2018 16:58:31 +0800
Selector: app=myapp-pod
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"replicas":3,"selector":{"match...
Replicas: 5 desired | 5 total
Update Strategy: RollingUpdate
Partition: 4 #這有了分區值,大於等於4的會更新
Pods Status: 5 Running / 0 Waiting / 0 Succeeded / 0 Failed
3.開始更新測試
[root@master mandor]# kubectl set image sts myapp myapp=ikubernetes/myapp:v2
statefulset.apps/myapp image updated
[root@master mandor]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 13m
myapp-1 1/1 Running 0 13m
myapp-2 1/1 Running 0 13m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 2s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
myapp-4 1/1 Terminating 0 24m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
會發現它先把4停了然後重新創建啓動。
4.驗證版本:
[root@master mandor]# kubectl describe pod myapp-4
。。。。。。。。。。。
Containers:
myapp:
Container ID: docker://bb8b5d4e73459dd39ad6abce52c72402a80dfbbc938fa7758766f3e377f845af
Image: ikubernetes/myapp:v2
Image ID: docker-pullable://ikubernetes/myapp@sha256:85a2b81a62f09a414ea33b74fb8aa686ed9b168294b26b4c819df0be0712d358
。。。。。。。
[root@master mandor]# kubectl describe pod myapp-2
Name: myapp-2
Namespace: default
Node: minion-1/192.168.200.201
Start Time: Thu, 11 Oct 2018 16:58:35 +0800
Labels: app=myapp-pod
controller-revision-hash=myapp-854b598c58
statefulset.kubernetes.io/pod-name=myapp-2
Annotations: <none>
Status: Running
IP: 10.244.2.34
Controlled By: StatefulSet/myapp
Containers:
myapp:
Container ID: docker://4fd66e973b1bb74be30b9d3ff9ceb9515a57197669389784e6e80449e788203d
Image: ikubernetes/myapp:v1
[root@master mandor]# kubectl describe pod myapp-0
Name: myapp-0
Namespace: default
Node: minion-1/192.168.200.201
Start Time: Thu, 11 Oct 2018 16:58:31 +0800
Labels: app=myapp-pod
controller-revision-hash=myapp-854b598c58
statefulset.kubernetes.io/pod-name=myapp-0
Annotations: <none>
Status: Running
IP: 10.244.2.33
Controlled By: StatefulSet/myapp
Containers:
myapp:
Container ID: docker://1886176fc8e698327497e15eb3e452e04092805fc3c11b71ea844d26e439ad86
Image: ikubernetes/myapp:v1
[root@master mandor]# kubectl describe pod myapp-3
Name: myapp-3
Namespace: default
Node: minion-2/192.168.200.202
Start Time: Thu, 11 Oct 2018 17:12:38 +0800
Labels: app=myapp-pod
controller-revision-hash=myapp-854b598c58
statefulset.kubernetes.io/pod-name=myapp-3
Annotations: <none>
Status: Running
IP: 10.244.1.29
Controlled By: StatefulSet/myapp
Containers:
myapp:
Container ID: docker://426436b9f8ea96c6e55e8af00790a1cec7b9620bb0a3843c0fc8df869106d86f
Image: ikubernetes/myapp:v1
5.通過上面發現只有4的版本更新了,如果想把所有的都更新,可以通過上面打補丁的方式,將數值改爲0更新即可.
如下:
[root@master mandor]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
statefulset.apps/myapp patched
[root@master mandor]# kubectl set image sts myapp myapp=ikubernetes/myapp:v2
[root@master mandor]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 13m
myapp-1 1/1 Running 0 13m
myapp-2 1/1 Running 0 13m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 2s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
myapp-4 1/1 Terminating 0 24m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Terminating 0 25m
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 1s
myapp-4 0/1 ContainerCreating 0 1s
myapp-4 1/1 Running 0 3s
myapp-3 1/1 Terminating 0 41m
myapp-3 0/1 Terminating 0 41m
myapp-3 0/1 Terminating 0 41m
myapp-3 0/1 Terminating 0 41m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 2s
myapp-2 1/1 Terminating 0 55m
myapp-2 0/1 Terminating 0 55m
myapp-2 0/1 Terminating 0 55m
myapp-2 0/1 Terminating 0 55m
myapp-2 0/1 Pending 0 0s
myapp-2 0/1 Pending 0 0s
myapp-2 0/1 ContainerCreating 0 0s
myapp-2 1/1 Running 0 3s
myapp-1 1/1 Terminating 0 55m
myapp-1 0/1 Terminating 0 55m
myapp-1 0/1 Terminating 0 55m
myapp-1 0/1 Terminating 0 55m
myapp-1 0/1 Pending 0 0s
myapp-1 0/1 Pending 0 0s
myapp-1 0/1 ContainerCreating 0 0s
myapp-1 1/1 Running 0 1s
myapp-0 1/1 Terminating 0 55m
myapp-0 0/1 Terminating 0 55m
myapp-0 0/1 Terminating 0 55m
myapp-0 0/1 Terminating 0 55m
myapp-0 0/1 Pending 0 0s
myapp-0 0/1 Pending 0 0s
myapp-0 0/1 ContainerCreating 0 0s
myapp-0 1/1 Running 0 2s
會從3開始.
[root@master mandor]# kubectl describe pod myapp-0
Name: myapp-0
Namespace: default
Node: minion-1/192.168.200.201
Start Time: Thu, 11 Oct 2018 17:54:24 +0800
Labels: app=myapp-pod
controller-revision-hash=myapp-58656f57bf
statefulset.kubernetes.io/pod-name=myapp-0
Annotations: <none>
Status: Running
IP: 10.244.2.38
Controlled By: StatefulSet/myapp
Containers:
myapp:
Container ID: docker://d59df10c758f1164a21b070cd4aa3783cb3a2c6aa32e90688e0575cacd069c86
Image: ikubernetes/myapp:v2
K8s RBAC access control
6.1 創建用戶並測試
1.K8s的sa賬號創建:
[root@master mandor]# kubectl create serviceaccount admin
serviceaccount/admin created
[root@master mandor]# kubectl get sa
NAME SECRETS AGE
admin 1 9s
default 1 4d
[root@master mandor]# kubectl describe sa admin
Name: admin
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: admin-token-v8p8k
Tokens: admin-token-v8p8k
Events: <none>

2.創建私鑰:
[root@master mandor]# (umask 077;openssl genrsa -out zhouhao.key 2048)
Generating RSA private key, 2048 bit long modulus
.................................................................+++
...................................................................................+++
e is 65537 (0x10001)
[root@master mandor]# openssl req -new -key zhouhao.key -out zhouhao.csr -subj "/CN=zhouhao"
[root@master pki]# openssl x509 -req -in zhouhao.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out zhouhao.crt -days 365
Signature ok
subject=/CN=zhouhao
Getting CA Private Key
[root@master pki]# openssl x509 -in zhouhao.crt -text -noout
Certificate:
Data:
Version: 1 (0x0)
Serial Number: 15289891927309345937 (0xd4309bb2d562e491)
Signature Algorithm: sha1WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Oct 12 10:14:41 2018 GMT
Not After : Oct 12 10:14:41 2019 GMT
。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。
3.創建用戶
[root@master pki]# kubectl config set-credentials zhouhao --client-certificate=./zhouhao.crt --client-key=./zhouhao.key --embed-certs=true
User "zhouhao" set.
[root@master pki]# kubectl config view
apiVersion: v1
clusters:

  • cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.200.200:6443
    name: kubernetes
    contexts:
  • context:
    cluster: kubernetes
    user: kubernetes-admin
    name: kubernetes-admin@kubernetes
    current-context: kubernetes-admin@kubernetes
    kind: Config
    preferences: {}
    users:
  • name: kubernetes-admin
    user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
  • name: zhouhao
    user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    4. Create a context that pairs the user with the cluster:
    [root@master pki]# kubectl config set-context zhouhao@kubernetes --cluster=kubernetes --user=zhouhao
    Context "zhouhao@kubernetes" created.
    [root@master pki]# kubectl config view
    apiVersion: v1
    clusters:
  • cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.200.200:6443
    name: kubernetes
    contexts:
  • context:
    cluster: kubernetes
    user: kubernetes-admin
    name: kubernetes-admin@kubernetes
  • context:
    cluster: kubernetes
    user: zhouhao
    name: zhouhao@kubernetes
    current-context: kubernetes-admin@kubernetes
    kind: Config
    preferences: {}
    users:
  • name: kubernetes-admin
    user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
  • name: zhouhao
    user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    5.切換用戶:
    [root@master pki]# kubectl config use-context zhouhao@kubernetes
    Switched to context "zhouhao@kubernetes".
    6.查看pod會發現權限不夠會報錯
    [root@master pki]# kubectl get pods
    No resources found.
    Error from server (Forbidden): pods is forbidden: User "zhouhao" cannot list pods in the namespace "default"
    6.2 創建配置文件
    1.切換回管理員賬號:
    [root@master pki]# kubectl config use-context kubernetes-admin@kubernetes
    Switched to context "kubernetes-admin@kubernetes".
    2.創建配置文件並查看:
    [root@master pki]# kubectl config set-cluster mycluster --kubeconfig=/tmp/test.conf --server="https://192.168.200.200:6443" --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
    Cluster "mycluster" set.
    [root@master pki]# kubectl config view --kubeconfig=/tmp/test.conf
    apiVersion: v1
    clusters:
  • cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.200.200:6443
    name: mycluster
    contexts: []
    current-context: ""
    kind: Config
    preferences: {}
    users: []
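The section stops after set-cluster; to make /tmp/test.conf usable on its own, a user and a context would be added to the same file in the same way. A sketch of the remaining steps (the user name and certificate paths are assumptions reusing the zhouhao certificate created earlier):

kubectl config set-credentials zhouhao --client-certificate=./zhouhao.crt --client-key=./zhouhao.key --embed-certs=true --kubeconfig=/tmp/test.conf
kubectl config set-context zhouhao@mycluster --cluster=mycluster --user=zhouhao --kubeconfig=/tmp/test.conf
kubectl config use-context zhouhao@mycluster --kubeconfig=/tmp/test.conf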

6.3 創建一個角色並綁定用戶:
1.用命令行生成yaml格式的文件在做修改:
[root@master pki]# kubectl create role pods-reader --verb=get,list,watch --resource=pods --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: pods-reader
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
      [root@master pki]# kubectl create role pods-reader --verb=get,list,watch --resource=pods --dry-run -o yaml >~/mandor/role-demo.yaml
      2.修改並創建:
      [root@master mandor]# vim role-demo.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pods-reader
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
      [root@master mandor]# kubectl create -f role-demo.yaml
      role.rbac.authorization.k8s.io/pods-reader created
      [root@master mandor]# kubectl get role
      NAME AGE
      pods-reader 10s
      [root@master mandor]# kubectl describe pods-reade
      error: the server doesn't have a resource type "pods-reade"
      [root@master mandor]# kubectl describe role pods-reade
      Name: pods-reader
      Labels: <none>
      Annotations: <none>
      PolicyRule:
      Resources Non-Resource URLs Resource Names Verbs

      pods [] [] [get list watch]
      3.創建rolebinding讓用戶綁定角色:
      [root@master mandor]# kubectl create rolebinding zhouhao-read-pods --role=pods-reader --user=zhouhao
      rolebinding.rbac.authorization.k8s.io/zhouhao-read-pods created
      [root@master mandor]# kubectl create rolebinding zhouhao-read-pods --role=pods-reader --user=zhouhao --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: zhouhao-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: zhouhao
    [root@master mandor]# kubectl create rolebinding zhouhao-read-pods --role=pods-reader --user=zhouhao --dry-run -o yaml > rolebinding-demo.yaml
    [root@master mandor]# kubectl describe rolebinding zhouhao-read-pods
    Name: zhouhao-read-pods
    Labels: <none>
    Annotations: <none>
    Role:
    Kind: Role
    Name: pods-reader
    Subjects:
    Kind Name Namespace

    User zhouhao
    4.切換用戶驗證權限:
    [root@master ~]# kubectl config use-context zhouhao@kubernetes
    Switched to context "zhouhao@kubernetes".
    [root@master ~]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-67f6f6b4dc-pz4bd 1/1 Running 0 4h
    myapp-deploy-67f6f6b4dc-smw9t 1/1 Running 0 4h
    myapp-deploy-67f6f6b4dc-twgh6 1/1 Running 0 4h
    5.只授權了default命名空間的權限所以查看其它空間的會報錯;
    [root@master ~]# kubectl get pods -n kube-system
    No resources found.
    Error from server (Forbidden): pods is forbidden: User "zhouhao" cannot list pods in the namespace "kube-system"
    6.4通過clusterrole授權
    1.創建clusterrole:
    [root@master ~]# kubectl create clusterrole cluster-readers --verb=get,list,watch --resource=pods -o yaml --dry-run >clusterrole-yaml
    [root@master ~]# vim clusterrole-yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-readers
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
      [root@master ~]# kubectl config use-context kubernetes-admin@kubernetes
      Switched to context "kubernetes-admin@kubernetes".
      [root@master ~]# kubectl apply -f clusterrole-yaml
      clusterrole.rbac.authorization.k8s.io/cluster-readers created
      2.刪除授權綁定
      [root@master ~]# kubectl get rolebinding
      NAME AGE
      zhouhao-read-pods 23m
      [root@master ~]# kubectl delete rolebinding zhouhao-read-pods
      rolebinding.rbac.authorization.k8s.io "zhouhao-read-pods" deleted
      [root@master ~]# kubectl config use-context zhouhao@kubernetes
      Switched to context "zhouhao@kubernetes".
      [root@master ~]# kubectl get pods
      No resources found.
      Error from server (Forbidden): pods is forbidden: User "zhouhao" cannot list pods in the namespace "default"
      3. The permission is now gone. Next, create an ordinary Linux user so the zhouhao context can keep being tested separately:
      [root@master ~]# useradd ik8s
      [root@master ~]# cp -r .kube/ /home/ik8s/
      [root@master ~]# chown -R ik8s.ik8s /home/ik8s/
      [root@master ~]# su - ik8s
      [ik8s@master ~]$ kubectl config use-context zhouhao@kubernetes
      Switched to context "zhouhao@kubernetes".
      [ik8s@master ~]$ kubectl config view
      apiVersion: v1
      clusters:
  • cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.200.200:6443
    name: kubernetes
    contexts:
  • context:
    cluster: kubernetes
    user: kubernetes-admin
    name: kubernetes-admin@kubernetes
  • context:
    cluster: kubernetes
    user: zhouhao
    name: zhouhao@kubernetes
    current-context: zhouhao@kubernetes
    kind: Config
    preferences: {}
    users:
  • name: kubernetes-admin
    user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
  • name: zhouhao
    user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

4.綁定clusterrole:
[root@master ~]# kubectl create clusterrolebinding zhouhao-read-all-pods --clusterrole=cluster-readers --user=zhouhao --dry-run -o yaml>clusterrolebinding-demo.yaml
[root@master mandor]# vim ~/clusterrolebinding-demo.yaml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: zhouhao-read-all-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-readers
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: zhouhao
    [ik8s@master ~]$ kubectl get pods
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-67f6f6b4dc-rgsj8 1/1 Running 0 22m
    myapp-deploy-67f6f6b4dc-smw9t 1/1 Running 0 5h
    myapp-deploy-67f6f6b4dc-twgh6 1/1 Running 0 5h
    [ik8s@master ~]$ kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    coredns-78fcdf6894-mwfdj 1/1 Running 23 7d
    coredns-78fcdf6894-nm2q8 1/1 Running 23 7d
    etcd-master 1/1 Running 5 7d
    kube-apiserver-master 1/1 Running 5 7d
    kube-controller-manager-master 1/1 Running 5 7d
    kube-flannel-ds-amd64-2wcrq 1/1 Running 7 7d
    kube-flannel-ds-amd64-hpqch 1/1 Running 6 7d
    kube-flannel-ds-amd64-th26t 1/1 Running 6 7d
    kube-proxy-47jz2 1/1 Running 5 7d
    kube-proxy-pqswg 1/1 Running 5 7d
    kube-proxy-tdpmw 1/1 Running 5 7d
    kube-scheduler-master 1/1 Running 5 7d
    5.資源都可以查看,沒有給刪除權限
    [ik8s@master ~]$ kubectl delete pods myapp-deploy-67f6f6b4dc-rgsj8
    Error from server (Forbidden): pods "myapp-deploy-67f6f6b4dc-rgsj8" is forbidden: User "zhouhao" cannot delete pods in the namespace "default"
Use a RoleBinding to bind the ClusterRole (this scopes the permissions back down to a single namespace):
    [root@master mandor]# kubectl delete -f ~/clusterrolebinding-demo.yaml
    clusterrolebinding.rbac.authorization.k8s.io "zhouhao-read-all-pods" deleted

[root@master mandor]# kubectl create rolebinding zhouhao-read-pods --clusterrole=cluster-readers --user=zhouhao --dry-run -o yaml >rolebinding-cluster.yaml
[root@master mandor]# vim rolebinding-cluster.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: zhouhao-read-pods
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-readers
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: zhouhao
    [root@master mandor]# kubectl apply -f rolebinding-cluster.yaml
    rolebinding.rbac.authorization.k8s.io/zhouhao-read-pods created
    6.訪問測試:
    [ik8s@master ~]$ kubectl get pods
    NAME READY STATUS RESTARTS AGE
    myapp-deploy-67f6f6b4dc-rgsj8 1/1 Running 0 38m
    myapp-deploy-67f6f6b4dc-smw9t 1/1 Running 0 5h
    myapp-deploy-67f6f6b4dc-twgh6 1/1 Running 0 5h
    [ik8s@master ~]$ kubectl get pods -n kube-system
    No resources found.
    Error from server (Forbidden): pods is forbidden: User "zhouhao" cannot list pods in the namespace "kube-system"
    部署dashboard
    7.1 部署dashboard使其可以被訪問
    1.先在node節點上把鏡像導入並修改tag,源文件中的鏡像pull不了
    [root@minion-1 ~]# docker pull siriuszg/kubernetes-dashboard-amd64:v1.10.0
    v1.10.0: Pulling from siriuszg/kubernetes-dashboard-amd64
    833563f653b3: Pull complete
    Digest: sha256:5170d3ad1d3b7e9d6424c7a1309692ccffbb2d3c410a3f894bcd2e5066ce169c
    Status: Downloaded newer image for siriuszg/kubernetes-dashboard-amd64:v1.10.0
    [root@minion-1 ~]# docker tag siriuszg/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    2.創建dashboard
    [root@master mandor]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
    secret/kubernetes-dashboard-certs created
    serviceaccount/kubernetes-dashboard created
    role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
    deployment.apps/kubernetes-dashboard created
    service/kubernetes-dashboard created
    3.查看:
    [root@master dashboard]# kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    coredns-78fcdf6894-mwfdj 1/1 Running 23 7d
    coredns-78fcdf6894-nm2q8 1/1 Running 23 7d
    etcd-master 1/1 Running 5 7d
    kube-apiserver-master 1/1 Running 5 7d
    kube-controller-manager-master 1/1 Running 5 7d
    kube-flannel-ds-amd64-2wcrq 1/1 Running 7 7d
    kube-flannel-ds-amd64-hpqch 1/1 Running 6 7d
    kube-flannel-ds-amd64-th26t 1/1 Running 6 7d
    kube-proxy-47jz2 1/1 Running 5 7d
    kube-proxy-pqswg 1/1 Running 5 7d
    kube-proxy-tdpmw 1/1 Running 5 7d
    kube-scheduler-master 1/1 Running 5 7d
    kubernetes-dashboard-767dc7d4d-2mw4r 1/1 Running 0 1m
    4.通過打補丁的訪問使服務可以被訪問
    [root@master dashboard]# kubectl get svc -n kube-system
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 7d
    kubernetes-dashboard ClusterIP 10.100.132.159 <none> 443/TCP 7m
    [root@master dashboard]# kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system
    service/kubernetes-dashboard patched
    [root@master dashboard]# kubectl get svc -n kube-system
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 7d
    kubernetes-dashboard NodePort 10.100.132.159 <none> 443:32626/TCP 15m
    5.瀏覽器訪問:

6.需要認證登錄,將系統中config文件傳到主機上
[root@master dashboard]# ls ~/.kube/
cache config http-cache
[root@master dashboard]# sz ~/.kube/config
然後在選中:

7.2 token方式登錄dashboard
1.爲dashboard創建證書和私鑰:
[root@master dashboard]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077;openssl genrsa -out dashboard.key 2048)
Generating RSA private key, 2048 bit long modulus
...+++
..............+++
e is 65537 (0x10001)
[root@master pki]# openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=zhouhao/CN=dashboard"
[root@master pki]# openssl x509 -req -in dashboard.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out dashboard.csr -days 365
Signature ok
subject=/O=zhouhao/CN=dashboard
Getting CA Private Key
[root@master pki]# kubectl create secret generic dashboard-cert -n kube-system --from-file=dashboard.crt=./dashboard.csr --from-file=dashboard.key=./dashboard.key
secret/dashboard-cert created

2.使用token方式登錄:
[root@master pki]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@master pki]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created
3.獲取token值:
[root@master pki]# kubectl describe secret dashboard-admin-token-d8mc4 -n kube-system
Name: dashboard-admin-token-d8mc4
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=dashboard-admin
kubernetes.io/service-account.uid=ab682221-d058-11e8-8f2d-000c2929855b

Type: kubernetes.io/service-account-token

Data

ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZDhtYzQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYWI2ODIyMjEtZDA1OC0xMWU4LThmMmQtMDAwYzI5Mjk4NTViIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.jClu0HHKv81G7SSaxxAb_-i0cXhR1_BkAUqjxKgLjH98w_Z4OE_amhvZu93S4uYM4F3nDGfMgXp5Vt2i4vkS3pnLgO2wdcfzMr0--VzAPhywLR2BBGL9N0u9wokSH4znp1KFmmvPy8KdAjlXi_IMp7hcNrSYgGSnF9XBKWLo2JiMsE4YTA_mgLIml8rAIjw-5REyG9o4RPNL0VtBDO1Ny4NA7fpYWj-r_iKlsXHPvnX0Pe7AtzY62MPRXR0Q_VvEwbH32DiYl6ciXMJxQnPi6mxgHQRXk6luY-_EERGvo9pn3dBmJs_moPSsNjSIE7EP0F-W7tsUtcOEMX15L4e8Ow
The token field above is the token value used to log in.
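The secret name (dashboard-admin-token-d8mc4) is random, so it helps to look it up and decode the token in one go. A minimal sketch:

SECRET=$(kubectl -n kube-system get sa dashboard-admin -o jsonpath='{.secrets[0].name}')
kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d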
4.Token登錄:

5.選擇token登錄將值複製上去選擇登錄:

K8s網絡及高級調度
8.1 管理flannel和calico
1.配置flannel網絡插件:
[root@master ~]# vim kube-flannel.yml
。。。。。
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true    # add this line to enable direct routing; the default is false
。。。。。。。。。。。。。。
或者
[root@master ~]# vim kube-flannel.yml
。。。。。
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"    # changing vxlan to host-gw works the same way; the difference is host-gw requires all nodes on one network segment, while vxlan with DirectRouting can span segments
。。。。。。。。。。。。。。
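Editing kube-flannel.yml alone changes nothing on the nodes: the manifest has to be re-applied and the flannel pods recreated so they pick up the new net-conf.json. A sketch, assuming the upstream manifest's pod label app=flannel (check your copy):

kubectl apply -f kube-flannel.yml
kubectl delete pod -n kube-system -l app=flannel    # the DaemonSet recreates the pods with the new backend config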
2.網絡策略:
#If the cluster has RBAC enabled, apply this first; without RBAC you can skip straight to the next step
[root@master ~]# kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/rbac.yaml
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
3.安裝部署calico
官網地址:https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/flannel
[root@master ~]# kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
[root@master ~]# kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/canal/canal.yaml
制定策略讓不同命名空間的pod不能隨意訪問
4.創建兩個命名空間:
[root@master networkpolicy]# kubectl create namespace dev
namespace/dev created
[root@master networkpolicy]# kubectl create namespace port
namespace/port created
5.創建pod
[root@master networkpolicy]# vim pod_a.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    6.在兩個命名空間分別創建pod:
    [root@master namespace]# kubectl apply -f pod.yaml -n dev
    [root@master namespace]# kubectl apply -f pod.yaml -n port
    [root@master ~]# kubectl get pod -n dev -o wide
    NAME READY STATUS RESTARTS AGE IP NODE
    pod-1 1/1 Running 1 29m 10.244.1.6 minion-1
    [root@master ~]# kubectl get pod -n port -o wide
    NAME READY STATUS RESTARTS AGE IP NODE
    pod-1 1/1 Running 1 26m 10.244.2.5 minion-2
    [root@master ~]# curl 10.244.1.6
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    [root@master ~]# curl 10.244.2.5
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    7.編寫策略:
    [root@master networkpolicy]# vim ingree-def.yaml
    #下面內容是dev命令空間的pod拒絕所有請求
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: dev
spec:
  podSelector: {}
  policyTypes:
  - Ingress
    [root@master networkpolicy]# kubectl apply -f ingree-def.yaml -n dev
    networkpolicy.networking.k8s.io/deny-all-ingress created
    8.訪問測試:
    [root@master namespace]# curl 10.244.1.6
    訪問不到
    [root@master namespace]# curl 10.244.2.5
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    9.修改下策略:
    [root@master namespace]# vim ingree-def.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: dev
spec:
  podSelector: {}
  ingress:          # ingress rules
  - {}              # an empty rule allows all ingress
  policyTypes:
  - Ingress         # this policy applies to ingress traffic
    [root@master namespace]# kubectl apply -f ingree-def.yaml
    networkpolicy.networking.k8s.io/deny-all-ingress configured
    10.訪問測試:
    [root@master namespace]# curl 10.244.1.6 #dev命名空間的可以被訪問了
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    [root@master namespace]# curl 10.244.2.5
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    11.給dev的pod打個標籤:
    [root@master namespace]# kubectl label pod pod-1 app=myapp -n dev
    pod/pod-1 labeled
    12.編寫策略匹配標籤進行限制:
    [root@master namespace]# vim allow-myapp-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: all-myapp-ingress
spec:
  podSelector:
    matchLabels:
      app: myapp                     # select pods labelled app=myapp
  ingress:                           # ingress rules (use egress for outbound rules)
  - from:                            # allowed sources
    - ipBlock:
        cidr: 192.168.200.0/24       # allow this subnet
        except:                      # excluded addresses
        - 192.168.200.202/32         # this IP is not allowed
    ports:                           # allowed ports and protocols
    - protocol: TCP
      port: 80                       # allow port 80; everything else is denied by default
  policyTypes:
  - Ingress                          # ingress policy; for outbound rules use Egress here

[root@master namespace]# kubectl apply -f allow-myapp-ingress.yaml
networkpolicy.networking.k8s.io/all-myapp-ingress created
13.訪問測試:
[root@master namespace]# curl 10.244.1.6
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master namespace]# curl 10.244.1.6:443
#80可以訪問到,443端口無法訪問
[root@minion-2 ~]# curl 10.244.1.6
192.168.200.202地址的minion-2 80端口無法訪問
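The comments above note that an egress policy is written the same way with ingress/Ingress replaced by egress/Egress. A minimal sketch that would allow all outbound traffic for the dev namespace (namespace and rule are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
  namespace: dev
spec:
  podSelector: {}
  egress:
  - {}              # an empty rule allows all outbound traffic
  policyTypes:
  - Egress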

8.2高級調度方式
1.節點選擇器:nodeSelector, nodeName
2.節點親和調度:nodeAffinity分爲硬親和和軟親和,硬親和就是必須滿足條件才能完成調度,軟親和就是,滿足條件最好,不滿足也可以調度。
實例:
8.2.1 通過node標籤調度pod
1.使用nodeSelector
[root@master schedule]# vim pod-demon

apiVersion: v1
kind: Pod
metadata:
  name: pod-demon
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    zhouhao.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:
    disktype: ssd
    [root@master schedule]# kubectl apply -f pod-demon
    pod/pod-demon created
    [root@master schedule]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    pod-2 1/1 Running 2 1d
    pod-demon 0/1 Pending 0 1m
    Pending:調度失敗,因爲現在沒有節點的標籤是disktype: ssd
    2. Check the labels currently on the nodes:
    [root@master schedule]# kubectl get nodes --show-labels
    NAME STATUS ROLES AGE VERSION LABELS
    master Ready master 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
    minion-1 Ready <none> 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=minion-1
    minion-2 Ready <none> 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=minion-2
    3.我們給minion-1打個標籤,然後在看下:
    [root@master schedule]# kubectl label nodes minion-1 disktype=ssd
    node/minion-1 labeled
    [root@master schedule]# kubectl get nodes --show-labels
    NAME STATUS ROLES AGE VERSION LABELS
    master Ready master 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
    minion-1 Ready <none> 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=minion-1
    minion-2 Ready <none> 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=minion-2
    [root@master schedule]# kubectl get pods
    NAME READY STATUS RESTARTS AGE
    pod-2 1/1 Running 2 1d
    pod-demon 1/1 Running 0 6m
    [root@master schedule]# kubectl get pods -o wide
    NAME READY STATUS RESTARTS AGE IP NODE
    pod-2 1/1 Running 2 1d 10.244.2.9 minion-2
    pod-demon 1/1 Running 0 7m 10.244.1.12 minion-1
    發現調度到minion-1上運行了。
    8.2.2 通過親和度進行調度
    1.節點親和實例:(硬親和)
    [root@master schedule]# vim pod-affinity-demon

apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:                       # affinity rules
    nodeAffinity:                 # node affinity
      requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone             # node label key
            operator: In          # In means "equals one of the values"
            values:               # the label value must be foo or bar
            - foo
            - bar
          [root@master schedule]# kubectl get pods #因爲節點中沒有zone這個標籤所以無法調度
          NAME READY STATUS RESTARTS AGE
          pod-2 1/1 Running 2 2d
          pod-affinity-demo 0/1 Pending 0 42s
          pod-demon 1/1 Running 0 2h
          [root@master schedule]# kubectl label nodes minion-1 zone=foo #給minion-1打個標籤
          node/minion-1 labeled
          [root@master schedule]# kubectl get pods -o wide #發現調度到minion-1上了
          NAME READY STATUS RESTARTS AGE IP NODE
          pod-2 1/1 Running 2 2d 10.244.2.9 minion-2
          pod-affinity-demo 1/1 Running 0 2m 10.244.1.13 minion-1
          pod-demon 1/1 Running 0 2h 10.244.1.12 minion-1
          2.節點親和實例:(軟親和)
          [root@master schedule]# vim pod-affinity-demon-2
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity-demo-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:  # soft preference
      - weight: 60                # weight, 1-100
        preference:
          matchExpressions:
          - key: zone-1           # node label key
            operator: In
            values:               # preferred label values
            - foo
            - bar
          [root@master schedule]# kubectl apply -f pod-affinity-demon-2
          pod/pod-affinity-demo-2 created
          [root@master schedule]# kubectl get pods -o wide #發現運行在minion-2上
          NAME READY STATUS RESTARTS AGE IP NODE
          pod-2 1/1 Running 2 2d 10.244.2.9 minion-2
          pod-affinity-demo 1/1 Running 0 36m 10.244.1.13 minion-1
          pod-affinity-demo-2 1/1 Running 0 31s 10.244.2.11 minion-2
          pod-demon 1/1 Running 0 2h 10.244.1.12 minion-1
          [root@master schedule]# kubectl get nodes minion-2 --show-labels #查看標籤沒有
          NAME STATUS ROLES AGE VERSION LABELS
          minion-2 Ready <none> 2d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=minion-2
          3.創建pod親和度的實例:
          [root@master schedule]# vim pod-addinity-pod-re.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
# the pod above is an ordinary pod with no affinity settings
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: db
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["sh","-c","sleep 3600"]
  affinity:                       # affinity settings
    podAffinity:                  # pod affinity
      requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement
      - labelSelector:            # match pods by label
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp
        topologyKey: kubernetes.io/hostname   # run on a node whose value for this key matches the matched pod's node
          [root@master schedule]# kubectl apply -f pod-addinity-pod-re.yaml
          pod/pod-first created
          pod/pod-second created
          [root@master schedule]# kubectl get pods -o wide
          NAME READY STATUS RESTARTS AGE IP NODE
          pod-first 1/1 Running 0 1m 10.244.1.18 minion-1
          pod-second 1/1 Running 0 1m 10.244.1.19 minion-1
          5.反親和調度:
          [root@master schedule]# vim pod-anti-addinity-pod-re.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: db
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:              # pod anti-affinity
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp
        topologyKey: kubernetes.io/hostname
          [root@master schedule]# kubectl apply -f pod-anti-addinity-pod-re.yaml
          pod/pod-first created
          pod/pod-second created
          [root@master schedule]# kubectl get pods -o wide
          NAME READY STATUS RESTARTS AGE IP NODE
          pod-first 1/1 Running 0 22s 10.244.1.20 minion-1
          pod-second 1/1 Running 0 22s 10.244.2.18 minion-2
Anti-affinity works the other way round: the scheduler first finds the pods matching the label selector, then refuses to place the new pod on any node whose topologyKey value matches the node those pods are running on.

8.2.4污點調度
Taint的effect定義對Pod排斥效果:
NoSchedule:僅影響調度過程,對現存的pod對象不產生影響
NoExecute: 既影響調度過程,也影響現存的pod,對不容忍污點的pod將被驅逐
PreferNoSchedule:對於不能容忍污點的pod,如果pod實在沒有節點被調度也可以運行在此節點上
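Before and after tainting, it helps to check what taints a node currently carries. A sketch:

kubectl describe node minion-1 | grep -i taint
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints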
1.運行deployment:
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-69b47bc96d-479xv 1/1 Running 0 9s 10.244.2.19 minion-2
myapp-deploy-69b47bc96d-dqg8h 1/1 Running 0 9s 10.244.2.20 minion-2
myapp-deploy-69b47bc96d-w8ksl 1/1 Running 0 9s 10.244.1.24 minion-1
Pods land on both nodes. Now taint the nodes one by one and watch the effect.
2.給minion-1打上污點
[root@master schedule]# kubectl taint node minion-1 node-type=prod:NoSchedule
node/minion-1 tainted
3.運行下deployment看效果
[root@master schedule]# kubectl apply -f pod-deployment.yaml
deployment.apps/myapp-deploy created
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-69b47bc96d-72d6p 1/1 Running 0 12s 10.244.2.21 minion-2
myapp-deploy-69b47bc96d-fmbj7 1/1 Running 0 12s 10.244.2.22 minion-2
myapp-deploy-69b47bc96d-v8h99 1/1 Running 0 12s 10.244.2.23 minion-2
All replicas were scheduled onto minion-2.
4. Taint minion-2 with the NoExecute effect, which evicts any pod that does not tolerate the taint:
[root@master schedule]# kubectl taint node minion-2 node-type=dev:NoExecute
node/minion-2 tainted
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-69b47bc96d-2jz87 0/1 Pending 0 10s <none> <none>
myapp-deploy-69b47bc96d-8w9l4 0/1 Pending 0 10s <none> <none>
myapp-deploy-69b47bc96d-x4ccd 0/1 Pending 0 10s <none> <none>
會發現pod都被驅逐了,因爲節點都有污點所以pod狀態爲Pending了。
5.給pod加上污點容忍度:
[root@master schedule]# vim pod-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
      tolerations:                # taint tolerations
      - key: node-type            # taint key
        operator: Equal           # Equal means the value must match exactly
        value: prod               # taint value
        effect: NoSchedule        # must match the effect the taint was created with
    [root@master schedule]# kubectl apply -f pod-deployment.yaml
    deployment.apps/myapp-deploy configured
    6.可以發現pod容忍了minion-1上的污點
    [root@master schedule]# kubectl get pods -o wide
    NAME READY STATUS RESTARTS AGE IP NODE
    myapp-deploy-6657b7d689-j2bxx 1/1 Running 0 7s 10.244.1.26 minion-1
    myapp-deploy-6657b7d689-v6kl5 1/1 Running 0 5s 10.244.1.27 minion-1
    myapp-deploy-6657b7d689-xpdvr 1/1 Running 0 9s 10.244.1.25 minion-1
    [root@master schedule]# vim pod-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: node-type
        operator: Exists   # Exists only checks that the key is present; with value and effect left empty, any value and any effect of this key is tolerated
        value: ""
        effect: ""
    [root@master schedule]# kubectl apply -f pod-deployment.yaml
    deployment.apps/myapp-deploy configured
    [root@master schedule]# kubectl get pods -o wide
    NAME READY STATUS RESTARTS AGE IP NODE
    myapp-deploy-f9f87c46d-6plfg 1/1 Running 0 2m 10.244.2.24 minion-2
    myapp-deploy-f9f87c46d-97zvs 1/1 Running 0 1m 10.244.1.28 minion-1
    myapp-deploy-f9f87c46d-slzms 1/1 Running 0 1m 10.244.2.25 minion-2
The Pods are scheduled onto both nodes again.
    7.去除污點:
    [root@master ~]# kubectl taint node minion-1 node-type-
    node/minion-1 untainted
    [root@master ~]# kubectl taint node minion-2 node-type-
    node/minion-2 untainted
    8.容器的資源限制和需求:
    [root@master resources]# vim pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/stress-ng
    command: ["/usr/bin/stress-ng","-c 1","--metrics-brief"]   # run a CPU stress test inside the pod
    resources:
      requests:            # how much CPU and memory the pod asks for
        cpu: 200m
        memory: 128Mi
      limits:              # the maximum the pod may use
        cpu: 500m
        memory: 512Mi
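CPU is given in millicores and memory in binary units: the request cpu: 200m asks for 0.2 of a core and memory: 128Mi for 128 × 2^20 bytes, while the limits cap the container at half a core and 512Mi. That is why the stress worker in the top output below hovers around half of one CPU instead of a full core (the instantaneous reading can overshoot a little, since the CFS quota is enforced per scheduling period).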
    [root@master resources]# kubectl apply -f pod.yaml
    pod/pod-demo created
    [root@master resources]# kubectl exec pod-demo -- top
    Mem: 1166264K used, 699020K free, 12356K shrd, 2104K buff, 684008K cached
    CPU: 62% usr 0% sys 0% nic 37% idle 0% io 0% irq 0% sirq
    Load average: 1.45 0.57 0.38 3/351 11
    PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
    6 1 root R 6892 0% 0 63% {stress-ng-cpu} /usr/bin/stress-ng
    1 0 root S 6244 0% 0 0% /usr/bin/stress-ng -c 1 --metrics-
    7 0 root R 1500 0% 0 0% top
    [root@master resources]# kubectl describe pod pod-demo
    Name: pod-demo
    Namespace: default
    Node: minion-1/192.168.200.201
    Start Time: Mon, 22 Oct 2018 10:57:29 +0800
    。。。。。。。。。。。。。。。。。。。。。
    Volumes:
    default-token-pcqd6:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-pcqd6
    Optional: false
    QoS Class: Burstable 質量類型
    Node-Selectors: <none>
    Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
    node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
    Type Reason Age From Message

    Normal Scheduled 28s default-scheduler Successfully assigned default/pod-demo to minion-1
    Normal Pulling 27s kubelet, minion-1 pulling image "ikubernetes/stress-ng"
    Normal Pulled 23s kubelet, minion-1 Successfully pulled image "ikubernetes/stress-ng"
    Normal Created 23s kubelet, minion-1 Created container
    Normal Started 23s kubelet, minion-1 Started container
There are three QoS classes:
Guaranteed: requests and limits are set and equal for every container; these Pods have the highest priority and are kept running first when resources run short
Burstable: at least one container sets a CPU or memory request (without qualifying as Guaranteed)
BestEffort: no container sets any requests or limits; these Pods are the first to be evicted under resource pressure
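A minimal sketch of a pod that would be classified as Guaranteed, i.e. requests equal to limits for every container (the name and image are placeholders, not part of the lab):

apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo
spec:
  containers:
  - name: app
    image: ikubernetes/myapp:v1
    resources:
      requests:
        cpu: 200m
        memory: 128Mi
      limits:
        cpu: 200m        # identical to the request
        memory: 128Mi    # identical to the request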
    K8s資源監控及自動擴容
    9.1部署heapster:
    1.先下載influxdb的yaml文件並做修改:
    [root@master resources]# wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
    [root@master resources]# vim influxdb.yaml
#modify it so that it matches the section shown below
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: influxdb
    2.在節點上先將鏡像拉出來並修改標籤:
    [root@minion-2 ~]# docker pull influxdb:1.5.2
    1.5.2: Pulling from library/influxdb
    cc1a78bfd46b: Pull complete
    6861473222a6: Pull complete
    7e0b9c3b5ae0: Pull complete
    ef1cd6af9147: Pull complete
    fe4486e82c7c: Pull complete
    d5f280025ad5: Pull complete
    7b3aaccfccbb: Pull complete
    73454d972cf2: Pull complete
    Digest: sha256:4c782a464f03c9714b9d5456cc6057f4cd4a81bafc75b9b604bc27090c565036
    Status: Downloaded newer image for influxdb:1.5.2
    [root@minion-2 ~]# docker tag influxdb:1.5.2 k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
    3.鏡像準備好了,執行創建的命令
    [root@master resources]# kubectl apply -f influxdb.yaml
    deployment.apps/monitoring-influxdb created
    [root@master resources]# kubectl get svc -n kube-system
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 5d
    monitoring-influxdb ClusterIP 10.109.26.130 <none> 8086/TCP 31m
4.Download the RBAC yaml file:
    [root@master resources]# wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
5.Create the RBAC objects:
    [root@master resources]# kubectl apply -f heapster-rbac.yaml
6.Download the heapster yaml file and create it:
    [root@master resources]# wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
    [root@master resources]# vim heapster.yaml
#optionally adjust the part shown below (it also works without the change)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: heapster
。。。。。。。。。。。。。。。。。。。。
spec:
  ports:
  - port: 80
    targetPort: 8082
  type: NodePort
7.On the node, pull the image and retag it:
    [root@minion-2 ~]# docker pull fishchen/heapster-amd64:v1.5.4
    v1.5.4: Pulling from fishchen/heapster-amd64
    c0b4198b9e96: Pull complete
    b0c38d9b6f16: Pull complete
    Digest: sha256:dccaabb0c20cf05c29baefa1e9bf0358b083ccc0fab492b9b3b47fb7e4db5472
    Status: Downloaded newer image for fishchen/heapster-amd64:v1.5.4
    [root@minion-2 ~]# docker tag fishchen/heapster-amd64:v1.5.4 k8s.gcr.io/heapster-amd64:v1.5.4
    8.開始創建:
    [root@master resources]# kubectl apply -f heapster.yaml
    serviceaccount/heapster created
    deployment.apps/heapster created
    service/heapster created
    [root@master resources]# kubectl get svc -n kube-system
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    heapster NodePort 10.106.127.123 <none> 80:30600/TCP 21s
    kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 5d
    monitoring-influxdb ClusterIP 10.109.26.130 <none> 8086/TCP 47m
    [root@master resources]# kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    canal-8pxjq 3/3 Running 24 4d
    canal-bfl74 3/3 Running 20 4d
    canal-rtw55 3/3 Running 20 4d
    coredns-78fcdf6894-kqjqt 1/1 Running 12 5d
    coredns-78fcdf6894-w2c7j 1/1 Running 6 4d
    etcd-master 1/1 Running 6 5d
    heapster-84c9bc48c4-tc46l 1/1 Running 0 18s
    kube-apiserver-master 1/1 Running 8 5d
    kube-controller-manager-master 1/1 Running 8 5d
    kube-flannel-ds-amd64-5wwdm 1/1 Running 9 5d
    kube-flannel-ds-amd64-rhhx4 1/1 Running 11 5d
    kube-flannel-ds-amd64-s9jlj 1/1 Running 0 4h
    kube-proxy-j8lkl 1/1 Running 6 5d
    kube-proxy-wf2ss 1/1 Running 6 5d
    kube-proxy-xxdr4 1/1 Running 5 5d
    kube-scheduler-master 1/1 Running 8 5d
    monitoring-influxdb-848b9b66f6-v67n6 1/1 Running 0 53m
    9.訪問測試:

9.2部署grafana圖形化展示
1.將grafana的yaml文件下載到本地:
[root@master resources]# wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
2.修改配置文件:
[root@master resources]# vim grafana.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
。。。。。。。。。。。。。。。。。
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
    3.如上先在節點上將鏡像拖下來然後修改成和配置文件裏一樣的標籤:
    [root@minion-1 ~]# docker pull grafana/grafana:5.0.4
    5.0.4: Pulling from grafana/grafana
    f65523718fc5: Pull complete
    a3ed95caeb02: Pull complete
    4838ae75cd3d: Pull complete
    eec7aa0e332c: Pull complete
    Digest: sha256:9c66c7c01a6bf56023126a0b6f933f4966e8ee795c5f76fa2ad81b3c6dadc1c9
    Status: Downloaded newer image for grafana/grafana:5.0.4
    [root@minion-1 ~]# docker tag grafana/grafana:5.0.4 k8s.gcr.io/heapster-grafana-amd64:v5.0.4
    4.創建grafana:
    [root@master resources]# kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    canal-8pxjq 3/3 Running 24 4d
    canal-bfl74 3/3 Running 20 4d
    canal-rtw55 3/3 Running 20 4d
    coredns-78fcdf6894-kqjqt 1/1 Running 12 5d
    coredns-78fcdf6894-w2c7j 1/1 Running 6 5d
    etcd-master 1/1 Running 6 5d
    heapster-84c9bc48c4-tc46l 1/1 Running 0 36m
    kube-apiserver-master 1/1 Running 8 5d
    kube-controller-manager-master 1/1 Running 8 5d
    kube-flannel-ds-amd64-5wwdm 1/1 Running 9 5d
    kube-flannel-ds-amd64-rhhx4 1/1 Running 11 5d
    kube-flannel-ds-amd64-s9jlj 1/1 Running 0 5h
    kube-proxy-j8lkl 1/1 Running 6 5d
    kube-proxy-wf2ss 1/1 Running 6 5d
    kube-proxy-xxdr4 1/1 Running 5 5d
    kube-scheduler-master 1/1 Running 8 5d
    monitoring-grafana-555545f477-vq2wd 1/1 Running 0 24m
    monitoring-influxdb-848b9b66f6-v67n6 1/1 Running 0 1h
    [root@master resources]# kubectl get svc -n kube-system
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    heapster NodePort 10.106.127.123 <none> 80:30600/TCP 42m
    kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 5d
    monitoring-grafana NodePort 10.110.48.235 <none> 80:31536/TCP 24m
    monitoring-influxdb ClusterIP 10.109.26.130 <none> 8086/TCP 1h
    5.訪問測試:

9.3部署metrics-server
1.Download all the files locally; address: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server

[root@master resources]# git clone https://github.com/kubernetes-incubator/metrics-server.git
[root@master 1.8+]# cd /root/resources/metrics-server/deploy/1.8+
2.將目錄下yaml文件中的鏡像手動在節點上拉下來並打上配置文件裏的標籤
[root@minion-2 ~]# docker pull rancher/metrics-server-amd64:v0.3.1
v0.3.1: Pulling from rancher/metrics-server-amd64
8c5a7da1afbc: Pull complete
e2b7e44cc2bf: Pull complete
Digest: sha256:78938f933822856f443e6827fe5b37d6cc2f74ae888ac8b33d06fdbe5f8c658b
Status: Downloaded newer image for rancher/metrics-server-amd64:v0.3.1
[root@minion-2 ~]# docker tag rancher/metrics-server-amd64:v0.3.1 k8s.gcr.io/metrics-server-amd64:v0.3.1
[root@master 1.8+]# kubectl apply -f .
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader configured
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator configured
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io configured
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master 1.8+]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
canal-8pxjq 3/3 Running 27 5d
canal-bfl74 3/3 Running 23 5d
canal-rtw55 3/3 Running 24 5d
coredns-78fcdf6894-kqjqt 1/1 Running 13 5d
coredns-78fcdf6894-w2c7j 1/1 Running 7 5d
etcd-master 1/1 Running 7 5d
kube-apiserver-master 1/1 Running 13 5d
kube-controller-manager-master 1/1 Running 12 5d
kube-flannel-ds-amd64-5wwdm 1/1 Running 10 5d
kube-flannel-ds-amd64-rhhx4 1/1 Running 13 5d
kube-flannel-ds-amd64-s9jlj 1/1 Running 1 23h
kube-proxy-j8lkl 1/1 Running 7 5d
kube-proxy-wf2ss 1/1 Running 7 5d
kube-proxy-xxdr4 1/1 Running 6 5d
kube-scheduler-master 1/1 Running 11 5d
metrics-server-5d78f796fd-wn79b 1/1 Running 0 23s
[root@master 1.8+]# kubectl top nodes
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
會發現還是用不了,
解決方法:
[root@master 1.8+]# vim metrics-server-deployment.yaml
#add the command/args section shown below
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        imagePullPolicy: IfNotPresent
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
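Both flags are standard metrics-server options: --kubelet-insecure-tls tells metrics-server not to verify the kubelets' serving certificates, and --kubelet-preferred-address-types=InternalIP makes it scrape each kubelet by node IP instead of by hostname. On kubeadm clusters where kubelet certificates and node-name DNS are not set up for this, these two settings are the usual workaround.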
      然後重新創建:
      [root@master 1.8+]# kubectl apply -f metrics-server-deployment.yaml
      serviceaccount/metrics-server unchanged
      deployment.extensions/metrics-server configured
      [root@master 1.8+]# kubectl top node
      NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
      master 194m 19% 1116Mi 64%
      minion-1 78m 7% 432Mi 25%
      minion-2 66m 6% 443Mi 25%
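With metrics-server healthy, the same metrics API also serves per-pod figures; for example:

kubectl top pods -n kube-system
kubectl top pods --all-namespaces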
9.4 Deploying prometheus
1.Clone the k8s-prom repository locally and apply it:
      [root@master ~]# git clone https://github.com/iKubernetes/k8s-prom.git
      [root@master ~]# cd k8s-prom/
      [root@master k8s-prom]# kubectl apply -f namespace.yaml
      namespace/prom created
      [root@master k8s-prom]# cd node_exporter/
      [root@master node_exporter]# ls
      node-exporter-ds.yaml node-exporter-svc.yaml
      [root@master node_exporter]# vim node-exporter-ds.yaml
      [root@master node_exporter]# kubectl apply -f .
      daemonset.apps/prometheus-node-exporter created
      service/prometheus-node-exporter created
      [root@master node_exporter]# kubectl get pods -n prom
      NAME READY STATUS RESTARTS AGE
      prometheus-node-exporter-5llld 1/1 Running 0 1m
      prometheus-node-exporter-lw7xv 1/1 Running 0 1m
      prometheus-node-exporter-qsbrs 1/1 Running 0 1m
      [root@master node_exporter]# cd ../prometheus/
      [root@master prometheus]# ls
      prometheus-cfg.yaml prometheus-deploy.yaml prometheus-rbac.yaml prometheus-svc.yaml
      [root@master prometheus]# kubectl apply -f .
      configmap/prometheus-config created
      deployment.apps/prometheus-server created
      clusterrole.rbac.authorization.k8s.io/prometheus created
      serviceaccount/prometheus created
      clusterrolebinding.rbac.authorization.k8s.io/prometheus created
      service/prometheus created
      [root@master prometheus]# vim prometheus-deploy.yaml
#the memory limit shown below was removed, otherwise the pod cannot start when the machine is short on memory
        ports:
        - containerPort: 9090
          protocol: TCP
        resources:
          limits:
            memory: 2Gi
      [root@master prometheus]# kubectl apply -f prometheus-deploy.yaml
      deployment.apps/prometheus-server created
      [root@master prometheus]# kubectl get pods -n prom
      NAME READY STATUS RESTARTS AGE
      prometheus-node-exporter-5llld 1/1 Running 0 11m
      prometheus-node-exporter-lw7xv 1/1 Running 0 11m
      prometheus-node-exporter-qsbrs 1/1 Running 0 11m
      prometheus-server-7c8554cf-gkrs9 1/1 Running 0 2m

[root@master prometheus]# kubectl get all -n prom
NAME READY STATUS RESTARTS AGE
pod/prometheus-node-exporter-5llld 1/1 Running 0 13m
pod/prometheus-node-exporter-lw7xv 1/1 Running 0 13m
pod/prometheus-node-exporter-qsbrs 1/1 Running 0 13m
pod/prometheus-server-7c8554cf-gkrs9 1/1 Running 0 3m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus NodePort 10.98.60.233 <none> 9090:30090/TCP 10m
service/prometheus-node-exporter ClusterIP None <none> 9100/TCP 13m

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prometheus-node-exporter 3 3 3 3 3 <none> 13m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/prometheus-server 1 1 1 1 3m

NAME DESIRED CURRENT READY AGE
replicaset.apps/prometheus-server-7c8554cf 1 1 1 3m
2.訪問30090測試:

[root@master prometheus]# cd ../
[root@master k8s-prom]# cd kube-state-metrics/
[root@master kube-state-metrics]# ls
kube-state-metrics-deploy.yaml kube-state-metrics-svc.yaml
kube-state-metrics-rbac.yaml
4.在節點上把鏡像拉下來
[root@minion-1 ~]# ./pull-google.sh gcr.io/google_containers/kube-state-metrics-amd64:v1.3.1
[root@master kube-state-metrics]# kubectl apply -f .
[root@master kube-state-metrics]# kubectl get all -n prom
NAME READY STATUS RESTARTS AGE
pod/kube-state-metrics-58dffdf67d-lvhtl 1/1 Running 0 1m
pod/prometheus-node-exporter-5llld 1/1 Running 0 45m
pod/prometheus-node-exporter-lw7xv 1/1 Running 0 45m
pod/prometheus-node-exporter-qsbrs 1/1 Running 0 45m
pod/prometheus-server-7c8554cf-gkrs9 1/1 Running 0 35m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-state-metrics ClusterIP 10.105.251.81 <none> 8080/TCP 8m
service/prometheus NodePort 10.98.60.233 <none> 9090:30090/TCP 42m
service/prometheus-node-exporter ClusterIP None <none> 9100/TCP 45m

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prometheus-node-exporter 3 3 3 3 3 <none> 45m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-state-metrics 1 1 1 1 1m
deployment.apps/prometheus-server 1 1 1 1 35m

NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-state-metrics-58dffdf67d 1 1 1 1m
replicaset.apps/prometheus-server-7c8554cf 1 1 1 35m
[root@master kube-state-metrics]# cd ../k8s-prometheus-adapter/
[root@master k8s-prometheus-adapter]# ls
custom-metrics-apiserver-auth-delegator-cluster-role-binding.yaml
custom-metrics-apiserver-auth-reader-role-binding.yaml
custom-metrics-apiserver-deployment.yaml
custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml
custom-metrics-apiserver-service-account.yaml
custom-metrics-apiserver-service.yaml
custom-metrics-apiservice.yaml
custom-metrics-cluster-role.yaml
custom-metrics-resource-reader-cluster-role.yaml
hpa-custom-metrics-cluster-role-binding.yaml
5.需要做證書認證:
[root@master ~]# cd /etc/kubernetes/pki/
[root@master pki]# (umask 077;openssl genrsa -out serving.key 2048)
Generating RSA private key, 2048 bit long modulus
......................................................................+++
...................+++
e is 65537 (0x10001)
[root@master pki]# openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"
[root@master pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
Signature ok
subject=/CN=serving
Getting CA Private Key
[root@master pki]# ls
apiserver.crt ca.crt front-proxy-client.key
apiserver-etcd-client.crt ca.key sa.key
apiserver-etcd-client.key etcd sa.pub
apiserver.key front-proxy-ca.crt serving.crt
apiserver-kubelet-client.crt front-proxy-ca.key serving.csr
apiserver-kubelet-client.key front-proxy-client.crt serving.key
[root@master pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n prom
secret/cm-adapter-serving-certs created
[root@master pki]# kubectl get secrets -n prom
NAME TYPE DATA AGE
cm-adapter-serving-certs Opaque 2 26s
default-token-svkpd kubernetes.io/service-account-token 3 1h
kube-state-metrics-token-47zdn kubernetes.io/service-account-token 3 25m
prometheus-token-brldq kubernetes.io/service-account-token 3 58m
[root@master k8s-prometheus-adapter]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created
deployment.apps/custom-metrics-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created
serviceaccount/custom-metrics-apiserver created
service/custom-metrics-apiserver created
apiservice.apiregistration.k8s.io/v1beta1.custom.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created
clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created
clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created
[root@master k8s-prometheus-adapter]# mv custom-metrics-apiserver-deployment.yaml{,.bak}
6.下載新版的配置文件:
[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-apiserver-deployment.yaml
7.修改配置文件:
[root@master k8s-prometheus-adapter]# vim custom-metrics-apiserver-deployment.yaml
#change the namespace to the one defined for this setup (prom)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: custom-metrics-apiserver
  name: custom-metrics-apiserver
  namespace: prom
spec:
8.Download the configmap manifest:
[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-config-map.yaml
9.修改下里面的命名空間:
[root@master k8s-prometheus-adapter]# vim custom-metrics-config-map.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: adapter-config
  namespace: prom
[root@master k8s-prometheus-adapter]# kubectl apply -f custom-metrics-config-map.yaml
configmap/adapter-config created
[root@master k8s-prometheus-adapter]# kubectl apply -f custom-metrics-apiserver-deployment.yaml
deployment.apps/custom-metrics-apiserver created
[root@master k8s-prometheus-adapter]# kubectl get pod -n prom
NAME READY STATUS RESTARTS AGE
custom-metrics-apiserver-65f545496-srtdr 1/1 Running 0 16s
kube-state-metrics-58dffdf67d-lvhtl 1/1 Running 0 1h
prometheus-node-exporter-5llld 1/1 Running 0 1h
prometheus-node-exporter-lw7xv 1/1 Running 0 1h
prometheus-node-exporter-qsbrs 1/1 Running 0 1h
prometheus-server-7c8554cf-gkrs9 1/1 Running 0 1h
[root@master k8s-prometheus-adapter]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
crd.projectcalico.org/v1
custom.metrics.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
10.Configure grafana: change the namespace in the manifest to prom
[root@master resources]# vim grafana.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: prom
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        #- name: INFLUXDB_HOST        # comment out these two lines; they are only needed when grafana reads from influxdb
        #  value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: prom
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
      [root@master resources]# kubectl apply -f grafana.yaml
      deployment.apps/monitoring-grafana created
      service/monitoring-grafana created
      [root@master resources]# kubectl get pods -n prom -w
      NAME READY STATUS RESTARTS AGE
      custom-metrics-apiserver-65f545496-srtdr 1/1 Running 0 21m
      kube-state-metrics-58dffdf67d-lvhtl 1/1 Running 0 1h
      monitoring-grafana-ffb4d59bd-sl72s 1/1 Running 0 2m
      prometheus-node-exporter-5llld 1/1 Running 0 2h
      prometheus-node-exporter-lw7xv 1/1 Running 0 2h
      prometheus-node-exporter-qsbrs 1/1 Running 0 2h
      prometheus-server-7c8554cf-gkrs9 1/1 Running 0 1h
      [root@master resources]# kubectl get svc -n prom
      NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
      custom-metrics-apiserver ClusterIP 10.96.75.182 <none> 443/TCP 1h
      kube-state-metrics ClusterIP 10.105.251.81 <none> 8080/TCP 1h
      monitoring-grafana NodePort 10.107.187.39 <none> 80:31504/TCP 3m
      prometheus NodePort 10.98.60.233 <none> 9090:30090/TCP 2h
      prometheus-node-exporter ClusterIP None <none> 9100/TCP 2h
      11.訪問grafana:

9.5 K8s autoscaling
1.主動擴容:
[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80
service/myapp created
deployment.apps/myapp created
2.命令行配置:
[root@master ~]# kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=50
kubectl autoscale: the command that creates an HPA for an existing object
deployment: the type of the target object, here a Deployment
myapp: the name of the target
--min: minimum number of replicas
--max: maximum number of replicas
--cpu-percent: target CPU utilisation as a percentage of the requests; 50 means 50%
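For reference, the HPA controller computes the desired replica count roughly as:

desiredReplicas = ceil( currentReplicas × currentMetricValue / targetMetricValue )

With the numbers seen below (one replica at 102% CPU against a 50% target) this gives ceil(1 × 102 / 50) = 3, which matches the "1 current / 3 desired" line in the describe output.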
[root@master ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
myapp Deployment/myapp <unknown>/50% 1 8 1 3m
3.壓測測試:
[root@master ~]# kubectl patch svc myapp -p '{"spec":{"type":"NodePort"}}'
service/myapp patched
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1h
myapp NodePort 10.97.180.218 <none> 80:30417/TCP 12m
[root@minion-1 ~]# yum install -y httpd-tools

4.Minion-1用ab命令壓測
[root@minion-1 ~]# ab -c 1000 -n 5000000 http://192.168.200.201:30417/index.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.200.201 (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 6202 requests completed
5.Watch how the HPA metrics change:
[root@master ~]# kubectl describe hpa
Name: myapp
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 24 Oct 2018 16:33:48 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 102% (51m) / 50%
Min replicas: 1
Max replicas: 8
Deployment pods: 1 current / 3 desired
Conditions:

6.查看pod擴展出兩個:(這個擴容它是根據自己cpu負載計算的)
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6985749785-pz8vg 1/1 Running 0 1h
myapp-6985749785-rf8vb 1/1 Running 0 24s
myapp-6985749785-zx2fv 1/1 Running 0 24s
7.等峯值過去了會自動縮容:(縮容的延遲時間可以自己設定,默認會有延遲)
[root@master ~]# kubectl describe hpa
Name: myapp
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 24 Oct 2018 16:33:48 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (0) / 50%
Min replicas: 1
Max replicas: 8
Deployment pods: 3 current / 3 desired
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6985749785-pz8vg 1/1 Running 0 1h
myapp-6985749785-rf8vb 1/1 Running 0 4m
myapp-6985749785-zx2fv 1/1 Running 0 4m
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-6985749785-pz8vg 1/1 Running 0 1h
默認hpa使用的是v1控制器
8.創建v2控制器
[root@master ~]# vim hpa-v2-demo.yaml

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 50Mi   # v2 also supports memory metrics
    [root@master ~]# vim hpa-v2-demo.yaml
    [root@master ~]# kubectl get hpa
    NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
    myapp-hpa-v2 Deployment/myapp 3182592/50Mi, 0%/50% 1 10 1 1m
    9.再次壓測:
    [root@minion-1 ~]# ab -c 1000 -n 500000 http://192.168.200.201:30417/index.html
    This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.200.201 (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 781 requests completed
[root@minion-1 ~]# ab -c 1000 -n 500000 http://192.168.200.201:30417/index.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.200.201 (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 10512 requests completed
[root@master ~]# kubectl describe hpa
Name: myapp-hpa-v2
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"myapp-hpa-v2","namespace":"default"},"spec":{...
CreationTimestamp: Wed, 24 Oct 2018 18:20:53 +0800
Reference: Deployment/myapp
Metrics: ( current / target )
resource memory on pods: 3395584 / 50Mi
resource cpu on pods (as a percentage of request): 37% (18m) / 50%
Min replicas: 1
Max replicas: 10
Deployment pods: 2 current / 2 desired
Conditions:
Type Status Reason Message


AbleToScale False BackoffBoth the time since the previous scale is still within both the downscale and upscale forbidden windows
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message


Normal SuccessfulRescale 2m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target
[root@master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-6985749785-pz8vg 1/1 Running 0 2h
myapp-6985749785-qdfcv 1/1 Running 0 2m
helm入門
10.1 部署tiller
下載helm包:https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
Git地址:https://github.com/helm/helm/releases/tag/v2.11.0
1.下載完上傳到服務器然後解壓啓動:
[root@master ~]# tar xf helm-v2.11.0-linux-amd64.tar.gz
[root@master ~]# cd linux-amd64/
[root@master linux-amd64]# ls
helm LICENSE README.md tiller
[root@master linux-amd64]# mv helm /usr/bin/
2.部署tiller
[root@master linux-amd64]# cd ../
[root@master ~]# mkdir helm
[root@master ~]# cd helm
[root@master helm]# vim tiller-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
    [root@master helm]# kubectl apply -f tiller-rbac.yaml
    serviceaccount/tiller created
    clusterrolebinding.rbac.authorization.k8s.io/tiller created
3.Initialise tiller. The image passed with -i is the tiller image; if the version changes, simply point it at a matching image:
    [root@master helm]# helm init --service-account tiller --upgrade -i sapcc/tiller:v2.11.0 --skip-refresh
    [root@master helm]# kubectl get pods -n kube-system
    NAME READY STATUS RESTARTS AGE
    coredns-78fcdf6894-qvcg7 1/1 Running 1 1d
    coredns-78fcdf6894-z6hvx 1/1 Running 1 1d
    etcd-master 1/1 Running 1 1d
    kube-apiserver-master 1/1 Running 1 1d
    kube-controller-manager-master 1/1 Running 1 1d
    kube-flannel-ds-amd64-cfbfp 1/1 Running 2 1d
    kube-flannel-ds-amd64-j2qlk 1/1 Running 2 1d
    kube-flannel-ds-amd64-rwgz5 1/1 Running 2 1d
    kube-proxy-b5jnt 1/1 Running 1 1d
    kube-proxy-shjnd 1/1 Running 1 1d
    kube-proxy-sp64v 1/1 Running 1 1d
    kube-scheduler-master 1/1 Running 1 1d
    metrics-server-64d46554f7-grcv6 1/1 Running 1 1d
    tiller-deploy-d89b4dd7f-jng7d 1/1 Running 0 4m
    [root@master helm]# helm version
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.5.1", GitCommit:"7cf31e8d9a026287041bae077b09165be247ae66", GitTreeState:"clean"}
    官方可用的chart列表:https://hub.kubeapps.com/
    4.查看可用倉庫:
    [root@master helm]# helm repo list
    NAME URL
    stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
    local http://127.0.0.1:8879/charts
    5.搜索可用chart:
    [root@master helm]# helm search jenkins
    NAME CHART VERSION APP VERSION DESCRIPTION
    stable/jenkins 0.13.5 2.73 Open source continuous integration server. It supports mu...
    6.查看chart的詳細信息:
    [root@master helm]# helm inspect stable/redis
    appVersion: 4.0.8
    description: Open source, advanced key-value store. It is often referred to as a data
    structure server since keys can contain strings, hashes, lists, sets and sorted
    sets.
    engine: gotpl
    home: http://redis.io/
    icon: https://bitnami.com/assets/stacks/redis/img/redis-stack-220x234.png
    keywords:
    • redis
      10.2 helm簡單管理及操作
      1.Helm常用命令:
      Release管理
      Install
      delete
      upgrade/rollback
      list
      history
      status
      chart管理:
      create
      fetch
      inspect
      package
verify
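A quick sketch of those release-management commands, using a hypothetical release name demo and the local/myapp chart created in the next steps:

helm install --name demo local/myapp   # install a chart as release "demo"
helm list                              # list releases
helm status demo                       # show the release status
helm upgrade demo local/myapp          # upgrade the release to a new chart/values
helm rollback demo 1                   # roll back to revision 1
helm history demo                      # show the revision history
helm delete --purge demo               # delete the release and its history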

2.創建一個myapp的helm
[root@master helm]# helm create myapp
Creating myapp
3.會自動生成模板文件:
[root@master helm]# tree myapp/
myapp/
├── charts
├── Chart.yaml
├── templates
│ ├── deployment.yaml
│ ├── _helpers.tpl
│ ├── ingress.yaml
│ ├── NOTES.txt
│ └── service.yaml
└── values.yaml
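The files under templates/ are Go templates rendered from values.yaml. Before packaging you would normally edit values.yaml, at minimum the image section; a sketch of the fields that helm create generates, with values swapped in for this demo app (assumed, not taken from the lab files):

image:
  repository: ikubernetes/myapp
  tag: v1
  pullPolicy: IfNotPresent

Leaving an invalid repository/tag combination here is one common cause of the InvalidImageName status seen in step 6 below.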

4.打包myapp這個項目:
[root@master helm]# helm package myapp/
Successfully packaged chart and saved it to: /root/helm/myapp-0.0.1.tgz
[root@master helm]# ls
myapp myapp-0.0.1.tgz tiller-rbac.yaml
5.啓動helm本地倉庫服務:
[root@master helm]# helm serve
Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
[root@master ~]# helm search myapp #搜索有信息說明啓動了或者查看8879端口
NAME CHART VERSION APP VERSION DESCRIPTION
local/myapp 0.0.1 1.0 A Helm chart for Kubernetes
6.安裝myapp:
[root@master helm]# helm install --name myapp-1 local/myapp
NAME: myapp-1
LAST DEPLOYED: Mon Oct 29 15:42:30 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta2/Deployment
NAME AGE
myapp-1 0s

==> v1/Pod(related)

NAME READY STATUS RESTARTS AGE
myapp-1-847d9b9676-6lzzl 0/1 Pending 0 0s

==> v1/Service

NAME AGE
myapp-1 0s

NOTES:

  1. Get the application URL by running these commands:
    export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=myapp,app.kubernetes.io/instance=myapp-1" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:8080 to use your application"
    kubectl port-forward $POD_NAME 8080:80

[root@master helm]# kubectl get pods #可能文件配置有問題
NAME READY STATUS RESTARTS AGE
myapp-1-847d9b9676-6lzzl 0/1 InvalidImageName 0 39s
myapp-6985749785-pz8vg 1/1 Running 3 4d
7.刪除:
[root@master helm]# helm delete --purge myapp-1
release "myapp-1" deleted

8.添加倉庫:stable倉庫裏面的是穩定的
[root@master helm]# helm repo add stable https://kubernetes-charts.storage.googleapis.com
[root@master helm]# helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com/
9.添加倉庫:incubator倉庫裏面的應用不是穩定版本,測試可以使用
[root@master helm]# helm repo add incubator http://kubernetes-charts-incubator.storage.googleapis.com
"incubator" has been added to your repositories
[root@master helm]# helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com/
local http://127.0.0.1:8879/charts
repo_name1 https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts-incubator/
incubator http://kubernetes-charts-incubator.storage.googleapis.com
部署efk日誌收集:
[root@master ~]# helm fetch incubator/elasticsearch
[root@master ~]# ls
a k8s-prom
anaconda-ks.cfg k8s.sh
a.tar.gz kube-apiserver-amd64-1.11.0.tar.gz
coredns-1.1.3.tar.gz kube-controller-manager-amd64-1.11.0.tar.gz
elasticsearch-1.10.2.tgz kube-flannel.yml
[root@master helm]# tar xf elasticsearch-1.10.2.tgz
[root@master helm]# cd elasticsearch
修改文件:
[root@master elasticsearch]# vim values.yaml
Change the replica counts to 1 (this lab does not have enough resources) and disable persistence:

  env:
    # each master-eligible node must know the minimum number of
    # master-eligible nodes that must be visible in order to form a cluster
    MINIMUM_MASTER_NODES: "1"

client:
  name: client
  replicas: 1
master:
  name: master
  exposeHttp: false
  replicas: 1
  heapSize: "512m"
  persistence:
    enabled: false
    accessMode: ReadWriteOnce
    name: data
    size: "4Gi"
data:
  name: data
  exposeHttp: false
  replicas: 1
  heapSize: "1536m"
  persistence:
    enabled: false
安裝es
[root@master elasticsearch]# kubectl create namespace efk
[root@master elasticsearch]# helm install --name els1 --namespace=efk -f values.yaml incubator/elasticsearch
NAME: els1
LAST DEPLOYED: Tue Oct 30 10:48:51 2018
NAMESPACE: efk
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME AGE
els1-elasticsearch-client 1s

==> v1beta1/StatefulSet
els1-elasticsearch-data 1s
els1-elasticsearch-master 1s

==> v1/Pod(related)

NAME READY STATUS RESTARTS AGE
els1-elasticsearch-client-7667b8455f-cmbpd 0/1 Init:0/1 0 1s
els1-elasticsearch-data-0 0/1 Init:0/2 0 1s
els1-elasticsearch-master-0 0/1 Init:0/2 0 0s

==> v1/ConfigMap

NAME AGE
els1-elasticsearch 1s

==> v1/Service
els1-elasticsearch-client 1s
els1-elasticsearch-discovery 1s

NOTES:
The elasticsearch cluster has been installed.


Please note that this chart has been deprecated and moved to stable.
Going forward please use the stable version of this chart.


Elasticsearch can be accessed:

  • Within your cluster, at the following DNS name at port 9200:

    els1-elasticsearch-client.efk.svc

  • From outside the cluster, run these commands in the same shell:

    export POD_NAME=$(kubectl get pods --namespace efk -l "app=elasticsearch,component=client,release=els1" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
    kubectl port-forward --namespace efk $POD_NAME 9200:9200
The same status information can be printed again later with helm status:
    [root@master elasticsearch]# helm status els1
The log-collection setup is left incomplete here because of the limited resources of the lab machines.
Deploying Traefik
    Traefik是一個用Golang開發的輕量級的Http反向代理和負載均衡器。由於可以自動配置和刷新backend節點,目前可以被絕大部分容器平臺支持,例如Kubernetes,Swarm,Rancher等。由於traefik會實時與Kubernetes API交互,所以對於Service的節點變化,traefik的反應會更加迅速。總體來說traefik可以在Kubernetes中完美的運行.
    Traefik 還有很多特性如下:
    • 速度快
    • 不需要安裝其他依賴,使用 GO 語言編譯可執行文件
    • 支持最小化官方 Docker 鏡像
    • 支持多種後臺,如 Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS 等等
    • 支持 REST API
    • 配置文件熱重載,不需要重啓進程
    • 支持自動熔斷功能
    • 支持輪訓、負載均衡
    • 提供簡潔的 UI 界面
    • 支持 Websocket, HTTP/2, GRPC
    • 自動更新 HTTPS 證書
    • 支持高可用集羣模式
Next we use Traefik instead of Nginx + Ingress Controller to implement reverse proxying and service exposure.
    那麼二者有什麼區別呢?簡單點說吧,在 Kubernetes 中使用 nginx 作爲前端負載均衡,通過 Ingress Controller 不斷的跟 Kubernetes API 交互,實時獲取後端 Service、Pod 等的變化,然後動態更新 Nginx 配置,並刷新使配置生效,來達到服務自動發現的目的,而 Traefik 本身設計的就能夠實時跟 Kubernetes API 交互,感知後端 Service、Pod 等的變化,自動更新配置並熱重載。大體上差不多,但是 Traefik 更快速更方便,同時支持更多的特性,使反向代理、負載均衡更直接更高效。
    11.1部署traefik負載均衡
    1.下載下來服務的yaml文件
    [root@master ~]# wget https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml
    [root@master ~]# wget https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml
    [root@master ~]# wget https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml
    2.創建rbac:
    [root@master ~]# kubectl apply -f ./traefik-rbac.yaml
    clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
    clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
    3.創建traefik ds:
    [root@master ~]# vim ./traefik-ds.yaml
#the Service section is missing one line; add: type: NodePort
    [root@master ~]# kubectl apply -f ./traefik-ds.yaml
    serviceaccount/traefik-ingress-controller unchanged
    daemonset.extensions/traefik-ingress-controller created
    service/traefik-ingress-service unchanged
4.Check whether the traefik pods are running properly, and on which node:
    [root@master ~]# kubectl --namespace=kube-system get pods -o wide
    NAME READY STATUS RESTARTS AGE IP NODE
    coredns-78fcdf6894-9fs99 1/1 Running 0 24m 10.244.0.2 master
    coredns-78fcdf6894-vckpp 1/1 Running 0 24m 10.244.0.3 master
    etcd-master 1/1 Running 0 24m 192.168.200.200 master
    kube-apiserver-master 1/1 Running 0 24m 192.168.200.200 master
    kube-controller-manager-master 1/1 Running 0 24m 192.168.200.200 master
    kube-flannel-ds-amd64-2xtqz 1/1 Running 0 21m 192.168.200.200 master
    kube-flannel-ds-amd64-fbmvf 1/1 Running 0 20m 192.168.200.201 minion-1
    kube-flannel-ds-amd64-w76wq 1/1 Running 0 20m 192.168.200.202 minion-2
    kube-proxy-b8r7m 1/1 Running 0 20m 192.168.200.202 minion-2
    kube-proxy-t2528 1/1 Running 0 24m 192.168.200.200 master
    kube-proxy-zkgdl 1/1 Running 0 20m 192.168.200.201 minion-1
    kube-scheduler-master 1/1 Running 0 24m 192.168.200.200 master
    traefik-ingress-controller-5hxnj 1/1 Running 0 3m 10.244.2.3 minion-2
    traefik-ingress-controller-6f6d87769d-vn6n4 1/1 Running 0 4m 10.244.2.2 minion-2
    traefik-ingress-controller-kv6x7 1/1 Running 0 3m 10.244.1.2 minion-1
    5.創建traefik的UI:
    [root@master ~]# wget https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
    [root@master ~]# kubectl apply -f ./ui.yaml
    service/traefik-web-ui created
    ingress.extensions/traefik-web-ui created
    6.測試,創建nginx的pod:
    [root@master ~]# vim nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  selector:
    run: ngx-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ngx-pod
spec:
  replicas: 4
  template:
    metadata:
      labels:
        run: ngx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ngx-ing
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: minion-1        # replace with a host name that resolves in your environment
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort: 80
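Without real DNS you can test by sending the Host header explicitly to a node running traefik (the default traefik-ds.yaml publishes port 80 on the node, and the Service also gets a NodePort; adjust the address and port to your environment):

curl -H "Host: minion-1" http://192.168.200.201/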
      7.訪問測試:

11.2 配置https訪問
1.生成證書:
[root@master ~]# mkdir /opt/k8s/ssl/ -p
[root@master ~]# mkdir /opt/k8s/conf/ -p
#上述操作在node節點上也要做
[root@master ~]# cd /opt/k8s/conf/
[root@master ssl]# openssl genrsa -des3 -out server.key 2048
[root@master ssl]# openssl req -new -key server.key -out server.csr
[root@master ssl]# cp server.key server.key.org
[root@master ssl]# openssl rsa -in server.key.org -out server.key
[root@master ssl]# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
[root@master ssl]# ls
server.crt server.csr server.key server.key.org
2.將創建好的證書傳給node
[root@master ssl]# scp server.crt server.key [email protected]:/opt/k8s/ssl/
3.創建traefik.toml文件:
[root@master ssl]# cd ../conf/
[root@master conf]# vim traefik.toml
defaultEntryPoints = ["http","https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "/opt/k8s/ssl/server.crt"   # path to the certificate
keyFile = "/opt/k8s/ssl/server.key"    # path to the key
4.將配置文件傳給node:
[root@master conf]# scp /opt/k8s/conf/traefik.toml [email protected]:/opt/k8s/conf/
5.創建secret:用於驗證
[root@master conf]#kubectl create secret generic traefik-cert --from-file=/opt/k8s/ssl/server.crt --from-file=/opt/k8s/ssl/server.key -n kube-system
6.創建configmap
[root@master conf]#kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system
7.修改ds的yaml文件:
[root@master ~]# vim traefik-ds.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      volumes:
      - name: ssl
        secret:
          secretName: traefik-cert
      - name: config
        configMap:
          name: traefik-conf
      containers:
      - image: traefik
        name: traefik-ingress-lb
        volumeMounts:
        - mountPath: "/opt/k8s/ssl/"
          name: "ssl"
        - mountPath: "/opt/k8s/conf/"
          name: "config"
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: admin
          containerPort: 8080
        args:
        - --configFile=/opt/k8s/conf/traefik.toml
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 443
    name: https
  - protocol: TCP
    port: 8080
    name: admin
  type: NodePort
The main changes compared with the default manifest:
kind: DaemonSet — the official example uses a Deployment by default
hostNetwork: true — publish the ports directly on the node
volumeMounts — new mounts for the ssl and config volumes
ports — https on 443 added
args — --configFile pointing at traefik.toml added
plus the 443 port in the Service section
        8.先停止之前的DS,再重新創建:
        [root@master ~]# kubectl apply -f ./traefik-ds.yaml
        9.驗證:
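A quick verification sketch, assuming minion-1 is sent as the Host header and the self-signed certificate is accepted with -k; with hostNetwork: true the DaemonSet binds 80 and 443 directly on each node:

curl -I -H "Host: minion-1" http://192.168.200.201/     # should answer with a redirect to https
curl -k -H "Host: minion-1" https://192.168.200.201/    # served by traefik with the self-signed certificate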

至此traefik部署完成。
