In the previous article, "Installing Kubernetes on CentOS 7.2 with yum", we set up a Kubernetes environment on three servers. Now let's put Pods, RCs, and Services into practice. As for the core concepts themselves, "Understand the Core Concepts of Kubernetes in Ten Minutes" explains them very clearly, with animated illustrations, so I won't repeat them here.
1.Pod
A Pod is the most basic unit of operation in k8s. It contains one or more tightly related containers, in the spirit of a pea pod. A containerized environment can treat a Pod as a "logical host" at the application layer. The containers in one Pod are usually tightly coupled, and Pods are created, started, and destroyed on a Node.
Why does k8s wrap containers in another layer, the Pod? One important reason is that communication between Docker containers is constrained by Docker's networking model: in Docker's world, a container must use a link to reach a service (port) provided by another container, and linking a large number of containers becomes very tedious. By grouping several containers into one virtual "host", the Pod concept lets them reach each other simply via localhost.
The application containers in one Pod share the same set of resources:
(1) PID namespace: applications in the Pod can see each other's process IDs.
(2) Network namespace: the containers in the Pod share the same IP address and port space.
(3) IPC namespace: the containers in the Pod can communicate using System V IPC or POSIX message queues.
(4) UTS namespace: the containers in the Pod share one hostname.
(5) Volumes (shared storage): every container in the Pod can access the Volumes defined at the Pod level.
It is not recommended to run multiple instances of the same application inside one k8s Pod. In other words, do not run two or more containers from the same image in a single Pod: it easily causes port conflicts, and all containers of a Pod land on the same Node anyway.
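Because the containers of one Pod share a single network namespace, one container can reach another on 127.0.0.1 with no Docker link involved. The sketch below imitates that with two threads in one process; it is purely illustrative and not Kubernetes code.

```python
# Illustrative sketch: two "containers" in the same Pod share one network
# namespace, so one can reach the other via localhost only.
import socket
import threading

def backend(server_sock):
    # Simulates a container listening inside the Pod's shared namespace.
    conn, _ = server_sock.accept()
    conn.sendall(b"hello from the other container")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))          # any free localhost port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=backend, args=(server,))
t.start()

# Simulates the second container calling the first via localhost --
# no link mechanism and no cross-host routing required.
client = socket.socket()
client.connect(("127.0.0.1", port))
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply.decode())
```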
1.1 Defining a Pod
A Pod can be defined with a YAML- or JSON-format configuration file. For the parameters available in these files, see the official docs at http://kubernetes.io/docs/user-guide/pods/multi-container/
Below is a Pod defined in YAML in the file php-test-pod.yaml. Its kind is Pod, and the spec mainly contains the containers definition, which may list several containers. The file lives on the master.
apiVersion: v1
kind: Pod
metadata:
  name: php-test
  labels:
    name: php-test
spec:
  containers:
  - name: php-test
    image: 192.168.174.131:5000/php-base:1.0
    env:
    - name: ENV_TEST_1
      value: env_test_1
    - name: ENV_TEST_2
      value: env_test_2
    ports:
    - containerPort: 80
      hostPort: 80
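Since a Pod can be defined in YAML or JSON, the same manifest can also be built programmatically. The sketch below (illustrative, not part of the article's setup) constructs it as a Python dict and serializes it to JSON, a format `kubectl create -f` also accepts.

```python
# Build the php-test Pod manifest as a dict and emit JSON.
# Field names mirror the YAML definition above.
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "php-test", "labels": {"name": "php-test"}},
    "spec": {
        "containers": [{
            "name": "php-test",
            "image": "192.168.174.131:5000/php-base:1.0",
            "env": [
                {"name": "ENV_TEST_1", "value": "env_test_1"},
                {"name": "ENV_TEST_2", "value": "env_test_2"},
            ],
            "ports": [{"containerPort": 80, "hostPort": 80}],
        }],
    },
}

manifest = json.dumps(pod, indent=2)
print(manifest)
```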
1.2 Running the Pod file with kubectl create
The create command reported an error:
[root@localhost k8s]# kubectl create -f ./php-pod.yaml
Error from server: error when creating "./php-pod.yaml": pods "php-test" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
[root@localhost k8s]#
Edit KUBE_ADMISSION_CONTROL in /etc/kubernetes/apiserver and remove ServiceAccount:
[root@localhost k8s]# vi /etc/kubernetes/apiserver
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
[root@localhost k8s]# systemctl restart kube-apiserver.service
Recreate the pod:
[root@localhost k8s]# kubectl create -f ./php-pod.yaml
pod "php-test" created
1.3 Checking which node the Pod was created on
[root@localhost k8s]# kubectl get pods
NAME READY STATUS RESTARTS AGE
php-test 1/1 Running 0 3m
[root@localhost k8s]# kubectl get pod php-test -o wide
NAME READY STATUS RESTARTS AGE NODE
php-test 1/1 Running 0 3m 192.168.174.130
[root@localhost k8s]#
It was created on 192.168.174.130; run docker ps -a there:
[root@localhost ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9ca9a8d1bde1 192.168.174.131:5000/php-base:1.0 "/bin/sh -c '/usr/loc" About a minute ago Up About a minute k8s_php-test.ac88419d_php-test_default_173c9f54-a71c-11e6-a280-000c29066541_fab25c8c
bec792435916 kubernetes/pause "/pause" About a minute ago Up About a minute 0.0.0.0:80->80/tcp k8s_POD.b28ffa81_php-test_default_173c9f54-a71c-11e6-a280-000c29066541_e1c8ba7b
[root@localhost ~]#
There are two containers: one from the php-base image, and one from kubernetes/pause, the "network container". One such container is started alongside every Pod; it does nothing but wait, holding the Pod's network in place. The php-base image has an info.php page added; browsing to http://192.168.174.130/info.php shows the container working normally, so the Pod is fine.
View the Pod's details with kubectl describe:
[root@localhost k8s]# kubectl describe pod php-test
Name: php-test
Namespace: default
Node: 192.168.174.130/192.168.174.130
Start Time: Thu, 10 Nov 2016 16:02:47 +0800
Labels: name=php-test
Status: Running
IP: 172.17.42.2
Controllers: <none>
Containers:
php-test:
Container ID: docker://9ca9a8d1bde1e13da2e7ab47fc05331eb6a8c2b6566662576b742f98e2ec9609
Image: 192.168.174.131:5000/php-base:1.0
Image ID: docker://sha256:104c7334b9624b054994856318e54b6d1de94c9747ab9f73cf25ae5c240a4de2
Port: 80/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Running
Started: Thu, 10 Nov 2016 16:03:04 +0800
Ready: True
Restart Count: 0
Environment Variables:
ENV_TEST_1: env_test_1
ENV_TEST_2: env_test_2
Conditions:
Type Status
Ready True
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
14m 14m 1 {default-scheduler } Normal Scheduled Successfully assigned php-test to 192.168.174.130
14m 14m 2 {kubelet 192.168.174.130} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
14m 14m 1 {kubelet 192.168.174.130} spec.containers{php-test} Normal Pulled Container image "192.168.174.131:5000/php-base:1.0" already present on machine
14m 14m 1 {kubelet 192.168.174.130} spec.containers{php-test} Normal Created Created container with docker id 9ca9a8d1bde1
14m 14m 1 {kubelet 192.168.174.130} spec.containers{php-test} Normal Started Started container with docker id 9ca9a8d1bde1
[root@localhost k8s]#
kubectl get pod php-test -o yaml or kubectl get pod php-test -o json prints even more detailed Pod information.
1.4 Testing resilience
(1) Stop the containers on the node with docker stop $(docker ps -a -q); k8s automatically starts replacement containers.
[root@localhost ~]# docker stop $(docker ps -a -q)
9ca9a8d1bde1
bec792435916
[root@localhost ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19aba2fc5300 192.168.174.131:5000/php-base:1.0 "/bin/sh -c '/usr/loc" 2 seconds ago Up 1 seconds k8s_php-test.ac88419d_php-test_default_173c9f54-a71c-11e6-a280-000c29066541_e32e07e1
514903617a80 kubernetes/pause "/pause" 3 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp k8s_POD.b28ffa81_php-test_default_173c9f54-a71c-11e6-a280-000c29066541_cac9bd60
9ca9a8d1bde1 192.168.174.131:5000/php-base:1.0 "/bin/sh -c '/usr/loc" 19 minutes ago Exited (137) 2 seconds ago k8s_php-test.ac88419d_php-test_default_173c9f54-a71c-11e6-a280-000c29066541_fab25c8c
bec792435916 kubernetes/pause "/pause" 19 minutes ago Exited (2) 2 seconds ago k8s_POD.b28ffa81_php-test_default_173c9f54-a71c-11e6-a280-000c29066541_e1c8ba7b
[root@localhost ~]#
(2) Stopping the node server (the whole system goes down)
The node server was shut down (not every node server, only the one hosting the php-test Pod). Querying pods on the master afterwards, php-test can no longer be retrieved:
[root@localhost k8s]# kubectl get pods
NAME READY STATUS RESTARTS AGE
php-test 1/1 Terminating 1 25m
[root@localhost k8s]# kubectl get pod php-test -o wide
Error from server: pods "php-test" not found
[root@localhost k8s]#
After restarting the node server, docker ps -a shows no containers at all, and kubectl get pods on the master shows no pods either. So when a node server dies, its Pods are destroyed and are not automatically recreated on another node. The RC solves this problem; see the RC section below.
1.5 Deleting a Pod
kubectl delete pod NAME, e.g. kubectl delete pod php-test
2.RC(Replication Controller)
The Replication Controller ensures that the specified number of pod replicas is running in the Kubernetes cluster at all times. If there are fewer than the specified number, it starts new containers; if there are more, it kills the surplus, keeping the count constant. The RC creates pods from a predefined pod template. Once created, the pods have no link back to the template: the template can be modified without affecting pods that already exist, and pods created by the RC can also be updated directly. The RC is associated with the pods it created through the label selector, and changing a pod's labels removes it from the RC's control.
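The control loop described above can be sketched in a few lines. This is a toy model of the RC's reconciliation logic, not Kubernetes source; the pod and label names are borrowed from this article's examples.

```python
# Toy reconciliation: compare the desired replica count with the pods
# matching the label selector, then decide what action to take.

def reconcile(pods, selector, desired):
    """pods: list of dicts with a 'labels' dict; selector: required labels."""
    matching = [p for p in pods
                if all(p["labels"].get(k) == v for k, v in selector.items())]
    diff = desired - len(matching)
    if diff > 0:
        return ("create", diff)
    if diff < 0:
        return ("delete", -diff)
    return ("noop", 0)

pods = [
    {"name": "php-controller-8x5wp", "labels": {"name": "php-test-pod"}},
    {"name": "php-controller-ynzl7", "labels": {"name": "php-test-pod"}},
    {"name": "other", "labels": {"name": "something-else"}},
]
print(reconcile(pods, {"name": "php-test-pod"}, 3))  # scale up
print(reconcile(pods, {"name": "php-test-pod"}, 1))  # scale down
```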
2.1 Defining a ReplicationController
On the master, create the file php-controller.yaml. To avoid port conflicts when a single RC schedules several pods onto the same node, the file does not set hostPort. replicas specifies the number of pods.
apiVersion: v1
kind: ReplicationController
metadata:
  name: php-controller
  labels:
    name: php-controller
spec:
  replicas: 2
  selector:
    name: php-test-pod
  template:
    metadata:
      labels:
        name: php-test-pod
    spec:
      containers:
      - name: php-test
        image: 192.168.174.131:5000/php-base:1.0
        env:
        - name: ENV_TEST_1
          value: env_test_1
        - name: ENV_TEST_2
          value: env_test_2
        ports:
        - containerPort: 80
2.2 Running it
[root@localhost k8s]# kubectl create -f php-controller.yaml
replicationcontroller "php-controller" created
2.3 Querying
[root@localhost k8s]# kubectl get rc
NAME DESIRED CURRENT AGE
php-controller 2 2 1m
[root@localhost k8s]# kubectl get rc php-controller
NAME DESIRED CURRENT AGE
php-controller 2 2 3m
[root@localhost k8s]# kubectl describe rc php-controller
Name: php-controller
Namespace: default
Image(s): 192.168.174.131:5000/php-base:1.0
Selector: name=php-test-pod
Labels: name=php-controller
Replicas: 2 current / 2 desired
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3m 3m 1 {replication-controller } Normal SuccessfulCreate Created pod: php-controller-8x5wp
3m 3m 1 {replication-controller } Normal SuccessfulCreate Created pod: php-controller-ynzl7
[root@localhost k8s]#
[root@localhost k8s]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE NODE
php-controller-8x5wp 1/1 Running 0 5m 192.168.174.131
php-controller-ynzl7 1/1 Running 0 5m 192.168.174.130
[root@localhost k8s]#
Pods were created on both node servers, .131 and .130.
2.4 Changing the replica count
The replica count is set by replicas in the file; Kubernetes also lets you change the number of Pods dynamically with the kubectl scale command.
(1) Scale replicas to 3:
[root@localhost k8s]# kubectl scale rc php-controller --replicas=3
replicationcontroller "php-controller" scaled
An extra Pod appeared on the .131 server:
[root@localhost k8s]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE NODE
php-controller-0gkhx 1/1 Running 0 10s 192.168.174.131
php-controller-8x5wp 1/1 Running 0 11m 192.168.174.131
php-controller-ynzl7 1/1 Running 0 11m 192.168.174.130
[root@localhost k8s]#
(2) Scale replicas down to 1:
[root@localhost k8s]# kubectl scale rc php-controller --replicas=1
replicationcontroller "php-controller" scaled
[root@localhost k8s]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE NODE
php-controller-0gkhx 1/1 Terminating 0 2m 192.168.174.131
php-controller-8x5wp 1/1 Running 0 12m 192.168.174.131
php-controller-ynzl7 1/1 Terminating 0 12m 192.168.174.130
Two of the pods are now in the Terminating state.
(3) Removing containers with docker rm
After deleting containers with docker rm, new containers start automatically after a short while, just as in the Pod test above.
2.5 Deleting
Setting replicas=0 deletes all Pods under the RC. kubectl also provides stop and delete commands to remove an RC and all the Pods it controls in one step. Note that deleting only the RC does not affect the Pods it has created: kubectl delete rc rcName removes the RC while its pods keep running.
kubectl delete -f rcfile (e.g. kubectl delete -f php-controller.yaml) removes the RC along with all of its pods.
3.Service
Although every Pod is assigned its own IP address, that address disappears when the Pod is destroyed. This raises a question: if a group of Pods forms a cluster to provide a service, how do clients access them?
The Kubernetes Service is the core concept that solves this problem.
A Service can be seen as the external access point of a group of Pods providing the same service. Which Pods a Service applies to is defined by a Label Selector.
Looking back at the example above, the php-test Pod ran with 2 replicas. These two Pods are interchangeable to a frontend, so the frontend doesn't care which backend replica serves it; and when the backend Pods change (the replica count changes, or a node dies and a Pod is regenerated on another node), the frontend need not track any of it. The Service is the abstraction that provides this decoupling.
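The label-selector matching that binds a Service to its Pods can be sketched as follows. This is illustrative only, not k8s code; the IPs mirror the endpoints listed in this article.

```python
# A Service selects the Pods whose labels match its selector and records
# their IP:port pairs as Endpoints.

def endpoints(pods, selector, target_port):
    return [f'{p["ip"]}:{target_port}' for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

pods = [
    {"ip": "172.17.42.2", "labels": {"name": "php-test-pod"}},
    {"ip": "172.17.65.3", "labels": {"name": "php-test-pod"}},
    {"ip": "172.17.42.9", "labels": {"name": "unrelated"}},
]
print(endpoints(pods, {"name": "php-test-pod"}, 80))
```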
3.1 Defining a Service
In the RC example above, the RC and all its pods were already deleted. There is no required ordering between a Service and an RC; however, if Pods are generated before the Service, some Service-related information will not be written into those Pods.
[root@localhost k8s]# ls
php-controller.yaml php-pod.yaml php-service.yaml
[root@localhost k8s]# kubectl get rc
[root@localhost k8s]# kubectl get pods
[root@localhost k8s]# kubectl create -f php-controller.yaml
replicationcontroller "php-controller" created
[root@localhost k8s]# kubectl get rc
NAME DESIRED CURRENT AGE
php-controller 2 2 11s
[root@localhost k8s]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE NODE
php-controller-cntom 1/1 Running 0 28s 192.168.174.131
php-controller-kn55k 1/1 Running 0 28s 192.168.174.130
[root@localhost k8s]#
php-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: php-service
  labels:
    name: php-service
spec:
  ports:
  - port: 8081
    targetPort: 80
    protocol: TCP
  selector:
    name: php-test-pod
Create it and query:
[root@localhost k8s]# kubectl create -f php-service.yaml
service "php-service" created
[root@localhost k8s]# kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 6d
php-service 10.254.165.216 <none> 8081/TCP 29s
[root@localhost k8s]# kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 6d
php-service 10.254.165.216 <none> 8081/TCP 1m
[root@localhost k8s]# kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.174.128:6443 6d
php-service 172.17.42.2:80,172.17.65.3:80 1m
[root@localhost k8s]#
kubectl get endpoints shows the two container addresses behind php-service, 172.17.42.2:80 and 172.17.65.3:80. They are reachable only on the internal network (from the nodes, which run flannel).
[root@localhost k8s]# kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 6d
php-service 10.254.165.216 <none> 8081/TCP 17m
[root@localhost k8s]#
3.2 Pod IP addresses vs. the Service Cluster IP
php-service's IP, 10.254.165.216, is the Service's Cluster IP: a virtual address in the k8s system, allocated dynamically by the system. Pod IPs, by contrast, are allocated by the Docker daemon out of the docker0 bridge's address range. The Cluster IP is relatively stable: it is assigned when the Service is created and will not change until the Service is destroyed. Pods, however, are short-lived in a k8s cluster; a Pod may be destroyed by the ReplicationController and recreated, and the new Pod receives a new IP address.
3.3 Accessing a Service from outside the cluster
Because the IP a Service receives from the Cluster IP range pool is only reachable internally, other Pods can access it without obstacles; but if this service is a frontend meant to serve clients outside the cluster, it needs a public entry point.
k8s supports two Service type definitions for external access: NodePort and LoadBalancer.
3.3.1 NodePort
If you set spec.type=NodePort in the Service definition and specify spec.ports.nodePort, the system opens that real port on every node in the k8s cluster. Any client that can reach a Node can then reach the internal Service through that port number.
php-nodePort-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: php-nodeport-service
  labels:
    name: php-nodeport-service
spec:
  type: NodePort
  ports:
  - port: 8081
    targetPort: 80
    protocol: TCP
    nodePort: 30001
  selector:
    name: php-test-pod
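The nodePort value 30001 is not arbitrary: kube-apiserver rejects node ports outside its configured service node-port range. The default range of 30000-32767 is an assumption about an unmodified apiserver; a quick sketch of the check:

```python
# Default kube-apiserver --service-node-port-range (assumed unmodified).
NODE_PORT_RANGE = range(30000, 32768)

def valid_node_port(port):
    # A nodePort must fall inside the configured range.
    return port in NODE_PORT_RANGE

print(valid_node_port(30001))  # the port used above
print(valid_node_port(80))     # an ordinary host port would be rejected
```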
[root@localhost k8s]# kubectl delete service php-service
service "php-service" deleted
[root@localhost k8s]# kubectl create -f php-nodePort-service.yaml
The Service "php-nodePort-service" is invalid.
metadata.name: Invalid value: "php-nodePort-service": must be a DNS 952 label (at most 24 characters, matching regex [a-z]([-a-z0-9]*[a-z0-9])?): e.g. "my-name"
[root@localhost k8s]# kubectl create -f php-nodePort-service.yaml
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30001) to serve traffic.
See http://releases.k8s.io/release-1.2/docs/user-guide/services-firewalls.md for more details.
service "php-nodeport-service" created
The first create failed because the metadata.name in the file initially contained uppercase letters; after changing it to the all-lowercase php-nodeport-service, the second create succeeds.
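The failure above is pure name validation: the error message itself gives the rule, a DNS 952 label of at most 24 characters matching [a-z]([-a-z0-9]*[a-z0-9])?. A small sketch reproducing that check:

```python
# Reproduce the DNS 952 label check quoted in the apiserver error message.
import re

DNS952 = re.compile(r"^[a-z]([-a-z0-9]*[a-z0-9])?$")

def valid_service_name(name):
    return len(name) <= 24 and DNS952.match(name) is not None

print(valid_service_name("php-nodePort-service"))  # False: uppercase "P"
print(valid_service_name("php-nodeport-service"))  # True
```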
[root@localhost k8s]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE NODE
php-controller-2bvdq 1/1 Running 0 21m 192.168.174.130
php-controller-42muy 1/1 Running 0 21m 192.168.174.131
Now the service can be reached at http://192.168.174.130:30001/info.php or http://192.168.174.131:30001/info.php.
3.3.2 LoadBalancer
If the cloud provider supports an external load balancer, the Service can be defined with spec.type=LoadBalancer, also specifying the load balancer's IP address. This type requires the Service's nodePort and clusterIP to be specified.
apiVersion: v1
kind: Service
metadata:
  name: php-loadbalancer-service
  labels:
    name: php-loadbalancer-service
spec:
  type: LoadBalancer
  clusterIP: 10.254.165.216
  selector:
    app: php-test-pod
  ports:
  - port: 8081
    targetPort: 80
    protocol: TCP
    nodePort: 30001
status:
  loadBalancer:
    ingress:
    - ip: 192.168.174.127  # note: this is the load balancer's IP address
The 192.168.174.127 set in status.loadBalancer.ingress.ip is the IP address of the load balancer supplied by the cloud provider. From then on, requests to this Service are forwarded through the LoadBalancer to the backend Pods; how load is distributed depends on the cloud provider's LoadBalancer implementation.
3.4 Port definitions
When a Pod exposes multiple ports, give the ports names so that no Endpoint becomes ambiguous through duplicate entries:
selector:
  app: php-test-pod
ports:
- name: p1
  port: 8081
  targetPort: 80
  protocol: TCP
- name: p2
  port: 8082
  targetPort: 22
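With named ports, a consumer can resolve a port by name instead of by number, which is what keeps multi-port Endpoints unambiguous. A small sketch over the fragment above (an illustrative helper, not a k8s API):

```python
# Look up a Service's targetPort by port name.
service_ports = [
    {"name": "p1", "port": 8081, "targetPort": 80, "protocol": "TCP"},
    {"name": "p2", "port": 8082, "targetPort": 22},
]

def target_port(ports, name):
    for p in ports:
        if p["name"] == name:
            return p["targetPort"]
    raise KeyError(name)

print(target_port(service_ports, "p1"))  # 80
print(target_port(service_ports, "p2"))  # 22
```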