Introduction
- An Ingress in Kubernetes is a cluster-wide load-balancing service set up to proxy traffic to different backend Services.
- An Ingress setup has two parts: the Ingress controller and the Ingress resource.
- The Ingress Controller provides the actual proxying according to the Ingress objects you define. The common reverse-proxy projects in the industry, such as Nginx, HAProxy, Envoy, and Traefik, all maintain dedicated Ingress Controllers for Kubernetes.
- Official docs: https://kubernetes.github.io/ingress-nginx/
An ingress is essentially a load balancer. A Service in the cluster exposes Pod ports and balances across its backend Pods, while an ingress sits in front of the Services and connects only to them: it load-balances and reverse-proxies the Services, and applies configuration changes via hot reload.
Deploying ingress
The manifests support many environments; we choose the bare-metal deployment, which by default exposes the controller through a NodePort service.
/ Fetch the deployment manifest
[root@server2 manifest]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml
[root@server2 manifest]# vim deploy.yaml
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0
image: docker.io/jettech/kube-webhook-certgen:v1.2.0 / check which images it needs
As before, we pull the images in advance, retag them, and push them to the harbor registry to speed up the experiment.
然後:
[root@server2 manifest]# vim deploy.yaml
image: nginx-ingress-controller:0.33.0
image: kube-webhook-certgen:v1.2.0
image: kube-webhook-certgen:v1.2.0 // images are now pulled from the harbor registry we configured
[root@server2 manifest]# vim /etc/docker/daemon.json
"registry-mirrors": ["https://reg.caoaoyuan.org"],
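The line above sits inside the top-level JSON object; a complete minimal /etc/docker/daemon.json would look roughly like this (a sketch, assuming only the mirror entry is needed; the registry address is the harbor instance used in this setup):

```json
{
  "registry-mirrors": ["https://reg.caoaoyuan.org"]
}
```

Restart the docker daemon afterwards so the mirror takes effect.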
Now we can apply it:
[root@server2 manifest]# kubectl apply -f deploy.yaml
[root@server2 manifest]# kubectl get namespaces
NAME STATUS AGE
default Active 11d
ingress-nginx Active 15s / a dedicated namespace is created first
kube-node-lease Active 11d
kube-public Active 11d
kube-system Active 11d
[root@server2 manifest]# kubectl get pod -n ingress-nginx / the namespace must be specified to see these pods
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-hm2nv 0/1 Completed 0 99s
ingress-nginx-admission-patch-hmxbt 0/1 Completed 0 99s
ingress-nginx-controller-77b5fc5746-z55wh 1/1 Running 0 109s
[root@server2 manifest]# kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-hm2nv 0/1 Completed 0 2m15s
pod/ingress-nginx-admission-patch-hmxbt 0/1 Completed 0 2m15s
pod/ingress-nginx-controller-77b5fc5746-z55wh 1/1 Running 0 2m25s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.97.148.61 <none> 80:32066/TCP,443:31334/TCP 2m25s
service/ingress-nginx-controller-admission ClusterIP 10.103.172.168 <none> 443/TCP 2m26s
/ still managed through a Service, with ports exposed via NodePort
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 2m25s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-77b5fc5746 1 1 1 2m25s
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 7s 2m25s
job.batch/ingress-nginx-admission-patch 1/1 6s 2m25s
Check the pods:
[root@server2 manifest]# kubectl get pod -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-admission-create-hm2nv 0/1 Completed 0 3m58s 10.244.22.3 server4 <none> <none>
ingress-nginx-admission-patch-hmxbt 0/1 Completed 0 3m58s 10.244.22.4 server4 <none> <none>
ingress-nginx-controller-77b5fc5746-z55wh 1/1 Running 0 4m8s 10.244.22.5 server4 <none> <none>
Check the svc details:
[root@server2 manifest]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.97.148.61 <none> 80:32066/TCP,443:31334/TCP 4m20s
ingress-nginx-controller-admission ClusterIP 10.103.172.168 <none> 443/TCP 4m21s
[root@server2 manifest]# kubectl -n ingress-nginx describe svc ingress-nginx-controller
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=0.33.0
helm.sh/chart=ingress-nginx-2.9.0
Annotations: Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: NodePort
IP: 10.97.148.61
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32066/TCP
Endpoints: 10.244.22.5:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31334/TCP // the HTTPS NodePort 31334 is opened
Endpoints: 10.244.22.5:443 // the backend is the pod's IP address
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Since it is exposed via NodePort, we can access it from outside the cluster:
[root@rhel7host k8s]# curl 172.25.254.3:31334
<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
<hr><center>nginx/1.19.0</center>
</body>
</html>
// the 400 response means a plain HTTP request hit the HTTPS port, so the service itself is working
The user access path is now:
user -> (ingress -> svc -> pod) -> svc -> pod
The user reaches the ingress; the ingress's own svc balances across the controller pods, and the controller pods then proxy to the other Services behind them, which load-balance the application pods.
Let's set that up now.
Creating an Ingress resource:
[root@server2 manifest]# vim ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress1
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
// this means requests for www1.westos.org are balanced across the myservice Service
We should add a name resolution for www1.westos.org, pointing at server4.
[root@rhel7host k8s]# vim /etc/hosts
172.25.254.4 server4 www1.westos.org
Actually both server3 and server4 work, because the ingress Service opens the NodePort on every node:
[root@server3 ~]# netstat -tnlp |grep :32066
tcp 0 0 0.0.0.0:32066 0.0.0.0:* LISTEN 4844/kube-proxy
[root@server4 ~]# netstat -tnlp |grep :32066
tcp 0 0 0.0.0.0:32066 0.0.0.0:* LISTEN 4844/kube-proxy
Access it:
[root@rhel7host k8s]# curl www1.westos.org:32066
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@rhel7host k8s]# curl www1.westos.org:32066/hostname.html
deployment-example-846496db9d-2jznm
[root@rhel7host k8s]# curl www1.westos.org:32066/hostname.html
deployment-example-846496db9d-rn6sx
[root@rhel7host k8s]# curl www1.westos.org:32066/hostname.html
deployment-example-846496db9d-2jznm
[root@rhel7host k8s]# curl www1.westos.org:32066/hostname.html
deployment-example-846496db9d-rn6sx
[root@server2 manifest]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-hm2nv 0/1 Completed 0 90m
ingress-nginx-admission-patch-hmxbt 0/1 Completed 0 90m
ingress-nginx-controller-77b5fc5746-z55wh 1/1 Running 0 90m
// the ingress-nginx-controller pod is really just an nginx instance; let's exec into it and look
[root@server2 manifest]# kubectl -n ingress-nginx exec -it ingress-nginx-controller-77b5fc5746-z55wh -- sh
/etc/nginx $ vi nginx.conf
## start server www1.westos.org
server {
server_name www1.westos.org ;
listen 80 ;
listen 443 ssl http2 ;
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
location / {
set $namespace "default";
set $ingress_name "ingress1";
set $service_name "myservice";
set $service_port "80";
set $location_path "/";
The contents we wrote in ingress.yml have been recorded here. The YAML is stored in etcd; the controller pod connects to the API server, reads the objects out, renders them into this nginx configuration so they can take effect, and then reloads nginx.
Multiple ingress rules
Ingress also supports a list of rules. Let's define two now, but first each needs a backend svc of its own.
Add a service:
[root@server2 manifest]# vim service.yml
kind: Service
apiVersion: v1
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: myapp
  type: ClusterIP
---
kind: Service / the newly added service; note the different selector
apiVersion: v1
metadata:
  name: myservice2
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: myappv2
  type: ClusterIP
[root@server2 manifest]# kubectl describe svc myservice2
Name: myservice2
Namespace: default
Labels: <none>
Annotations: Selector: app=myappv2
Type: ClusterIP
IP: 10.104.106.74
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: <none> // myservice2 has no backends yet
Session Affinity: None
Events: <none>
Create two backends for the new service:
[root@server2 manifest]# kubectl delete -f pod2.yml
deployment.apps "deployment-example" deleted
[root@server2 manifest]# vim deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myappv2 / changed label
  template:
    metadata:
      labels:
        app: myappv2
    spec:
      containers:
      - name: myapp
        image: myapp:v2
[root@server2 manifest]# kubectl apply -f deployment.yml
[root@server2 manifest]# kubectl describe svc myservice
Name: myservice
Namespace: default
Labels: <none>
Annotations: Selector: app=myapp
Type: ClusterIP
IP: 10.109.176.196
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.141.201:80,10.244.22.6:80
Session Affinity: None
Events: <none>
[root@server2 manifest]# kubectl describe svc myservice2
Name: myservice2
Namespace: default
Labels: <none>
Annotations: Selector: app=myappv2
Type: ClusterIP
IP: 10.104.106.74
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.141.202:80,10.244.22.7:80
Session Affinity: None // each service now has two endpoints
Events: <none>
Add the second ingress rule:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress1
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
  - host: www2.westos.org
    http:
      paths:
      - path: / / requests for www2.westos.org go through the myservice2 svc
        backend:
          serviceName: myservice2
          servicePort: 80
[root@server2 manifest]# kubectl describe ingress ingress1
Name: ingress1
Namespace: default
Address: 172.25.254.4
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
www1.westos.org
/ myservice:80 (10.244.141.201:80,10.244.22.6:80)
www2.westos.org
/ myservice2:80 (10.244.141.202:80,10.244.22.7:80)
Annotations: kubernetes.io/ingress.class: nginx
Add the resolution for the new host:
[root@rhel7host k8s]# vim /etc/hosts
172.25.254.4 server4 www1.westos.org www2.westos.org
Each service now has two backends; let's test access:
[root@rhel7host k8s]# curl www1.westos.org:32066
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@rhel7host k8s]# curl www1.westos.org:32066/hostname.html
deployment-v1-7449b5b68f-qw4kr
[root@rhel7host k8s]# curl www1.westos.org:32066/hostname.html
deployment-v1-7449b5b68f-8pz5c
/ requests to www1 round-robin across its two backends
[root@rhel7host k8s]# curl www2.westos.org:32066
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@rhel7host k8s]# curl www2.westos.org:32066/hostname.html
deployment-v2-755458b96d-qsgt8
[root@rhel7host k8s]# curl www2.westos.org:32066/hostname.html
deployment-v2-755458b96d-5ccqt
/ requests to www2 round-robin across the other two backends
The controller pod has again picked up the objects from etcd and regenerated its configuration:
[root@server2 manifest]# kubectl -n ingress-nginx exec -it ingress-nginx-controller-77b5fc5746-z55wh -- sh
/etc/nginx $ vi nginx.conf
We never edited the pod's configuration file ourselves, yet the changes took effect automatically: the controller reloads itself.
Ingress via DaemonSet
[root@server2 manifest]# kubectl -n ingress-nginx get all
...
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.97.148.61 <none> 80:32066/TCP,443:31334/TCP 129m
service/ingress-nginx-controller-admission ClusterIP 10.103.172.168 <none> 443/TCP 129m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 129m
Right now the ingress controller is deployed as a Deployment exposed with NodePort.
We can also deploy it as a DaemonSet.
- Use a DaemonSet combined with a nodeSelector to deploy the ingress-controller onto specific nodes, then use HostNetwork to connect the pod directly to the host node's network, so the service is reachable on the host's ports 80/443 directly.
- The advantage is the simplest possible request path, with better performance than NodePort mode.
- The drawback is that, because it uses the host node's network and ports directly, only one ingress-controller pod can run per node.
- This mode suits high-concurrency production environments well.
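The edits described above can be sketched as a manifest fragment (an illustration of the changed fields only, assuming the label set from the stock ingress-nginx manifest; the real deploy.yaml keeps all its other fields):

```yaml
apiVersion: apps/v1
kind: DaemonSet                          # was: kind: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true                  # share the node's network stack
      nodeSelector:
        kubernetes.io/hostname: server4  # run only on server4
      containers:
      - name: controller
        image: nginx-ingress-controller:0.33.0
        ports:
        - containerPort: 80
        - containerPort: 443
```

With hostNetwork enabled, the controller binds 80/443 on the node itself, which is why no NodePort Service is needed.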
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: DaemonSet / changed to a DaemonSet controller
...
/ pinned to server4 with a nodeSelector
With this setup we no longer need the ingress svc to expose ports, because clients access the physical node directly.
[root@server2 manifest]# kubectl -n ingress-nginx delete service/ingress-nginx-controller
service "ingress-nginx-controller" deleted
[root@server2 manifest]# kubectl -n ingress-nginx delete svc ingress-nginx-controller-admission
service "ingress-nginx-controller-admission" deleted
[root@server2 manifest]# kubectl -n ingress-nginx delete deployments.apps ingress-nginx-controller
deployment.apps "ingress-nginx-controller" deleted
[root@server2 manifest]# kubectl apply -f deploy.yaml
/ delete the Deployment-based controller, then re-apply
[root@server2 manifest]# kubectl get all -n ingress-nginx
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/ingress-nginx-controller 1 1 1 1 1 kubernetes.io/hostname=server4 49s
/ the DaemonSet controller has been created
Check on server4:
[root@server4 ~]# netstat -atnlp|grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 28955/nginx: master
tcp6 0 0 :::80 :::* LISTEN 28955/nginx: master
[root@server4 ~]# netstat -atnlp|grep :443
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 28955/nginx: master
tcp 0 0 10.96.0.1:39272 10.96.0.1:443 ESTABLISHED 4864/calico-node
tcp 0 0 10.96.0.1:39270 10.96.0.1:443 ESTABLISHED 4862/calico-node
tcp 0 0 10.96.0.1:34348 10.96.0.1:443 ESTABLISHED 28917/nginx-ingress
tcp6 0 0 :::443 :::* LISTEN 28955/nginx: master
/ ports 80 and 443 are now listening
Check server3:
[root@server3 ~]# netstat -tnlp |grep :80
[root@server3 ~]# netstat -tnlp |grep :443
[root@server3 ~]#
No ports are open here, because we pinned the controller to run only on server4.
[root@rhel7host k8s]# curl www1.westos.org/hostname.html
deployment-v1-7449b5b68f-qw4kr
[root@rhel7host k8s]# curl www1.westos.org/hostname.html
deployment-v1-7449b5b68f-8pz5c
[root@rhel7host k8s]# curl www2.westos.org/hostname.html
deployment-v2-755458b96d-5ccqt
[root@rhel7host k8s]# curl www2.westos.org/hostname.html
deployment-v2-755458b96d-qsgt8
Both hostnames resolve to server4, so access succeeds directly on the standard ports, with no NodePort needed.
Sticky sessions with cookies
[root@server2 manifest]# vim ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress1
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress2
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: cookie // split into two objects so session affinity applies only to ingress2
spec:
  rules:
  - host: www2.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice2
          servicePort: 80
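ingress-nginx also lets the affinity cookie itself be tuned through further annotations. A sketch of the optional extras (annotation names as documented upstream; the cookie name "route" and lifetime are illustrative values, not from this setup):

```yaml
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: cookie
    # optional: name of the affinity cookie (default is INGRESSCOOKIE)
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    # optional: cookie lifetime in seconds
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
```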
First switch the controller back from DaemonSet mode to Deployment mode.
[root@server2 manifest]# kubectl apply -f ingress.yml
ingress.networking.k8s.io/ingress1 created
ingress.networking.k8s.io/ingress2 created
[root@server2 manifest]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress1 <none> www1.westos.org 172.25.254.4 80 94s
ingress2 <none> www2.westos.org 172.25.254.4 80 94s
[root@server2 manifest]# kubectl describe ingress ingress1
Name: ingress1
Rules:
Host Path Backends
---- ---- --------
www1.westos.org
/ myservice:80 (10.244.141.201:80,10.244.22.6:80)
Annotations: kubernetes.io/ingress.class: nginx // no session affinity on ingress1
[root@server2 manifest]# kubectl describe ingress ingress2
Name: ingress2
Rules:
Host Path Backends
---- ---- --------
www2.westos.org
/ myservice2:80 (10.244.141.202:80,10.244.22.7:80)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/affinity: cookie // session affinity is enabled on ingress2
When accessing www2.westos.org, the page stays the same on every request.
But when we access www1.westos.org, requests are still load-balanced.
TLS configuration for ingress
Create a certificate:
[root@server2 ~]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
[root@server2 ~]# ls
tls.crt tls.key
Create the TLS secret:
[root@server2 ~]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret/tls-secret created
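Equivalently, the same secret could be declared as a manifest (a sketch; the placeholder payloads stand in for the real base64-encoded certificate and key):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded tls.crt>   # e.g. output of: base64 -w0 tls.crt
  tls.key: <base64-encoded tls.key>   # e.g. output of: base64 -w0 tls.key
```

`kubectl create secret tls` does exactly this encoding and creation in one step.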
[root@server2 ~]# kubectl get secrets
NAME TYPE DATA AGE
default-token-j7pl7 kubernetes.io/service-account-token 3 11d
tls-secret kubernetes.io/tls 2 2m7s / holds the key and certificate
Put it into effect:
[root@server2 ~]# vim tls.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-tls
spec:
  tls:
  - hosts:
    - www1.westos.org / enable TLS for this host
    secretName: tls-secret
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 80
[root@server2 ~]# kubectl apply -f tls.yml
ingress.networking.k8s.io/nginx-tls created
[root@server2 ~]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress1 <none> www1.westos.org 172.25.254.4 80 18m
ingress2 <none> www2.westos.org 172.25.254.4 80 18m
nginx-tls <none> www1.westos.org 80, 443 4s
Access:
Accessing www1 now automatically redirects to port 443 and uses the self-signed certificate.
Accessing www2 shows no such behavior.
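If the automatic HTTP-to-HTTPS jump for www1 is not wanted, ingress-nginx has an annotation to turn it off (a sketch; by default ssl-redirect is enabled whenever a tls section covers the host):

```yaml
metadata:
  annotations:
    # keep serving plain HTTP on port 80 instead of redirecting to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
```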
Exec into the pod to check:
[root@server2 ~]# kubectl -n ingress-nginx exec -it ingress-nginx-controller-jskzg -- sh
/etc/nginx $ vi nginx.conf
## start server www1.westos.org
server {
server_name www1.westos.org ;
listen 80 ;
listen [::]:80 ;
listen 443 ssl http2 ; // now listening on port 443