Getting Started with the Nginx Ingress Controller

What is an Ingress?

In a Kubernetes cluster, an Ingress is a collection of rules that authorize inbound connections to reach cluster Services, providing layer-7 load balancing. An Ingress can be configured to give Services externally reachable URLs, load balancing, SSL termination, and name-based virtual hosting. Put simply, an Ingress is a set of rules that route URLs to the Service layer in Kubernetes. Since an Ingress is only a set of rules, what actually implements them? An Ingress controller does, and the most widely used one today is the Nginx Ingress Controller.

You can think of it this way: nginx-ingress-controller is an nginx application. What does it do? It proxies backend Services: it translates each Ingress resource into the corresponding nginx configuration, implementing layer-7 routing. Since nginx-ingress-controller acts as a gateway-like application, it must itself be reachable from outside the cluster, so it has to be exposed externally. In Kubernetes this is done by creating a Service of type LoadBalancer, nginx-ingress-lb, which exposes the nginx-ingress-controller application. Accordingly, external access to nginx-ingress-controller goes through the SLB (Alibaba Cloud's load balancer product) associated with the nginx-ingress-lb Service. For the corresponding SLB configuration policies, refer to the earlier article on how Services are implemented.

The simplified request path is as follows:

client --> SLB --> nginx-ingress-lb service --> nginx-ingress-controller pod --> app service --> app pod

To expose a service through an Ingress, the corresponding resources must be created. For the whole chain to work, the nginx-ingress-controller pods must be running normally, the nginx-ingress-lb Service and the SLB listener configuration must be correct, and the backend application the Ingress references must also be configured correctly, including healthy application pods and a correct application Service.

Next we create an Ingress that implements our requirement. Let's start with a simple one to get a feel for what the functionality looks like. The goal of this Ingress: requests for the domain ingress.test.com should reach the backend Tomcat application.

The related configuration:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: tomcat
  name: tomcat
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: tomcat
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - image: 'tomcat:latest'
          imagePullPolicy: Always
          name: tomcat
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
  namespace: default
spec:
  clusterIP: 172.21.6.143
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080 # a common Service misconfiguration: targetPort must be the port the pod actually exposes, nothing else
  selector:
    app: tomcat
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat
  namespace: default
spec:
  rules:
    - host: ingress.test.com
      http:
        paths:
          - backend:
              serviceName: tomcat-svc
              servicePort: 8080
            path: /
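
Note that the Ingress above uses the extensions/v1beta1 API, which matches the cluster version in this article but has since been removed from Kubernetes. As a sketch, on Kubernetes 1.19+ the equivalent resource would be written against networking.k8s.io/v1:

```yaml
# Hypothetical equivalent of the Ingress above for clusters on Kubernetes 1.19+,
# where extensions/v1beta1 was removed in favor of networking.k8s.io/v1.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tomcat
  namespace: default
spec:
  rules:
    - host: ingress.test.com
      http:
        paths:
          - path: /
            pathType: Prefix   # v1 requires an explicit pathType
            backend:
              service:
                name: tomcat-svc
                port:
                  number: 8080
```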

After the Ingress is created, an endpoint IP is generated for it automatically. We should create a DNS A record resolving ingress.test.com to this endpoint IP. Then, when we access ingress.test.com, the request actually reaches our Tomcat application.

Test result:

# curl http://<endpoint IP> -H "host:ingress.test.com" -I
HTTP/1.1 200 
Date: Thu, 26 Sep 2019 04:55:39 GMT
Content-Type: text/html;charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding

Analyzing the nginx-ingress-controller configuration

The YAML is as follows:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: ingress-nginx
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ingress-nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
      labels:
        app: ingress-nginx
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - ingress-nginx
                topologyKey: kubernetes.io/hostname
              weight: 100
      containers:
        - args:
            - /nginx-ingress-controller
            - '--configmap=$(POD_NAMESPACE)/nginx-configuration'
            - '--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services'
            - '--udp-services-configmap=$(POD_NAMESPACE)/udp-services'
            - '--annotations-prefix=nginx.ingress.kubernetes.io'
            - '--publish-service=$(POD_NAMESPACE)/nginx-ingress-lb'
            - '--v=2'
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          image: >-
            registry-vpc.cn-shenzhen.aliyuncs.com/acs/aliyun-ingress-controller:v0.22.0.5-552e0db-aliyun
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: nginx-ingress-controller
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources: {}
          securityContext:
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            procMount: Default
            runAsUser: 33
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /etc/localtime
              name: localtime
              readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
        - command:
            - /bin/sh
            - '-c'
            - |
              sysctl -w net.core.somaxconn=65535
              sysctl -w net.ipv4.ip_local_port_range="1024 65535"
              sysctl -w fs.file-max=1048576
              sysctl -w fs.inotify.max_user_instances=16384
              sysctl -w fs.inotify.max_user_watches=524288
              sysctl -w fs.inotify.max_queued_events=16384
          image: 'registry-vpc.cn-shenzhen.aliyuncs.com/acs/busybox:latest'
          imagePullPolicy: Always
          name: init-sysctl
          resources: {}
          securityContext:
            privileged: true
            procMount: Default
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      nodeSelector:
        beta.kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: nginx-ingress-controller
      serviceAccountName: nginx-ingress-controller
      terminationGracePeriodSeconds: 30
      volumes:
        - hostPath:
            path: /etc/localtime
            type: File
          name: localtime
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress-lb
  name: nginx-ingress-lb
  namespace: kube-system
spec:
  clusterIP: 172.21.11.181
  externalTrafficPolicy: Local
  healthCheckNodePort: 32435
  ports:
    - name: http
      nodePort: 31184
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      nodePort: 31972
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer

The container args deserve special attention:

  • --configmap=$(POD_NAMESPACE)/nginx-configuration specifies which ConfigMap (namespace/name) the nginx-ingress-controller reads its nginx configuration from. The default is the kube-system/nginx-configuration ConfigMap.
  • --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb specifies which LoadBalancer Service's external IP is published as the endpoint address of the Ingresses handled by this nginx-ingress-controller. The default is the kube-system/nginx-ingress-lb Service.
  • --ingress-class=INGRESS_CLASS is an identifier for the nginx-ingress-controller itself, declaring "who I am"; if unset, it defaults to "nginx". What is it for? It lets an Ingress choose which ingress controller should handle it: an Ingress selects a controller via the annotation kubernetes.io/ingress.class: "". If the annotation is absent, the controller with --ingress-class="nginx" handles the Ingress.
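
To make the class selection concrete, here is a sketch of an Ingress that targets a hypothetical second controller started with --ingress-class=internal; the controller name and hostname are assumptions for illustration:

```yaml
# Sketch: an Ingress handled by a hypothetical second controller started with
# --ingress-class=internal. Without this annotation, the default controller
# (--ingress-class=nginx) would pick the Ingress up instead.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat-internal
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "internal"
spec:
  rules:
    - host: ingress-internal.test.com
      http:
        paths:
          - backend:
              serviceName: tomcat-svc
              servicePort: 8080
            path: /
```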

For how to deploy multiple Nginx Ingress Controllers in a single Alibaba Cloud Kubernetes cluster, see:

https://yq.aliyun.com/articles/645856

As mentioned earlier, an Ingress is a set of rules that get pushed down to the ingress controller, so let's look at what the resulting configuration actually is. We can open a shell in the nginx-ingress-controller pod and inspect the nginx configuration. In /etc/nginx/nginx.conf inside the pod, besides some common settings, the Ingress above generated the following nginx.conf configuration:

## start server ingress.test.com
    server {
        server_name ingress.test.com ;
        
        listen 80;
        
        set $proxy_upstream_name "-";
        
        location / {
            
            set $namespace      "default";
            set $ingress_name   "tomcat";
            set $service_name   "tomcat-svc";
            set $service_port   "8080";
            set $location_path  "/";
            
            rewrite_by_lua_block {
                balancer.rewrite()
            }
            
            access_by_lua_block {
                balancer.access()
            }
            
            header_filter_by_lua_block {
                
            }
            body_filter_by_lua_block {
                
            }
            
            log_by_lua_block {
                
                balancer.log()
                
                monitor.call()
                
            }
            
            port_in_redirect off;
            
            set $proxy_upstream_name    "default-tomcat-svc-8080";
            set $proxy_host             $proxy_upstream_name;
            
            client_max_body_size                    100m;
            
            proxy_set_header Host                   $best_http_host;
            
            # Pass the extracted client certificate to the backend
            
            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            
            proxy_set_header                        Connection        $connection_upgrade;
            
            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $the_real_ip;
            
            proxy_set_header X-Forwarded-For        $the_real_ip;
            
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            
            proxy_set_header X-Original-URI         $request_uri;
            
            proxy_set_header X-Scheme               $pass_access_scheme;
            
            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
            
            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";
            
            # Custom headers to proxied server
            
            proxy_connect_timeout                   10s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;
            
            proxy_buffering                         off;
            proxy_buffer_size                       4k;
            proxy_buffers                           4 4k;
            proxy_request_buffering                 on;
            
            proxy_http_version                      1.1;
            
            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;
            
            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_tries               3;
            
            proxy_pass http://upstream_balancer;
            
            proxy_redirect                          off;
            
        }
        
    }
    ## end server ingress.test.com

nginx.conf contains quite a few settings, which we won't go through one by one here; later we'll cover some common feature configurations.
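
Many of those global settings come from the ConfigMap named by --configmap rather than from the Ingress itself. As a sketch (the values here are illustrative assumptions), a few keys in kube-system/nginx-configuration map directly to the nginx directives seen above, e.g. proxy-body-size maps to client_max_body_size:

```yaml
# Sketch: overriding a few global nginx settings through the ConfigMap that
# --configmap points at (kube-system/nginx-configuration by default).
# proxy-body-size maps to the client_max_body_size directive shown above;
# the timeout keys map to proxy_read_timeout / proxy_connect_timeout.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: kube-system
data:
  proxy-body-size: "200m"
  proxy-read-timeout: "120"
  proxy-connect-timeout: "30"
```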

In addition, recent versions of nginx-ingress-controller enable dynamic upstream updates by default. You can inspect them inside the nginx-ingress-controller pod with: curl http://127.0.0.1:18080/configuration/backends. The output looks like this:

[{"name":"default-tomcat-svc-8080","service":{"metadata":{"creationTimestamp":null},"spec":{"ports":[{"protocol":"TCP","port":8080,"targetPort":8080}],"selector":{"app":"tomcat"},"clusterIP":"172.21.6.143","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},"port":8080,"secureCACert":{"secret":"","caFilename":"","pemSha":""},"sslPassthrough":false,"endpoints":[{"address":"172.20.2.141","port":"8080"}],"sessionAffinityConfig":{"name":"","cookieSessionAffinity":{"name":"","hash":""}},"upstreamHashByConfig":{"upstream-hash-by-subset-size":3},"noServer":false,"trafficShapingPolicy":{"weight":0,"header":"","cookie":""}},{"name":"upstream-default-backend","port":0,"secureCACert":{"secret":"","caFilename":"","pemSha":""},"sslPassthrough":false,"endpoints":[{"address":"127.0.0.1","port":"8181"}],"sessionAffinityConfig":{"name":"","cookieSessionAffinity":{"name":"","hash":""}},"upstreamHashByConfig":{},"noServer":false,"trafficShapingPolicy":{"weight":0,"header":"","cookie":""}}]

Here we can see the mapping between the Service referenced by the Ingress and its endpoints, which is how requests reach the actual application pods.

For more on dynamic updates of the routing configuration, see: https://yq.aliyun.com/articles/692732

In follow-up articles we'll walk through some common usage scenarios in detail.
