Kubernetes Multus-CNI

Introduction

Multus CNI is a CNI plugin for Kubernetes that supports attaching multiple network interfaces to a pod in a Kubernetes environment. This style of deployment lets users isolate the management network from the business network, giving effective control over the container cluster's network architecture.

The diagram below shows an example of Multus CNI configuring a pod's network interfaces. The pod has three interfaces: eth0, net0, and net1. eth0 connects to the Kubernetes cluster network and reaches the Kubernetes control components (e.g. the Kubernetes API server, kubelet, and so on). net0 and net1 are additional network attachments that connect to other networks through other CNI plugins (e.g. vlan/vxlan/ptp).

(Figure: Multi-Homed pod)

Deploying Multus CNI

Setting up the Kubernetes environment

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.2 --pod-network-cidr=192.168.0.0/16 
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-

Deploying Multus CNI into Kubernetes

Download the Multus source code

[root@develop k8s]# git clone https://github.com/intel/multus-cni.git
Cloning into 'multus-cni'...
remote: Enumerating objects: 26, done.
remote: Counting objects: 100% (26/26), done.
remote: Compressing objects: 100% (21/21), done.
remote: Total 13356 (delta 8), reused 16 (delta 5), pack-reused 13330
Receiving objects: 100% (13356/13356), 22.65 MiB | 238.00 KiB/s, done.
Resolving deltas: 100% (4584/4584), done.
[root@develop k8s]# cd multus-cni/
[root@develop multus-cni]# ls
build  checkpoint  CONTRIBUTING.md  doc  Dockerfile  Dockerfile.openshift  examples  glide.lock  glide.yaml  images  k8sclient  LICENSE  logging  multus  README.md  testing  test.sh  types  vendor
[root@develop multus-cni]# 

Enter the images directory. Deploying the Multus environment is mainly done by two manifests, flannel-daemonset.yml and multus-daemonset.yml. flannel-daemonset.yml deploys the base components needed by the flannel network. Multus is used as a CNI plugin in the Kubernetes environment, so it is deployed much like an ordinary network plugin: multus-daemonset.yml orchestrates the main steps, including setting up RBAC permissions, writing the Multus configuration into the cluster, and running the container that provides the CNI functionality.

[root@develop images]# pwd
/data/k8s/multus-cni/images
[root@develop images]# ls
70-multus.conf  entrypoint.sh  flannel-daemonset.yml  multus-crio-daemonset.yml  multus-daemonset.yml  README.md
[root@develop images]# kubectl get pod --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-78d4cf999f-4dcjq          0/1     Pending   0          21m
kube-system   coredns-78d4cf999f-76p5l          0/1     Pending   0          21m
kube-system   etcd-develop                      1/1     Running   0          20m
kube-system   kube-apiserver-develop            1/1     Running   0          20m
kube-system   kube-controller-manager-develop   1/1     Running   0          20m
kube-system   kube-proxy-f7n6d                  1/1     Running   0          21m
kube-system   kube-scheduler-develop            1/1     Running   0          20m
[root@develop images]# cat {flannel-daemonset.yml,multus-daemonset.yml} | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io created
clusterrole.rbac.authorization.k8s.io/multus created
clusterrolebinding.rbac.authorization.k8s.io/multus created
serviceaccount/multus created
configmap/multus-cni-config created
daemonset.extensions/kube-multus-ds-amd64 created
[root@develop images]# kubectl get pod --all-namespaces                                     
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-78d4cf999f-4dcjq          1/1     Running   0          22m
kube-system   coredns-78d4cf999f-76p5l          1/1     Running   0          22m
kube-system   etcd-develop                      1/1     Running   0          22m
kube-system   kube-apiserver-develop            1/1     Running   0          22m
kube-system   kube-controller-manager-develop   1/1     Running   0          22m
kube-system   kube-flannel-ds-amd64-wlc5m       1/1     Running   0          80s
kube-system   kube-multus-ds-amd64-f69xz        1/1     Running   0          80s
kube-system   kube-proxy-f7n6d                  1/1     Running   0          22m
kube-system   kube-scheduler-develop            1/1     Running   0          22m
[root@develop images]#  

Kubernetes' coredns pods are now Running, which shows the flannel plugin has taken effect.
After deploying Multus, check whether the "/etc/cni/net.d/" directory contains configuration files left by other CNI plugins: kubelet uses the CNI configuration file that sorts first alphabetically by file name. At the moment "70-multus.conf" is the only file there, so Multus works as expected.

[root@develop images]# ls /etc/cni/net.d/
70-multus.conf  multus.d/       
[root@develop images]# cat /etc/cni/net.d/70-multus.conf 
{
  "name": "multus-cni-network",
  "type": "multus",
  "delegates": [
    {
      "type": "flannel",
      "name": "flannel.1",
      "delegate": {
        "isDefaultGateway": true,
        "hairpinMode": true
      }
    }
  ],
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig"
}
[root@develop images]# 
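
The file-name ordering rule above can be illustrated with a minimal, cluster-free sketch. The second file name (80-flannel.conf) is hypothetical, used only to show how an alphabetically later file would lose to 70-multus.conf:

```shell
# kubelet picks the CNI config file that sorts first in /etc/cni/net.d.
# Simulate that directory with a temp dir and two hypothetical file names.
dir=$(mktemp -d)
touch "$dir/70-multus.conf" "$dir/80-flannel.conf"
first=$(ls "$dir" | sort | head -n 1)
echo "$first"   # 70-multus.conf wins because "7" sorts before "8"
rm -rf "$dir"
```

This is why Multus ships its config with a low numeric prefix: it must sort ahead of any delegate plugin's own config file.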

Next, additional CNI plugins can be configured in the Kubernetes environment so that newly created pods get multiple network interfaces.
NetworkAttachmentDefinition is a user-defined network resource object that describes how to attach a pod to the logical or physical network the object references.
Create the macvlan CNI plugin configuration file:

[root@develop k8s]# cat macvlan-conf-1.yaml 
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf-1
spec:
  config: '{
            "cniVersion": "0.3.0",
            "type": "macvlan",
            "master": "ens15f1",
            "mode": "bridge",
            "ipam": {
                "type": "host-local",
                "ranges": [
                    [ {
                         "subnet": "10.10.0.0/16",
                         "rangeStart": "10.10.1.20",
                         "rangeEnd": "10.10.3.50",
                         "gateway": "10.10.0.254"
                    } ]
                ]
            }
        }'
[root@develop k8s]# kubectl apply -f macvlan-conf-1.yaml 
networkattachmentdefinition.k8s.cni.cncf.io/macvlan-conf-1 created
[root@develop k8s]# 
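
As a quick sanity check on the host-local range above, this sketch converts the dotted-quad boundaries to integers and counts the allocatable addresses. Pure shell arithmetic, no cluster needed:

```shell
# Count addresses host-local can hand out between rangeStart and rangeEnd (inclusive).
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 )); }
start=$(ip2int 10.10.1.20)   # rangeStart from macvlan-conf-1
end=$(ip2int 10.10.3.50)     # rangeEnd from macvlan-conf-1
count=$(( end - start + 1 ))
echo "$count"                # 543 addresses available to pods
```

So even though the subnet is a /16, host-local will only allocate from this 543-address window, which keeps pod IPs predictable on the shared macvlan segment.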

The following commands show the state of the created NetworkAttachmentDefinition:

[root@develop k8s]# kubectl get network-attachment-definitions
NAME             AGE
macvlan-conf-1   48s
[root@develop k8s]# kubectl describe network-attachment-definitions macvlan-conf-1
Name:         macvlan-conf-1
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"k8s.cni.cncf.io/v1","kind":"NetworkAttachmentDefinition","metadata":{"annotations":{},"name":"macvlan-conf-1","namespace":"...
API Version:  k8s.cni.cncf.io/v1
Kind:         NetworkAttachmentDefinition
Metadata:
  Creation Timestamp:  2019-02-09T09:27:20Z
  Generation:          1
  Resource Version:    2371
  Self Link:           /apis/k8s.cni.cncf.io/v1/namespaces/default/network-attachment-definitions/macvlan-conf-1
  UID:                 e68a99b7-2c4c-11e9-9baa-0024ecf14b1f
Spec:
  Config:  { "cniVersion": "0.3.0", "type": "macvlan", "master": "ens15f1", "mode": "bridge", "ipam": { "type": "host-local", "ranges": [ [ { "subnet": "10.10.0.0/16", "rangeStart": "10.10.1.20", "rangeEnd": "10.10.3.50", "gateway": "10.10.0.254" } ] ] } }
Events:    <none>
[root@develop k8s]# 

Deploying a multi-interface pod with Multus CNI

Create a simple pod. The pod's YAML is the same as an ordinary pod manifest except for one extra piece of metadata, "annotations". Its value tells Kubernetes to apply the macvlan-conf-1 configuration when the pod is created, which matches the NetworkAttachmentDefinition resource configured earlier; the match is made by resource name.

[root@develop k8s]# cat pod-case-01.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-case-01
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf-1
spec:
  containers:
  - name: pod-case-01
    image: docker.io/centos/tools:latest
    command:
    - /sbin/init
[root@develop k8s]# 
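
The annotation also accepts a comma-separated list, so one pod can request several attachments (net1, net2, ...). A sketch under assumptions: macvlan-conf-2 is hypothetical and would need its own NetworkAttachmentDefinition before this pod could start:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-case-02            # hypothetical pod name
  annotations:
    # each entry attaches one extra interface, in order
    k8s.v1.cni.cncf.io/networks: macvlan-conf-1,macvlan-conf-2
spec:
  containers:
  - name: pod-case-02
    image: docker.io/centos/tools:latest
    command:
    - /sbin/init
```

Each listed name is resolved in the pod's namespace, so the definitions must exist there (or be referenced with an explicit namespace prefix).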

Deploy the pod. Once it is up, inspect its interfaces: eth0 is provided by the flannel plugin and net1 by macvlan.

[root@develop k8s]# kubectl apply -f pod-case-01.yaml 
pod/pod-case-01 created
[root@develop k8s]# kubectl get pod 
NAME          READY   STATUS    RESTARTS   AGE
pod-case-01   1/1     Running   0          21s
[root@develop k8s]# kubectl exec pod-case-01 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if110: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 0a:58:c0:a8:00:0e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.0.14/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2ccf:72ff:fe18:bf16/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
4: net1@if3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default 
    link/ether de:25:34:bb:33:0d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.10.1.25/16 scope global net1
       valid_lft forever preferred_lft forever
[root@develop k8s]# brctl show 
bridge name     bridge id               STP enabled     interfaces
br0             8000.000000000000       no
cni0            8000.0a58c0a80001       no              veth8c34cca9
                                                        vethae2601ad
                                                        vethb19ed751
docker0         8000.024276d139c2       no
virbr0          8000.5254005a2c64       yes             virbr0-nic
[root@develop k8s]# ip a | grep cni0        
31: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    inet 192.168.0.1/24 scope global cni0
108: vethb19ed751@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
109: veth8c34cca9@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
110: vethae2601ad@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default 
[root@develop k8s]# 