k8s in Practice 6: Getting Started with RBAC by Working Through Errors

1.
While using a k8s cluster, RBAC permission problems come up all the time.
I recorded a few of the errors; see below:

Error 1:

"message": "pods is forbidden: User \"kubernetes\" cannot list resource \"pods\" in API group \"\" at the cluster scope"

"message": "pservices is forbidden: User \"kubernetes\" cannot list resource \"pservices\" in API group \"\" at the cluster scope",

Error 2:

[root@k8s-master2 ~]# curl https://192.168.32.127:8443/logs  --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/logs\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
curl https://192.168.32.127:8443/metrics --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/metrics\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403

Digging into the fundamentals of RBAC is clearly worth the effort.

2.
Start by analyzing the errors

Error 1:

"message": "pods is forbidden: User \"kubernetes\" cannot list resource \"pods\" in API group \"\" at the cluster scope"

First, look at the command that produced this error:

[root@k8s-master1 ~]# curl https://192.168.32.127:8443/api/v1/pods --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "pods is forbidden: User \"kubernetes\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403

What does this error mean?
Read literally: the user kubernetes has no permission in the core API group (the empty string ""), so it cannot list the pods resource at the cluster scope.
Solving this error is where our RBAC introduction begins.
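A quick way to check a permission like this, without hand-crafting curl requests, is kubectl auth can-i. This is a sketch run from an admin kubeconfig; the --as flag impersonates the user and assumes the caller has impersonation rights:

# Ask the apiserver whether user "kubernetes" may list pods cluster-wide
kubectl auth can-i list pods --as kubernetes
# Expected here: "no", matching the 403 above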

3.
Where did the user kubernetes come from?
It is the user generated when we deployed the apiserver, the identity the apiserver uses to access etcd.
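More precisely, the user name is taken from the CN of the client certificate passed to curl. One way to confirm, assuming the certificate paths used in the commands above:

# Print the subject of the client cert; with x509 client auth, the CN becomes the RBAC user name
openssl x509 -in /etc/kubernetes/cert/kubernetes.pem -noout -subject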
Let's look up the permissions and group bindings for user kubernetes:

[root@k8s-master1 ~]# kubectl describe clusterrolebindings |grep -B 9 "User  kubernetes "
Name:         discover-base-url
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"discover-base-url","namespace":""},"roleR...
Role:
  Kind:  ClusterRole
  Name:  discover_base_url
Subjects:
  Kind  Name        Namespace
  ----  ----        ---------
  User  kubernetes 
--
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"roleRef"...
Role:
  Kind:  ClusterRole
  Name:  kube-apiserver
Subjects:
  Kind  Name        Namespace
  ----  ----        ---------
  User  kubernetes 

Its permissions:

[root@k8s-master1 ~]# kubectl describe clusterroles discover_base_url
Name:         discover_base_url
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"lab...
              rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [/]                []              [get]
[root@k8s-master1 ~]#

##Note: this rule is the one added in the previous post on the apiserver.

[root@k8s-master1 ~]# kubectl describe clusterroles kube-apiserver
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGr...
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/metrics  []                 []              [get create]
  nodes/proxy    []                 []              [get create]
[root@k8s-master1 ~]#

##One role uses Resources,
##the other uses Non-Resource URLs.
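In manifest form the two rule shapes look like this (a minimal sketch; the role name is illustrative, not from this cluster):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-mixed-role
rules:
# Resource rule: grants verbs on API objects (pods, nodes, ...)
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
# Non-resource rule: grants verbs on raw URL paths (only valid in a ClusterRole)
- nonResourceURLs: ["/healthz", "/version"]
  verbs: ["get"]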

4.
This raises question 1:
what is a Non-Resource?
I googled for quite a while and found only scattered fragments; what follows is my own understanding.
Look back at the information the apiserver returned in the previous post:

[root@k8s-master1 ~]# curl https://192.168.32.127:8443/ --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1",
    "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps",
    "/apis/apps/v1",
    "/apis/apps/v1beta1",
    "/apis/apps/v1beta2",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/autoscaling/v2beta1",
    "/apis/autoscaling/v2beta2",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v1beta1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1beta1",
    "/apis/coordination.k8s.io",
    "/apis/coordination.k8s.io/v1beta1",
    "/apis/events.k8s.io",
    "/apis/events.k8s.io/v1beta1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/networking.k8s.io",
    "/apis/networking.k8s.io/v1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1",
    "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/scheduling.k8s.io",
    "/apis/scheduling.k8s.io/v1beta1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1",
    "/healthz",
    "/healthz/autoregister-completion",
    "/healthz/etcd",
    "/healthz/log",
    "/healthz/ping",
    "/healthz/poststarthook/apiservice-openapi-controller",
    "/healthz/poststarthook/apiservice-registration-controller",
    "/healthz/poststarthook/apiservice-status-available-controller",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/ca-registration",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/kube-apiserver-autoregistration",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/healthz/poststarthook/start-kube-aggregator-informers",
    "/healthz/poststarthook/start-kube-apiserver-admission-initializer",
    "/healthz/poststarthook/start-kube-apiserver-informers",
    "/logs",
    "/metrics",
    "/openapi/v2",
    "/swagger-2.0.0.json",
    "/swagger-2.0.0.pb-v1",
    "/swagger-2.0.0.pb-v1.gz",
    "/swagger-ui/",
    "/swagger.json",
    "/swaggerapi",
    "/version"
  ]
}[root@k8s-master1 ~]# 

Is everything from /healthz onward a Non-Resource URL? Let's modify the clusterroles and test:

[root@k8s-master1 roles]# cat clusterroles1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: discover_base_url
rules:
- nonResourceURLs:
#  - /
  - /healthz/*
  verbs:
  - get
[root@k8s-master1 roles]#
[root@k8s-master1 roles]# kubectl apply -f clusterroles1.yaml
clusterrole.rbac.authorization.k8s.io "discover_base_url" configured
[root@k8s-master1 roles]# kubectl apply -f clusterrolebindings1.yaml
clusterrolebinding.rbac.authorization.k8s.io "discover-base-url" configured
[root@k8s-master1 roles]#
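clusterrolebindings1.yaml is not shown above; reconstructed from the describe output in section 3, it would look roughly like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: discover-base-url
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: discover_base_url
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes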

[root@k8s-master1 roles]# kubectl describe clusterroles discover_base_url
Name:         discover_base_url
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"lab...
              rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [/healthz/*]       []              [get]

##The role now grants get only on the Non-Resource URLs under /healthz/*.
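With that rule in place, paths under /healthz/ should now work; a sketch (these health endpoints answer with the literal string ok):

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/healthz/ping --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
ok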

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/logs --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/logs\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}[root@k8s-master1 roles]#
[root@k8s-master1 roles]# curl https://192.168.32.127:8443/metrics --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/metrics\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}[root@k8s-master1 roles]#

As you can see, apart from the healthz calls, which succeed, everything else still fails.
Modify the clusterroles again and retest:
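The updated clusterroles1.yaml (a sketch inferred from the describe output just below) adds /logs, /metrics and /version:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: discover_base_url
rules:
- nonResourceURLs:
  - /healthz/*
  - /logs
  - /metrics
  - /version
  verbs:
  - get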

[root@k8s-master1 roles]# kubectl describe clusterroles discover_base_url
Name:         discover_base_url
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"lab...
              rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [/healthz/*]       []              [get]
             [/logs]            []              [get]
             [/metrics]         []              [get]
             [/version]         []              [get]
[root@k8s-master1 roles]#

Rerunning the commands that failed above, everything now works.
So Non-Resource URLs cover /healthz/*, /logs, /metrics, and so on.
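For example, the /version call now returns something like this (a sketch; fields beyond these are elided, and the version is inferred from this cluster's kubelet version):

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/version --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "major": "1",
  "minor": "12",
  "gitVersion": "v1.12.3",
  ...
}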

5.
This raises question 2:
how is a Resource permission configured?

First, a command that fails:

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/api/v1/nodes/proxy  --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "nodes \"proxy\" is forbidden: User \"kubernetes\" cannot get resource \"nodes\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "name": "proxy",
    "kind": "nodes"
  },
  "code": 403
}[root@k8s-master1 roles]#

Strange. According to the permissions we looked up above:

--
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"roleRef"...
Role:
  Kind:  ClusterRole
  Name:  kube-apiserver
Subjects:
  Kind  Name        Namespace
  ----  ----        ---------
  User  kubernetes  
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGr...
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/metrics  []                 []              [get create]
  nodes/proxy    []                 []              [get create]
[root@k8s-master1 ~]#

By rights, this should have worked. Why the error? A hint sits in the details block: the apiserver parses /api/v1/nodes/proxy as a get of the nodes resource with the name "proxy", not as the nodes/proxy subresource (whose URL form is /api/v1/nodes/<node-name>/proxy). Set that aside for now; let's add permissions and test:
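kubectl can confirm that the subresource grant itself is in place (a sketch, again assuming impersonation rights):

# Subresources use the resource/subresource form
kubectl auth can-i get nodes/proxy --as kubernetes
# Expected: "yes" - so the 403 above is about how the URL is parsed, not a missing rule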

Get the definition of the kube-apiserver clusterrole:

[root@k8s-master1 roles]# kubectl get clusterroles kube-apiserver -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGroups":[""],"resources":["nodes/proxy","nodes/metrics"],"verbs":["get","create"]}]}
  creationTimestamp: 2019-02-28T06:51:53Z
  name: kube-apiserver
  resourceVersion: "35075"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/kube-apiserver
  uid: 5519ea8d-3b25-11e9-95a3-000c29383c89
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - create
[root@k8s-master1 roles]#

Modify it:

[root@k8s-master1 roles]# cat clusterroles2.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-apiserver
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/proxy","nodes/metrics"]
  verbs: ["get", "list","create"]
[root@k8s-master1 roles]#
[root@k8s-master1 roles]# kubectl apply -f clusterroles2.yaml
clusterrole.rbac.authorization.k8s.io "kube-apiserver" configured
[root@k8s-master1 roles]# kubectl get clusterroles kube-apiserver -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGroups":[""],"resources":["nodes","nodes/proxy","nodes/metrics"],"verbs":["get","list","create"]}]}
  creationTimestamp: 2019-02-28T06:51:53Z
  name: kube-apiserver
  resourceVersion: "476880"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/kube-apiserver
  uid: 5519ea8d-3b25-11e9-95a3-000c29383c89
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - list
  - create

Now run a node query again:

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/api/v1/nodes/k8s-master1 --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "k8s-master1",
    "selfLink": "/api/v1/nodes/k8s-master1",
    "uid": "46a353d3-3b07-11e9-95a3-000c29383c89",
    "resourceVersion": "477158",
    "creationTimestamp": "2019-02-28T03:16:44Z",
    "labels": {
      "beta.kubernetes.io/arch": "amd64",
      "beta.kubernetes.io/os": "linux",
      "kubernetes.io/hostname": "k8s-master1"
    },
    "annotations": {
      "node.alpha.kubernetes.io/ttl": "0",
      "volumes.kubernetes.io/controller-managed-attach-detach": "true"
    }
  },
  "spec": {

  },
  "status": {
    "capacity": {
      "cpu": "1",
      "ephemeral-storage": "17394Mi",
      "hugepages-1Gi": "0",
      "hugepages-2Mi": "0",
      "memory": "1867264Ki",
      "pods": "110"
    },
    "allocatable": {
      "cpu": "1",
      "ephemeral-storage": "16415037823",
      "hugepages-1Gi": "0",
      "hugepages-2Mi": "0",
      "memory": "1764864Ki",
      "pods": "110"
    },
    "conditions": [
      {
        "type": "OutOfDisk",
        "status": "False",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-03-13T08:07:21Z",
        "reason": "KubeletHasSufficientDisk",
        "message": "kubelet has sufficient disk space available"
      },
      {
        "type": "MemoryPressure",
        "status": "False",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-03-13T08:07:21Z",
        "reason": "KubeletHasSufficientMemory",
        "message": "kubelet has sufficient memory available"
      },
      {
        "type": "DiskPressure",
        "status": "False",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-03-13T08:07:21Z",
        "reason": "KubeletHasNoDiskPressure",
        "message": "kubelet has no disk pressure"
      },
      {
        "type": "PIDPressure",
        "status": "False",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-02-28T03:16:45Z",
        "reason": "KubeletHasSufficientPID",
        "message": "kubelet has sufficient PID available"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastHeartbeatTime": "2019-03-18T06:36:47Z",
        "lastTransitionTime": "2019-03-13T08:07:31Z",
        "reason": "KubeletReady",
        "message": "kubelet is posting ready status"
      }
    ],
    "addresses": [
      {
        "type": "InternalIP",
        "address": "192.168.32.128"
      },
      {
        "type": "Hostname",
        "address": "k8s-master1"
      }
    ],
    "daemonEndpoints": {
      "kubeletEndpoint": {
        "Port": 10250
      }
    },
    "nodeInfo": {
      "machineID": "d1471d605c074c43bf44cd5581364aea",
      "systemUUID": "84F64D56-0428-2BBD-7F9E-26CE9C1D7023",
      "bootID": "c49804b6-0645-49d3-902f-e66b74fed805",
      "kernelVersion": "3.10.0-514.el7.x86_64",
      "osImage": "CentOS Linux 7 (Core)",
      "containerRuntimeVersion": "docker://17.3.1",
      "kubeletVersion": "v1.12.3",
      "kubeProxyVersion": "v1.12.3",
      "operatingSystem": "linux",
      "architecture": "amd64"
    },
    "images": [
      {
        "names": [
          "registry.access.redhat.com/rhel7/pod-infrastructure@sha256:92d43c37297da3ab187fc2b9e9ebfb243c1110d446c783ae1b989088495db931",
          "registry.access.redhat.com/rhel7/pod-infrastructure:latest"
        ],
        "sizeBytes": 208612920
      },
      {
        "names": [
          "tutum/dnsutils@sha256:d2244ad47219529f1003bd1513f5c99e71655353a3a63624ea9cb19f8393d5fe",
          "tutum/dnsutils:latest"
        ],
        "sizeBytes": 199896828
      },
      {
        "names": [
          "httpd@sha256:5e7992fcdaa214d5e88c4dfde274befe60d5d5b232717862856012bf5ce31086"
        ],
        "sizeBytes": 131692150
      },
      {
        "names": [
          "httpd@sha256:20ead958907f15b638177071afea60faa61d2b6747c216027b8679b5fa58794b",
          "httpd@sha256:e76e7e1d4d853249e9460577d335154877452937c303ba5abde69785e65723f2",
          "httpd:latest"
        ],
        "sizeBytes": 131679770
      }
    ]
  }
}[root@k8s-master1 roles]#

The entire node object is returned.

6.
Following on from that question, first compare the rule before and after the change.
Before:

rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - create

After:

rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - list
  - create

The change is simply adding the nodes resource to resources. Access to other resources such as pods and services can be granted the same way; see the sketch below.
My understanding: only once you have permission on a resource itself can you reach its subresources.
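For instance, a sketch (the names are illustrative, not from this cluster) that would clear Error 1 by letting user kubernetes list pods and services:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-pods-services
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods-services
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-pods-services
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes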
 

7.
An open question

One more error came up:

[root@k8s-master1 roles]# curl https://192.168.32.127:8443/api/v1/nodes/proxy  --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "nodes \"proxy\" not found",
  "reason": "NotFound",
  "details": {
    "name": "proxy",
    "kind": "nodes"
  },
  "code": 404
}[root@k8s-master1 roles]#

Now that user kubernetes can get nodes, the same URL returns 404 instead of 403: the apiserver is looking for a node literally named "proxy". The proxy subresource has to be addressed per node, as /api/v1/nodes/<node-name>/proxy/<path>. I'll test that later.
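A sketch of the corrected URL shape (whether it succeeds also depends on the apiserver being able to reach the kubelet):

# Proxy through the apiserver to the kubelet's /metrics on node k8s-master1
curl https://192.168.32.127:8443/api/v1/nodes/k8s-master1/proxy/metrics --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem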
