Problem background:
The performance team reported that the deploy jobs they run routinely had started failing on a large scale.
Troubleshooting steps:
After getting the report, the first reflex was of course to check the pods:
`kubectl -n performance get pods -o wide`
This showed a large number of pods stuck in the ContainerCreating state. Since no container had actually been created, `kubectl logs` could not be used at this point,
so the only option was `kubectl describe` to see which events had been recorded against the pod:
```
kubectl describe pod pvc-proxy-fc3c60531bb42efdfe4a65096ba5b9a0-9pr9g -n performance-deployments
=====================================
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50m default-scheduler Successfully assigned pvc-proxy-fc3c60531bb42efdfe4a65096ba5b9a0-9pr9g to ip-10-146-20-198.ec2.internal
Normal SuccessfulMountVolume 50m kubelet, ip-10-146-20-198.ec2.internal MountVolume.SetUp succeeded for volume "learn-perf-deployments-share"
Normal SuccessfulMountVolume 50m kubelet, ip-10-146-20-198.ec2.internal MountVolume.SetUp succeeded for volume "learn-service-account-token-7ntls"
Warning FailedCreatePodSandBox 45m (x187 over 50m) kubelet, ip-10-146-20-198.ec2.internal Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "pvc-proxy-fc3c60531bb42efdfe4a65096ba5b9a0-9pr9g": Error response from daemon: grpc: the connection is unavailable
Normal SandboxChanged 36s (x1839 over 50m) kubelet, ip-10-146-20-198.ec2.internal Pod sandbox changed, it will be killed and re-created.
```
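The retry counters in the events ("x187 over 50m", "x1839 over 50m") already hint at a tight crash loop. As a rough sketch, the Warning reasons can be pulled straight out of the describe output; `$events` below stands in for the (abridged) Events table above:

```shell
# Pull the Warning reasons out of an Events table; $events stands in for
# abridged `kubectl describe pod` output (messages shortened for illustration).
events='Normal Scheduled 50m default-scheduler Successfully assigned pod to node
Warning FailedCreatePodSandBox 45m (x187 over 50m) kubelet Failed create pod sandbox
Normal SandboxChanged 36s (x1839 over 50m) kubelet Pod sandbox changed'

# Column 1 is the event Type, column 2 the Reason.
echo "$events" | awk '$1 == "Warning" { print $2 }'
```

The same kind of filter works on `kubectl get events -n <namespace>` output to survey a whole namespace at once instead of describing pods one by one.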
1) The events contain a suggestive warning. A quick google turned up reports that under high load the network plugin can fail and prevent pods from being created (aws/amazon-vpc-cni-k8s#59), the fix being a plugin upgrade. Since we had just upgraded the K8S cluster, it was tempting to assume this plugin had simply been missed during the upgrade. But that was pure speculation: without knowing the overall architecture of the cluster deployment, we did not even know whether this plugin was in use at all. So this could only be suspicion #1.
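Suspicion #1 could at least have been narrowed down by reading the plugin version out of its DaemonSet. A minimal sketch, assuming the standard amazon-vpc-cni-k8s install (DaemonSet `aws-node` in `kube-system`); the image string below is an illustrative stand-in, not real cluster output:

```shell
# With cluster access, the deployed image would come from:
#   kubectl -n kube-system get daemonset aws-node \
#     -o jsonpath='{.spec.template.spec.containers[0].image}'
# Illustrative value standing in for that command's output:
image='xxxx.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni:v1.3.0'

# Strip everything up to the last ':' to get just the version tag,
# which can then be compared against the release that fixes issue #59.
echo "${image##*:}"
```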
2) Because of the time difference, the US team with the necessary permissions could not be reached during our daytime, so all I could do was keep reasoning from the outside.
Because a whole batch of pods failed at the same time, I suspected a scheduling problem. I wanted to log in to the EC2 instance to see what was actually happening (the pod had been scheduled onto a Kubernetes master node), but the company's access controls are extremely strict and I had no ssh permission. So instead I looked at the other failing pods, hoping to find some scheduled onto different nodes. It turned out every stuck pod was on the same node, and pods on other nodes were fine. That gave suspicion #2: something is wrong with that particular node. This theory felt more solid, and experience also says a bad node is the most likely cause.
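The "all on one node" observation can be checked mechanically rather than by eyeballing. A sketch that groups stuck pods by node; `$pods` stands in for `kubectl -n performance get pods -o wide` output (pod names are illustrative):

```shell
# Group pods stuck in ContainerCreating by the node they were scheduled onto.
# If every stuck pod sits on a single node, that node is the prime suspect.
# $pods stands in for `kubectl -n performance get pods -o wide` output.
pods='NAME READY STATUS RESTARTS NODE
pvc-proxy-aaa 0/1 ContainerCreating 0 ip-10-146-20-198.ec2.internal
pvc-proxy-bbb 0/1 ContainerCreating 0 ip-10-146-20-198.ec2.internal
web-ccc 1/1 Running 0 ip-10-146-22-22.ec2.internal'

# Skip the header, key on column 5 (NODE), count stuck pods per node.
result=$(echo "$pods" | awk 'NR > 1 && $3 == "ContainerCreating" { n[$5]++ }
                             END { for (node in n) print node, n[node] }')
echo "$result"
```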
Putting it together, I emailed both suspicions to the US team, asking them above all to examine the node, and also to check the plugin version, stressing that both were still only guesses.
Given the permission restrictions, guesses were all these could be. To really pin down what happens to a specific pod, the kubelet log is the place to look, since it shows exactly what goes wrong at scheduling and sandbox-creation time.
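With node access, the kubelet journal is where the sandbox failures would show up, e.g. via `journalctl -u kubelet --since "1 hour ago"`. A sketch of the kind of filtering involved; `$log` stands in for a few journal lines (messages modeled on the events above, not real output):

```shell
# Count sandbox-related kubelet log lines; $log stands in for a few
# illustrative lines of `journalctl -u kubelet` output.
log='Jan 17 10:01:02 kubelet: Failed create pod sandbox: rpc error: code = Unknown desc = grpc: the connection is unavailable
Jan 17 10:01:03 kubelet: Pod sandbox changed, it will be killed and re-created.
Jan 17 10:01:04 kubelet: Successfully pulled image'

# -c counts matching lines, -i ignores case.
echo "$log" | grep -ci 'sandbox'
```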
3) A US colleague with ssh access logged in to the node and, sure enough, found that containerd, the process Docker depends on to create containers, had exited unexpectedly. They simply removed the node and did not investigate what caused the process to exit. In their place, I would have dug further into the root cause. From their reply:
It appears the issue with this node is not network related, but rather that containerd is not running there for some reason.
Working node:
```
admin@ip-10-146-22-22:~$ ps -fe | grep containerd
root 1560 1546 0 Jan17 ? 00:09:26 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc
root 1756 1560 0 Jan17 ? 00:00:00 docker-containerd-shim 2a0c08abf30c240c18028ee06a13dc250be047f93a67316f29a8f9153e77705c /var/run/docker/libcontainerd/2a0c08abf30c240c18028ee06a13dc250be047f93a67316f29a8f9153e77705c docker-runc
root 1757 1560 0 Jan17 ? 00:00:00 docker-containerd-shim d5254a9e589d5c48d2e0417b8d694c31b2a5a1536dfd9c48c885d414fe68e250 /var/run/docker/libcontainerd/d5254a9e589d5c48d2e0417b8d694c31b2a5a1536dfd9c48c885d414fe68e250 docker-runc
root 1759 1560 0 Jan17 ? 00:00:00 docker-containerd-shim 9c6fca10612add0ac73e756825c57c222a88582fbe7aad4ff9d0f68904575f38 /var/run/docker/libcontainerd/9c6fca10612add0ac73e756825c57c222a88582fbe7aad4ff9d0f68904575f38 docker-runc
...
```
Broken node:
```
admin@ip-10-146-20-69:~$ ps -fe | grep containerd
root 1763 1 0 Jan17 ? 00:00:00 docker-containerd-shim dd1cb327ebd34c75f96488c45fb0db2f08b297abfa3e002fec56d52f8256f70c /var/run/docker/libcontainerd/dd1cb327ebd34c75f96488c45fb0db2f08b297abfa3e002fec56d52f8256f70c docker-runc
root 1764 1 0 Jan17 ? 00:00:00 docker-containerd-shim 986657c08709d9656c530c59eaa95f26b380847ffb63c3558241c69418097d7c /var/run/docker/libcontainerd/986657c08709d9656c530c59eaa95f26b380847ffb63c3558241c69418097d7c docker-runc
root 1765 1 0 Jan17 ? 00:00:00 docker-containerd-shim 8807a8c3afdc0b4edd2a0d220ea0ed01e492c0e55838c731ab451e50b7285b6a /var/run/docker/libcontainerd/8807a8c3afdc0b4edd2a0d220ea0ed01e492c0e55838c731ab451e50b7285b6a docker-runc
```
I'm going to kill that node so that a working one comes up, and we can put together a list of action items to enable you guys to address this in the future.
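Note the telling difference between the two `ps` listings: on the working node every `docker-containerd-shim` has the `docker-containerd` daemon (PID 1560) as its parent, while on the broken node the shims' PPID is 1, meaning they were re-parented to init when containerd died. That re-parenting can be detected mechanically; a sketch, with `$ps_out` standing in for `ps -fe | grep containerd-shim` output on the broken node (container IDs truncated):

```shell
# Detect a dead containerd by looking for shim processes re-parented to init.
# In `ps -fe` output, column 3 is the PPID; a shim with PPID 1 is an orphan.
ps_out='root 1763 1 0 Jan17 ? 00:00:00 docker-containerd-shim dd1c...
root 1764 1 0 Jan17 ? 00:00:00 docker-containerd-shim 9866...'

orphans=$(echo "$ps_out" | awk '$3 == 1' | wc -l | tr -d ' ')
if [ "$orphans" -gt 0 ]; then
  echo "containerd looks dead: $orphans orphaned shim(s)"
fi
```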
Fix:
Remove the problematic node from the k8s cluster.
Follow-up actions:
Add monitoring for critical node processes such as containerd.
Ship the kubelet logs to Elasticsearch.
Lesson learned: a pod stuck in ContainerCreating can have many different causes; it takes careful diagnosis, not guessing.
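The first action item can start as simple as a periodic process check on each node. A minimal sketch; the process name `docker-containerd` matches the `ps` output above, but would need adjusting for setups that run plain `containerd`:

```shell
# Minimal liveness check for a critical node process. pgrep -x matches the
# exact process name; the check prints OK or ALERT accordingly.
check_proc() {
  if pgrep -x "$1" >/dev/null 2>&1; then
    echo "OK: $1 is running"
  else
    echo "ALERT: $1 is not running"
  fi
}

check_proc docker-containerd
```

Run from cron or wire the output into existing node alerting; a `Restart=on-failure` policy on the containerd systemd unit would be the complementary self-healing fix.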