Kubernetes: basic command usage
- Run a pod
- Delete a pod
- Force-delete a pod
- Change the number of pod replicas
- Load-balance through a Service (svc)
- Edit a Service to expose it externally via NodePort
- What resources exist in Kubernetes?
- Look up valid values for the apiVersion key in a YAML file
- Look up how to write a YAML file's fields
- Write a Pod-type YAML file, with a basic troubleshooting walkthrough
- Enter a running pod
- View the labels of a running pod
- Modify the labels of a running pod (how to detach a pod from its controller)
- Scale out a running deployment
- Change the image used by a running deployment
- Roll back a running deployment to the previous version
- Check rollout status
- View rollout history
- Roll back a running deployment to a specific revision
- Pause updates to a deployment
- Create a DaemonSet and verify it
- Resolve a Service name through the CoreDNS IP address
- Create a certificate and store it as a TLS secret
- Hot-update a ConfigMap
- View the service account under Secret
- Base64-encode a value for an Opaque Secret
- Create a docker-registry authentication Secret with kubectl
- View node labels
- Add and remove taints
- View cluster information
Run a pod
# The target image registry is configured in the Docker daemon
[root@k8s-master01 flannel]# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.atguigu.com"]
}
# Run a pod named nginx-deployment from image hub.atguigu.com/library/myapp:v1,
# exposing port 80, with 1 replica.
# (On kubectl v1.18+, `kubectl run` creates a bare Pod and no longer accepts
# --replicas; use `kubectl create deployment` there instead.)
kubectl run nginx-deployment --image=hub.atguigu.com/library/myapp:v1 --port=80 --replicas=1
# Verify
[root@k8s-master01 flannel]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-85756b779-psfjz 0/1 ContainerCreating 0 10s
[root@k8s-master01 flannel]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-85756b779-psfjz 0/1 ContainerCreating 0 18s <none> k8s-node03 <none> <none>
[root@k8s-master01 flannel]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-85756b779-psfjz 1/1 Running 0 2m35s 10.244.1.2 k8s-node03 <none> <none>
[root@k8s-master01 flannel]# curl 10.244.1.2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master01 flannel]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-85756b779 1 1 1 3m47s
[root@k8s-master01 flannel]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/1 1 1 3m54s
[root@k8s-master01 flannel]# curl 10.244.1.2/hostname.html
nginx-deployment-85756b779-psfjz
Delete a pod
# The pod below is managed by a Deployment, so a replacement is created as soon as it is deleted.
[root@k8s-master01 flannel]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-85756b779-psfjz 1/1 Running 0 28m
[root@k8s-master01 flannel]# kubectl delete pod nginx-deployment-85756b779-psfjz
pod "nginx-deployment-85756b779-psfjz" deleted
[root@k8s-master01 flannel]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-85756b779-6hntp 1/1 Running 0 9s
Force-delete a pod
# When a node goes offline, deleting its pods gets stuck in Terminating; force-remove them with the command below
[root@k8s-master01 core]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-pod 1/1 Terminating 0 10h 10.244.1.10 k8s-node03 <none> <none>
[root@k8s-master01 templates]# kubectl delete pod myapp-pod
pod "myapp-pod" deleted
^C
[root@k8s-master01 templates]# kubectl delete pods myapp-pod --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "myapp-pod" force deleted
Change the number of pod replicas
[root@k8s-master01 flannel]# kubectl scale --replicas=3 deployment/nginx-deployment
[root@k8s-master01 flannel]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-85756b779-6hntp 1/1 Running 0 12h 10.244.1.3 k8s-node03 <none> <none>
nginx-deployment-85756b779-rc72j 1/1 Running 0 31s 10.244.1.4 k8s-node03 <none> <none>
nginx-deployment-85756b779-vhtss 1/1 Running 0 31s 10.244.1.5 k8s-node03 <none> <none>
Load-balance through a Service (svc)
[root@k8s-master01 flannel]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18h
[root@k8s-master01 flannel]# kubectl expose deployment nginx-deployment --port=9000 --target-port=80
service/nginx-deployment exposed
[root@k8s-master01 flannel]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18h
nginx-deployment ClusterIP 10.99.161.167 <none> 9000/TCP 5s
[root@k8s-master01 flannel]# curl 10.99.161.167:9000/hostname.html
nginx-deployment-85756b779-vhtss
[root@k8s-master01 flannel]# curl 10.99.161.167:9000/hostname.html
nginx-deployment-85756b779-rc72j
[root@k8s-master01 flannel]# curl 10.99.161.167:9000/hostname.html
nginx-deployment-85756b779-6hntp
[root@k8s-master01 flannel]# ipvsadm -Ln | grep 10.99.161.167
TCP 10.99.161.167:9000 rr
Edit a Service to expose it externally via NodePort
[root@k8s-master01 flannel]# kubectl edit svc nginx-deployment
spec:
  clusterIP: 10.99.161.167
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-deployment
  sessionAffinity: None
  # change ClusterIP to NodePort
  type: NodePort
# View the service
[root@k8s-master01 flannel]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h
nginx-deployment NodePort 10.99.161.167 <none> 9000:31343/TCP 14m
# Verify from outside the cluster by hitting any node's IP on port 31343
What resources exist in Kubernetes?
Resources in Kubernetes can be grouped into three categories by scope.
Namespace level
These resources take effect only within their own namespace!
Commonly used namespaces:
- kube-system: the namespace used by the pods of the cluster's system components (coredns / apiserver / controller-manager / flannel / proxy / scheduler), started when the cluster comes up
- default: the namespace a pod is placed in when none is specified at creation.
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-hnvm4 0/1 CrashLoopBackOff 15 2d6h
coredns-5c98db65d4-lgv5d 0/1 CrashLoopBackOff 15 2d6h
etcd-k8s-master01 1/1 Running 3 2d6h
kube-apiserver-k8s-master01 1/1 Running 3 2d6h
kube-controller-manager-k8s-master01 1/1 Running 3 2d6h
kube-flannel-ds-amd64-hfq4w 1/1 Running 1 2d
kube-flannel-ds-amd64-wwnvz 1/1 Running 0 2d
kube-proxy-4thcv 1/1 Running 1 2d
kube-proxy-bshkp 1/1 Running 3 2d6h
kube-scheduler-k8s-master01 1/1 Running 3 2d6h
[root@k8s-master01 ~]# kubectl get pod -n default
NAME READY STATUS RESTARTS AGE
nginx-deployment-85756b779-46rvg 1/1 Running 0 32h
nginx-deployment-85756b779-65lf4 1/1 Running 0 32h
nginx-deployment-85756b779-wkh28 1/1 Running 0 32h
- Workload resources:
  Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob (ReplicationController was deprecated in v1.11)
- Service discovery and load-balancing resources:
  Service, Ingress
- Configuration and storage resources:
  Volume (storage volume), CSI (Container Storage Interface, which lets third-party storage volumes of all kinds be plugged in)
- Special volume types:
  ConfigMap (a resource type used as a configuration store), Secret (holds sensitive data), DownwardAPI (exposes information about the pod's environment to its containers)
Cluster level
- Role
- ClusterRole
- RoleBinding
- ClusterRoleBinding
- Namespace
- Node
(Strictly speaking, Role and RoleBinding are namespace-scoped; ClusterRole and ClusterRoleBinding are their cluster-scoped counterparts.)
Metadata level
For example HPA, which operates on metrics (CPU, memory).
- PodTemplate
- LimitRange
Look up valid values for the apiVersion key in a YAML file
[root@k8s-master01 ~]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
Look up how to write a YAML file's fields
[root@k8s-master01 ~]# kubectl explain svc
KIND:     Service
VERSION:  v1

DESCRIPTION:
     Service is a named abstraction of software service (for example, mysql)
     consisting of local port (for example 3306) that the proxy listens on, and
     the selector that determines which pods will answer requests sent through
     the proxy.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind   <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata   <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   spec   <Object>
     Spec defines the behavior of a service.
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

   status   <Object>
     Most recently observed status of the service. Populated by the system.
     Read-only. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

[root@k8s-master01 ~]# kubectl explain svc.metadata
KIND:     Service
VERSION:  v1

RESOURCE: metadata <Object>

DESCRIPTION:
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

     ObjectMeta is metadata that all persisted resources must have, which
     includes all objects users must create.

FIELDS:
   annotations   <map[string]string>
     Annotations is an unstructured key value map stored with a resource that
     may be set by external tools to store and retrieve arbitrary metadata. They
     are not queryable and should be preserved when modifying objects. More
     info: http://kubernetes.io/docs/user-guide/annotations
Write a Pod-type YAML file, with a basic troubleshooting walkthrough
# Write a YAML file that deliberately starts two identical containers, causing a port conflict
[root@k8s-master01 install-k8s]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  namespace: default
  labels:
    app: myapp
spec:
  containers:
  - name: app
    image: hub.atguigu.com/library/myapp:v1
  - name: test
    image: hub.atguigu.com/library/myapp:v1
# Apply the manifest to create the pod
[root@k8s-master01 install-k8s]# kubectl apply -f pod.yaml
pod/myapp-pod created
# Check the created pod; something is wrong
[root@k8s-master01 install-k8s]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-pod 1/2 Error 1 10s
# Describe the pod's runtime and configuration details to see where the error is
[root@k8s-master01 install-k8s]# kubectl describe pod myapp-pod
Name:         myapp-pod
Namespace:    default
Priority:     0
Node:         k8s-node03/192.168.0.212
Start Time:   Tue, 02 Jun 2020 08:15:03 +0800
Labels:       app=myapp
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"myapp"},"name":"myapp-pod","namespace":"default"},"spec":{"c...
Status:       Running
IP:           10.244.1.9
Containers:
  app:
    Container ID:   docker://d257e83544b556115668fd33242c08c42c985bf92f9cdd0bec1ce157ca94e98b
    Image:          hub.atguigu.com/library/myapp:v1
    Image ID:       docker-pullable://hub.atguigu.com/library/myapp@sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 02 Jun 2020 08:15:03 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gznkj (ro)
  test:
    Container ID:   docker://4a222b1b14c05a71afd41f7212458570b0e40dfd369011201dc01141d439062f
    Image:          hub.atguigu.com/library/myapp:v1
    Image ID:       docker-pullable://hub.atguigu.com/library/myapp@sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 02 Jun 2020 08:15:21 +0800
      Finished:     Tue, 02 Jun 2020 08:15:24 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 02 Jun 2020 08:15:06 +0800
      Finished:     Tue, 02 Jun 2020 08:15:09 +0800
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gznkj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-gznkj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gznkj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                            From                 Message
  ----     ------     ----                           ----                 -------
  Normal   Scheduled  22s                            default-scheduler    Successfully assigned default/myapp-pod to k8s-node03
  Normal   Pulled     <invalid>                      kubelet, k8s-node03  Container image "hub.atguigu.com/library/myapp:v1" already present on machine
  Normal   Created    <invalid>                      kubelet, k8s-node03  Created container app
  Normal   Started    <invalid>                      kubelet, k8s-node03  Started container app
  Normal   Pulled     <invalid> (x3 over <invalid>)  kubelet, k8s-node03  Container image "hub.atguigu.com/library/myapp:v1" already present on machine
  Normal   Created    <invalid> (x3 over <invalid>)  kubelet, k8s-node03  Created container test
  Normal   Started    <invalid> (x3 over <invalid>)  kubelet, k8s-node03  Started container test
  Warning  BackOff    <invalid> (x2 over <invalid>)  kubelet, k8s-node03  Back-off restarting failed container
# Check the logs of the failing container to find what caused the error
[root@k8s-master01 install-k8s]# kubectl log myapp-pod -c test
log is DEPRECATED and will be removed in a future version. Use logs instead.
2020/06/02 00:18:07 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2020/06/02 00:18:07 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2020/06/02 00:18:07 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2020/06/02 00:18:07 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2020/06/02 00:18:07 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2020/06/02 00:18:07 [emerg] 1#1: still could not bind()
nginx: [emerg] still could not bind()
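The root cause: all containers in a Pod share a single network namespace, so the two identical nginx containers compete for port 80 and the second one dies with EADDRINUSE (errno 98). The same conflict can be reproduced with two plain sockets; a minimal sketch, independent of Kubernetes:

```shell
# Two binds to the same address within one network namespace: the second
# fails with EADDRINUSE, exactly like the second nginx container above.
python3 - <<'EOF'
import errno, socket

s1 = socket.socket()
s1.bind(("127.0.0.1", 0))        # let the kernel pick a free port
port = s1.getsockname()[1]
s1.listen()

s2 = socket.socket()
try:
    s2.bind(("127.0.0.1", port)) # same port, same namespace
except OSError as e:
    print("second bind failed:", e.errno == errno.EADDRINUSE)
EOF
```

This prints `second bind failed: True`; inside the pod, the kubelet keeps restarting the losing container, producing the CrashLoop seen above.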
# Remove the second container and recreate the pod; now it runs normally
[root@k8s-master01 install-k8s]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  namespace: default
  labels:
    app: myapp
spec:
  containers:
  - name: app
    image: hub.atguigu.com/library/myapp:v1
[root@k8s-master01 install-k8s]# kubectl create -f pod.yaml
pod/myapp-pod created
[root@k8s-master01 install-k8s]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 4s
Enter a running pod
# List the pods
[root@k8s-master01 templates]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 1 86m
readiness-httpget-pod 0/1 Running 0 4m26s
# Enter the container; if the pod has multiple containers, use "-c" to specify one
# -it : attach an interactive terminal
# -- : separator between kubectl's own flags and the command to run; required here
# /bin/sh : run the command under the /bin/sh shell
[root@k8s-master01 templates]# kubectl exec readiness-httpget-pod -it -- /bin/sh
/ # echo "index1" >> /usr/share/nginx/html/index1.html
/ # exit
View the labels of a running pod
[root@k8s-master01 Controller]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
frontend-2gk2j 1/1 Running 0 57s tier=frontend
frontend-6tx2v 1/1 Running 0 82m tier=frontend
frontend-qqcw2 1/1 Running 0 82m tier=frontend
Modify the labels of a running pod (how to detach a pod from its controller)
Once the label no longer matches the controller's selector the pod is orphaned, and the ReplicaSet immediately starts a replacement (frontend-95ms2 below).
[root@k8s-master01 Controller]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
frontend-2gk2j 1/1 Running 0 57s tier=frontend
frontend-6tx2v 1/1 Running 0 82m tier=frontend
frontend-qqcw2 1/1 Running 0 82m tier=frontend
[root@k8s-master01 Controller]# kubectl label pod frontend-qqcw2 tier=frontend-new --overwrite=True
pod/frontend-qqcw2 labeled
[root@k8s-master01 Controller]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
frontend-2gk2j 1/1 Running 0 5m43s tier=frontend
frontend-6tx2v 1/1 Running 0 87m tier=frontend
frontend-95ms2 1/1 Running 0 3s tier=frontend
frontend-qqcw2 1/1 Running 0 87m tier=frontend-new
Scale out a running deployment
[root@k8s-master01 Controller]# kubectl scale deployment myapp-deployment --replicas=5
deployment.extensions/myapp-deployment scaled
[root@k8s-master01 Controller]# kubectl get rs
NAME DESIRED CURRENT READY AGE
myapp-deployment-8998cb69f 5 5 5 4m29s
Change the image used by a running deployment
[root@k8s-master01 Controller]# kubectl set image deployment/deployment-demo1 myapp-container=wangyanglinux/myapp:v2
deployment.extensions/deployment-demo1 image updated
[root@k8s-master01 Controller]# kubectl get rs
NAME DESIRED CURRENT READY AGE
deployment-demo1-7d946455f5 3 3 3 55s
deployment-demo1-b57fc6778 0 0 0 9m24s
Roll back a running deployment to the previous version
[root@k8s-master01 Controller]# kubectl get rs
NAME DESIRED CURRENT READY AGE
deployment-demo1-7d946455f5 3 3 3 55s
deployment-demo1-b57fc6778 0 0 0 9m24s
[root@k8s-master01 Controller]# kubectl rollout undo deployment/deployment-demo1
deployment.extensions/deployment-demo1 rolled back
[root@k8s-master01 Controller]# kubectl get rs
NAME DESIRED CURRENT READY AGE
deployment-demo1-7d946455f5 0 0 0 6m25s
deployment-demo1-b57fc6778 3 3 3 14m
Check rollout status
[root@k8s-master01 Controller]# kubectl rollout status deployment/deployment-demo1
deployment "deployment-demo1" successfully rolled out
View rollout history
[root@k8s-master01 Controller]# kubectl rollout history deployment/deployment-demo1
deployment.extensions/deployment-demo1
REVISION CHANGE-CAUSE
2 <none>
3 <none>
Roll back a running deployment to a specific revision
[root@k8s-master01 Controller]# kubectl rollout history deployment/deployment-demo1
deployment.extensions/deployment-demo1
REVISION CHANGE-CAUSE
2 <none>
3 <none>
[root@k8s-master01 Controller]# kubectl get rs
NAME DESIRED CURRENT READY AGE
deployment-demo1-7d946455f5 0 0 0 43m
deployment-demo1-b57fc6778 3 3 3 51m
[root@k8s-master01 Controller]# kubectl rollout undo deployment/deployment-demo1 --to-revision=2
deployment.extensions/deployment-demo1 rolled back
[root@k8s-master01 Controller]# kubectl get rs
NAME DESIRED CURRENT READY AGE
deployment-demo1-7d946455f5 3 3 2 44m
deployment-demo1-b57fc6778 0 0 0 53m
Pause updates to a deployment
[root@k8s-master01 Controller]# kubectl rollout pause deployment/deployment-demo1
deployment.extensions/deployment-demo1 paused
Create a DaemonSet and verify it
[root@k8s-master01 Controller]# cat daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
  labels:
    app: daemonset
spec:
  selector:
    matchLabels:
      name: daemonset-example
  template:
    metadata:
      labels:
        name: daemonset-example
    spec:
      containers:
      - name: daemonset-example
        image: wangyanglinux/myapp:v1
[root@k8s-master01 Controller]# kubectl create -f daemonset.yaml
daemonset.apps/daemonset-example created
[root@k8s-master01 Controller]# kubectl get daemonset -o wide
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
daemonset-example 1 1 1 1 1 <none> 33s daemonset-example wangyanglinux/myapp:v1 name=daemonset-example
[root@k8s-master01 Controller]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
daemonset-example-xr5k5 1/1 Running 0 56s 10.244.1.31 k8s-node03 <none> <none>
deployment-demo1-7d946455f5-9nrwf 1/1 Running 0 60m 10.244.1.30 k8s-node03 <none> <none>
deployment-demo1-7d946455f5-m7mdn 1/1 Running 0 60m 10.244.1.29 k8s-node03 <none> <none>
deployment-demo1-7d946455f5-xqsc5 1/1 Running 0 60m 10.244.1.28 k8s-node03 <none> <none>
Resolve a Service name through the CoreDNS IP address
# Get the CoreDNS pod IPs
[root@k8s-master01 Service]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-5c98db65d4-57x4l 1/1 Running 5 155m 10.244.1.33 k8s-node03 <none> <none>
coredns-5c98db65d4-jtmst 1/1 Running 149 2d8h 10.244.0.20 k8s-master01 <none> <none>
coredns-5c98db65d4-rwdhq 1/1 Terminating 0 12h 10.244.2.22 k8s-node04 <none> <none>
etcd-k8s-master01 1/1 Running 5 5d8h 192.168.0.200 k8s-master01 <none> <none>
kube-apiserver-k8s-master01 1/1 Running 6 5d8h 192.168.0.200 k8s-master01 <none> <none>
kube-controller-manager-k8s-master01 1/1 Running 9 5d8h 192.168.0.200 k8s-master01 <none> <none>
kube-flannel-ds-amd64-hfq4w 1/1 Running 4 5d2h 192.168.0.212 k8s-node03 <none> <none>
kube-flannel-ds-amd64-wwnvz 1/1 Running 2 5d2h 192.168.0.200 k8s-master01 <none> <none>
kube-flannel-ds-amd64-wxb47 1/1 Running 2 2d13h 192.168.0.213 k8s-node04 <none> <none>
kube-proxy-4k2k7 1/1 Running 2 2d13h 192.168.0.213 k8s-node04 <none> <none>
kube-proxy-4thcv 1/1 Running 2 5d2h 192.168.0.212 k8s-node03 <none> <none>
kube-proxy-bshkp 1/1 Running 5 5d8h 192.168.0.200 k8s-master01 <none> <none>
kube-scheduler-k8s-master01 1/1 Running 9 5d8h 192.168.0.200 k8s-master01 <none> <none>
# Get the Service name
[root@k8s-master01 Service]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h39m
service-v1 ClusterIP 10.100.247.174 <none> 80/TCP 80m
service-v2-headless ClusterIP None <none> 80/TCP 42m
# Assemble the full domain name from the Service name and resolve it against a CoreDNS pod IP
# (a headless service, ClusterIP None, resolves directly to the pod IPs)
[root@k8s-master01 Service]# dig -t A service-v2-headless.default.svc.cluster.local. @10.244.1.33
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-16.P2.el7_8.6 <<>> -t A service-v2-headless.default.svc.cluster.local. @10.244.1.33
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39776
;; flags: qr rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;service-v2-headless.default.svc.cluster.local. IN A
;; ANSWER SECTION:
service-v2-headless.default.svc.cluster.local. 16 IN A 10.244.1.39
service-v2-headless.default.svc.cluster.local. 16 IN A 10.244.1.38
service-v2-headless.default.svc.cluster.local. 16 IN A 10.244.1.37
;; Query time: 7 msec
;; SERVER: 10.244.1.33#53(10.244.1.33)
;; WHEN: 五 6月 05 00:22:28 CST 2020
;; MSG SIZE rcvd: 257
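The name queried above follows the standard in-cluster DNS form `<service>.<namespace>.svc.<cluster-domain>`, where `cluster.local` is the default cluster domain on kubeadm clusters. A minimal sketch of assembling the FQDN:

```shell
# Assemble the in-cluster FQDN of a Service: <service>.<namespace>.svc.<domain>
svc="service-v2-headless"
ns="default"
domain="cluster.local"   # default cluster domain on kubeadm clusters
fqdn="${svc}.${ns}.svc.${domain}"
echo "${fqdn}"           # service-v2-headless.default.svc.cluster.local
```

Pods in the same namespace can use the bare Service name; the search domains in the pod's /etc/resolv.conf expand it to this FQDN.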
Create a certificate and store it as a TLS secret
[root@k8s-master01 https]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
Generating a 2048 bit RSA private key
.........................+++
.............+++
writing new private key to 'tls.key'
-----
[root@k8s-master01 https]# ls
tls.crt tls.key
[root@k8s-master01 https]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret/tls-secret created
[root@k8s-master01 https]# ls
tls.crt tls.key
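Before loading the files into a secret, it can be worth confirming what the certificate actually contains. A small check (the key and cert are regenerated here with the same subject so the block is self-contained):

```shell
# Create a self-signed cert as above, then print its subject and validity window
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc" 2>/dev/null
openssl x509 -in tls.crt -noout -subject -dates
```

The subject line should show CN=nginxsvc and O=nginxsvc, matching the `-subj` argument used above.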
Hot-update a ConfigMap
[root@k8s-master01 configmap]# kubectl edit configmap log-config
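How the edit propagates depends on how the ConfigMap is consumed: keys mounted as a volume are refreshed in the running container after kubelet's sync delay, whereas values injected as environment variables only change on container restart. A minimal sketch of a volume-mounted consumer (the pod name and mount path here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-watcher                 # hypothetical name
spec:
  containers:
  - name: app
    image: wangyanglinux/myapp:v1
    volumeMounts:
    - name: config
      mountPath: /etc/log-config   # files here pick up `kubectl edit` changes
  volumes:
  - name: config
    configMap:
      name: log-config             # the ConfigMap edited above
```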
View the service account under Secret
Note: the ca file exists under this directory only in pods that need to access the API.
[root@k8s-master01 secret]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-4tm9l 1/1 Running 0 4h48m
coredns-5c98db65d4-jtmst 1/1 Running 149 5d22h
coredns-5c98db65d4-kvlpl 1/1 Terminating 0 2d16h
etcd-k8s-master01 1/1 Running 5 8d
kube-apiserver-k8s-master01 1/1 Running 6 8d
kube-controller-manager-k8s-master01 1/1 Running 9 8d
kube-flannel-ds-amd64-hfq4w 1/1 Running 4 8d
kube-flannel-ds-amd64-wwnvz 1/1 Running 2 8d
kube-flannel-ds-amd64-wxb47 1/1 Running 2 6d3h
kube-proxy-4k2k7 1/1 Running 2 6d3h
kube-proxy-4thcv 1/1 Running 2 8d
kube-proxy-bshkp 1/1 Running 5 8d
kube-scheduler-k8s-master01 1/1 Running 9 8d
[root@k8s-master01 secret]# kubectl exec kube-proxy-4k2k7 -n kube-system ls /run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token
Base64-encode a value for an Opaque Secret
# Encode (note: base64 is an encoding, not encryption)
[root@k8s-master01 secret]# echo -n "admin" | base64
YWRtaW4=
# Decode
[root@k8s-master01 secret]# echo -n "YWRtaW4=" | base64 -d
admin
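The base64 string produced above is what goes into the `data` field of an Opaque Secret manifest (and, being reversible encoding rather than encryption, it offers no confidentiality by itself). A minimal sketch, with a hypothetical secret name:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret        # hypothetical name
type: Opaque
data:
  username: YWRtaW4=    # base64 of "admin", as generated above
```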
Create a docker-registry authentication Secret with kubectl
[root@k8s-master01 secret]# kubectl create secret docker-registry myregistrykey --docker-server=hub.atguigu.com --docker-username=admin --docker-password=Harbor12345 [email protected]
secret/myregistrykey created
View node labels
[root@k8s-master01 ~]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master01 Ready master 9d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node03 NotReady <none> 9d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node03,kubernetes.io/os=linux
k8s-node04 Ready <none> 6d23h v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node04,kubernetes.io/os=linux
Add and remove taints
[root@k8s-master01 nodeAffinity]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 9d v1.15.1
k8s-node03 NotReady <none> 9d v1.15.1
k8s-node04 Ready <none> 7d4h v1.15.1
[root@k8s-master01 nodeAffinity]# kubectl taint nodes k8s-node04 key1=value1:NoSchedule
node/k8s-node04 tainted
[root@k8s-master01 nodeAffinity]# kubectl describe node k8s-node04 | grep Taints
Taints: key1=value1:NoSchedule
[root@k8s-master01 nodeAffinity]# kubectl taint nodes k8s-node04 key1:NoSchedule-
node/k8s-node04 untainted
[root@k8s-master01 nodeAffinity]# kubectl describe node k8s-node04 | grep Taints
Taints: <none>
When multiple master nodes exist, pods can be allowed to run on a master by changing the node's taints. Note that taints are keyed by key and effect, so the PreferNoSchedule taint added below coexists with the default node-role.kubernetes.io/master:NoSchedule taint (which is what the grep still shows); to actually open the master for scheduling, remove that taint with `kubectl taint nodes k8s-master01 node-role.kubernetes.io/master:NoSchedule-`.
[root@k8s-master01 toleration]# kubectl taint nodes k8s-master01 node-role.kubernetes.io/master=:PreferNoSchedule
node/k8s-master01 tainted
[root@k8s-master01 toleration]# kubectl describe node k8s-master01 | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
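An alternative that leaves the node's taints untouched is to give a specific workload a toleration for the master taint. A minimal sketch (pod name hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: master-tolerant-pod        # hypothetical name
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists               # tolerate the taint regardless of value
    effect: NoSchedule
  containers:
  - name: app
    image: wangyanglinux/myapp:v1
```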
View cluster information
[root@k8s-master01 .kube]# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EVXpNREEzTlRrMU5Gb1hEVE13TURVeU9EQTNOVGsxTkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFoyCjQ0a21McWNxbkhXbUV2c3YyL0ZRdURVSlVHVDBMN2x0Y0dxNEI1OEFYam9ib1V2Z3lSM25PUTQ1cWNRVzlZaXYKVTVrYWRyTm1UVktZN0hLUnhoS1ozbUFvVEtHVXI3L3Y2NHgxK0ZsOUJ0b0xxSytIb2hpTEpubXVFNnA0RHEyTgo4MGN2Yi93eFhDZXdHTWtlWE5ka3huNkVPRk1vb2xkMUNYanFNQzZRZkprUFdaSHNyd0J6S3E5aTVScjQ0eEk5CkV3bU9XeGNGcTF5QWRJak9KWjdvTmVpUEMvZ3p6UmVnVmhIY3pzZ3BYZVdmdlNCNkZoSjJpVCtrcCthY1NqZHUKMVVFQWJjWGdmWHVEYXJqcWcvayt1NjBRNC9zR0hDMWJVWWJ6WVlpRUdmVTlGem9EeWwzdXo1bzc4NG9XelVFWgpsbWJxY2l5M055S3EycmpsSE1rQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBNHIxZW40WDYzNk9rdml3UjhSVHFBamRuR1kKMSt3SDF0c0RlQ1J1SnFISm9yTDVyeTdUZlJ5MWtsMVpZbHJiSm1jZXhvY3dOS0RhcTVEbVl3elBEaENHcDN6ZwpRajJCSEZMS0RHRTRzbFRZVW1IZW9ldzgvSEpaaElpRHE5bXVLRnVPc25LWDJQQVEvemRIUjZmanNveEtrSmE1CmE2c0FYcFphMjhzME4xOGUwbkFNSFBIczAySEk0N1puRy81TVNleUpJRmN0S3doSGpyb1ZIMjg3eUI0MVJFTGcKek8vWGZ3dVpOdWN6U2lMb25lNnpUdll2Q0R2YlU2YmlVTlh1NWxGS25kWW9rRUtwdE9PQ1V6WG55ZnkrcEUrZQowVHNLVzJGcCtsV2NGaUpXMmt5Q2xKbldIcTdoNk92Sy91UHAvTE9Fd2Q4SzBnS01hV2Z5Qk00RmJRVT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.0.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJQ0ZrbGcybExFall3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBMU16QXdOelU1TlRSYUZ3MHlNVEExTXpBd056VTVOVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTA5UStSYVI4UWszOUp2TkcKVnUvVTE4Mm4yZkl4bW1WaFIrNVdxV1JTbWRJWUxSRkh1Tk5MRzkxSlVGR2FxZC9MdTRmeTNUM2YzM0RxMTJhNgpjNld0NFpKdnp4Lys1TnIzU1Z3bGdPZGJPTXNLMVk0VWJReFd4M1pnL0hsay9PU1lvYXlWOVhBS01sRnNFR0VXCnpqSklHb0NNRE9aKy95Y0pncFd3Mng1TW52ZXBIS3UxVXYzQkFER2gxQXNTWlNFNWhTZlFKS014eWpEckRXNHgKc1lacHllK3FwRVAyazRqNXBkUTJaZG8wcHBlR0FGWktOOXlYWVhtYjBwRVBvWmRrNkYvNmdleWxJdmhVOXlFYgphNEpSb1dFMm8reE90TnBCZlVQeUdLVHN4cnIrM0NKby8zOXVGRE85ZHJiUXMyeHFYOU5PMEwwOE0vU1ZHUHl3CkVpdWNRUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFCVkNINzV0ZUxSeFVzUEpYYmxjOW9Xd3YvMVlSSFFUZXBrKwpkbFVTcE5DaktKRE1GMTdQQXdLUk01djQzdTlLWkNDMXFmUFV1NDFzNUQvdzQvTzQxMWZGN1hNeFZHQ1BuWjBDCkJPR2hORUttMlViWk1yTk1vRWoybnZaNlBjeFVGOEw3amVlb1k0MDBMblVuL05aOTloUUxXZUtsSXFYVWhodzUKUDRZaFBiRTBIcmlJQ1dWcEpHNEpNNW9VSmg0ZG92TitoQW5PZ3dZNExVelEzL0JGWW4vZmNDRHA4c2VOajBWRQorUWd4QWp1MWovcDVhaEl0NFdVTzNKbm1obWswNkpBKzdBcVlNQUt5czJRc1g4ZFRjQkcyZ2NrR2dIUGlvSDh4CmVHK1FvYll5VzFHbnVvTTlwT285SEZnL2thRjJOZnMrdlU1ZG5JVklFSGlxY01BWlZyMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBMDlRK1JhUjhRazM5SnZOR1Z1L1UxODJuMmZJeG1tVmhSKzVXcVdSU21kSVlMUkZICnVOTkxHOTFKVUZHYXFkL0x1NGZ5M1QzZjMzRHExMmE2YzZXdDRaSnZ6eC8rNU5yM1NWd2xnT2RiT01zSzFZNFUKYlF4V3gzWmcvSGxrL09TWW9heVY5WEFLTWxGc0VHRVd6akpJR29DTURPWisveWNKZ3BXdzJ4NU1udmVwSEt1MQpVdjNCQURHaDFBc1NaU0U1aFNmUUpLTXh5akRyRFc0eHNZWnB5ZStxcEVQMms0ajVwZFEyWmRvMHBwZUdBRlpLCk45eVhZWG1iMHBFUG9aZGs2Ri82Z2V5bEl2aFU5eUViYTRKUm9XRTJvK3hPdE5wQmZVUHlHS1RzeHJyKzNDSm8KLzM5dUZETzlkcmJRczJ4cVg5Tk8wTDA4TS9TVkdQeXdFaXVjUVFJREFRQUJBb0lCQUFrenZDek1VM1dSNjhCbwpheExWd2xwSm5kUVMrR0tycXNrMEttR2JjUmNya0U5TTQ5KzhsaE8wemoyRi9nRUpMdEdMdTFvdkdPMmMreWEyCldMMHpZbFZkUml3cVNLbHFkYm1qSGlIMmF2a1JvUHZiK3prdGd3dVJNZTlsMnFROXpmK2YvcmUxMFV1VVMreCsKT3o1ajRzdjc3NW1UM2NwNXlLajZsYjgvRnJjRkdPN2s3YUxzOEpFbW1GYkk0SEx2d2Nscks5azJPdXFRMVlGNApsK2lnczhSWHhHcmk4YWY2NXJzWXY5Q1BNK0Ztc2RSWHNKa0Q2MzQyZzZtUk5vd2hyV1JoaWcxTU9ZVTZTNzFXCmtWaXA2NnhiS1ZOMTlrUTNZeUpjTmVkQ1Nsd2cwMFNLaGJIZ0hQbXllTFVqNktBWlZWOUdoTXE0WGI2VjJYZEkKYTljM29BRUNnWUVBL1RpQzk0djk1WnhWUm9aYUtLN0g4TlhSdzhWbnVJVEl3MERITmwvTzc4cEZQZCtOeldKYgpBQzlTTXYxUkF6SkJRcXJUdCtwakdvd0VjNmp0bXlvai9wSzYyVnpsU09sc0lWVG5XQW1mVWhkcEpDbUk2ZmVVCk5uSFQvdndZZm9NQU9aK1ZTY1NOUFI0TWtyc3VBUk9LZ1VITW1DcmhKcjZtT3hmM3RxMCtmY0VDZ1lFQTFpZHUKWVlpTEwvNzA4MUtlQWticm0zd1dBSGZjOVFTdi9wYklJTFNwcnZVQjJRL3hBbHFWREVHWG1oWFBCVmI0dWU4aQpvaENrZzJMUFcyakY0am9iRFU4dFVYSW5rVk9OQTQ5UW9ZMUxMeXZXcHB2bStFZmV5TXdEWkxvbzFhSk8xUEIvCm5PYk1oMDBRajVFZ2hwd0ZCdE5uTU5PZFIwY2Z4TkRzUER5WnZvRUNnWUFzQ1JqVmZkWGdpVWhYSkdRbmNRVzYKUHlUa2U3N20yc2lqRSsvUTUrWnYwdWdwczJmUWtNc3NoQTR5YWRVZHppNkZMbm4xSU9DdExDNVdBc21YVTBQQgpNTGtudGJ6MTZnbnczZmdCV21NSGZxUzlNaS9xS0REeEt1aG9EbVVnRXg0RjUxZXA1WEYrY0d4VlZCSFRCQmZ4CkZJVkU3U2dNZWRra3E2MWJhbE5Vd1FLQmdRQ0tqenF3Nm1xOEpDY1NwQnJOK0ZzSzMrOVFZRDFiWHF3TWVqeUUKUk1BaERpOGk1VmlYb0VvZGd2YjI0RE54RGdPaU1lSmpuNGNCNTFXb25CS2t1OW15ODg2cmlzT2xHTHo5VjZYZwowUTJiT0s4S1g0YkNqNlhLbjQxMmg2aFNDVkVlSDNsQjZHYmZCL0sySVQwOU93ZFpra0NLNi9Bd0pRbWVDMXM5CjRpdkxnUUtCZ1FERWRRZ3NLT2tWTTRrRnU0bTRwTEJ6MGlja3RqeDBVNVpPQ2Y4UGxjNnZ4djNwQk1QaHdQeG0KbXJLalZVRDVvM3l4aWhhMFM1YmVMY1JmMDlsdGtzUXFVTndVWUI4ZVlNU0NhSWUyUFUrUUl0N0lmTlJTUGwzbwp4SXFkNzVOWmp4WjlhNXhpUHQ5VWZNY2ZtSnJwUlBJdll2OEkyRm9SVi9XRWJka3ZTVW9TS3c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=