Deploying Helm on K8S

Helm is the package manager for K8S.

1. The client: Helm (the helm binary)

Install via the official script: curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > helm.sh, then make it executable and run it:

chmod +x helm.sh
./helm.sh

# Output
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.13.1-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.

# Verify
helm help

Note: the script may fail with curl: (7) Failed connect to kubernetes-helm.storage.googleapis.com:443; Network is unreachable. Simply rerunning it a few times is usually enough.
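If the failure persists, the retries can be scripted; a minimal POSIX-shell sketch (the retry helper is our own, not part of Helm):

```shell
#!/bin/sh
# retry N CMD...: run CMD up to N times, pausing one second between attempts;
# returns 0 on the first success, 1 if every attempt fails.
retry() {
  n=$1; shift
  i=1
  while ! "$@"; do
    [ "$i" -ge "$n" ] && return 1
    i=$((i + 1))
    sleep 1
  done
}

# Usage against the flaky download above:
#   retry 5 curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get -o helm.sh
```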

2. The server: Tiller

Running helm init installs Tiller into the K8S cluster (in the kube-system namespace). The command reports success, but checking the pod status reveals a Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.13.1"... error. The tiller-deploy manifest shows the container image is gcr.io/kubernetes-helm/tiller:v2.13.1, which cannot be pulled from inside the cluster. On Docker Hub, first check whether the mirrorgooglecontainers namespace (community copies of the Google registry) carries it; if not, look for a user-published image (docker search tiller), pull one, retag it to the name the deployment expects, and remove the old tag (do this on every node, since no node selector pins the pod to a particular one):

docker pull hekai/gcr.io_kubernetes-helm_tiller_v2.13.1
docker tag hekai/gcr.io_kubernetes-helm_tiller_v2.13.1 gcr.io/kubernetes-helm/tiller:v2.13.1
docker rmi hekai/gcr.io_kubernetes-helm_tiller_v2.13.1

Checking the pod again shows it is now running.

Grant Tiller the permissions it needs:

# Create the service account and bind it to the cluster-admin role
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

# Patch the deployment (via kubectl patch) to run as that service account
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

# Verify that the patch took effect
kubectl get deploy --namespace kube-system tiller-deploy --output yaml | grep serviceAccount

serviceAccount: tiller
serviceAccountName: tiller


helm version

Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

To uninstall Tiller, run helm reset (or helm reset --force).

3. Usage

Create a Helm chart (packages in Helm are called charts):

# Pull the sample code
git clone https://github.com/daemonza/testapi.git

cd testapi
# Create the chart skeleton
helm create testapi-chart

The generated chart skeleton looks like this:

testapi-chart
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── service.yaml
│   └── tests
└── values.yaml
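Of these files, _helpers.tpl holds named template helpers shared by the other manifests; in the skeleton that helm create generates, the chart name is defined roughly like this (excerpt, for illustration):

```yaml
{{/*
Expand the name of the chart.
*/}}
{{- define "testapi-chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```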

The templates directory holds the K8S manifest templates. Chart.yaml looks like this:

# chart API version; for Helm v2 this must be v1
apiVersion: v1
# optional
appVersion: "1.0"
# optional
description: A Helm chart for Kubernetes
# chart name, required
name: testapi-chart
# chart version, required; must follow SemVer
version: 0.1.0

And values.yaml looks like this:

# Default values for testapi-chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []

  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
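These values are consumed by the files under templates via Go template syntax; for instance, the generated templates/deployment.yaml references the image settings roughly as follows (illustrative excerpt):

```yaml
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
```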

You can validate the chart by running helm lint from the directory containing Chart.yaml:

cd testapi-chart

# Lint the chart
helm lint

If everything checks out, package the chart (run this from the chart's parent directory, i.e. testapi):

# The --debug flag is optional (it shows extra output); testapi-chart is the chart directory to package, and the archive is written to the current directory
helm package testapi-chart --debug

# Output
Successfully packaged chart and saved it to: /root/k8s/helm/testapi/testapi-chart-0.1.0.tgz
[debug] Successfully saved /root/k8s/helm/testapi/testapi-chart-0.1.0.tgz to /root/.helm/repository/local

The package now sits in the current directory (and, as the debug output shows, was also saved to the local repository). Install it into the cluster with helm install testapi-chart-0.1.0.tgz, which outputs:

NAME:   lumbering-zebu
LAST DEPLOYED: Fri Apr 26 18:54:26 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME                          READY  UP-TO-DATE  AVAILABLE  AGE
lumbering-zebu-testapi-chart  0/1    1           0          0s

==> v1/Pod(related)
NAME                                           READY  STATUS             RESTARTS  AGE
lumbering-zebu-testapi-chart-7fb48fc7b6-n6824  0/1    ContainerCreating  0         0s

==> v1/Service
NAME                          TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
lumbering-zebu-testapi-chart  ClusterIP  10.97.1.55  <none>       80/TCP   0s


NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=testapi-chart,app.kubernetes.io/instance=lumbering-zebu" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80

This has created a Deployment in K8S: checking the default namespace now shows a new lumbering-zebu-testapi-chart Deployment. List the installed releases:

helm ls

# Output
NAME          	REVISION	UPDATED                 	STATUS  	CHART              	APP VERSION	NAMESPACE
lumbering-zebu	1       	Fri Apr 26 18:54:26 2019	DEPLOYED	testapi-chart-0.1.0	1.0        	default
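As an aside, the defaults in values.yaml need not be edited in place: helm install accepts overrides via --set key=value or a custom values file passed with -f. An illustrative override file (the name myvalues.yaml is ours):

```yaml
# myvalues.yaml -- list only the fields that differ from the chart defaults
replicaCount: 2
image:
  tag: "1.16"
service:
  type: NodePort
```

Installed with helm install -f myvalues.yaml testapi-chart-0.1.0.tgz; fields not listed keep their values.yaml defaults.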

Bump the chart version in Chart.yaml from 0.1.0 to 0.1.1, package and install again, then check the Deployments:

kubectl get deployments

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
lumbering-zebu-testapi-chart   1/1     1            1           13m
odd-chicken-testapi-chart      1/1     1            1           85s

Now there are two. Delete the release carrying the old version: helm delete lumbering-zebu; helm ls and kubectl get pods then confirm the old Deployment is gone. A deleted release can still be rolled back:

# Roll the release back to revision 1 (note: the release name is lumbering-zebu, without the -testapi-chart suffix)
helm rollback lumbering-zebu 1
# Output
Rollback was a success! Happy Helming!
# Verify
helm ls

This does require remembering the deleted release's name; in practice, helm ls --deleted lists the names of deleted releases.

To upgrade, edit the relevant chart files (e.g. Chart.yaml) and run helm upgrade odd-chicken . from the chart directory:

# Verify
helm ls
# The REVISION has advanced
NAME       	REVISION	UPDATED                 	STATUS  	CHART               	APP VERSION	NAMESPACE
odd-chicken	2       	Fri Apr 26 19:26:21 2019	DEPLOYED	testapi-chart2-2.1.1	2.0        	default

【Setting up a Helm repository】

This tool feels more and more like Maven. A Helm repository is just a web server; for example, you can serve charts out of the charts directory with helm serve --repo-path ./charts. To manage charts through a web UI, install Monocular, as follows:

# Pull the required image and retag it to the expected name
docker pull registry.cn-shanghai.aliyuncs.com/hhu/defaultbackend:1.4
docker tag registry.cn-shanghai.aliyuncs.com/hhu/defaultbackend:1.4 k8s.gcr.io/defaultbackend:1.4
docker rmi registry.cn-shanghai.aliyuncs.com/hhu/defaultbackend:1.4

# Install the Nginx Ingress controller
helm install stable/nginx-ingress --set controller.hostNetwork=true,rbac.create=true

# Add the (current) Monocular repository
helm repo add monocular https://helm.github.io/monocular
# Install Monocular
helm install monocular/monocular

Then wait; once the installation completes, the Monocular web UI can be reached through the Nginx Ingress installed above.

【Using a Helm repository】

Helmet can be used as a Helm repository: deploy it into the K8S cluster and add charts to it.
