KubeVirt: Value and Usage
This article introduces KubeVirt from several angles: its value, its potential, its architecture, and its installation and usage.
KubeVirt's Value
Asking what KubeVirt's value is really amounts to asking why KubeVirt is needed.
- KubeVirt addresses the situation where a development team has adopted (or is adopting) Kubernetes, but some virtualization-based workloads cannot easily be containerized.
- More precisely, KubeVirt provides a unified development platform on which developers can build, modify, and deploy both containerized and virtualized applications in the same shared environment.
- The benefits of this are broad and significant. Teams that depend on VM-based workloads gain the ability to containerize applications quickly. With virtualized workloads placed directly in the development workflow, teams can decompose them over time while still seamlessly using the remaining virtualized components.
KubeVirt's Potential
What can you do with KubeVirt?
- Use KubeVirt and Kubernetes to manage applications that are hard to containerize but run well in virtual machines.
- Run existing virtualized workloads side by side with newly containerized workloads on the same platform.
- Support the development of new containerized microservices that interact with existing virtualized applications.
KubeVirt Architecture
Key Components
- virt-api
(1) An HTTP API server that serves as the entry point for all virtualization-related request processing, responsible for updating and validating VMI CRDs;
(2) Provides a RESTful API for managing virtual machines in the cluster. KubeVirt works through CRDs, and virt-api handles the custom API requests, such as VNC, console, and starting/stopping virtual machines.
- virt-controller
(1) Watches VirtualMachineInstance (VMI) objects and manages the state of every VMI in the cluster, together with its associated Pod;
(2) A VMI object stays associated with a Pod throughout its lifecycle, but the Pod instance may change over time, for example when the VMI is migrated.
- virt-handler
(1) Runs as a DaemonSet, in a Pod on each compute node of the cluster;
(2) Like virt-controller, it is reactive: it watches each VMI for state changes and, whenever it detects one, takes the actions needed to reach the desired state;
(3) virt-handler is responsible for: keeping the cluster-level VMI spec in sync with the corresponding libvirt domain; reporting changes in libvirt domain state and cluster spec; and invoking node-centric plugins to satisfy the networking and storage requirements defined by the VMI spec.
- virt-launcher
(1) Each VMI object corresponds to one Pod, whose primary container runs the KubeVirt core component virt-launcher;
(2) Kubernetes and the kubelet do not run the VMI process themselves; instead, whenever a VMI Pod is scheduled onto a host, the virt-launcher daemon in that Pod launches the VMI process associated with the VMI object;
(3) The virt-launcher Pod's main job is to provide the cgroups and namespaces that host the VMI process;
(4) virt-handler tells virt-launcher to start a VMI by passing it the VMI's CRD object; virt-launcher then starts the VMI using the local libvirtd instance inside its container. From then on, virt-launcher monitors the VMI process and terminates once the VMI has exited;
(5) If the Kubernetes runtime tries to shut down the virt-launcher Pod before the VMI has exited, virt-launcher forwards the signal from Kubernetes to the VMI process and tries to delay the Pod's termination until the VMI has shut down cleanly.
- libvirtd
(1) Each VMI Pod has its own libvirtd instance;
(2) virt-launcher uses libvirtd to manage the VMI's lifecycle.
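All of the components above operate on VMI objects. As a concrete anchor, here is a sketch of a minimal VMI manifest of the kind that flows through virt-api → virt-controller → virt-handler → virt-launcher. The name, memory size, and disk image are illustrative; check the KubeVirt docs for the full schema of your version.

```shell
# Write a minimal VirtualMachineInstance manifest (apiVersion matches the
# v0.23.0 era used later in this article).
cat > vmi-sketch.yaml <<'EOF'
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: demo-vmi
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 64M
  volumes:
  - name: containerdisk
    containerDisk:
      image: kubevirt/cirros-container-disk-demo
EOF
```

Applying this with `kubectl apply -f vmi-sketch.yaml` (on a cluster with KubeVirt installed) would create a VMI directly, without the VirtualMachine wrapper object used in the walkthrough below.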
KubeVirt Installation and Usage
Checking if nested virtualization is supported
For Intel Processors
# cat /sys/module/kvm_intel/parameters/nested
Y
For AMD Processors
# cat /sys/module/kvm_amd/parameters/nested
Y
If the value is not "Y" (or "1" on newer kernels), enable nested virtualization as follows:
Intel Processor
# modprobe -r kvm_intel
# modprobe kvm_intel nested=1
AMD Processor
# modprobe -r kvm_amd
# modprobe kvm_amd nested=1
To make the change persist across reboots:
Intel Processor
# echo 'options kvm_intel nested=1' >/etc/modprobe.d/kvm-nested.conf
AMD Processor
# echo 'options kvm_amd nested=1' >/etc/modprobe.d/kvm-nested.conf
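The per-vendor checks above can be folded into one small sketch. The `nested_enabled` helper is a name invented here; note that newer kernels report "1"/"0" in these parameter files instead of "Y"/"N".

```shell
# Classify the contents of a kvm "nested" parameter file.
nested_enabled() {
  case "$1" in
    Y|y|1) echo "enabled" ;;
    *)     echo "disabled" ;;
  esac
}

# Check whichever vendor module (Intel or AMD) is loaded on this host.
for f in /sys/module/kvm_intel/parameters/nested \
         /sys/module/kvm_amd/parameters/nested; do
  if [ -r "$f" ]; then
    echo "$f: $(nested_enabled "$(cat "$f")")"
  fi
done
```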
Check for the Virtualization Extensions
If the CPU supports virtualization extensions, the following command will print the matching flags:
# egrep 'svm|vmx' /proc/cpuinfo
If nothing is printed, create the following ConfigMap so that KubeVirt falls back to emulation mode:
# kubectl create configmap kubevirt-config -n kubevirt --from-literal debug.useEmulation=true
# KubeVirt's Version
$ export KUBEVIRT_VERSION=v0.23.0
# creates KubeVirt operator
$ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
# creates KubeVirt KV custom resource
$ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml
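After applying both manifests it can take a minute or two for the operator to roll everything out. A sketch of a readiness poll follows; the `all_running` and `wait_kubevirt_ready` helpers, interval, and attempt count are invented for this sketch and assume kubectl access to the cluster.

```shell
# Succeeds only if every line of "kubectl get pods --no-headers" output
# reports Running in the STATUS column (column 3).
all_running() {
  [ -n "$1" ] && ! echo "$1" | awk '{print $3}' | grep -qv '^Running$'
}

# Poll the kubevirt namespace until every pod is Running, giving up after
# 12 attempts at 5-second intervals (values are illustrative).
wait_kubevirt_ready() {
  for _ in 1 2 3 4 5 6 7 8 9 10 11 12; do
    all_running "$(kubectl get pods -n kubevirt --no-headers 2>/dev/null)" && return 0
    sleep 5
  done
  return 1
}

# usage (assumes cluster access): wait_kubevirt_ready && echo "KubeVirt is up"
```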
Download virtctl and Yamls related to KubeVirt:
(1) Create Kubevirt Directory and Set Kubevirt version ENV
# mkdir Kubevirt
# pushd Kubevirt
# export KUBEVIRT_VERSION="v0.23.0"
(2) Download KubeVirt VirtCtl and Install
# curl -L -o /usr/local/bin/virtctl https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
# chmod +x /usr/local/bin/virtctl
# virtctl --help
virtctl controls virtual machine related operations on your kubernetes cluster.
Available Commands:
console Connect to a console of a virtual machine instance.
expose Expose a virtual machine instance, virtual machine, or virtual machine instance replica set as a new service.
help Help about any command
image-upload Upload a VM image to a PersistentVolumeClaim.
restart Restart a virtual machine.
start Start a virtual machine.
stop Stop a virtual machine.
version Print the client and server version information.
vnc Open a vnc connection to a virtual machine instance.
Use "virtctl <command> --help" for more information about a given command.
Use "virtctl options" for a list of global command-line options (applies to all commands).
(3) Download Kubevirt Operator and Custom Resources yaml files
# wget https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
# wget https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml
(4) Download testVM yaml
# wget https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/vm.yaml
# popd
(5) Deploy CDI (Containerized Data Importer)
VERSION=v1.10.9
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator-cr.yaml
(6) Deploy the hostpath provisioner
mkdir -p /var/run/kubevirt/hostpath
kubectl create -f https://raw.githubusercontent.com/kubevirt/kubevirt/master/docs/devel/hostpath-provisioner.yaml
Deploy KubeVirt Operator
# pushd Kubevirt
# kubectl create -f kubevirt-operator.yaml
# kubectl get pods -n kubevirt
NAME READY STATUS RESTARTS AGE
virt-operator-7f589cd8cb-2fsm8 0/1 ContainerCreating 0 12s
...
virt-operator-7f589cd8cb-n9knm 1/1 Running 0 28s
Deploy KubeVirt
# kubectl create -f kubevirt-cr.yaml
# kubectl get pods -n kubevirt
NAME READY STATUS RESTARTS AGE
virt-api-7dc455b79c-b7l8j 1/1 Running 1 31h
virt-api-7dc455b79c-p8sl5 1/1 Running 1 31h
virt-controller-76cccd9979-kr7sv 1/1 Running 3 31h
virt-controller-76cccd9979-q9cxk 1/1 Running 3 31h
virt-handler-k8df6 1/1 Running 1 31h
virt-operator-7f589cd8cb-2fsm8 1/1 Running 2 2d23h
virt-operator-7f589cd8cb-n9knm 1/1 Running 5 2d23h
The Pod and Service status should then look like this:
# kubectl get po -n kubevirt
NAME READY STATUS RESTARTS AGE
virt-api-7dc455b79c-b7l8j 1/1 Running 0 60m
virt-api-7dc455b79c-p8sl5 1/1 Running 0 60m
virt-controller-76cccd9979-kr7sv 1/1 Running 0 57m
virt-controller-76cccd9979-q9cxk 1/1 Running 0 57m
virt-handler-k8df6 1/1 Running 0 60m
virt-operator-7f589cd8cb-2fsm8 1/1 Running 1 40h
virt-operator-7f589cd8cb-n9knm 1/1 Running 1 40h
# kubectl get svc -n kubevirt
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubevirt-prometheus-metrics ClusterIP 10.96.32.24 <none> 443/TCP 62m
virt-api ClusterIP 10.108.69.243 <none> 443/TCP 62m
Deploy a VirtualMachine
# kubectl apply -f vm.yaml
# kubectl get vms
NAME AGE RUNNING VOLUME
testvm 54m false
# kubectl get vms -o yaml testvm
Note:
The RUNNING field is "false": the VirtualMachine object has been defined, but it has not been instantiated and no VM is running yet.
Start the VM with virtctl:
# virtctl start testvm
VM testvm was scheduled to start
# kubectl get vms
NAME AGE RUNNING VOLUME
testvm 59m true
# kubectl get vmis
NAME AGE PHASE IP NODENAME
testvm 25s Running 10.244.166.133 node1
# kubectl get vmis -o yaml testvm
Note:
(1) vmis stands for VirtualMachineInstance.
(2) The PHASE field shows the VMI's progress through its lifecycle states until it reaches Running.
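Rather than re-running `kubectl get vmis` by hand to watch that transition, the phase can be polled from `status.phase`. The `wait_vmi_running` helper below is invented for this sketch and assumes kubectl access to the cluster; the 5-second interval is illustrative.

```shell
# Poll a VMI's status.phase until it reaches Running, or give up after the
# given number of attempts.
wait_vmi_running() {
  vmi_name=$1
  max_attempts=$2
  i=0
  while [ "$i" -lt "$max_attempts" ]; do
    phase=$(kubectl get vmi "$vmi_name" -o jsonpath='{.status.phase}' 2>/dev/null)
    if [ "$phase" = "Running" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  return 1
}

# usage (assumes a cluster): wait_vmi_running testvm 24 && echo "testvm is Running"
```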
Use virtctl to connect to the VMI's console; exit with "Ctrl+]":
# virtctl console testvm
Successfully connected to testvm console. The escape sequence is ^]
login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
testvm login:cirros
Password:gocubsgo
$ uname -msr
Linux 4.4.0-28-generic x86_64
$
To connect to the VM over VNC, install remote-viewer from the virt-viewer package on the host, then run:
# virtctl vnc testvm
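Besides the console and VNC, the `expose` subcommand shown in the virtctl help output above can publish a VMI port as a Kubernetes Service. A sketch exposing SSH as a NodePort follows; the service name, port, and the `expose_cmd` helper are all invented for this sketch.

```shell
# Build the virtctl expose invocation for a given VMI name and port.
expose_cmd() {
  echo "virtctl expose vmi $1 --name=$1-ssh --port=$2 --type=NodePort"
}

# With cluster access and the CLI tools installed, run it directly
# (guarded so this is a no-op elsewhere).
if command -v virtctl >/dev/null 2>&1 && command -v kubectl >/dev/null 2>&1; then
  eval "$(expose_cmd testvm 22)"
  kubectl get svc testvm-ssh
fi
```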
Clean Up VM Instance
(1) Stop VM
# virtctl stop testvm
VM testvm was scheduled to stop
(2) Delete VM
# kubectl delete vm testvm
virtualmachine.kubevirt.io "testvm" deleted