docker kata k8s

docker

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
# the GPG key is only needed for Docker's own repo; docker.io below comes from the Ubuntu archive
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-get install docker.io -y
docker version

kata

ARCH=$(arch)

BRANCH="${BRANCH:-master}"

sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/xUbuntu_$(lsb_release -rs)/ /' > /etc/apt/sources.list.d/kata-containers.list"

curl -sL  http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/xUbuntu_$(lsb_release -rs)/Release.key | sudo apt-key add -

sudo -E apt-get update
sudo -E apt-get -y install kata-runtime kata-proxy kata-shim


sudo kata-runtime kata-check
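For Docker to actually launch containers with Kata, the runtime must be registered in the daemon configuration. A minimal /etc/docker/daemon.json sketch, assuming kata-runtime was installed to /usr/bin/kata-runtime (check with `which kata-runtime`):

```json
{
  "runtimes": {
    "kata-runtime": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
```

After `sudo systemctl restart docker`, a container started with `docker run --runtime kata-runtime ...` runs inside its own lightweight VM; `uname -r` inside it reports the guest kernel, not the host's.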

 

qemu

sudo mkdir -p /tmp/tt
sudo mount -o loop,offset=3145728 kata-containers-image_clearlinux_1.6.2_agent_4a3627d0c16.img /tmp/tt
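The magic offset is just the image's first-partition start expressed in bytes. A sketch of where it comes from (the start sector 6144 is an assumption consistent with this particular image; read the real value from `fdisk -l <image>`):

```shell
# offset = partition start sector * logical sector size
START_SECTOR=6144     # from: fdisk -l kata-containers-image_clearlinux_1.6.2_agent_4a3627d0c16.img
SECTOR_SIZE=512       # sector size reported by fdisk
OFFSET=$((START_SECTOR * SECTOR_SIZE))
echo "$OFFSET"        # 3145728, the value passed to mount -o loop,offset=...
```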

qemu-system-x86_64 -m 1024 -boot d -enable-kvm -smp 3 -net nic -net user -hda testing-image.img -cdrom ubuntu-16.04.iso

 

cpio

gzip -dk -S .initrd kata-containers-initrd_alpine_1.6.2_agent_4a3627d0c16.initrd
sudo cpio -i --make-directories < kata-containers-initrd_alpine_1.6.2_agent_4a3627d0c16

 

 

kubernetes

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
sudo vi /etc/apt/sources.list.d/kubernetes.list
# add ONE of the following mirror lines to the file:
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
sudo apt-get update
apt-cache madison kubeadm
sudo apt-get install -y kubelet=1.14.0-00 kubeadm=1.14.0-00 kubectl=1.14.0-00
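Since kubeadm clusters are upgraded explicitly, the official install docs recommend pinning these three packages so a routine `apt-get upgrade` cannot pull a node out of version sync:

```shell
# Prevent unattended upgrades of the pinned cluster components.
sudo apt-mark hold kubelet kubeadm kubectl
```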

src tree

  • api: used to generate the interface documentation; mostly JSON source
  • build: build scripts
  • cmd: entry-point code for the executables
  • pkg: the main project directory, with the core implementation
  • plugin: plugins
  • test: testing-related tools
  • third_party: third-party tools
  • docs: documentation
  • example: usage examples
  • Godeps: third-party Go packages the project depends on, e.g. the docker client SDK, rest, etc.
  • hack: the toolbox; the various build, compile, and verification scripts all live here

In an object-oriented language, the final execution path may be one interface layered on top of another, and the separation of interface from implementation means that at some point your reading simply cannot drill any deeper. Put differently, a purely top-down reading style does not suit object-oriented code. So my reading method has become fragmentary: first walk through the whole source tree once, then go to the place most likely to answer the question in my head, and finally piece the fragments of truth back together.

 

 

 

 

pkg/apis/core/types.go

Pod is a collection of containers.

Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.

The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.

The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint.

Volume represents a named volume in a pod that may be accessed by any containers in the pod.

Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy.

Endpoints is a collection of endpoints that implement the actual service.

Node is a worker node in Kubernetes.
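Several of these types show up together even in a small manifest. A hypothetical Pod spec illustrating a Probe, a Volume, and a Toleration in one place (all names, labels, and the image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  tolerations:                # Toleration: accept a matching Taint on a node
  - key: "dedicated"
    operator: "Equal"
    value: "demo"
    effect: "NoSchedule"
  volumes:                    # Volume: named, accessible by any container in the pod
  - name: scratch
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.15
    volumeMounts:
    - name: scratch
      mountPath: /scratch
    livenessProbe:            # Probe: health check deciding whether the container is alive
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
```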


 

cd $HOME
mkdir ~/mix
sudo docker run -it -v /home/$USER/mix:/home/clr/mix --network host --privileged -v /dev:/dev -v /tmp:/tmp dockerimage --mixdir=/home/clr/mix

sudo docker run -it -v /home/$USER/mix:/mix --network host --privileged -v /tmp:/tmp ubuntu /bin/bash

 

 

# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q)

Cobra commands

 

Policy defines how an object will be configured.

Context holds level security attributes and common settings.

 

Scheduler

The Kubernetes scheduler has only one job: find a node for all pods in the cluster, and let the K8S apiserver know. The apiserver and the kubelet will take care of the rest to start the actual containers.

So let’s see what the scheduling lifecycle really looks like:

  1. A pod is created and its desired state is saved to etcd with the node name unfilled.
  2. The scheduler somehow notices that there is a new pod with no node bound.
  3. It finds the node that best fits that pod.
  4. Tells the apiserver to bind the pod to the node -> saves the new desired state to etcd.
  5. Kubelets are watching bound pods through the apiserver, and start the containers on the particular node.

A minimal implementation is quite simple:

  1. A loop to watch the unbound pods in the cluster through querying the apiserver.
  2. Some custom logic that finds the best node for a pod.
  3. A request to the bind endpoint on the apiserver.
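Step 3 boils down to POSTing a Binding object to the apiserver. A sketch that only renders the manifest (the pod name nginx-pending and node name worker-1 are made up; the commented-out kubectl call needs a live cluster):

```shell
POD=nginx-pending    # a pending pod with no node assigned (hypothetical)
NODE=worker-1        # the node our "scheduler" picked (hypothetical)

# Render the Binding object our custom scheduler would submit.
cat > /tmp/binding.yaml <<EOF
apiVersion: v1
kind: Binding
metadata:
  name: ${POD}
target:
  apiVersion: v1
  kind: Node
  name: ${NODE}
EOF

cat /tmp/binding.yaml
# Against a real cluster this would be submitted with:
# kubectl create -f /tmp/binding.yaml -n default
```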

 

PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource.
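In manifest form the claim side looks roughly like this (the name, size, and storage class are placeholders); a pod then consumes the claim by name, never the PV directly:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce          # how the volume may be mounted
  resources:
    requests:
      storage: 1Gi         # the claim: "I need at least this much"
  storageClassName: standard
```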

 

ReplicaSet

A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

 

How a ReplicaSet works

A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template.
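The three fields named above map directly onto the manifest (the labels and image are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3                # how many Pods to keep running
  selector:                  # which Pods this ReplicaSet may acquire
    matchLabels:
      app: frontend
  template:                  # the Pod template used when new Pods are needed
    metadata:
      labels:
        app: frontend        # must match the selector
    spec:
      containers:
      - name: web
        image: nginx:1.15
```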

The link a ReplicaSet has to its Pods is via the Pods’ metadata.ownerReferences field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning ReplicaSet’s identifying information within their ownerReferences field. It’s through this link that the ReplicaSet knows of the state of the Pods it is maintaining and plans accordingly.

A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the OwnerReference is not a controller and it matches a ReplicaSet’s selector, it will be immediately acquired by said ReplicaSet.


When to use a ReplicaSet

A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don’t require updates at all.

This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in the spec section.

 

reference

official

https://kubernetes.io/docs/concepts/

https://github.com/kata-containers/documentation/blob/master/install/README.md

https://github.com/kata-containers/documentation/blob/master/install/ubuntu-installation-guide.md

http://www.leonstudio.org/p/284

https://github.com/kata-containers/kata-containers

https://github.com/kata-containers/osbuilder#initrd-creation

https://github.com/kata-containers/documentation/blob/master/design/architecture.md

https://katacontainers.io/#

install

https://blog.csdn.net/wangchunfa122/article/details/86529406#kubectlkubeletkubeadm_138

https://linuxconfig.org/how-to-install-kubernetes-on-ubuntu-18-04-bionic-beaver-linux

https://blog.csdn.net/nklinsirui/article/details/80581286

https://yq.aliyun.com/articles/672970

https://kubernetes.io/zh/docs/concepts/containers/images/

https://my.oschina.net/Kanonpy/blog/3006129

details

https://banzaicloud.com/blog/k8s-custom-scheduler/
