Happy Little Bee - A Hands-on K8s Installation at Home

Container technology is very hot right now, so to keep up with the times I installed k8s on CentOS 7.0:

1: Prepare two servers:

192.168.122.168  k8s-master

192.168.122.234  k8s-node

Set the hostnames to match the list above:

hostnamectl set-hostname k8s-master

hostnamectl set-hostname k8s-node
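Optionally (this step is not in the original write-up), you can also make the two hostnames resolvable on both machines by adding them to /etc/hosts:

```shell
# Append name/IP mappings for both machines so k8s-master and
# k8s-node resolve without DNS. Run this on both servers.
cat >> /etc/hosts <<'EOF'
192.168.122.168 k8s-master
192.168.122.234 k8s-node
EOF
```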

2: Turn off the firewall and put SELinux in permissive mode:

systemctl stop firewalld

systemctl disable firewalld

setenforce 0

# Check the firewall status
firewall-cmd --state
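Keep in mind that setenforce 0 only lasts until the next reboot. A small sketch to make the permissive setting persistent (the SELINUX_CONF variable is just for illustration; on CentOS 7 the file is /etc/selinux/config):

```shell
# Persist SELinux permissive mode across reboots by editing the
# SELinux config file. Guarded so it is a no-op where the file is absent.
SELINUX_CONF="${SELINUX_CONF:-/etc/selinux/config}"
if [ -f "$SELINUX_CONF" ]; then
    sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$SELINUX_CONF"
fi
```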

3: Install the epel-release repository

yum -y install epel-release

4: On the master host (192.168.122.168), install the Kubernetes master components

Install etcd and kubernetes-master with yum:

yum -y install etcd kubernetes-master

Edit /etc/etcd/etcd.conf so that it reads as follows:

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="default"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
 

Edit /etc/kubernetes/apiserver so that it reads as follows:

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
 

Note: the "ServiceAccount" admission controller has been removed from the KUBE_ADMISSION_CONTROL line above.
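For context, the stock line shipped by the kubernetes-master package looks roughly like this (worth double-checking against your own file), with ServiceAccount included:

```shell
# Package default (approximate) - note the extra ServiceAccount entry:
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
```

ServiceAccount is dropped in this walkthrough because no service-account signing key (--service-account-key-file) is configured; with the controller left in, the apiserver would refuse to create pods that lack a service-account token.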

5: Start the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services, and enable them at boot:

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do systemctl restart $SERVICES;systemctl enable $SERVICES;systemctl status $SERVICES ; done
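Once the loop finishes, it is worth a quick sanity check before moving on. This is just a sketch (the commands only do something useful on the master where etcd and the apiserver were installed, so both are guarded):

```shell
# Verify etcd is up (should print "cluster is healthy"):
if command -v etcdctl >/dev/null 2>&1; then
    etcdctl cluster-health
else
    echo "etcdctl not installed on this machine"
fi

# Verify the apiserver answers on its insecure port (should print "ok"):
if command -v curl >/dev/null 2>&1; then
    curl -s http://127.0.0.1:8080/healthz || echo "apiserver not reachable"
fi
```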

6: Define the flannel network in etcd

etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
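You can read the key straight back to confirm it was stored (run this on the master, where etcdctl was installed; the guard is only there so the snippet is harmless elsewhere):

```shell
# Should print: {"Network":"172.17.0.0/16"}
if command -v etcdctl >/dev/null 2>&1; then
    etcdctl get /atomic.io/network/config
else
    echo "etcdctl not available here"
fi
```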

》》》》 The installation and configuration on the master is now done 》》》》

Next, install and configure the node.

7: On the node (192.168.122.234), install the kubernetes-node and flannel packages

yum -y install flannel kubernetes-node

8: Point flannel at the etcd service by editing /etc/sysconfig/flanneld so that it reads as follows:

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.122.168:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
 

9: Edit /etc/kubernetes/config so that it reads as follows:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.122.168:8080"
 

10: Edit the node's kubelet configuration file /etc/kubernetes/kubelet. Note that --hostname-override below is set to 192.168.122.168 (the master's IP) rather than the node's own address (192.168.122.234); that is why the node registers under the master's IP in step 12. Normally you would put the node's own IP here.

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.122.168"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.122.168:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
 

11: On the node, start the kube-proxy, kubelet, docker, and flanneld services, and enable them at boot:

for SERVICES in kube-proxy kubelet docker flanneld;do systemctl restart $SERVICES;systemctl enable $SERVICES;systemctl status $SERVICES; done
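After flanneld and docker come up, the node should hold a subnet lease from the 172.17.0.0/16 range defined in etcd in step 6. A quick check (a sketch; the file only exists on a node where flanneld actually started):

```shell
# Show the subnet flannel leased for this node, e.g. FLANNEL_SUBNET=172.17.x.0/24.
# Docker should be using the same range for its bridge.
if [ -f /run/flannel/subnet.env ]; then
    cat /run/flannel/subnet.env
else
    echo "flannel subnet file not present on this machine"
fi
```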

》》》》 All master and node configuration is complete; now let's see whether the k8s cluster is actually up 》》》》

12: On the master (192.168.122.168), run the following command to list the registered nodes:

kubectl get nodes

[root@k8s-master ~]# kubectl get nodes
NAME              STATUS    AGE
192.168.122.168   Ready     2h
 

Success! You can also run kubectl describe nodes to inspect the node in detail:

[root@k8s-master ~]# kubectl describe node
Name:            192.168.122.168
Role:            
Labels:            beta.kubernetes.io/arch=amd64
            beta.kubernetes.io/os=linux
            kubernetes.io/hostname=192.168.122.168
Taints:            <none>
CreationTimestamp:    Sat, 19 Oct 2019 07:24:05 -0400
Phase:            
Conditions:
  Type            Status    LastHeartbeatTime            LastTransitionTime            Reason                Message
  ----            ------    -----------------            ------------------            ------                -------
  OutOfDisk         False     Sat, 19 Oct 2019 09:46:13 -0400     Sat, 19 Oct 2019 09:24:10 -0400     KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure     False     Sat, 19 Oct 2019 09:46:13 -0400     Sat, 19 Oct 2019 07:24:05 -0400     KubeletHasSufficientMemory     kubelet has sufficient memory available
  DiskPressure         False     Sat, 19 Oct 2019 09:46:13 -0400     Sat, 19 Oct 2019 07:24:05 -0400     KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready         True     Sat, 19 Oct 2019 09:46:13 -0400     Sat, 19 Oct 2019 09:24:20 -0400     KubeletReady             kubelet is posting ready status
Addresses:        192.168.122.168,192.168.122.168,192.168.122.168  (note: this is one node's address repeated once per address type the kubelet reports, not three separate nodes)
Capacity:
 alpha.kubernetes.io/nvidia-gpu:    0
 cpu:                    2
 memory:                1014848Ki
 pods:                    110
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:    0
 cpu:                    2
 memory:                1014848Ki
 pods:                    110
System Info:
 Machine ID:            ecb8036a93214aef8f127f89ceb8fb99
 System UUID:            ECB8036A-9321-4AEF-8F12-7F89CEB8FB99
 Boot ID:            b529266e-89cf-4d5f-9f35-54d727f522a6
 Kernel Version:        3.10.0-957.el7.x86_64
 OS Image:            CentOS Linux 7 (Core)
 Operating System:        linux
 Architecture:            amd64
 Container Runtime Version:    docker://1.13.1
 Kubelet Version:        v1.5.2
 Kube-Proxy Version:        v1.5.2
ExternalID:            192.168.122.168
Non-terminated Pods:        (0 in total)
  Namespace            Name        CPU Requests    CPU Limits    Memory Requests    Memory Limits
  ---------            ----        ------------    ----------    ---------------    -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.
  CPU Requests    CPU Limits    Memory Requests    Memory Limits
  ------------    ----------    ---------------    -------------
  0 (0%)    0 (0%)        0 (0%)        0 (0%)
Events:
  FirstSeen    LastSeen    Count    From                SubObjectPath    Type        Reason            Message
  ---------    --------    -----    ----                -------------    --------    ------            -------
  22m        22m        1    {kubelet 192.168.122.168}            Normal        Starting        Starting kubelet.
  22m        22m        1    {kubelet 192.168.122.168}            Warning        ImageGCFailed        unable to find data for container /
  22m        22m        2    {kubelet 192.168.122.168}            Normal        NodeHasSufficientDisk    Node 192.168.122.168 status is now: NodeHasSufficientDisk
  22m        22m        1    {kubelet 192.168.122.168}            Normal        NodeHasSufficientMemory    Node 192.168.122.168 status is now: NodeHasSufficientMemory
  22m        22m        1    {kubelet 192.168.122.168}            Normal        NodeHasNoDiskPressure    Node 192.168.122.168 status is now: NodeHasNoDiskPressure
  22m        22m        1    {kubelet 192.168.122.168}            Warning        Rebooted        Node 192.168.122.168 has been rebooted, boot id: b529266e-89cf-4d5f-9f35-54d727f522a6
  22m        22m        1    {kubelet 192.168.122.168}            Normal        NodeNotReady        Node 192.168.122.168 status is now: NodeNotReady
  21m        21m        1    {kubelet 192.168.122.168}            Normal        NodeReady        Node 192.168.122.168 status is now: NodeReady
[root@k8s-master ~]# 
 

To test failure detection, reboot the node server and watch kubectl get nodes on the master: the node's status changes to NotReady until it comes back.
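A simple way to watch that transition from the master is to poll the node list in a loop while the node reboots (a sketch; the guard just keeps it harmless on machines without kubectl):

```shell
# Poll node status a few times; during the reboot STATUS flips
# Ready -> NotReady -> Ready. Increase the iterations/sleep as needed.
for i in 1 2 3; do
    if command -v kubectl >/dev/null 2>&1; then
        kubectl get nodes
    else
        echo "kubectl not available here"
    fi
    sleep 1
done
```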
