You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
# Node status
kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   NotReady   master   7m4s   v1.19.11
k8s-node01   NotReady   <none>   8s     v1.19.11
k8s-node02   NotReady   <none>   4s     v1.19.11
# Check the kubelet logs: the network plugin has not been installed
journalctl -u kubelet -f
Jun 02 14:24:29 k8s-master kubelet[75636]: W0602 14:24:29.172144   75636 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Jun 02 14:24:32 k8s-master kubelet[75636]: E0602 14:24:32.958021   75636 kubelet.go:2129] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
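The "no networks found" warning can be reproduced in miniature: kubelet keeps the node NotReady while the CNI config directory is empty, and the warning clears once a network add-on writes its config file there. A small sketch (using a temporary directory in place of /etc/cni/net.d, and a hypothetical Calico config filename):

```shell
# Stand-in for /etc/cni/net.d, which is empty before a CNI add-on is installed.
cni_dir=$(mktemp -d)

if [ -z "$(ls -A "$cni_dir")" ]; then
  echo "no networks found: node stays NotReady"
fi

# Installing a network add-on such as Calico drops a conflist file here,
# after which kubelet can report the container runtime network as ready.
touch "$cni_dir/10-calico.conflist"
if [ -n "$(ls -A "$cni_dir")" ]; then
  echo "cni config present"
fi
```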
# The CIDR value must match the "--pod-network-cidr=10.244.0.0/16" passed to kubeadm
vi calico.yaml
3680             # The default IPv4 pool to create on startup if none exists. Pod IPs will be
3681             # chosen from this range. Changing this value after installation will have
3682             # no effect. This should fall within `--cluster-cidr`.
3683             - name: CALICO_IPV4POOL_CIDR
3684               value: "10.244.0.0/16"
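The point of this edit is that Calico's IP pool and the CIDR given to kubeadm must be the same network, so that every pod IP Calico hands out falls in the range the control plane expects to route. A small sketch of that check with Python's standard `ipaddress` module (the sample pod IP is hypothetical):

```python
import ipaddress

# The CIDR passed to `kubeadm init --pod-network-cidr` (from this document).
kubeadm_cidr = ipaddress.ip_network("10.244.0.0/16")
# The value set for CALICO_IPV4POOL_CIDR in calico.yaml.
calico_pool = ipaddress.ip_network("10.244.0.0/16")

# The two must agree, or Calico will allocate pod IPs outside the range
# the cluster was initialized with.
assert calico_pool == kubeadm_cidr

# A sample pod IP Calico might allocate from the pool (hypothetical address):
pod_ip = ipaddress.ip_address("10.244.1.17")
print(pod_ip in calico_pool)
```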
kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   22m   v1.19.11
k8s-node01   Ready    <none>   15m   v1.19.11
k8s-node02   Ready    <none>   15m   v1.19.11
3.5 Node Roles
kubectl get node --show-labels
NAME         STATUS   ROLES    AGE   VERSION    LABELS
k8s-master   Ready    master   25m   v1.19.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node01   Ready    <none>   18m   v1.19.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux
k8s-node02   Ready    <none>   18m   v1.19.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux
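The output above shows why the workers' ROLES column reads `<none>`: kubectl derives a node's role from its `node-role.kubernetes.io/<role>` labels, and only the master carries one. A small sketch of that derivation (the parsing function is illustrative, not kubectl's actual code):

```python
def node_role(labels: str) -> str:
    """Derive the ROLES column from a node's comma-separated label string.

    kubectl reports a role for each `node-role.kubernetes.io/<role>` label;
    nodes without any such label are shown as `<none>`.
    """
    roles = [
        kv.split("/", 1)[1].split("=", 1)[0]
        for kv in labels.split(",")
        if kv.startswith("node-role.kubernetes.io/")
    ]
    return ",".join(roles) or "<none>"


# Abbreviated label strings from the output above:
master = "kubernetes.io/hostname=k8s-master,node-role.kubernetes.io/master="
worker = "kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux"

print(node_role(master))  # master
print(node_role(worker))  # <none>
```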
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
        timed out waiting for the condition
This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
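The grep pipeline suggested above keeps lines mentioning kube but drops the pause sandbox containers, which are not useful for debugging. A sketch of that filtering on simulated `docker ps -a` output (the container IDs and names are hypothetical):

```shell
# Simulated `docker ps -a` output: one control-plane container, one
# pause sandbox, and one unrelated container.
ps_output='abc123 k8s_kube-apiserver_kube-apiserver-k8s-master
def456 k8s_POD_kube-apiserver-k8s-master_pause
789abc nginx_web'

# Same pipeline as in the kubeadm hint: keep kube containers, drop pause.
filtered=$(printf '%s\n' "$ps_output" | grep kube | grep -v pause)
echo "$filtered"
```

Only the kube-apiserver line survives; that is the container whose logs `docker logs CONTAINERID` would show.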
# Kernel parameter warnings
systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2021-06-02 14:11:22 CST; 10s ago
     Docs: https://docs.docker.com
 Main PID: 52488 (dockerd)
    Tasks: 10
   Memory: 57.5M
      CPU: 1.767s
   CGroup: /system.slice/docker.service
           └─52488 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Jun 02 14:11:22 k8s-node01 dockerd[52488]: time="2021-06-02T14:11:22.031860026+08:00" level=warning msg="Your kernel does not support swap memory limit"
Jun 02 14:11:22 k8s-node01 dockerd[52488]: time="2021-06-02T14:11:22.031910601+08:00" level=warning msg="Your kernel does not support cgroup rt period"
Jun 02 14:11:22 k8s-node01 dockerd[52488]: time="2021-06-02T14:11:22.031917871+08:00" level=warning msg="Your kernel does not support cgroup rt runtime"
Jun 02 14:11:22 k8s-node01 dockerd[52488]: time="2021-06-02T14:11:22.032178268+08:00" level=info msg="Loading containers: start."
Jun 02 14:11:22 k8s-node01 dockerd[52488]: time="2021-06-02T14:11:22.118451062+08:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip c
Jun 02 14:11:22 k8s-node01 dockerd[52488]: time="2021-06-02T14:11:22.146043534+08:00" level=info msg="Loading containers: done."
Jun 02 14:11:22 k8s-node01 dockerd[52488]: time="2021-06-02T14:11:22.614147613+08:00" level=info msg="Docker daemon" commit=99e3ed8919 graphdriver(s)=overlay2 version=19.03.15
Jun 02 14:11:22 k8s-node01 dockerd[52488]: time="2021-06-02T14:11:22.614842390+08:00" level=info msg="Daemon has completed initialization"
Jun 02 14:11:22 k8s-node01 dockerd[52488]: time="2021-06-02T14:11:22.654911232+08:00" level=info msg="API listen on /var/run/docker.sock"