This is LinuC Evangelist Takahiro Kujirai (鯨井貴博) @opensourcetech.
In this post, I build a Kubernetes cluster with kubeadm on the VMs created in the KVM environment from the previous article.
The configuration to be built this time is shown below.
For reference, the official Kubernetes installation documentation is here:
https://kubernetes.io/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
The steps from here are performed on every node that will join the cluster.
Before you begin
First, a few checks before starting the installation.
Swap was enabled, so disable it.
* Don't forget the entry that is loaded at boot (/etc/fstab).
kubeuser@kubemaster1:~$ swapon -s
Filename        Type    Size    Used    Priority
/swap.img       file    2097148 524     -2
kubeuser@kubemaster1:~$ sudo swapoff -a
kubeuser@kubemaster1:~$ swapon -s
kubeuser@kubemaster1:~$ sudo vi /etc/fstab
kubeuser@kubemaster1:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-P0dKrKHJE2vSGmg4XOnabjUBNgi9Rq9knLPD3QGY4RJYHxFUsYOFRVyxJwUnB92O / ext4 defaults 0 1
# /boot was on /dev/vda2 during curtin installation
/dev/disk/by-uuid/6aa1b86e-f727-4312-8a35-2f7d6a7e5d23 /boot ext4 defaults 0 1
#/swap.img     none    swap    sw      0       0
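If you prefer not to open an editor, a one-liner like the following should achieve the same thing (a sketch on my part, not taken from this walkthrough; the pattern assumes the swap entry is /swap.img, so adjust it to your own fstab and verify the result):

kubeuser@kubemaster1:~$ sudo sed -i '/\/swap\.img/ s/^/#/' /etc/fstab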
Letting iptables see bridged traffic
Load the br_netfilter module.
kubeuser@kubeworker2:~$ lsmod | grep br_netfilter
kubeuser@kubeworker2:~$ sudo modprobe br_netfilter
[sudo] password for kubeuser:
kubeuser@kubeworker2:~$ lsmod | grep br_netfilter
br_netfilter           28672  0
bridge                176128  1 br_netfilter
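To have br_netfilter loaded automatically after a reboot as well, the usual approach (the one in the official kubeadm guide, as far as I know) is to list it under /etc/modules-load.d/; a minimal sketch:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF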
Then configure the Linux nodes so that iptables correctly processes traffic passing over the bridge.
kubeuser@kubemaster1:~$ sudo vi /etc/sysctl.d/k8s.conf
kubeuser@kubemaster1:~$ cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
kubeuser@kubemaster1:~$ sudo sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /usr/lib/sysctl.d/protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.conf ...
Ensuring iptables does not use the nftables backend
The nftables backend is not compatible with the current kubeadm packages, so switch the alternatives so that iptables-legacy is used.
kubeuser@kubemaster1:~$ sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
kubeuser@kubemaster1:~$ sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
kubeuser@kubemaster1:~$ sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives: using /usr/sbin/arptables-legacy to provide /usr/sbin/arptables (arptables) in manual mode
kubeuser@kubemaster1:~$ sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy
update-alternatives: using /usr/sbin/ebtables-legacy to provide /usr/sbin/ebtables (ebtables) in manual mode
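As a quick sanity check (my own addition), the active backend can be confirmed from the version string; on the iptables 1.8.x shipped with Ubuntu 20.04 the backend is shown in parentheses, e.g. "iptables v1.8.4 (legacy)":

iptables --version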
Installing a container runtime
You can choose from Docker, CRI-O, containerd, and so on;
this time I used Docker.
Reference: https://kubernetes.io/ja/docs/setup/production-environment/container-runtimes/
kubeuser@kubemaster1:~$ sudo apt install docker.io [sudo] password for kubeuser: Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: bridge-utils containerd dns-root-data dnsmasq-base libidn11 pigz runc ubuntu-fan Suggested packages: ifupdown aufs-tools cgroupfs-mount | cgroup-lite debootstrap docker-doc rinse zfs-fuse | zfsutils The following NEW packages will be installed: bridge-utils containerd dns-root-data dnsmasq-base docker.io libidn11 pigz runc ubuntu-fan 0 upgraded, 9 newly installed, 0 to remove and 44 not upgraded. Need to get 74.5 MB of archives. After this operation, 361 MB of additional disk space will be used. Do you want to continue? [Y/n] y Get:1 http://jp.archive.ubuntu.com/ubuntu focal/universe amd64 pigz amd64 2.4-1 [57.4 kB] Get:2 http://jp.archive.ubuntu.com/ubuntu focal/main amd64 bridge-utils amd64 1.6-2ubuntu1 [30.5 kB] Get:3 http://jp.archive.ubuntu.com/ubuntu focal-updates/main amd64 runc amd64 1.0.1-0ubuntu2~20.04.1 [4155 kB] Get:4 http://jp.archive.ubuntu.com/ubuntu focal-updates/main amd64 containerd amd64 1.5.5-0ubuntu3~20.04.1 [33.0 MB] Get:5 http://jp.archive.ubuntu.com/ubuntu focal/main amd64 dns-root-data all 2019052802 [5300 B] Get:6 http://jp.archive.ubuntu.com/ubuntu focal/main amd64 libidn11 amd64 1.33-2.2ubuntu2 [46.2 kB] Get:7 http://jp.archive.ubuntu.com/ubuntu focal-updates/main amd64 dnsmasq-base amd64 2.80-1.1ubuntu1.4 [315 kB] Get:8 http://jp.archive.ubuntu.com/ubuntu focal-updates/universe amd64 docker.io amd64 20.10.7-0ubuntu5~20.04.2 [36.9 MB] Get:9 http://jp.archive.ubuntu.com/ubuntu focal/main amd64 ubuntu-fan all 0.12.13 [34.5 kB] Fetched 74.5 MB in 32s (2336 kB/s) Preconfiguring packages ... Selecting previously unselected package pigz. (Reading database ... 107919 files and directories currently installed.) Preparing to unpack .../0-pigz_2.4-1_amd64.deb ... Unpacking pigz (2.4-1) ... Selecting previously unselected package bridge-utils. Preparing to unpack .../1-bridge-utils_1.6-2ubuntu1_amd64.deb ... Unpacking bridge-utils (1.6-2ubuntu1) ... Selecting previously unselected package runc. Preparing to unpack .../2-runc_1.0.1-0ubuntu2~20.04.1_amd64.deb ... Unpacking runc (1.0.1-0ubuntu2~20.04.1) ... Selecting previously unselected package containerd. Preparing to unpack .../3-containerd_1.5.5-0ubuntu3~20.04.1_amd64.deb ... Unpacking containerd (1.5.5-0ubuntu3~20.04.1) ... Selecting previously unselected package dns-root-data. Preparing to unpack .../4-dns-root-data_2019052802_all.deb ... Unpacking dns-root-data (2019052802) ... Selecting previously unselected package libidn11:amd64. Preparing to unpack .../5-libidn11_1.33-2.2ubuntu2_amd64.deb ... Unpacking libidn11:amd64 (1.33-2.2ubuntu2) ... Selecting previously unselected package dnsmasq-base. Preparing to unpack .../6-dnsmasq-base_2.80-1.1ubuntu1.4_amd64.deb ... Unpacking dnsmasq-base (2.80-1.1ubuntu1.4) ... Selecting previously unselected package docker.io. Preparing to unpack .../7-docker.io_20.10.7-0ubuntu5~20.04.2_amd64.deb ... Unpacking docker.io (20.10.7-0ubuntu5~20.04.2) ... Selecting previously unselected package ubuntu-fan. Preparing to unpack .../8-ubuntu-fan_0.12.13_all.deb ... Unpacking ubuntu-fan (0.12.13) ... Setting up runc (1.0.1-0ubuntu2~20.04.1) ... Setting up dns-root-data (2019052802) ... Setting up libidn11:amd64 (1.33-2.2ubuntu2) ... Setting up bridge-utils (1.6-2ubuntu1) ... Setting up pigz (2.4-1) ... Setting up containerd (1.5.5-0ubuntu3~20.04.1) ... 
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service. Setting up docker.io (20.10.7-0ubuntu5~20.04.2) ... Adding group `docker' (GID 117) ... Done. Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service. Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket. Setting up dnsmasq-base (2.80-1.1ubuntu1.4) ... Setting up ubuntu-fan (0.12.13) ... Created symlink /etc/systemd/system/multi-user.target.wants/ubuntu-fan.service → /lib/systemd/system/ubuntu-fan.service. Processing triggers for systemd (245.4-4ubuntu3.11) ... Processing triggers for man-db (2.9.1-1) ... Processing triggers for dbus (1.12.16-2ubuntu2.1) ... Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
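The kubeadm preflight checks below will warn that Docker is using the "cgroupfs" cgroup driver while "systemd" is recommended. The cluster comes up anyway in this walkthrough, but if you want to follow the recommendation, a commonly used sketch (my own addition, not something done in this article) is to set the driver in /etc/docker/daemon.json and restart Docker:

sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker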
Installing kubeadm, kubelet and kubectl
Install apt-transport-https and curl.
They are needed for the steps that follow.
euser@kubemaster1:~$ sudo apt update && sudo apt install apt-transport-https curl Hit:1 http://jp.archive.ubuntu.com/ubuntu focal InRelease Get:2 http://jp.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB] Get:3 http://jp.archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB] Get:4 http://jp.archive.ubuntu.com/ubuntu focal-security InRelease [114 kB] Fetched 336 kB in 3s (114 kB/s) Reading package lists... Done Building dependency tree Reading state information... Done 44 packages can be upgraded. Run 'apt list --upgradable' to see them. Reading package lists... Done Building dependency tree Reading state information... Done curl is already the newest version (7.68.0-1ubuntu2.7). curl set to manually installed. The following NEW packages will be installed: apt-transport-https 0 upgraded, 1 newly installed, 0 to remove and 44 not upgraded. Need to get 4680 B of archives. After this operation, 162 kB of additional disk space will be used. Do you want to continue? [Y/n] y Get:1 http://jp.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.6 [4680 B] Fetched 4680 B in 0s (61.2 kB/s) Selecting previously unselected package apt-transport-https. (Reading database ... 108273 files and directories currently installed.) Preparing to unpack .../apt-transport-https_2.0.6_all.deb ... Unpacking apt-transport-https (2.0.6) ... Setting up apt-transport-https (2.0.6) ...
Add the APT repository for Kubernetes and refresh the repository cache.
kubeuser@kubemaster1:~$ sudo vi /etc/apt/sources.list.d/kubernetes.list
kubeuser@kubemaster1:~$ cat /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
kubeuser@kubemaster1:~$ sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
kubeuser@kubemaster1:~$ sudo apt update
Hit:1 http://jp.archive.ubuntu.com/ubuntu focal InRelease
Get:2 http://jp.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:3 http://jp.archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
Get:5 http://jp.archive.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [9383 B]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [51.6 kB]
Fetched 388 kB in 4s (91.4 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
44 packages can be upgraded. Run 'apt list --upgradable' to see them.
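For reference, the same repository entry can be written without opening an editor (equivalent to the vi step above):

echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list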
Install kubeadm, kubelet, and kubectl. The versions are pinned to 1.18.0-00, and the packages are then held with apt-mark hold so they are not upgraded unintentionally.
kubeuser@kubemaster1:~$ sudo apt install kubeadm=1.18.0-00 kubelet=1.18.0-00 kubectl=1.18.0-00 Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: conntrack cri-tools kubernetes-cni socat Suggested packages: nftables The following NEW packages will be installed: conntrack cri-tools kubeadm kubectl kubelet kubernetes-cni socat 0 upgraded, 7 newly installed, 0 to remove and 44 not upgraded. Need to get 72.9 MB of archives. After this operation, 301 MB of additional disk space will be used. Do you want to continue? [Y/n] y Get:1 http://jp.archive.ubuntu.com/ubuntu focal/main amd64 conntrack amd64 1:1.4.5-2 [30.3 kB] Get:2 http://jp.archive.ubuntu.com/ubuntu focal/main amd64 socat amd64 1.7.3.3-2 [323 kB] Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.19.0-00 [11.2 MB] Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.8.7-00 [25.0 MB] Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.18.0-00 [19.4 MB] Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.18.0-00 [8822 kB] Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.18.0-00 [8163 kB] Fetched 72.9 MB in 13s (5579 kB/s) Selecting previously unselected package conntrack. (Reading database ... 108277 files and directories currently installed.) Preparing to unpack .../0-conntrack_1%3a1.4.5-2_amd64.deb ... Unpacking conntrack (1:1.4.5-2) ... Selecting previously unselected package cri-tools. Preparing to unpack .../1-cri-tools_1.19.0-00_amd64.deb ... Unpacking cri-tools (1.19.0-00) ... Selecting previously unselected package kubernetes-cni. Preparing to unpack .../2-kubernetes-cni_0.8.7-00_amd64.deb ... Unpacking kubernetes-cni (0.8.7-00) ... Selecting previously unselected package socat. Preparing to unpack .../3-socat_1.7.3.3-2_amd64.deb ... Unpacking socat (1.7.3.3-2) ... Selecting previously unselected package kubelet. Preparing to unpack .../4-kubelet_1.18.0-00_amd64.deb ... Unpacking kubelet (1.18.0-00) ... Selecting previously unselected package kubectl. Preparing to unpack .../5-kubectl_1.18.0-00_amd64.deb ... Unpacking kubectl (1.18.0-00) ... Selecting previously unselected package kubeadm. Preparing kubeuser@kubemaster1:~$ sudo apt-mark hold kubelet kubeadm kubectl kubelet set on hold. kubeadm set on hold. kubectl set on hold.
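For reference, the package versions available from the repository just added can be listed with apt-cache madison; this is handy when deciding which version to pin:

apt-cache madison kubeadm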
The steps from here are performed only on the first master node.
Configuring the CNI (Container Network Interface)
This configures the Pod network in which the containers run.
This time I used Calico.
Reference: https://kubernetes.io/ja/docs/concepts/cluster-administration/networking/
Download and edit the calico.yaml available at the following URL:
https://docs.projectcalico.org/manifests/calico.yaml
kubeuser@kubemaster1:~$ wget https://docs.projectcalico.org/manifests/calico.yaml --2021-12-01 11:03:11-- https://docs.projectcalico.org/manifests/calico.yaml Resolving docs.projectcalico.org (docs.projectcalico.org)... 2406:da18:880:3800:1655:e904:cce5:66a5, 2400:6180:0:d1::5be:9001, 178.128.124.245, ... Connecting to docs.projectcalico.org (docs.projectcalico.org)|2406:da18:880:3800:1655:e904:cce5:66a5|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 217525 (212K) [text/yaml] Saving to: ‘calico.yaml’ calico.yaml 100%[===================>] 212.43K 518KB/s in 0.4s 2021-12-01 11:03:12 (518 KB/s) - ‘calico.yaml’ saved [217525/217525] kubeuser@kubemaster1:~$ ls kubeuser@kubemaster1:~$ cat -n calico.yaml 1 --- 2 # Source: calico/templates/calico-config.yaml 3 # This ConfigMap is used to configure a self-hosted Calico installation. 4 kind: ConfigMap 5 apiVersion: v1 6 metadata: 7 name: calico-config . . . 4221 # no effect. This should fall within `--cluster-cidr`. 4222 # - name: CALICO_IPV4POOL_CIDR 4223 # value: "192.168.0.0/16" 4224 # Disable file logging so `kubectl logs` works. . . .
Edit lines 4222 and 4223 (uncomment CALICO_IPV4POOL_CIDR and set its value).
* Choose whichever subnet fits your environment.
4221    # no effect. This should fall within `--cluster-cidr`.
4222    - name: CALICO_IPV4POOL_CIDR
4223      value: "10.0.0.0/16"
4224    # Disable file logging so `kubectl logs` works.
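For reference, the same edit can be scripted with sed (a sketch that assumes the commented-out lines look exactly as in the downloaded manifest above; the indentation and default CIDR can change between Calico releases, so double-check the result before applying):

sed -i \
  -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
  -e 's|#   value: "192.168.0.0/16"|  value: "10.0.0.0/16"|' \
  calico.yaml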
Edit /etc/hosts.
* Entries for the three master nodes and two worker nodes that will form the cluster are added.
kubeuser@kubemaster1:~$ sudo vi /etc/hosts
kubeuser@kubemaster1:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 kube_master1
192.168.1.251 k8smaster1
192.168.1.252 k8smaster2
192.168.1.249 k8smaster3
192.168.1.253 k8sworker
192.168.1.254 k8sworker2

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Creating kubeadm-config.yaml
kubeuser@kubemaster1:~$ sudo vi kubeadm-config.yaml
kubeuser@kubemaster1:~$ cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.18.0
controlPlaneEndpoint: "k8smaster1:6443"
networking:
  podSubnet: 10.0.0.0/16    # edit this line (it must match CALICO_IPV4POOL_CIDR in calico.yaml)
Run kubeadm init to set up the first master node of the cluster.
At the end of the output, the commands for adding further master nodes and worker nodes are printed.
kubeuser@kubemaster1:~$ sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out W1201 11:21:16.673692 49217 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [init] Using Kubernetes version: v1.18.0 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03 [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubemaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8smaster1] and IPs [10.96.0.1 192.168.1.251] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [kubemaster1 localhost] and IPs [192.168.1.251 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [kubemaster1 localhost] and IPs [192.168.1.251 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" W1201 11:22:21.499446 49217 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" [control-plane] Creating static Pod manifest for "kube-scheduler" W1201 11:22:21.500873 49217 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s [apiclient] All control plane components are healthy after 32.507684 seconds [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace [upload-certs] Using certificate key: 3017c568250f52e8e22d2b3a5b8838f071016c0fe5e740c38652f85ffa99198a [mark-control-plane] Marking the node kubemaster1 as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node kubemaster1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] [bootstrap-token] Using token: cde4rl.f5gk2l5tt0k45nkh [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of the control-plane node running the following command on each as root: kubeadm join k8smaster1:6443 --token cde4rl.f5gk2l5tt0k45nkh \ --disco kubeuser@kubemaster1:~$ ls calico.yaml kubeadm-config.yaml kubeadm-init.out
Make kubectl usable by the current user.
* These commands are also included in the kubeadm init output.
kubeuser@kubemaster1:~$ mkdir -p $HOME/.kube kubeuser@kubemaster1:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config kubeuser@kubemaster1:~$ ls -la $HOME/.kube/config -rw------- 1 root root 5450 Dec 1 11:28 /home/kubeuser/.kube/config kubeuser@kubemaster1:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config kubeuser@kubemaster1:~$ ls -la $HOME/.kube/config -rw------- 1 kubeuser kubeuser 5450 Dec 1 11:28 /home/kubeuser/.kube/config kubeuser@kubemaster1:~$ cat ~/.kube/config apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXdNVEV4TWpJeE5sb1hEVE14TVRFeU9URXhNakl4Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnJ0CkZFVFVjbitnZjh3T3RPNFFkdmFhNmtzV1hQSk5nckdGbkUrcGtLU2hEU3ZVYXFOcGEvU3J6Zyt6R2JYZXliRTIKdjh0Z242Z0RWVGcwd24vZXR1YVJZc2N1SDVFcnFTamdUbFpRRUo3THpnR2p2OTM2UzRjVCtXdjYrK3pJL2lKcgpFS0lYRWIyVHFvWkpnZEhKVGNHbCsrdjdBYWwvNUdTZWE0SmhjRit5TXRsWFo5NE1ORko2VHp3YyswTVo2VDhkCnVQNjRlWHgvdEtXeDMrdG8ydElCR01jU3hqZDNhTE04bXRLWVo1SzdNTHRuM2hKR0xxM05nb0psdVRub1lWQm8KbVZYVTIrTDFSYnJSS3RmdmIwRWZRNnUwcnFCazNSN0l5MHJRNkl6a1kyVUoxdUNKNVV5UWRvczhjUXdrRlFWdgpLK1UrV2NEanVwbDVPRDhla1ZzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFFa0hmRjJmMTl3dmRYQ3hIdVhWZHVNVHVFWTkKaGU4TXZUQTlValhpTm1HWUZYSDQrZW53WGlLL3FOTEh4WUVhR2RLOTJhWGJmcFlGVG5BSkx6cDA4VzFNSTk3WgpoMFVLUVdsWTl2elhPalZLTjEwZnBkY1A5eThvVk0wSWhtWXdDbk9nZmdpRFdtM1QxL2dWbi9lL09xTWlsMmN4CjFjV2ZBeDlpRjlDVVA0ek5WbkUwUlNLQTI5c09TQS9nMEY2a3N0bTlHVmV3enlIZlZaalZUUVFGdEIxbHJTRFMKeitCQ0RFcERmOEFmejhZOXAwVHNESkFsQm5WempKZEhFcnV3TlUzSmVIUDcwZnB1QUtTYmZpT2J0ajBpVDVKMgpuZ1lhUDJVTitJSzYxUHFCb3ZIT3pNeE0rb0ZLODIyTlkvRXJyaEZGdVBJb1FKbEVGNkRPN0IrVUFGQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= server: https://k8smaster1:6443 name: kubernetes contexts: - context: cluster: kubernetes user: kubernetes-admin name: kubernetes-admin@kubernetes current-context: kubernetes-admin@kubernetes kind: Config preferences: {} users: - name: kubernetes-admin user: client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJWDNtZXU5K0s4UFV3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeU1ERXhNVEl5TVRaYUZ3MHlNakV5TURFeE1USXlNakJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXRuR3ByOVgxTUtMbVhhc04KUHRLOFUyZTdTNFFqb2V5UXdQSTdTSjVvQktFSGpSZDFkNUFRRXFUajRpbFlpODVydnJDRzZXVC84RW00eFQxdgpIYk1EbWdDMXNxVWVINjF0THpQOElkOEtJUjJyRGZFQk9TL3U4RnlVeUpQZyt5VENOaVJXWTZ5U21sL2NaVVFNClNtbDdXVDJPVmtNdzlCa3ZEcTNjRjMxRFdWVkRHS0l0dENxcEZ3d1B1WHE2N0xoNnhHLzlpK05UYVN0YVlPcU8KY2xORXNxQ1NoMVdVUkNiZWtma0k0OFRkTGdZd0dJL3lVV2xpbGEyRzUzOVRSVkEvc2pJVHpSbHVsZXVJY21hcQpvWGJiTThKTTF0UmF2MkJXRTZGbVY0NnFFdjJvMVBWL1kzZFF3Z054V0JJcWZ5NnVzZ2ZoejVuakd2K3BYUlZqClpHaHpzd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFCNitkSndvZ1pNdDJkVHlFdkp3aXhwcnE1U1Rvb3BmRUdISwo5L1loWWtDbmhKY0FSUGg3VXU5bW1yS3VsVWlka2N2SExRaGlFWEMxRzBLbWUyTkNHdGRJQ0V6cHJtbEExOGlvCmlEU2NVR2RqRnpBNkVqMjZFcUQvaVpkZHkxQWtXVlRZT2owKzRSUWVEWE5xa1FnL3FWNStURGpUb0NRNVdrZ0kKQ2lhU3hlTmxSZXc1WmVlTXo2WjU0bGhvcDNTcWlvaVFnbkM1U3dTbENQOEk3NENLVG9RYXNxcEtPSk5HR2JCMgpwRmtQOEJwWXYxTXlpYVgxKzVBYUhHaGVVR29PcG5VaU5RQ3E0ZktBUW94K04rTEl3OWJRdTBPSCsvVmJOdzRzCkYrcG5SRXZ1WG9zTFY3aVBMQXNNSHpHOUpOczhqcFdMZHR1ekpXSG5MQ0Fhb2lMUGNPVT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBdG5HcHI5WDFNS0xtWGFzTlB0SzhVMmU3UzRRam9leVF3UEk3U0o1b0JLRUhqUmQxCmQ1QVFFcVRqNGlsWWk4NXJ2ckNHNldULzhFbTR4VDF2SGJNRG1nQzFzcVVlSDYxdEx6UDhJZDhLSVIyckRmRUIKT1MvdThGeVV5SlBnK3lUQ05pUldZNnlTbWwvY1pVUU1TbWw3V1QyT1ZrTXc5Qmt2RHEzY0YzMURXVlZER0tJdAp0Q3FwRnd3UHVYcTY3TGg2eEcvOWkrTlRhU3RhWU9xT2NsTkVzcUNTaDFXVVJDYmVrZmtJNDhUZExnWXdHSS95ClVXbGlsYTJHNTM5VFJWQS9zaklUelJsdWxldUljbWFxb1hiYk04Sk0xdFJhdjJCV0U2Rm1WNDZxRXYybzFQVi8KWTNkUXdnTnhXQklxZnk2dXNnZmh6NW5qR3YrcFhSVmpaR2h6c3dJREFRQUJBb0lCQUFNdDI5MGFoMWsvblhBSQphUVN0TzJiZ3FkelpBcDN4dDF3RlhJOFpZNHFoRzdhVHNCSlRTbFJvMXllN3ZMVkM5Wkd2RmNxL1hjNWNHb0lsClhlaFFsRUY4dmEzTlBzY2lpSUtJRDE2dnVrZDFjdU9kVFg0bm5heEdrTGttQ29lVnptU1BJWW8vR1piakVMNGEKLzNQVWZyZkJZTmVUK0Nob3YrOHJqR2hFWUlZUDlMaklIZkJYR3JnWXE4VWZ5bndRakJBTVJvbFd0UjEva1NpSQpSOEdGMVVWOXloVnpPV2oxQ0ltWWdPbkJ2STBsZEMrUHVLaUZqNlF6OGVFU1JWaXhic29xUGhsWkFtbk5wTVliCjkvTk5oZmtPSXRGMGdBMGViQXFvZXRxdkZHYkw0RkFqVU9Hak5pNGpESTFFUmNuRWpSOXRLaEhDeWRDa2RhQjQKcFdyZ1dlRUNnWUVBN1VodXAyQkMyWE1zSVlQL0EyaEE4YWFlRHlWUDJ4TGxJN2Z4bThLR2QyeFFmNUR4QzJjVQowT2JTUmNwMnNqN3IyWXk1d2ZrUWJ1THZlR2RXWWpDS0VlNVo5WEJKTXdTYmVGSXBDY0QwUll2b2YwdG4xR2tLCkRkQU1LcXEweHFlc3NmT1hXRmlWVkg4ekNyQ0xBdXlHTW9TYnFvK1VwYlhVZHNMaFV2QndZSWtDZ1lFQXhOWFgKZEVxMGRwbXpJSGZ2ZWEzdVVQRHBIUjAzNkYvOEYwRVZrNjcrMW1TUDFVTTJuSXk0RFJSRzdhSG0relpqSGtiUQpUNnlWNDV4ckdKV2xmRXd4Y25OY1VaQ1Nhblh6azRzNkIwTFFUeU1lYlQ1MmY3S1dsU0s4eThkUldjN0kzcXJCCkFaM3FpMjBoWlQwV1lyNkUzZXR6ODFnSjBYbElSM2NzQk9jK1Mxc0NnWUVBczMyeUxxeVRoUGdwYnVUeGQvdFoKL1RKRHFFTmFSK2JnTElmTm5UeW1DUnFIUGlnL0hwZ0lXQW56RDlZYXViVDlKZURjOTQxWFQvb2NtZURacUliOQpPcGtwdFk4TjRDamhEa0JnU0wrTVNEdVFVUktTWlV4YnpaME9Sd3hBbVhGbkltbVlsN3pTb1V0aktmZm4vL3M1CmZHZHhkYkVOQ2RrazhmMXpBeEZjZ0xrQ2dZRUFsbkhIZ3JnU3BNK25UTHEreStiM3p0
The kubectl command can now be run.
kubeuser@kubemaster1:~$ kubectl get nodes
NAME          STATUS     ROLES    AGE    VERSION
kubemaster1   NotReady   master   8m7s   v1.18.0
The node is still NotReady because no Pod network add-on is installed yet; deploy Calico from the edited calico.yaml.
* Calico runs as containers (Pods).
kubeuser@kubemaster1:~$ ls calico.yaml kubeadm-config.yaml kubeadm-init.out kubeuser@kubemaster1:~$ kubectl apply -f calico.yaml configmap/calico-config created customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created clusterrole.rbac.authorization.k8s.io/calico-node created clusterrolebinding.rbac.authorization.k8s.io/calico-node created daemonset.apps/calico-node created serviceaccount/calico-node created deployment.apps/calico-kube-controllers created serviceaccount/calico-kube-controllers created poddisruptionbudget.policy/calico-kube-controllers created
Adding "-n kube-system" shows the Calico and other system Pods,
and running kubectl get pods every few seconds lets you watch their status change.
kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 Pending 0 15s calico-node-7t474 0/1 Init:0/3 0 15s coredns-66bff467f8-cpd25 0/1 Pending 0 10m coredns-66bff467f8-wtww9 0/1 Pending 0 10m etcd-kubemaster1 1/1 Running 0 10m kube-apiserver-kubemaster1 1/1 Running 0 10m kube-controller-manager-kubemaster1 1/1 Running 0 10m kube-proxy-6gq55 1/1 Running 0 10m kube-scheduler-kubemaster1 1/1 Running 0 10m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 Pending 0 33s calico-node-7t474 0/1 Init:1/3 0 33s coredns-66bff467f8-cpd25 0/1 Pending 0 10m coredns-66bff467f8-wtww9 0/1 Pending 0 10m etcd-kubemaster1 1/1 Running 0 10m kube-apiserver-kubemaster1 1/1 Running 0 10m kube-controller-manager-kubemaster1 1/1 Running 0 10m kube-proxy-6gq55 1/1 Running 0 10m kube-scheduler-kubemaster1 1/1 Running 0 10m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 Pending 0 37s calico-node-7t474 0/1 Init:2/3 0 37s coredns-66bff467f8-cpd25 0/1 Pending 0 10m coredns-66bff467f8-wtww9 0/1 Pending 0 10m etcd-kubemaster1 1/1 Running 0 10m kube-apiserver-kubemaster1 1/1 Running 0 10m kube-controller-manager-kubemaster1 1/1 Running 0 10m kube-proxy-6gq55 1/1 Running 0 10m kube-scheduler-kubemaster1 1/1 Running 0 10m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 Pending 0 38s calico-node-7t474 0/1 Init:2/3 0 38s coredns-66bff467f8-cpd25 0/1 Pending 0 10m coredns-66bff467f8-wtww9 0/1 Pending 0 10m etcd-kubemaster1 1/1 Running 0 10m kube-apiserver-kubemaster1 1/1 Running 0 10m kube-controller-manager-kubemaster1 1/1 Running 0 10m kube-proxy-6gq55 1/1 Running 0 10m kube-scheduler-kubemaster1 1/1 Running 0 10m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 Pending 0 42s calico-node-7t474 0/1 Init:2/3 0 42s coredns-66bff467f8-cpd25 0/1 Pending 0 10m coredns-66bff467f8-wtww9 0/1 Pending 0 10m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 10m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 Pending 0 44s calico-node-7t474 0/1 Init:2/3 0 44s coredns-66bff467f8-cpd25 0/1 Pending 0 10m coredns-66bff467f8-wtww9 0/1 Pending 0 10m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 10m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 Pending 0 45s calico-node-7t474 0/1 Init:2/3 0 45s coredns-66bff467f8-cpd25 0/1 Pending 0 10m coredns-66bff467f8-wtww9 0/1 Pending 0 10m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 10m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 Pending 0 47s calico-node-7t474 0/1 
Init:2/3 0 47s coredns-66bff467f8-cpd25 0/1 ContainerCreating 0 10m coredns-66bff467f8-wtww9 0/1 ContainerCreating 0 10m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 10m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 ContainerCreating 0 51s calico-node-7t474 0/1 Init:2/3 0 51s coredns-66bff467f8-cpd25 0/1 ContainerCreating 0 11m coredns-66bff467f8-wtww9 0/1 ContainerCreating 0 11m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 ContainerCreating 0 53s calico-node-7t474 0/1 PodInitializing 0 53s coredns-66bff467f8-cpd25 0/1 ContainerCreating 0 11m coredns-66bff467f8-wtww9 0/1 ContainerCreating 0 11m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 ContainerCreating 0 55s calico-node-7t474 0/1 PodInitializing 0 55s coredns-66bff467f8-cpd25 0/1 ContainerCreating 0 11m coredns-66bff467f8-wtww9 0/1 ContainerCreating 0 11m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 ContainerCreating 0 58s calico-node-7t474 0/1 PodInitializing 0 58s coredns-66bff467f8-cpd25 0/1 ContainerCreating 0 11m coredns-66bff467f8-wtww9 0/1 ContainerCreating 0 11m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 ContainerCreating 0 60s calico-node-7t474 0/1 PodInitializing 0 60s coredns-66bff467f8-cpd25 0/1 ContainerCreating 0 11m coredns-66bff467f8-wtww9 0/1 ContainerCreating 0 11m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 ContainerCreating 0 62s calico-node-7t474 0/1 PodInitializing 0 62s coredns-66bff467f8-cpd25 0/1 ContainerCreating 0 11m coredns-66bff467f8-wtww9 0/1 ContainerCreating 0 11m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE 
calico-kube-controllers-77ff9c69dd-m5jbz 0/1 ContainerCreating 0 65s calico-node-7t474 0/1 PodInitializing 0 65s coredns-66bff467f8-cpd25 0/1 ContainerCreating 0 11m coredns-66bff467f8-wtww9 0/1 ContainerCreating 0 11m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 ContainerCreating 0 68s calico-node-7t474 0/1 PodInitializing 0 68s coredns-66bff467f8-cpd25 0/1 ContainerCreating 0 11m coredns-66bff467f8-wtww9 0/1 ContainerCreating 0 11m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 ContainerCreating 0 71s calico-node-7t474 0/1 PodInitializing 0 71s coredns-66bff467f8-cpd25 0/1 ContainerCreating 0 11m coredns-66bff467f8-wtww9 0/1 ContainerCreating 0 11m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 ContainerCreating 0 76s calico-node-7t474 0/1 PodInitializing 0 76s coredns-66bff467f8-cpd25 0/1 ContainerCreating 0 11m coredns-66bff467f8-wtww9 0/1 ContainerCreating 0 11m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 ContainerCreating 0 80s calico-node-7t474 0/1 PodInitializing 0 80s coredns-66bff467f8-cpd25 0/1 ContainerCreating 0 11m coredns-66bff467f8-wtww9 0/1 ContainerCreating 0 11m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 ContainerCreating 0 99s calico-node-7t474 1/1 Running 0 99s coredns-66bff467f8-cpd25 1/1 Running 0 11m coredns-66bff467f8-wtww9 1/1 Running 0 11m etcd-kubemaster1 1/1 Running 0 11m kube-apiserver-kubemaster1 1/1 Running 0 11m kube-controller-manager-kubemaster1 1/1 Running 0 11m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 11m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 0/1 Running 0 102s calico-node-7t474 1/1 Running 0 102s coredns-66bff467f8-cpd25 1/1 Running 0 11m coredns-66bff467f8-wtww9 1/1 Running 0 11m etcd-kubemaster1 1/1 Running 0 12m kube-apiserver-kubemaster1 1/1 Running 0 12m kube-controller-manager-kubemaster1 1/1 Running 0 12m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 12m kubeuser@kubemaster1:~$ kubectl get pods -n kube-system NAME 
READY STATUS RESTARTS AGE calico-kube-controllers-77ff9c69dd-m5jbz 1/1 Running 0 106s calico-node-7t474 1/1 Running 0 106s coredns-66bff467f8-cpd25 1/1 Running 0 11m coredns-66bff467f8-wtww9 1/1 Running 0 11m etcd-kubemaster1 1/1 Running 0 12m kube-apiserver-kubemaster1 1/1 Running 0 12m kube-controller-manager-kubemaster1 1/1 Running 0 12m kube-proxy-6gq55 1/1 Running 0 11m kube-scheduler-kubemaster1 1/1 Running 0 12m kubeuser@kubemaster1:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster1 Ready master 12m v1.18.0
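Instead of re-running kubectl get pods by hand, the --watch flag streams status changes as they happen (press Ctrl-C to stop); a small convenience, not something used in this walkthrough:

kubectl get pods -n kube-system -w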
The settings related to kubeadm-config.yaml can also be checked with a command.
kubeuser@kubemaster1:~$ sudo kubeadm config print init-defaults W1201 11:39:51.998223 62685 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] apiVersion: kubeadm.k8s.io/v1beta2 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token token: abcdef.0123456789abcdef ttl: 24h0m0s usages: - signing - authentication kind: InitConfiguration localAPIEndpoint: advertiseAddress: 1.2.3.4 bindPort: 6443 nodeRegistration: criSocket: /var/run/dockershim.sock name: kubemaster1 taints: - effect: NoSchedule key: node-role.kubernetes.io/master --- apiServer: timeoutForControlPlane: 4m0s apiVersion: kubeadm.k8s.io/v1beta2 certificatesDir: /etc/kubernetes/pki clusterName: kubernetes controllerManager: {} dns: type: CoreDNS etcd: local: dataDir: /var/lib/etcd imageReposit
The steps from here are performed on the first worker node.
Adding a worker node
Edit /etc/hosts.
kubeuser@kubeworker:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 kube_worker
192.168.1.251 k8smaster1
192.168.1.252 k8smaster2
192.168.1.249 k8smaster3
192.168.1.253 k8sworker
192.168.1.254 k8sworker

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Check the "kubeadm join" command that was printed on the master.
kubeuser@kubemaster1:~$ ls
calico.yaml  kubeadm-config.yaml  kubeadm-init.out
kubeuser@kubemaster1:~$ cat kubeadm-init.out
.
.
.
kubeadm join k8smaster1:6443 --token cde4rl.f5gk2l5tt0k45nkh \
    --discovery-token-ca-cert-hash sha256:239cf7568930a6eca59742e0d1d4ce6fe1371238a68833b8293a72580b0a08ae
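If kubeadm-init.out is no longer available, the --discovery-token-ca-cert-hash value can be recomputed on a master from the cluster CA certificate (this is the procedure given in the official kubeadm documentation, as far as I know):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'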
Run kubeadm join.
kubeuser@kubeworker:~$ sudo kubeadm join k8smaster1:6443 --token cde4rl.f5gk2l5tt0k45nkh --discovery-token-ca-cert-hash sha256:239cf7568930a6eca59742e0d1d4ce6fe1371238a68833b8293a72580b0a08ae W1201 11:53:27.578257 19408 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set. [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03 [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... This node has joined the cluster: * Certificate signing request was sent to apiserver and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Running "kubectl get nodes" on the master node, you can see the new node become "Ready" after a bit more than a minute.
kubeuser@kubemaster1:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster1 Ready master 30m v1.18.0 kubeuser@kubemaster1:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster1 Ready master 30m v1.18.0 kubeworker NotReady <none> 1s v1.18.0 kubeuser@kubemaster1:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster1 Ready master 31m v1.18.0 kubeworker NotReady <none> 27s v1.18.0 kubeuser@kubemaster1:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster1 Ready master 31m v1.18.0 kubeworker NotReady <none> 43s v1.18.0 kubeuser@kubemaster1:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster1 Ready master 31m v1.18.0 kubeworker NotReady <none> 68s v1.18.0 kubeuser@kubemaster1:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster1 Ready master 32m v1.18.0 kubeworker Ready <none> 73s v1.18.0
Next, work on the second master node.
Adding a master node
Edit /etc/hosts.
kubeuser@kubemaster2:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 kube_master2
192.168.1.251 k8smaster1
192.168.1.252 k8smaster2
192.168.1.249 k8smaster3
192.168.1.253 k8sworker
192.168.1.254 k8sworker2

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Check the "kubeadm join" command for master nodes (the one with --control-plane) in the kubeadm init output.
kubeuser@kubemaster1:~$ ls
calico.yaml  kubeadm-config.yaml  kubeadm-init.out
kubeuser@kubemaster1:~$ cat kubeadm-init.out
.
.
.
You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8smaster1:6443 --token cde4rl.f5gk2l5tt0k45nkh \
    --discovery-token-ca-cert-hash sha256:239cf7568930a6eca59742e0d1d4ce6fe1371238a68833b8293a72580b0a08ae \
    --control-plane --certificate-key 3017c568250f52e8e22d2b3a5b8838f071016c0fe5e740c38652f85ffa99198a
.
.
.
Run kubeadm join.
kubeuser@kubemaster2:~$ sudo kubeadm join k8smaster1:6443 --token cde4rl.f5gk2l5tt0k45nkh \ > --discovery-token-ca-cert-hash sha256:239cf7568930a6eca59742e0d1d4ce6fe1371238a68833b8293a72580b0a08ae \ > --control-plane --certificate-key 3017c568250f52e8e22d2b3a5b8838f071016c0fe5e740c38652f85ffa99198a [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03 [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [preflight] Running pre-flight checks before initializing the new control plane instance [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubemaster2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8smaster1] and IPs [10.96.0.1 192.168.1.252] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [kubemaster2 localhost] and IPs [192.168.1.252 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [kubemaster2 localhost] and IPs [192.168.1.252 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki" [certs] Using the existing "sa" key [kubeconfig] Generating kubeconfig files [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" W1201 12:00:06.325890 20690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" [control-plane] Creating static Pod manifest for "kube-controller-manager" W1201 12:00:06.337079 20690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" [control-plane] Creating static Pod manifest for "kube-scheduler" W1201 12:00:06.338505 20690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" [check-etcd] Checking that the etcd cluster is healthy [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file 
"/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... [etcd] Announced new etcd member joining to the existing etcd cluster [etcd] Creating static Pod manifest for "etcd" [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s [kubelet-check] Initial timeout of 40s passed. {"level":"warn","ts":"2021-12-01T12:01:02.843Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.1.252:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"} [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [mark-control-plane] Marking the node kubemaster2 as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node kubemaster2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] This node has joined the cluster and a new control plane instance was created: * Certificate signing request was sent to apiserver and approval was received. * The Kubelet was informed of the new secure connection details. * Control plane (master) label and taint were applied to the new node. * The Kubernetes control plane instances scaled up. * A new etcd member was added to the local/stacked etcd cluster. To start administering your cluster from this node, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Run 'kubectl get nodes' to see this node join the cluster.
The node being added, step by step:
kubeuser@kubemaster1:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster1 Ready master 37m v1.18.0 kubeworker Ready <none> 6m35s v1.18.0 kubeuser@kubemaster1:~$ kubectl get nodes Error from server: etcdserver: request timed out kubeuser@kubemaster1:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster1 Ready master 38m v1.18.0 kubemaster2 NotReady <none> 60s v1.18.0 kubeworker Ready <none> 7m36s v1.18.0 kubeuser@kubemaster1:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster1 Ready master 38m v1.18.0 kubemaster2 Ready master 85s v1.18.0 kubeworker Ready <none> 8m1s v1.18.0
Set up the kubectl environment for the user on this node.
kubeuser@kubemaster2:~$ mkdir -p $HOME/.kube
kubeuser@kubemaster2:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubeuser@kubemaster2:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeuser@kubemaster2:~$ kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
kubemaster1   Ready    master   40m     v1.18.0
kubemaster2   Ready    master   3m15s   v1.18.0
kubeworker    Ready    <none>   9m51s   v1.18.0
Checking the tokens used for adding nodes.
kubeuser@kubemaster1:~$ sudo kubeadm token list
[sudo] password for kubeuser:
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                           EXTRA GROUPS
cde4rl.f5gk2l5tt0k45nkh   23h   2021-12-02T11:22:55Z   authentication,signing   <none>                                                system:bootstrappers:kubeadm:default-node-token
qy7yo2.tfic23hneed7cpuu   1h    2021-12-01T13:22:54Z   <none>                   Proxy for managing TTL for the kubeadm-certs secret   <none>
Handling an expired token
If the token issued when the nodes (master/worker) were added has passed its 24-hour TTL,
the join fails with an error like the following.
kubeuser@kubeworker2:~$ kubeadm join k8smaster1:6443 --token cde4rl.f5gk2l5tt0k45nkh \ --discovery-token-ca-cert-hash sha256:239cf7568930a6eca59742e0d1d4ce6fe1371238a68833b8293a72580b0a08ae W1201 12:47:00.138352 18523 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set. [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "cde4rl" To see the stack trace of this error execute with --v=5 or higher
The fix is simply to create a new token.
Then just point the --token option of "kubeadm join" at the newly created token.
kubeuser@kubemaster1:~$ sudo kubeadm token create
W1201 12:20:28.289334  103522 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
lfa4k5.6k57rdwzrvo4sar8
kubeuser@kubemaster1:~$ sudo kubeadm token list
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                           EXTRA GROUPS
cde4rl.f5gk2l5tt0k45nkh   23h   2021-12-02T11:22:55Z   authentication,signing   <none>                                                system:bootstrappers:kubeadm:default-node-token
lfa4k5.6k57rdwzrvo4sar8   23h   2021-12-02T12:20:28Z   authentication,signing   <none>                                                system:bootstrappers:kubeadm:default-node-token
qy7yo2.tfic23hneed7cpuu   1h    2021-12-01T13:22:54Z   <none>                   Proxy for managing TTL for the kubeadm-certs secret   <none>
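As a shortcut (not used in this article), kubeadm can also create a new token and print a ready-to-use worker join command in one go; for control-plane nodes, the --control-plane and --certificate-key options still have to be appended:

sudo kubeadm token create --print-join-command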
Let's confirm that this works.
Adding the second worker node.
Edit /etc/hosts and run kubeadm join.
kubeuser@kubeworker2:~$ sudo vi /etc/hosts [sudo] password for kubeuser: kubeuser@kubeworker2:~$ cat /etc/hosts 127.0.0.1 localhost 127.0.1.1 kube_worker2 192.168.1.251 k8smaster1 192.168.1.252 k8smaster2 192.168.1.249 k8smaster3 192.168.1.253 k8sworker 192.168.1.254 k8sworker2 # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters kubeuser@kubeworker2:~$ sudo kubeadm join k8smaster1:6443 --token lfa4k5.6k57rdw zrvo4sar8 --discovery-token-ca-cert-hash sha256:239cf7568930a6eca59742e0d1d4 ce6fe1371238a68833b8293a72580b0a08ae W1201 12:23:19.569456 23296 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set. [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03 [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... This node has joined the cluster: * Certificate signing request was sent to apiserver and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
The third master node can be added in the same way.
kubeuser@kubemaster3:~$ sudo vi /etc/hosts [sudo] password for kubeuser: kubeuser@kubemaster3:~$ cat /etc/hosts 127.0.0.1 localhost 127.0.1.1 kube_master3 192.168.1.251 k8smaster1 192.168.1.252 k8smaster2 192.168.1.249 k8smaster3 192.168.1.253 k8sworker 192.168.1.254 k8sworker2 # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters kubeuser@kubemaster3:~$ sudo kubeadm join k8smaster1:6443 --token lfa4k5.6k57rdwzrvo4sar8 \ > --discovery-token-ca-cert-hash sha256:239cf7568930a6eca59742e0d1d4ce6fe1371238a68833b8293a72580b0a08ae \ > --control-plane --certificate-key 3017c568250f52e8e22d2b3a5b8838f071016c0fe5e740c38652f85ffa99198a [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03 [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [preflight] Running pre-flight checks before initializing the new control plane instance [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [kubemaster3 localhost] and IPs [192.168.1.249 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [kubemaster3 localhost] and IPs [192.168.1.249 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [kubemaster3 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8smaster1] and IPs [10.96.0.1 192.168.1.249] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki" [certs] Using the existing "sa" key [kubeconfig] Generating kubeconfig files [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" W1201 12:27:45.927652 24196 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" [control-plane] Creating static Pod manifest for "kube-controller-manager" W1201 12:27:45.945850 24196 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" [control-plane] Creating static Pod manifest for "kube-scheduler" W1201 12:27:45.947338 
24196 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC" [check-etcd] Checking that the etcd cluster is healthy [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... [etcd] Announced new etcd member joining to the existing etcd cluster [etcd] Creating static Pod manifest for "etcd" [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s {"level":"warn","ts":"2021-12-01T12:28:17.138Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.1.249:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"} [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [mark-control-plane] Marking the node kubemaster3 as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node kubemaster3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] This node has joined the cluster and a new control plane instance was created: * Certificate signing request was sent to apiserver and approval was received. * The Kubelet was informed of the new secure connection details. * Control plane (master) label and taint were applied to the new node. * The Kubernetes control plane instances scaled up. * A new etcd member was added to the local/stacked etcd cluster. To start administering your cluster from this node, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Run 'kubectl get nodes' to see this node join the cluster. kubeuser@kubemaster3:~$ mkdir -p $HOME/.kube kubeuser@kubemaster3:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config kubeuser@kubemaster3:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config kubeuser@kubemaster3:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster1 Ready master 66m v1.18.0 kubemaster2 Ready master 29m v1.18.0 kubemaster3 Ready master 104s v1.18.0 kubeworker Ready <none> 36m v1.18.0 kubeworker2 Ready <none> 5m54s v1.18.0
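For reference, here is the essence of the join procedure above, pulled out of the console log as a minimal sketch. The token, discovery hash, and certificate key are the ones issued for this particular cluster (by kubeadm init / kubeadm token create on the first master), so they must be replaced with the values from your own environment.
# Join an additional control-plane node to the existing cluster
sudo kubeadm join k8smaster1:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>
# Let the local user run kubectl, as instructed in the join output
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config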
Completing the Kubernetes Cluster
With this, the Kubernetes cluster is complete!
Details for each node can be checked with "kubectl describe node".
kubeuser@kubemaster1:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION kubemaster1 Ready master 68m v1.18.0 kubemaster2 Ready master 30m v1.18.0 kubemaster3 Ready master 3m16s v1.18.0 kubeworker Ready <none> 37m v1.18.0 kubeworker2 Ready <none> 7m26s v1.18.0 kubeuser@kubemaster1:~$ kubectl describe node kubemaster1 Name: kubemaster1 Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=kubemaster1 kubernetes.io/os=linux node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 projectcalico.org/IPv4Address: 192.168.1.251/24 projectcalico.org/IPv4IPIPTunnelAddr: 10.0.237.64 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 01 Dec 2021 11:22:48 +0000 Taints: node-role.kubernetes.io/master:NoSchedule Unschedulable: false Lease: HolderIdentity: kubemaster1 AcquireTime: <unset> RenewTime: Wed, 01 Dec 2021 12:31:15 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- NetworkUnavailable False Wed, 01 Dec 2021 11:34:36 +0000 Wed, 01 Dec 2021 11:34:36 +0000 CalicoIsUp Calico is running on this node MemoryPressure False Wed, 01 Dec 2021 12:30:26 +0000 Wed, 01 Dec 2021 11:22:48 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 01 Dec 2021 12:30:26 +0000 Wed, 01 Dec 2021 11:22:48 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 01 Dec 2021 12:30:26 +0000 Wed, 01 Dec 2021 11:22:48 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 01 Dec 2021 12:30:26 +0000 Wed, 01 Dec 2021 11:34:07 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled Addresses: InternalIP: 192.168.1.251 Hostname: kubemaster1 Capacity: cpu: 2 ephemeral-storage: 20511312Ki hugepages-2Mi: 0 memory: 2035144Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 18903225108 hugepages-2Mi: 0 memory: 1932744Ki pods: 110 System Info: Machine ID: e0a37321b6f94a7aa6e6107d7f656965 System UUID: e0a37321-b6f9-4a7a-a6e6-107d7f656965 Boot ID: 3acfbc50-dc18-4c7b-a26e-e85d769d1349 Kernel Version: 5.4.0-81-generic OS Image: Ubuntu 20.04.3 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.7 Kubelet Version: v1.18.0 Kube-Proxy Version: v1.18.0 PodCIDR: 10.0.0.0/24 PodCIDRs: 10.0.0.0/24 Non-terminated Pods: (9 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- kube-system calico-kube-controllers-77ff9c69dd-m5jbz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 58m kube-system calico-node-7t474 250m (12%) 0 (0%) 0 (0%) 0 (0%) 58m kube-system coredns-66bff467f8-cpd25 100m (5%) 0 (0%) 70Mi (3%) 170Mi (9%) 68m kube-system coredns-66bff467f8-wtww9 100m (5%) 0 (0%) 70Mi (3%) 170Mi (9%) 68m kube-system etcd-kubemaster1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 68m kube-system kube-apiserver-kubemaster1 250m (12%) 0 (0%) 0 (0%) 0 (0%) 68m kube-system kube-controller-manager-kubemaster1 200m (10%) 0 (0%) 0 (0%) 0 (0%) 68m kube-system kube-proxy-6gq55 0 (0%) 0 (0%) 0 (0%) 0 (0%) 68m kube-system kube-scheduler-kubemaster1 100m (5%) 0 (0%) 0 (0%) 0 (0%) 68m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 1 (50%) 0 (0%) memory 140Mi (7%) 340Mi (18%) ephemeral-storage 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientMemory 68m (x8 over 68m) kubelet, kubemaster1 Node kubemaster1 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 68m (x8 over 68m) kubelet, kubemaster1 Node kubemaster1 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 68m (x7 over 68m) kubelet, kubemaster1 Node kubemaster1 status is now: NodeHasSufficientPID Normal Starting 68m kubelet, kubemaster1 Starting kubelet. Normal NodeHasSufficientMemory 68m kubelet, kubemaster1 Node kubemaster1 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 68m kubelet, kubemaster1 Node kubemaster1 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 68m kubelet, kubemaster1 Node kubemaster1 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 68m kubelet, kubemaster1 Updated Node Allocatable limit across pods Normal Starting 68m kube-proxy, kubemaster1 Starting kube-proxy. Normal NodeReady 57m kubelet, kubemaster1 Node kubemaster1 status is now: NodeReady
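As a quick usage example (the node name kubemaster1 is simply the one used above; any name listed by "kubectl get nodes" works), a single node can be inspected directly, and "-o wide" adds the internal IP, OS image, and container runtime to the node list:
# Describe one specific node
kubectl describe node kubemaster1
# Node list with extra columns (internal IP, OS image, kernel, container runtime)
kubectl get nodes -o wide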
In Closing
This was my first time building a Kubernetes cluster. It felt difficult at points, but the sense of accomplishment when everything finally came together was immense!
One other takeaway: rather than building with a "let's just try it and see" mindset,
it may be important to "build with a clear picture of the finished cluster in mind".