This is Takahiro Kujirai (@opensourcetech), LinuC Evangelist.
In this post I build a Kubernetes cluster with kubeadm on two VMs (Ubuntu Server 20.04.3 LTS) created beforehand on KVM.
How the VMs were created on KVM
virt-install --name kube_newmaster --ram 4096 --disk size=50 --vcpus 2 \
  --os-variant ubuntu20.04 --network bridge=virbr0 \
  --graphics none --console pty,target_type=serial \
  --location /home/ubuntu/ubuntu-20.04.3-live-server-amd64.iso,kernel=casper/vmlinuz,initrd=casper/initrd \
  --extra-args 'console=ttyS0,115200n8 serial'

virt-install --name kube_newworker --ram 4096 --disk size=50 --vcpus 2 \
  --os-variant ubuntu20.04 --network bridge=virbr0 \
  --graphics none --console pty,target_type=serial \
  --location /home/ubuntu/ubuntu-20.04.3-live-server-amd64.iso,kernel=casper/vmlinuz,initrd=casper/initrd \
  --extra-args 'console=ttyS0,115200n8 serial'
VM specs
- Memory: 4GB
- Storage: 50GB
- vCPUs: 2
Layout of the Kubernetes cluster built here
192.168.1.251/24 (master node, hostname: kube_newmaster1)
192.168.1.252/24 (worker node, hostname: kube_newworker1)
Pod network: 10.0.0.0/16
The documents I referenced are the following.
Building a Kubernetes cluster (kubeadm) on Ubuntu Server 20.04.3 LTS
Creating a cluster with kubeadm
Installing kubeadm
Installing a CRI (container runtime)
Cluster Networking
The steps from here are performed on every node that will join the cluster.
Before you begin
First, some checks before starting the installation.
Swap was enabled, so disable it.
* Don't forget the boot-time entry (/etc/fstab) as well.
kubeuser@kubenewworker1:~$ sudo swapon -s
[sudo] password for kubeuser:
Filename                                Type            Size    Used    Priority
/swap.img                               file            4030460 0       -2

kubeuser@kubenewmaster1:~$ sudo swapoff -a
kubeuser@kubenewmaster1:~$ sudo vi /etc/fstab
kubeuser@kubenewmaster1:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-QczQjvcU5IQylbRISyfifgcwEixnXcJuTICwSKwlsYA1F6k5jKm0k9ihEsWRijhF / ext4 defaults 0 1
# /boot was on /dev/vda2 during curtin installation
/dev/disk/by-uuid/2e334452-39bf-451a-96cb-94ce72134ca3 /boot ext4 defaults 0 1
#/swap.img      none    swap    sw      0       0
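To double-check that swap is really off after the edit (my addition, not part of the original log), either of the following should confirm it:

# Prints nothing once no swap device is active
swapon --show
# The "Swap:" line should show 0B used
free -h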
Letting iptables see bridged traffic
Load the br_netfilter module.
kubeuser@kubeworker2:~$ lsmod | grep br_netfilter
kubeuser@kubeworker2:~$ sudo modprobe br_netfilter
[sudo] password for kubeuser:
kubeuser@kubeworker2:~$ lsmod | grep br_netfilter
br_netfilter           28672  0
bridge                176128  1 br_netfilter
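A manual modprobe does not survive a reboot. If you want br_netfilter loaded automatically at boot, a small sketch like this should do it (the containerd section below achieves the same thing through /etc/modules-load.d/containerd.conf):

# Have systemd-modules-load pull in br_netfilter at every boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF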
Then configure the node so that Linux iptables correctly sees traffic crossing the bridge.
kubeuser@kubenewmaster1:~$ sudo vi /etc/sysctl.d/k8s.conf
kubeuser@kubenewmaster1:~$ cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
kubeuser@kubenewmaster1:~$ sudo sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /usr/lib/sysctl.d/protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.conf ...
Keeping iptables off the nftables backend
The nftables backend is not compatible with the current kubeadm packages, so switch the alternatives so that iptables-legacy is used.
kubeuser@kubenewmaster1:~$ sudo apt-get install -y iptables arptables ebtables
kubeuser@kubenewmaster1:~$ sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
kubeuser@kubenewmaster1:~$ sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
kubeuser@kubenewmaster1:~$ sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives: using /usr/sbin/arptables-legacy to provide /usr/sbin/arptables (arptables) in manual mode
kubeuser@kubenewmaster1:~$ sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy
update-alternatives: using /usr/sbin/ebtables-legacy to provide /usr/sbin/ebtables (ebtables) in manual mode
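To verify which variant is now active (my addition, not in the original log), iptables prints the backend in parentheses:

# Expect something like "iptables v1.8.4 (legacy)" after the switch
iptables -V
# Shows all registered alternatives and the current selection
sudo update-alternatives --display iptables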
Installing a runtime
You can choose among Docker, CRI-O, containerd, and others;
this time I used containerd.
Reference: https://kubernetes.io/ja/docs/setup/production-environment/container-runtimes/
kubeuser@kubenewmaster1:~$ sudo vi /etc/modules-load.d/containerd.conf
kubeuser@kubenewmaster1:~$ cat /etc/modules-load.d/containerd.conf
overlay
br_netfilter
kubeuser@kubenewmaster1:~$ sudo modprobe overlay
kubeuser@kubenewmaster1:~$ sudo modprobe br_netfilter
kubeuser@kubenewmaster1:~$ sudo vi /etc/sysctl.d/99-kubernetes-cri.conf
kubeuser@kubenewmaster1:~$ cat /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
kubeuser@kubenewmaster1:~$ sudo sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-kubernetes-cri.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /usr/lib/sysctl.d/protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.conf ...
kubeuser@kubenewmaster1:~$ sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
kubeuser@kubenewmaster1:~$ sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
kubeuser@kubenewmaster1:~$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
kubeuser@kubenewmaster1:~$ sudo apt-get update && sudo apt-get install -y containerd.io
kubeuser@kubenewmaster1:~$ sudo mkdir -p /etc/containerd
kubeuser@kubenewmaster1:~$ containerd config default | sudo tee /etc/containerd/config.toml
kubeuser@kubenewmaster1:~$ sudo systemctl restart containerd
kubeuser@kubenewmaster1:~$
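Before editing the config, it doesn't hurt to confirm containerd actually came up (a quick check I've added, not in the original log):

# Should print "active"
systemctl is-active containerd
# Client and server versions; the server half only answers if the daemon is running
sudo ctr version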
Line 59 is the line to change.
kubeuser@kubenewmaster1:~$ sudo vi /etc/containerd/config.toml
kubeuser@kubenewmaster1:~$ cat -n /etc/containerd/config.toml
     1  version = 2
     2  root = "/var/lib/containerd"
     3  state = "/run/containerd"
     4  plugin_dir = ""
     5  disabled_plugins = []
     6  required_plugins = []
     7  oom_score = 0
     8
     9  [grpc]
    10    address = "/run/containerd/containerd.sock"
    11    tcp_address = ""
    12    tcp_tls_cert = ""
    13    tcp_tls_key = ""
    14    uid = 0
    15    gid = 0
    16    max_recv_message_size = 16777216
    17    max_send_message_size = 16777216
    18
    19  [ttrpc]
    20    address = ""
    21    uid = 0
    22    gid = 0
    23
    24  [debug]
    25    address = ""
    26    uid = 0
    27    gid = 0
    28    level = ""
    29
    30  [metrics]
    31    address = ""
    32    grpc_histogram = false
    33
    34  [cgroup]
    35    path = ""
    36
    37  [timeouts]
    38    "io.containerd.timeout.shim.cleanup" = "5s"
    39    "io.containerd.timeout.shim.load" = "5s"
    40    "io.containerd.timeout.shim.shutdown" = "3s"
    41    "io.containerd.timeout.task.state" = "2s"
    42
    43  [plugins]
    44    [plugins."io.containerd.gc.v1.scheduler"]
    45      pause_threshold = 0.02
    46      deletion_threshold = 0
    47      mutation_threshold = 100
    48      schedule_delay = "0s"
    49      startup_delay = "100ms"
    50    [plugins."io.containerd.grpc.v1.cri"]
    51      disable_tcp_service = true
    52      stream_server_address = "127.0.0.1"
    53      stream_server_port = "0"
    54      stream_idle_timeout = "4h0m0s"
    55      enable_selinux = false
    56      selinux_category_range = 1024
    57      sandbox_image = "k8s.gcr.io/pause:3.2"
    58      stats_collect_period = 10
    59      systemd_cgroup = true    <-- change this from "false" to "true"
    60      enable_tls_streaming = false
    61      max_container_log_line_size = 16384
    62      disable_cgroup = false
    63      disable_apparmor = false
    64      restrict_oom_score_adj = false
    65      max_concurrent_downloads = 3
    66      disable_proc_mount = false
    67      unset_seccomp_profile = ""
    68      tolerate_missing_hugetlb_controller = true
    69      disable_hugetlb_controller = true
    70      ignore_image_defined_volumes = false
    71      [plugins."io.containerd.grpc.v1.cri".containerd]
    72        snapshotter = "overlayfs"
    73        default_runtime_name = "runc"
    74        no_pivot = false
    75        disable_snapshot_annotations = true
    76        discard_unpacked_layers = false
    77        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
    78          runtime_type = ""
    79          runtime_engine = ""
    80          runtime_root = ""
    81          privileged_without_host_devices = false
    82          base_runtime_spec = ""
    83        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
    84          runtime_type = ""
    85          runtime_engine = ""
    86          runtime_root = ""
    87          privileged_without_host_devices = false
    88          base_runtime_spec = ""
    89        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
    90          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    91            runtime_type = "io.containerd.runc.v2"
    92            runtime_engine = ""
    93            runtime_root = ""
    94            privileged_without_host_devices = false
    95            base_runtime_spec = ""
    96            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    97      [plugins."io.containerd.grpc.v1.cri".cni]
    98        bin_dir = "/opt/cni/bin"
    99        conf_dir = "/etc/cni/net.d"
   100        max_conf_num = 1
   101        conf_template = ""
   102      [plugins."io.containerd.grpc.v1.cri".registry]
   103        [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
   104          [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
   105            endpoint = ["https://registry-1.docker.io"]
   106      [plugins."io.containerd.grpc.v1.cri".image_decryption]
   107        key_model = ""
   108      [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
   109        tls_cert_file = ""
   110        tls_key_file = ""
   111    [plugins."io.containerd.internal.v1.opt"]
   112      path = "/opt/containerd"
   113    [plugins."io.containerd.internal.v1.restart"]
   114      interval = "10s"
   115    [plugins."io.containerd.metadata.v1.bolt"]
   116      content_sharing_policy = "shared"
   117    [plugins."io.containerd.monitor.v1.cgroups"]
   118      no_prometheus = false
   119    [plugins."io.containerd.runtime.v1.linux"]
   120      shim = "containerd-shim"
   121      runtime = "runc"
   122      runtime_root = ""
   123      no_shim = false
   124      shim_debug = false
   125    [plugins."io.containerd.runtime.v2.task"]
   126      platforms = ["linux/amd64"]
   127    [plugins."io.containerd.service.v1.diff-service"]
   128      default = ["walking"]
   129    [plugins."io.containerd.snapshotter.v1.devmapper"]
   130      root_path = ""
   131      pool_name = ""
   132      base_image_size = ""
   133      async_remove = false
   134
Addendum, 2022/9/22
With v1.23.0, leave the setting above (systemd_cgroup) as false and
instead set "SystemdCgroup" on line 125 below to true;
otherwise kubeadm init fails during the master setup, so watch out.
   114            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
   115              BinaryName = ""
   116              CriuImagePath = ""
   117              CriuPath = ""
   118              CriuWorkPath = ""
   119              IoGid = 0
   120              IoUid = 0
   121              NoNewKeyring = false
   122              NoPivotRoot = false
   123              Root = ""
   124              ShimCgroup = ""
   125              SystemdCgroup = true
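If you'd rather not hunt for the line in vi, a sed one-liner along these lines should work; treat it as a sketch and confirm the result with grep, since the key appears only once in the default config:

# Flip SystemdCgroup under runc.options, verify, then restart containerd
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml
sudo systemctl restart containerd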
The error seen when running kubeadm init:
ubuntu@xxxxxx:~$ sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR CRI]: container runtime is not running: output: E0922 04:45:51.123456    2209 remote_runtime.go:948] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-09-22T04:45:51Z" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
The upstream documentation covers this here:
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
Installing kubeadm, kubelet, and kubectl
Install apt-transport-https and curl.
They are needed for the work that follows.
kubeuser@kubenewmaster1:~$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
kubeuser@kubenewmaster1:~$ sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
kubeuser@kubenewmaster1:~$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
Add the Kubernetes APT repository, refresh the repository cache,
and install kubeadm, kubelet, and kubectl.
kubeuser@kubenewmaster1:~$ sudo apt-get update
kubeuser@kubenewmaster1:~$ sudo apt-get install -y kubelet=1.22.0-00 kubeadm=1.22.0-00 kubectl=1.22.0-00
kubeuser@kubenewmaster1:~$ sudo apt-mark hold kubelet kubeadm kubectl
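A quick way to confirm the pinned versions and that the hold took effect (my addition, not in the original log):

kubeadm version -o short
kubectl version --client --short
kubelet --version
# kubelet, kubeadm and kubectl should all be listed as held
apt-mark showhold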
Configuring the cgroup driver used by the kubelet on the control-plane node
kubeuser@kubenewmaster1:~$ sudo systemctl daemon-reload
kubeuser@kubenewmaster1:~$ sudo systemctl restart kubelet
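With kubeadm v1.22 the kubelet's cgroupDriver already defaults to systemd, so nothing more was needed here. If you want to pin it explicitly, one documented approach is to append a KubeletConfiguration document to the kubeadm-config.yaml used on the master below; a sketch:

# Append a KubeletConfiguration document to the kubeadm config file
cat <<EOF >> kubeadm-config.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF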
The steps from here are performed on the first master node only.
Setting up the CNI (Container Network Interface)
This configures the Pod network that the containers run on.
This time I used Calico.
Reference: https://kubernetes.io/ja/docs/concepts/cluster-administration/networking/
Edit and use the calico.yaml available at:
https://docs.projectcalico.org/manifests/calico.yaml
kubeuser@kubenewmaster1:~$ wget https://docs.projectcalico.org/manifests/calico.yaml
--2022-02-14 12:56:30--  https://docs.projectcalico.org/manifests/calico.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 2406:da18:880:3800:1655:e904:cce5:66a5, 2400:6180:0:d1::5be:9001, 178.128.124.245, ...
Connecting to docs.projectcalico.org (docs.projectcalico.org)|2406:da18:880:3800:1655:e904:cce5:66a5|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 217523 (212K) [text/yaml]
Saving to: ‘calico.yaml’

calico.yaml         100%[===================>] 212.42K   429KB/s    in 0.4s

2022-02-14 12:56:31 (429 KB/s) - ‘calico.yaml’ saved [217523/217523]

kubeuser@kubemaster1:~$ ls
kubeuser@kubemaster1:~$ cat -n calico.yaml
     1  ---
     2  # Source: calico/templates/calico-config.yaml
     3  # This ConfigMap is used to configure a self-hosted Calico installation.
     4  kind: ConfigMap
     5  apiVersion: v1
     6  metadata:
     7    name: calico-config
.
.
.
  4221              # no effect. This should fall within `--cluster-cidr`.
  4222              # - name: CALICO_IPV4POOL_CIDR
  4223              #   value: "192.168.0.0/16"
  4224              # Disable file logging so `kubectl logs` works.
.
.
.
Edit lines 4222 and 4223.
* Use whichever subnet fits your own environment.
  4221              # no effect. This should fall within `--cluster-cidr`.
  4222              - name: CALICO_IPV4POOL_CIDR
  4223                value: "10.0.0.0/16"
  4224              # Disable file logging so `kubectl logs` works.
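The same edit can be scripted; this is only a sketch, since the indentation and comment markers differ between calico.yaml revisions, so re-check the result with cat -n afterwards:

# Uncomment CALICO_IPV4POOL_CIDR and point it at the Pod subnet
sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "10.0.0.0/16"|' calico.yaml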
Edit /etc/hosts.
* The master and worker nodes that form the cluster have been added.
kubeuser@kubenewmaster1:~$ cat /etc/hosts
192.168.1.251 kubenewmaster1
192.168.1.252 kubenewworker1
127.0.0.1 localhost
127.0.1.1 kube_newmaster1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
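A quick sanity check (my addition) that both names resolve on each node before running kubeadm:

# Each should print the address taken from /etc/hosts
getent hosts kubenewmaster1
getent hosts kubenewworker1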
Editing kubeadm-config.yaml
kubeuser@kubenewmaster1:~$ sudo vi kubeadm-config.yaml
kubeuser@kubenewmaster1:~$ cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.22.0
controlPlaneEndpoint: "kubenewmaster1:6443"
networking:
  podSubnet: 10.0.0.0/16    <-- edit this (must match CALICO_IPV4POOL_CIDR in calico.yaml)
Run kubeadm init to bring up the first master node of the cluster.
At the end of its output, you get the commands for adding further master and worker nodes.
kubeuser@kubenewmaster1:~$ sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
[init] Using Kubernetes version: v1.22.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubenewmaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.251]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubenewmaster1 localhost] and IPs [192.168.1.251 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubenewmaster1 localhost] and IPs [192.168.1.251 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 36.503818 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4f989d31501adb1b8cde0bde622c0d7d039745c1c198db13d056995257e4ca93
[mark-control-plane] Marking the node kubenewmaster1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kubenewmaster1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: t3scpi.fdyrvey2uqxlvbfr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join kubenewmaster1:6443 --token t3scpi.fdyrvey2uqxlvbfr \
	--discovery-token-ca-cert-hash sha256:b474d033d2ff00d7f6f552e968ed7b4449a3547da10f11df51632a5fdf379812 \
	--control-plane --certificate-key 4f989d31501adb1b8cde0bde622c0d7d039745c1c198db13d056995257e4ca93

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kubenewmaster1:6443 --token t3scpi.fdyrvey2uqxlvbfr \
	--discovery-token-ca-cert-hash sha256:b474d033d2ff00d7f6f552e968ed7b4449a3547da10f11df51632a5fdf379812

kubeuser@kubenewmaster1:~$ ls
calico.yaml  kubeadm-config.yaml  kubeadm-init.out
Enable the current user to run the kubectl command.
* These commands are also included in the kubeadm init output.
kubeuser@kubenewmaster1:~$ mkdir -p $HOME/.kube
kubeuser@kubenewmaster1:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubeuser@kubenewmaster1:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeuser@kubenewmaster1:~$ cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1ESXhOREV6TURVeU5Wb1hEVE15TURJeE1qRXpNRFV5TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS0VGCk5HaWtvK2xNcTBadE9zQk1TNFRWZUh6VGFiNjdxYkhKVGMwYktROVRaNnZlZE1jV29vcGNvOFJYSjdzbk5TaksKN0g4WEdKQm1lV1ZnckRYZHprSU9raE50WmZkVnoyaENYWU5LczNCV0RHWXZ3T0ozSDF3ajRmTmlRakkyYXo2Ygozcnd1T1RmaTdQRnZvRlBDQjNqdEFXOHVuWGZ3eVJxaS9xVEpZcDltMUVzRGRCa0dvQzd6MXpTUzBOWERiNEdHCkRKY2ZYK3UvL0d2bCtuVXdRdURiQUtOQ2lRbEllSlJtZHlBWlVRaXYyTDJMYy9rRkdUMTVNbVVvMW8wVzFaTG8KVXFWZW96QVRsZk4xbXpEM2FzV1VpS29WK05waGwzMTNBSWpZYWRkU0wrREkxNVd5YWJxdHBHY2YweUhJcEZmZQp0YmpvUjU2SHIyNS90OWN5QzVVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZLYzVuL0pKSXBzdy9zQnJzSWR3N0JKZFRScmFNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRU9HajZTa256R21FNTZ2LzBWcAo2eDlva08yZDM5eEgzODEzM1RZeU5kU0d4cU0xSFdNVWlBaHV3ZG5KSU1PQlZtczNHS2g4WFFiSjdlSTlqdjZUCktMbnp0dFhxWVcwOERrYlZtOEM5SmQyRUoxVnlQVEZZdXRHSitXSVlKdHk2QlBEV2g3cU1VeTZVMm5VKzZQbEYKNVpmSDcwNEF1R0YwS1JIME5sVTJ3SVIvUHI0NnljMFo4WktlRUdOdzBPcHRPbysxMmtiVmkzYVBxQkVLbkVrTwp0N3cvL2k0RFVxbzJ5RGU4NkRHS2xIOSttMWJ3ZWpVUEJZM2xDUnRrMW5LRWExY0YvaXdUUW51cjVkZmxvZzhFCm4zUEw5OXNEMnVUSmhJMUR3L3hwdldKRG9mb1hJeHpiejd0VHBEejNrV1NUOFU4VjNpRmNPa2oreCtuem5XMFEKN0ZrPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://kubenewmaster1:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJYkZnWFZDVzd4azR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TWpBeU1UUXhNekExTWpWYUZ3MHlNekF5TVRReE16QTFNekZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXgxejIvTTI1OGpSRWpkZ2sKejFScitpMU15M2NSZTRGZW1IbGJ4QlhNRUNjNzVzSWRmQ0VUdlI1ZmlFc3pOdkNHeXVnL0kxaUVVakFXd3JtVwpzdTJ5cUNkWkc4OUpRelI3bWk2OTdNc01KVGNHRDNFK0xDZjNNVzhKelBFRnJtbnJHeDNVOFNnM3RrbWEreVVyCmEzMDd6UXBwSTBYTHIrMXVOemI2MFdRSHVraGNnZFY1bFlvMGVvd1p5d3N2dFV6MnBQT2hScTFsVmxrUytKQXcKR0NMeFN5a1I4dnl6cStnQXB6SFpmR0l5RXJmV0lIWE9LY2hlQTRwS1NjTGZOVUhVWHBQMlMwcUN2L2QyWlArOQpPMWE3NDErcWtKSW9WaWdQK2ZORUpXdUF5NGQrMTFRZktETlNzdHlmeGpmVm1nMEJUNXRoZjZabFNCdTJBUnFRCnEyYi94d0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JTbk9aL3lTU0tiTVA3QWE3Q0hjT3dTWFUwYQoyakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBTko3dmE5NDlxYmlseGtHa3pIUzdkZEZPTHR5c0dBMHZaSUUyCmxOMzEvQUlNOEMrWXVpQ2tndFdBcGZPSnRXejYzblNNZlh2amZPdlBKQW1GeDFvQXY0cHpWR2M2bEtRTVBFK0wKZEZKZWpJT1BRa1JRN1BHTzloSk4ramppZmhhanpkZ3RIWnExOURwM0xtTjVGcnJQT3RwM3N4OE1BTUxDU1FVRwpXajd3ek82KzNlZTVhemlmYi9CRUUveFB5dWR1aFQ4ODJsMnpiMTNMdldmczRYZVpUT1ZMdFdIU3hvL1diZE9oCjZ2TVROYUxzRFNveCtpWWdHSmtGYWl6NDdlRzVEVWpFa3F4VzdiSXEwTDNHdWJtRWhNOW5la3JsYlBseDZlODAKd0dGWnZaKy9zak51T3NQVWZjVFJYTWtKUUUvZzNVd3pTSUZGWlIydXVtUGFhRFBrVWc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBeDF6Mi9NMjU4alJFamRna3oxUnIraTFNeTNjUmU0RmVtSGxieEJYTUVDYzc1c0lkCmZDRVR2UjVmaUVzek52Q0d5dWcvSTFpRVVqQVd3cm1Xc3UyeXFDZFpHODlKUXpSN21pNjk3TXNNSlRjR0QzRSsKTENmM01XOEp6UEVGcm1uckd4M1U4U2czdGttYSt5VXJhMzA3elFwcEkwWExyKzF1TnpiNjBXUUh1a2hjZ2RWNQpsWW8wZW93Wnl3c3Z0VXoycFBPaFJxMWxWbGtTK0pBd0dDTHhTeWtSOHZ5enErZ0FwekhaZkdJeUVyZldJSFhPCktjaGVBNHBLU2NMZk5VSFVYcFAyUzBxQ3YvZDJaUCs5TzFhNzQxK3FrSklvVmlnUCtmTkVKV3VBeTRkKzExUWYKS0ROU3N0eWZ4amZWbWcwQlQ1dGhmNlpsU0J1MkFScVFxMmIveHdJREFRQUJBb0lCQUNoaXNMWHRodW1GcFEyRwo1NDRJY0FjeC9naUppa1VXbys4SFJvdW1UcnhHOWw5OG16UjJEdVdVclkyU2prRm00Q2RpZk1mUU9wM2JtQURDClQ4RFhYZ1dxVXViTFN2QU9SYXVxSkZjL21xby9SejhCbGJLa05mTVJwMDZZMUtuTVV4QWZMdS9iVWMzZmcwRzAKK2VMQWI4ak5meGJpSUt6MjBBam5Yay9rajV3d2lPUGhjeVNreEhUMmtDVDVTVURsNCswd3l2Z1prN0VkWm1PYQppdlVjU2pseWNFcTBsMnBmM2tWK2hYVk5ZSWxoY0kwM3dGWDFIMEdBREcvT3lnNDNQVWxmb3ZCQWU1TDJLVkcvCmFVUXFqazkvZytPaXY2anRwdWU1b2dEbGthN0hjQ0lDMkdOT0k3S2hCNVhUbFV6eWY1MlNMTlNZeVplVnRWb3YKUm9FbjNBRUNnWUVBMzJXN2RQby9ZYTA3MjJNNU5QK3dCeFRzbHlBamQwblFESHgxcExpbkRzMWNBUmpWRnZxcgpJRHhna2wrQTJDRGJpWkg2dW9SZHRsRm5CYnhZRlVocDNMMjFaaE1BR2wzU092K0libmIzK0daVTJKdTljWURqCnZLRklvQ0RRMnJOQVVveWE4djF2WHRhR1krL245T2pGQlJlVnptenZuU1F6a24rTUo4N29tVWNDZ1lFQTVIVk0KUCtGb0hMem5Qd2dTZnQ3NDNGL2lLdW9PbWJFd3VCbUM3bXF1dWpIM3cyUFJWb1cvMVhBemtGQnRPNVF2UVdYNwpGZG1iVFQ4NW1UN3BldXhIV3VLUldUb25LWFJrWmI1T3dXU1RlVk5IWVZ4S2JrSW15WkFKWnZWRFhQZWhOT2NXCnc1eTNvWGdnK0Y1M0o1RDNocXM0MGgwRXVENTBIVlp0ZGdyaHBZRUNnWUFlLzdqaFpKQkM5NHpreG9IN3JyYzQKWkZqb0o1ZUVTQVBNbDhDaldOUWxvNjF1b1ltQUpNeDJMcXFmNVF5MThPbEZ6N0hoQzlrTklZS1FNekJ0MDV5TQordTRlK2VmN3dLVVpkcmZ4ekNSZ25hS01aQ0FIamdFTC9iMWNLdkdRUjJ0WGlSYy9QSmVscTFMK3J4MmF5R24rCmFPVnF2WWNLWVNtZTNJQVFUZy9NcFFLQmdIWGQzcDBHbWtSWllhVXZjUHRyNWxFc1Z1OTFHbHRKQTYyMzI4bE4KMlIvUEw5anE0dElVNTBnalB6Y3hoMm01cGpmRGVhdG9QYXU0OXVxTmZzQWdyeC9Bek9TUUVDeGZGSDA1bGtCSQp0NTFjemZMNVBwMXNHNzdhUlQrTlFsZndtb2RFd29YaGtRd0pnbGtodzYveUp3S2Z6QXo3VTdnSzRMVlNKZDlFCjllNEJBb0dBRUViZTRzTkhnUDFVS3NBdlJOOFN3MUZZeHdPWFFsWmxIMDFKQXQ1d290UXV2WUo0elQwRkJSdTYKR292VDF5RU9xRzdqbndQczZwZVRlVU56c09nblBRR1FtUitmSUl1QURXekFsUTNxSk55T2VwWEFsSDhJVDlGagp1QzcrMjZUSHJNNUJQTTN4dFVlenlLV2s5T1VvbVh5UWlEMHBWWXRJNTdWNHBabDdYREk9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
The kubectl command now works.
kubeuser@kubenewmaster1:~$ kubectl get nodes
NAME             STATUS     ROLES                  AGE   VERSION
kubenewmaster1   NotReady   control-plane,master   97s   v1.22.0
Start Calico from the edited calico.yaml.
* It runs as containers.
kubeuser@kubenewmaster1:~$ kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
It took about a minute, but every component (pod) came up.
kubeuser@kubenewmaster1:~$ kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-958545d87-bgdpj   1/1     Running   0          114s
calico-node-mk6d2                         1/1     Running   0          115s
coredns-78fcd69978-5kctf                  1/1     Running   0          3m31s
coredns-78fcd69978-crtpx                  1/1     Running   0          3m31s
etcd-kubenewmaster1                       1/1     Running   0          3m38s
kube-apiserver-kubenewmaster1             1/1     Running   0          3m38s
kube-controller-manager-kubenewmaster1    1/1     Running   0          3m46s
kube-proxy-x4kpk                          1/1     Running   0          3m31s
kube-scheduler-kubenewmaster1             1/1     Running   0          3m47s
The values that go into kubeadm-config.yaml can also be checked with a command.
kubeuser@kubenewmaster1:~$ sudo kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.22.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
The steps from here are performed on the first worker node.
Adding the worker node
Edit /etc/hosts.
kubeuser@kubenewworker1:~$ sudo vi /etc/hosts
kubeuser@kubenewworker1:~$ cat /etc/hosts
192.168.1.251 kubenewmaster1
192.168.1.252 kubenewworker1
127.0.0.1 localhost
127.0.1.1 kube_newworker1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Check the "kubeadm join" command that was printed on the master.
kubeuser@kubenewmaster1:~$ cat kubeadm-init.out
[init] Using Kubernetes version: v1.22.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
.
.
.
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join kubenewmaster1:6443 --token t3scpi.fdyrvey2uqxlvbfr \
	--discovery-token-ca-cert-hash sha256:b474d033d2ff00d7f6f552e968ed7b4449a3547da10f11df51632a5fdf379812 \
	--control-plane --certificate-key 4f989d31501adb1b8cde0bde622c0d7d039745c1c198db13d056995257e4ca93

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kubenewmaster1:6443 --token t3scpi.fdyrvey2uqxlvbfr \
	--discovery-token-ca-cert-hash sha256:b474d033d2ff00d7f6f552e968ed7b4449a3547da10f11df51632a5fdf379812
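One caveat: bootstrap tokens expire after 24 hours by default. If the token above has lapsed by the time you add a worker, a fresh join command can be generated on the master:

# Prints a ready-to-paste "kubeadm join ..." line with a new token
sudo kubeadm token create --print-join-command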
Run kubeadm join.
kubeuser@kubenewworker1:~$ sudo kubeadm join kubenewmaster1:6443 --token t3scpi.fdyrvey2uqxlvbfr \
> --discovery-token-ca-cert-hash sha256:b474d033d2ff00d7f6f552e968ed7b4449a3547da10f11df51632a5fdf379812
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Running "kubectl get nodes" on the master node shows the worker turn "Ready" after a minute or so.
kubeuser@kubenewmaster1:~$ kubectl get nodes
NAME             STATUS   ROLES                  AGE    VERSION
kubenewmaster1   Ready    control-plane,master   7m5s   v1.22.0
kubenewworker1   Ready    <none>                 61s    v1.22.0
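The worker's ROLES column shows <none>; that's purely cosmetic, but you can label it if you like (my addition, optional):

# The trailing "=" sets the label with an empty value
kubectl label node kubenewworker1 node-role.kubernetes.io/worker=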
The Kubernetes cluster is complete
With that, the Kubernetes cluster is done!
Details of each node can be checked with "kubectl describe node".
kubeuser@kubenewmaster1:~$ kubectl get nodes
NAME             STATUS   ROLES                  AGE    VERSION
kubenewmaster1   Ready    control-plane,master   153m   v1.22.0
kubenewworker1   Ready    <none>                 147m   v1.22.0
kubeuser@kubenewmaster1:~$ kubectl describe nodes kubenewmaster1
Name:               kubenewmaster1
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kubenewmaster1
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.1.251/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.0.13.192
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 14 Feb 2022 13:06:02 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  kubenewmaster1
  AcquireTime:     <unset>
  RenewTime:       Mon, 14 Feb 2022 15:41:34 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 14 Feb 2022 13:09:11 +0000   Mon, 14 Feb 2022 13:09:11 +0000   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Mon, 14 Feb 2022 15:41:08 +0000   Mon, 14 Feb 2022 13:05:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 14 Feb 2022 15:41:08 +0000   Mon, 14 Feb 2022 13:05:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 14 Feb 2022 15:41:08 +0000   Mon, 14 Feb 2022 13:05:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 14 Feb 2022 15:41:08 +0000   Mon, 14 Feb 2022 13:08:37 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.1.251
  Hostname:    kubenewmaster1
Capacity:
  cpu:                2
  ephemeral-storage:  25151748Ki
  hugepages-2Mi:      0
  memory:             4026020Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  23179850919
  hugepages-2Mi:      0
  memory:             3923620Ki
  pods:               110
System Info:
  Machine ID:                 6028533b8c254d49b0939d56a73c3e09
  System UUID:                6028533b-8c25-4d49-b093-9d56a73c3e09
  Boot ID:                    a85d7ecc-a138-424e-bc7a-e853f5444e18
  Kernel Version:             5.4.0-99-generic
  OS Image:                   Ubuntu 20.04.3 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.4.12
  Kubelet Version:            v1.22.0
  Kube-Proxy Version:         v1.22.0
PodCIDR:                      10.0.0.0/24
PodCIDRs:                     10.0.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-kube-controllers-958545d87-bgdpj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         153m
  kube-system                 calico-node-mk6d2                          250m (12%)    0 (0%)      0 (0%)           0 (0%)         153m
  kube-system                 coredns-78fcd69978-5kctf                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     155m
  kube-system                 coredns-78fcd69978-crtpx                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     155m
  kube-system                 etcd-kubenewmaster1                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         155m
  kube-system                 kube-apiserver-kubenewmaster1              250m (12%)    0 (0%)      0 (0%)           0 (0%)         155m
  kube-system                 kube-controller-manager-kubenewmaster1     200m (10%)    0 (0%)      0 (0%)           0 (0%)         155m
  kube-system                 kube-proxy-x4kpk                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         155m
  kube-system                 kube-scheduler-kubenewmaster1              100m (5%)     0 (0%)      0 (0%)           0 (0%)         155m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1100m (55%)  0 (0%)
  memory             240Mi (6%)   340Mi (8%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:              <none>
kubeuser@kubenewmaster1:~$ kubectl describe nodes kubenewworker1
Name:               kubenewworker1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kubenewworker1
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.1.252/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.0.105.64
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 14 Feb 2022 13:11:52 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  kubenewworker1
  AcquireTime:     <unset>
  RenewTime:       Mon, 14 Feb 2022 15:41:44 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 14 Feb 2022 13:13:27 +0000   Mon, 14 Feb 2022 13:13:27 +0000   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Mon, 14 Feb 2022 15:40:28 +0000   Mon, 14 Feb 2022 13:11:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 14 Feb 2022 15:40:28 +0000   Mon, 14 Feb 2022 13:11:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 14 Feb 2022 15:40:28 +0000   Mon, 14 Feb 2022 13:11:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 14 Feb 2022 15:40:28 +0000   Mon, 14 Feb 2022 13:12:53 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.1.252
  Hostname:    kubenewworker1
Capacity:
  cpu:                2
  ephemeral-storage:  25151748Ki
  hugepages-2Mi:      0
  memory:             4026028Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  23179850919
  hugepages-2Mi:      0
  memory:             3923628Ki
  pods:               110
System Info:
  Machine ID:                 a2d3bfc6f2fd4acfbd920d62bac227f5
  System UUID:                a2d3bfc6-f2fd-4acf-bd92-0d62bac227f5
  Boot ID:                    400c7d17-1825-4fb7-80bf-c89cad8fdf53
  Kernel Version:             5.4.0-99-generic
  OS Image:                   Ubuntu 20.04.3 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.4.12
  Kubelet Version:            v1.22.0
  Kube-Proxy Version:         v1.22.0
PodCIDR:                      10.0.1.0/24
PodCIDRs:                     10.0.1.0/24
Non-terminated Pods:          (2 in total)
  Namespace                   Name                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                 ------------  ----------  ---------------  -------------  ---
  kube-system                 calico-node-thn5v    250m (12%)    0 (0%)      0 (0%)           0 (0%)         149m
  kube-system                 kube-proxy-46dhz     0 (0%)        0 (0%)      0 (0%)           0 (0%)         149m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                250m (12%)  0 (0%)
  memory             0 (0%)      0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
Closing
Last time I built a cluster on v1.18.0 (with Docker as the CRI runtime);
this time I managed to do it on v1.22.0 (with containerd as the CRI runtime) as well.