I'm Kujirai Takahiro (@opensourcetech), a LinuC Evangelist.
Introduction
This time, we'll build a dual-stack (both IPv4 and IPv6 usable) cluster on Kubernetes v1.26.0.
※ Dual-stack has been available in Kubernetes since v1.16 (and GA since v1.23).
Environment
・Cluster composition
1 Master Node & 2 Worker Nodes
※ VMs on KVM were used.
virt-install --name master01 --ram 4096 --disk size=40 --vcpus 2 --os-variant ubuntu22.04 --network bridge=br0 --graphics none --console pty,target_type=serial --location /home/ubuntu/ubuntu-22.04.2-live-server-amd64.iso,kernel=casper/vmlinuz,initrd=casper/initrd --extra-args 'console=ttyS0,115200n8 serial'
・OS used
Ubuntu 22.04 Server
※ The following ISO image was used.
kubeuser@master01:~$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
・Master Node specs
2 CPUs, 4 GB memory, 40 GB storage, IPv6 enabled
・Worker Node specs
2 CPUs, 2 GB memory, 40 GB storage, IPv6 enabled
・IP addresses of the Master Node and Worker Nodes
192.168.1.41 master01
192.168.1.45 worker01
192.168.1.46 worker02
240f:32:57b8:1:5054:ff:fe8e:5428 master01
240f:32:57b8:1:5054:ff:fe93:acfc worker01
240f:32:57b8:1:5054:ff:fe9e:4f00 worker02
fe80::5054:ff:fe8e:5428 master01
fe80::5054:ff:fe93:acfc worker01
fe80::5054:ff:fe9e:4f00 worker02
・Pod network
IPv4: 10.0.0.0/16
IPv6: fd12:b5e0:383e::/64
※ The ULA was generated from the NIC's MAC address.
・Service network
IPv4: 10.1.0.0/16
IPv6: fd12:b5e0:383f::/112
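As an aside on the addressing above: the host portion of the link-local and SLAAC addresses (e.g. fe80::5054:ff:fe8e:5428 for MAC 52:54:00:8e:54:28) is the modified EUI-64 form of the NIC's MAC address. A minimal sketch of that conversion (the mac_to_eui64 helper is our own illustration, not part of the original setup):

```shell
# Convert a MAC address to its modified EUI-64 interface identifier:
# flip the universal/local (U/L) bit of the first octet and insert ff:fe
# between the OUI half and the device half.
mac_to_eui64() {
  local mac="$1" b1 b2 b3 b4 b5 b6
  IFS=: read -r b1 b2 b3 b4 b5 b6 <<< "$mac"
  b1=$(printf '%02x' $(( 0x$b1 ^ 0x02 )))   # flip the U/L bit
  # Print as four 16-bit groups, with leading zeros dropped as in IPv6 notation
  printf '%x:%x:%x:%x\n' "$(( 0x$b1$b2 ))" "$(( 0x${b3}ff ))" "$(( 0xfe$b4 ))" "$(( 0x$b5$b6 ))"
}

mac_to_eui64 52:54:00:8e:54:28   # -> 5054:ff:fe8e:5428
```

Prepending fe80:: to the result gives exactly the link-local addresses listed above.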
kubeuser@master01:~$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:8e:54:28 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.41/24 brd 192.168.1.255 scope global enp1s0
valid_lft forever preferred_lft forever
inet6 240f:32:57b8:1:5054:ff:fe8e:5428/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 279sec preferred_lft 279sec
inet6 fe80::5054:ff:fe8e:5428/64 scope link
valid_lft forever preferred_lft forever
Common settings for all Nodes ① Disabling swap
※ Only the Master Node's log is shown.
Check with swapon -s whether any swap is active, and disable it with swapoff -a if so.
Also disable it at boot time in /etc/fstab.
kubeuser@master01:~$ sudo swapon -s
Filename    Type        Size     Used  Priority
/dev/vda4   partition   1048572  0     -2
kubeuser@master01:~$ sudo swapoff -a
kubeuser@master01:~$ sudo swapon -s
kubeuser@master01:~$ sudo vi /etc/fstab
kubeuser@master01:~$ sudo cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
#/dev/disk/by-uuid/695ee706-e063-4b47-90aa-48f9d4d84178 none swap sw 0 0
# / was on /dev/vda5 during curtin installation
/dev/disk/by-uuid/7edad1f2-4d3d-4fe0-b705-207e4c89323d / ext4 defaults 0 1
# /boot was on /dev/vda3 during curtin installation
/dev/disk/by-uuid/82f132b8-d30f-424e-b2a9-7aeccfaf92d0 /boot ext4 defaults 0 1
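The vi edit above can also be scripted. A rough sketch (comment_swap_entries is a hypothetical helper of ours; on a real node you would run sudo swapoff -a first and then pass /etc/fstab):

```shell
# Comment out every active swap entry in an fstab-style file so that
# swap stays disabled across reboots.
comment_swap_entries() {
  # Prefix '#' to any non-comment line whose type field is "swap"
  sed -i '/^[^#].*[[:space:]]swap[[:space:]]/ s/^/#/' "$1"
}

# Example (with sudo on a real node): comment_swap_entries /etc/fstab
```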
Common settings for all Nodes ② Editing /etc/hosts
※ Only the Master Node's log is shown.
Check the IPv4/IPv6 addresses (global unicast and link-local unicast) of all three cluster nodes with ip addr show or similar, and add them to /etc/hosts on each Node.
kubeuser@master01:~$ sudo vi /etc/hosts
kubeuser@master01:~$ sudo cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 master01
192.168.1.41 master01
192.168.1.45 worker01
192.168.1.46 worker02
240f:32:57b8:1:5054:ff:fe8e:5428 master01
240f:32:57b8:1:5054:ff:fe93:acfc worker01
240f:32:57b8:1:5054:ff:fe9e:4f00 worker02
fe80::5054:ff:fe8e:5428 master01
fe80::5054:ff:fe93:acfc worker01
fe80::5054:ff:fe9e:4f00 worker02

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Common settings for all Nodes ③ Letting iptables see traffic crossing the bridge
※ Only the Master Node's log is shown.
Load the required kernel module and enable the relevant kernel parameters.
kubeuser@master01:~$ lsmod | grep br_netfilter
kubeuser@master01:~$ sudo modprobe br_netfilter
kubeuser@master01:~$ lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                307200  1 br_netfilter
kubeuser@master01:~$ sudo sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
* Applying /etc/sysctl.conf ...
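The run above picks up /etc/sysctl.d/k8s.conf, but the log does not show that file being created. Judging from the values it applies, the file would contain:

```
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```

Note that net.ipv6.conf.all.forwarding is needed in addition to net.ipv4.ip_forward because this is a dual-stack cluster.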
Common settings for all Nodes ④ Keeping iptables off the nftables backend
※ Only the Master Node's log is shown.
To avoid compatibility issues between the nftables backend and the kubeadm packages, switch the tools to their iptables-legacy variants.
kubeuser@master01:~$ sudo apt install -y iptables arptables ebtables
Reading state information... Done
iptables is already the newest version (1.8.7-1ubuntu5).
iptables set to manually installed.
The following NEW packages will be installed:
arptables ebtables
0 upgraded, 2 newly installed, 0 to remove and 16 not upgraded.
Need to get 123 kB of archives.
After this operation, 373 kB of additional disk space will be used.
[Working]
Get:1 http://jp.archive.ubuntu.com/ubuntu jammy/universe amd64 arptables amd64 0.0.5-3 [38.1 kB]
(Reading database ... 100%
(Reading database ... 73929 files and directories currently installed.)
Preparing to unpack .../arptables_0.0.5-3_amd64.deb ...
Scanning processes...
Scanning linux images...
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
kubeuser@master01:~$ sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives: using /usr/sbin/iptables-legacy to provide /usr/sbin/iptables (iptables) in manual mode
kubeuser@master01:~$ sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
update-alternatives: using /usr/sbin/ip6tables-legacy to provide /usr/sbin/ip6tables (ip6tables) in manual mode
kubeuser@master01:~$ sudo update-alternatives --set arptables /usr/sbin/arptables-legacy
update-alternatives: using /usr/sbin/arptables-legacy to provide /usr/sbin/arptables (arptables) in manual mode
kubeuser@master01:~$ sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy
update-alternatives: using /usr/sbin/ebtables-legacy to provide /usr/sbin/ebtables (ebtables) in manual mode
Common settings for all Nodes ⑤ Installing a container runtime
Container runtime: roughly speaking, the program that starts and runs containers.
※ Only the Master Node's log is shown.
You can choose among Docker, CRI-O, containerd, and others;
this time, containerd was used.
Reference: https://kubernetes.io/ja/docs/setup/production-environment/container-runtimes/
kubeuser@master01:~$ sudo vi /etc/modules-load.d/containerd.conf
kubeuser@master01:~$ sudo cat /etc/modules-load.d/containerd.conf
overlay
br_netfilter
kubeuser@master01:~$ sudo modprobe overlay
kubeuser@master01:~$ sudo modprobe br_netfilter
kubeuser@master01:~$ sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
.
.
.
ca-certificates is already the newest version (20211016ubuntu0.22.04.1).
ca-certificates set to manually installed.
curl is already the newest version (7.81.0-1ubuntu1.8).
curl set to manually installed.
The following additional packages will be installed:
python3-software-properties
The following NEW packages will be installed:
apt-transport-https
The following packages will be upgraded:
python3-software-properties software-properties-common
2 upgraded, 1 newly installed, 0 to remove and 14 not upgraded.
Need to get 44.4 kB of archives.
After this operation, 169 kB of additional disk space will be used.
.
.
.
kubeuser@master01:~$ sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
kubeuser@master01:~$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Repository: 'deb [arch=amd64] https://download.docker.com/linux/ubuntu jammy stable'
Description:
Archive for codename: jammy components: stable
More info: https://download.docker.com/linux/ubuntu
Adding repository.
Press [ENTER] to continue or Ctrl-c to cancel.
Adding deb entry to /etc/apt/sources.list.d/archive_uri-https_download_docker_com_linux_ubuntu-jammy.list
Adding disabled deb-src entry to /etc/apt/sources.list.d/archive_uri-https_download_docker_com_linux_ubuntu-jammy.list
0% [Working]
Get:1 https://download.docker.com/linux/ubuntu jammy InRelease [48.9 kB]
.
.
.
kubeuser@master01:~$ sudo apt update && sudo apt install -y containerd.io
kubeuser@master01:~$ sudo mkdir -p /etc/containerd
kubeuser@master01:~$ containerd config default | sudo tee /etc/containerd/config.toml
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2
[cgroup]
path = ""
[debug]
address = ""
format = ""
.
.
.
kubeuser@master01:~$ sudo vi /etc/containerd/config.toml
kubeuser@master01:~$ cat -n /etc/containerd/config.toml
1 disabled_plugins = []
2 imports = []
3 oom_score = 0
4 plugin_dir = ""
5 required_plugins = []
6 root = "/var/lib/containerd"
7 state = "/run/containerd"
8 temp = ""
9 version = 2
10
11 [cgroup]
12 path = ""
13
14 [debug]
15 address = ""
16 format = ""
17 gid = 0
18 level = ""
19 uid = 0
20
21 [grpc]
22 address = "/run/containerd/containerd.sock"
23 gid = 0
24 max_recv_message_size = 16777216
25 max_send_message_size = 16777216
26 tcp_address = ""
27 tcp_tls_ca = ""
28 tcp_tls_cert = ""
29 tcp_tls_key = ""
30 uid = 0
31
32 [metrics]
33 address = ""
34 grpc_histogram = false
35
36 [plugins]
37
38 [plugins."io.containerd.gc.v1.scheduler"]
39 deletion_threshold = 0
40 mutation_threshold = 100
41 pause_threshold = 0.02
42 schedule_delay = "0s"
43 startup_delay = "100ms"
44
45 [plugins."io.containerd.grpc.v1.cri"]
46 device_ownership_from_security_context = false
47 disable_apparmor = false
48 disable_cgroup = false
49 disable_hugetlb_controller = true
50 disable_proc_mount = false
51 disable_tcp_service = true
52 enable_selinux = false
53 enable_tls_streaming = false
54 enable_unprivileged_icmp = false
55 enable_unprivileged_ports = false
56 ignore_image_defined_volumes = false
57 max_concurrent_downloads = 3
58 max_container_log_line_size = 16384
59 netns_mounts_under_state_dir = false
60 restrict_oom_score_adj = false
61 sandbox_image = "registry.k8s.io/pause:3.6"
62 selinux_category_range = 1024
63 stats_collect_period = 10
64 stream_idle_timeout = "4h0m0s"
65 stream_server_address = "127.0.0.1"
66 stream_server_port = "0"
67 systemd_cgroup = false
68 tolerate_missing_hugetlb_controller = true
69 unset_seccomp_profile = ""
70
71 [plugins."io.containerd.grpc.v1.cri".cni]
72 bin_dir = "/opt/cni/bin"
73 conf_dir = "/etc/cni/net.d"
74 conf_template = ""
75 ip_pref = ""
76 max_conf_num = 1
77
78 [plugins."io.containerd.grpc.v1.cri".containerd]
79 default_runtime_name = "runc"
80 disable_snapshot_annotations = true
81 discard_unpacked_layers = false
82 ignore_rdt_not_enabled_errors = false
83 no_pivot = false
84 snapshotter = "overlayfs"
85
86 [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
87 base_runtime_spec = ""
88 cni_conf_dir = ""
89 cni_max_conf_num = 0
90 container_annotations = []
91 pod_annotations = []
92 privileged_without_host_devices = false
93 runtime_engine = ""
94 runtime_path = ""
95 runtime_root = ""
96 runtime_type = ""
97
98 [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
99
100 [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
101
102 [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
103 base_runtime_spec = ""
104 cni_conf_dir = ""
105 cni_max_conf_num = 0
106 container_annotations = []
107 pod_annotations = []
108 privileged_without_host_devices = false
109 runtime_engine = ""
110 runtime_path = ""
111 runtime_root = ""
112 runtime_type = "io.containerd.runc.v2"
113
114 [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
115 BinaryName = ""
116 CriuImagePath = ""
117 CriuPath = ""
118 CriuWorkPath = ""
119 IoGid = 0
120 IoUid = 0
121 NoNewKeyring = false
122 NoPivotRoot = false
123 Root = ""
124 ShimCgroup = ""
125 SystemdCgroup = true // changed here
126
127 [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
128 base_runtime_spec = ""
129 cni_conf_dir = ""
130 cni_max_conf_num = 0
131 container_annotations = []
132 pod_annotations = []
133 privileged_without_host_devices = false
134 runtime_engine = ""
135 runtime_path = ""
136 runtime_root = ""
137 runtime_type = ""
138
139 [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
140
141 [plugins."io.containerd.grpc.v1.cri".image_decryption]
142 key_model = "node"
143
144 [plugins."io.containerd.grpc.v1.cri".registry]
145 config_path = ""
146
147 [plugins."io.containerd.grpc.v1.cri".registry.auths]
148
149 [plugins."io.containerd.grpc.v1.cri".registry.configs]
150
151 [plugins."io.containerd.grpc.v1.cri".registry.headers]
152
153 [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
154
155 [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
156 tls_cert_file = ""
157 tls_key_file = ""
158
159 [plugins."io.containerd.internal.v1.opt"]
160 path = "/opt/containerd"
161
162 [plugins."io.containerd.internal.v1.restart"]
163 interval = "10s"
164
165 [plugins."io.containerd.internal.v1.tracing"]
166 sampling_ratio = 1.0
167 service_name = "containerd"
168
169 [plugins."io.containerd.metadata.v1.bolt"]
170 content_sharing_policy = "shared"
171
172 [plugins."io.containerd.monitor.v1.cgroups"]
173 no_prometheus = false
174
175 [plugins."io.containerd.runtime.v1.linux"]
176 no_shim = false
177 runtime = "runc"
178 runtime_root = ""
179 shim = "containerd-shim"
180 shim_debug = false
181
182 [plugins."io.containerd.runtime.v2.task"]
183 platforms = ["linux/amd64"]
184 sched_core = false
185
186 [plugins."io.containerd.service.v1.diff-service"]
187 default = ["walking"]
188
189 [plugins."io.containerd.service.v1.tasks-service"]
190 rdt_config_file = ""
191
192 [plugins."io.containerd.snapshotter.v1.aufs"]
193 root_path = ""
194
195 [plugins."io.containerd.snapshotter.v1.btrfs"]
196 root_path = ""
197
198 [plugins."io.containerd.snapshotter.v1.devmapper"]
199 async_remove = false
200 base_image_size = ""
201 discard_blocks = false
202 fs_options = ""
203 fs_type = ""
204 pool_name = ""
205 root_path = ""
206
207 [plugins."io.containerd.snapshotter.v1.native"]
208 root_path = ""
209
210 [plugins."io.containerd.snapshotter.v1.overlayfs"]
211 root_path = ""
212 upperdir_label = false
213
214 [plugins."io.containerd.snapshotter.v1.zfs"]
215 root_path = ""
216
217 [plugins."io.containerd.tracing.processor.v1.otlp"]
218 endpoint = ""
219 insecure = false
220 protocol = ""
221
222 [proxy_plugins]
223
224 [stream_processors]
225
226 [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
227 accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
228 args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
229 env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
230 path = "ctd-decoder"
231 returns = "application/vnd.oci.image.layer.v1.tar"
232
233 [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
234 accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
235 args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
236 env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
237 path = "ctd-decoder"
238 returns = "application/vnd.oci.image.layer.v1.tar+gzip"
239
240 [timeouts]
241 "io.containerd.timeout.bolt.open" = "0s"
242 "io.containerd.timeout.shim.cleanup" = "5s"
243 "io.containerd.timeout.shim.load" = "5s"
244 "io.containerd.timeout.shim.shutdown" = "3s"
245 "io.containerd.timeout.task.state" = "2s"
246
247 [ttrpc]
248 address = ""
249 gid = 0
250 uid = 0
kubeuser@master01:~$ sudo systemctl restart containerd
The change at line 125 is described here:
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
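Since only one line changes, the edit can also be applied with sed instead of vi. A minimal sketch (flip_systemd_cgroup is our own helper; on a real node you would target /etc/containerd/config.toml with sudo and then restart containerd as shown above):

```shell
# Flip containerd's runc cgroup driver to systemd non-interactively.
# Real usage: sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
flip_systemd_cgroup() {
  sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$1"
}
```

The substitution is idempotent: rerunning it on an already-edited file changes nothing.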
Common settings for all Nodes ⑥ Installing kubeadm, kubelet, and kubectl
※ Only the Master Node's log is shown.
kubeuser@master01:~$ sudo apt update && sudo apt install -y apt-transport-https curl
kubeuser@master01:~$ sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
kubeuser@master01:~$ sudo cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
kubeuser@master01:~$ sudo apt update
kubeuser@master01:~$ sudo apt install -y kubelet=1.26.0-00 kubeadm=1.26.0-00 kubectl=1.26.0-00
kubeuser@master01:~$ sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
Common settings for all Nodes ⑦ Configuring the cgroup driver used by the kubelet on the control-plane node
※ Only the Master Node's log is shown.
kubeuser@master01:~$ sudo systemctl daemon-reload
kubeuser@master01:~$ sudo systemctl restart kubelet
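With kubeadm v1.26, the kubelet's cgroupDriver defaults to systemd when left unset (the kubeadm init log later reports "the value of KubeletConfiguration.cgroupDriver is empty; setting it to systemd"), so a daemon-reload and restart are enough here. If you wanted to pin it explicitly, one option is to append a KubeletConfiguration document to the kubeadm config file, e.g.:

```
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```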
Master Node only ① CNI (Container Network Interface) settings
Configure the Pod network that the containers will run on.
This time, Calico was used.
Reference: https://kubernetes.io/ja/docs/concepts/cluster-administration/networking/
Edit and use the calico.yaml found at:
https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
kubeuser@master01:~$ curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 232k 100 232k 0 0 612k 0 --:--:-- --:--:-- --:--:-- 611k
kubeuser@master01:~$ ls
calico.yaml
kubeuser@master01:~$ sudo cp -p calico.yaml calico.yaml_original
kubeuser@master01:~$ sudo vi calico.yaml
kubeuser@master01:~$ diff calico.yaml_original calico.yaml
65c65,67
< "type": "calico-ipam"
---
> "type": "calico-ipam",
> "assign_ipv4": "true",
> "assign_ipv6": "true"
4601,4602c4603,4606
< # - name: CALICO_IPV4POOL_CIDR
< # value: "192.168.0.0/16"
---
> - name: CALICO_IPV4POOL_CIDR
> value: "10.0.0.0/16"
> - name: CALICO_IPV6POOL_CIDR
> value: "fd12:b5e0:383e::/64"
4611c4615,4619
< value: "false"
---
> value: "true"
> - name: CALICO_IPV6POOL_NAT_OUTGOING
> value: "true"
> - name: IP6
> value: "autodetect"
Master Node only ② Creating kubeadm-config.yaml
kubeuser@master01:~$ sudo vi kubeadm-config.yaml
kubeuser@master01:~$ sudo cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
controlPlaneEndpoint: "master01:6443"
networking:
  podSubnet: 10.0.0.0/16,fd12:b5e0:383e::/64
  serviceSubnet: 10.1.0.0/16,fd12:b5e0:383f::/112
Master Node only ③ Running kubeadm init
The kubeadm init option --v= sets the log verbosity (larger numbers produce more output), so it is optional.
kubeuser@master01:~$ sudo kubeadm init --v=5 --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out
I0314 13:45:37.608183 2125702 initconfiguration.go:254] loading configuration from "kubeadm-config.yaml"
I0314 13:45:37.609310 2125702 initconfiguration.go:116] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I0314 13:45:37.609588 2125702 interface.go:432] Looking for default routes with IPv4 addresses
I0314 13:45:37.609662 2125702 interface.go:437] Default route transits interface "enp1s0"
I0314 13:45:37.609825 2125702 interface.go:209] Interface enp1s0 is up
I0314 13:45:37.609973 2125702 interface.go:257] Interface "enp1s0" has 3 addresses :[192.168.1.41/24 240f:32:57b8:1:5054:ff:fe8e:5428/64 fe80::5054:ff:fe8e:5428/64].
I0314 13:45:37.610068 2125702 interface.go:224] Checking addr 192.168.1.41/24.
I0314 13:45:37.610128 2125702 interface.go:231] IP found 192.168.1.41
I0314 13:45:37.610208 2125702 interface.go:263] Found valid IPv4 address 192.168.1.41 for interface "enp1s0".
I0314 13:45:37.610266 2125702 interface.go:443] Found active IP 192.168.1.41
I0314 13:45:37.610343 2125702 kubelet.go:196] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
I0314 13:45:37.615297 2125702 checks.go:568] validating Kubernetes and kubeadm version
I0314 13:45:37.615563 2125702 checks.go:168] validating if the firewall is enabled and active
I0314 13:45:37.625582 2125702 checks.go:203] validating availability of port 6443
I0314 13:45:37.626031 2125702 checks.go:203] validating availability of port 10259
I0314 13:45:37.626307 2125702 checks.go:203] validating availability of port 10257
I0314 13:45:37.626551 2125702 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0314 13:45:37.626765 2125702 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0314 13:45:37.626959 2125702 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0314 13:45:37.627152 2125702 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0314 13:45:37.627354 2125702 checks.go:430] validating if the connectivity type is via proxy or direct
I0314 13:45:37.627575 2125702 checks.go:469] validating http connectivity to first IP address in the CIDR
I0314 13:45:37.627794 2125702 checks.go:469] validating http connectivity to first IP address in the CIDR
I0314 13:45:37.628039 2125702 checks.go:469] validating http connectivity to first IP address in the CIDR
I0314 13:45:37.628232 2125702 checks.go:469] validating http connectivity to first IP address in the CIDR
I0314 13:45:37.628452 2125702 checks.go:104] validating the container runtime
I0314 13:45:37.656814 2125702 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0314 13:45:37.657174 2125702 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0314 13:45:37.657378 2125702 checks.go:644] validating whether swap is enabled or not
I0314 13:45:37.657574 2125702 checks.go:370] validating the presence of executable crictl
I0314 13:45:37.657743 2125702 checks.go:370] validating the presence of executable conntrack
I0314 13:45:37.657896 2125702 checks.go:370] validating the presence of executable ip
I0314 13:45:37.658043 2125702 checks.go:370] validating the presence of executable iptables
I0314 13:45:37.658122 2125702 checks.go:370] validating the presence of executable mount
I0314 13:45:37.658296 2125702 checks.go:370] validating the presence of executable nsenter
I0314 13:45:37.658375 2125702 checks.go:370] validating the presence of executable ebtables
I0314 13:45:37.658539 2125702 checks.go:370] validating the presence of executable ethtool
I0314 13:45:37.658616 2125702 checks.go:370] validating the presence of executable socat
I0314 13:45:37.658764 2125702 checks.go:370] validating the presence of executable tc
I0314 13:45:37.658838 2125702 checks.go:370] validating the presence of executable touch
I0314 13:45:37.658987 2125702 checks.go:516] running all checks
I0314 13:45:37.673680 2125702 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
I0314 13:45:37.673871 2125702 checks.go:610] validating kubelet version
I0314 13:45:37.741815 2125702 checks.go:130] validating if the "kubelet" service is enabled and active
I0314 13:45:37.756234 2125702 checks.go:203] validating availability of port 10250
I0314 13:45:37.756578 2125702 checks.go:203] validating availability of port 2379
I0314 13:45:37.756854 2125702 checks.go:203] validating availability of port 2380
I0314 13:45:37.757089 2125702 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0314 13:45:37.757552 2125702 checks.go:832] using image pull policy: IfNotPresent
I0314 13:45:37.786680 2125702 checks.go:841] image exists: registry.k8s.io/kube-apiserver:v1.26.0
I0314 13:45:37.834252 2125702 checks.go:841] image exists: registry.k8s.io/kube-controller-manager:v1.26.0
I0314 13:45:37.864441 2125702 checks.go:841] image exists: registry.k8s.io/kube-scheduler:v1.26.0
I0314 13:45:37.890492 2125702 checks.go:841] image exists: registry.k8s.io/kube-proxy:v1.26.0
I0314 13:45:37.921816 2125702 checks.go:841] image exists: registry.k8s.io/pause:3.9
I0314 13:45:37.954614 2125702 checks.go:841] image exists: registry.k8s.io/etcd:3.5.6-0
I0314 13:45:37.994535 2125702 checks.go:841] image exists: registry.k8s.io/coredns/coredns:v1.9.3
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0314 13:45:37.995108 2125702 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0314 13:45:38.504513 2125702 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.1.0.1 192.168.1.41]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0314 13:45:38.716295 2125702 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0314 13:45:39.085490 2125702 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0314 13:45:39.199612 2125702 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0314 13:45:39.353883 2125702 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [192.168.1.41 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [192.168.1.41 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0314 13:45:40.356280 2125702 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0314 13:45:40.749327 2125702 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0314 13:45:40.919179 2125702 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0314 13:45:41.366913 2125702 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0314 13:45:41.570280 2125702 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0314 13:45:41.729673 2125702 kubelet.go:67] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0314 13:45:42.178081 2125702 manifests.go:99] [control-plane] getting StaticPodSpecs
I0314 13:45:42.178455 2125702 certs.go:519] validating certificate period for CA certificate
I0314 13:45:42.178590 2125702 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0314 13:45:42.178634 2125702 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0314 13:45:42.178662 2125702 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0314 13:45:42.178673 2125702 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0314 13:45:42.178682 2125702 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0314 13:45:42.178690 2125702 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0314 13:45:42.182979 2125702 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0314 13:45:42.183445 2125702 manifests.go:99] [control-plane] getting StaticPodSpecs
I0314 13:45:42.184036 2125702 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0314 13:45:42.184247 2125702 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0314 13:45:42.184444 2125702 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0314 13:45:42.184614 2125702 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0314 13:45:42.184743 2125702 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0314 13:45:42.185050 2125702 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0314 13:45:42.185069 2125702 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0314 13:45:42.185077 2125702 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0314 13:45:42.186007 2125702 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0314 13:45:42.186056 2125702 manifests.go:99] [control-plane] getting StaticPodSpecs
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0314 13:45:42.186345 2125702 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0314 13:45:42.186885 2125702 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0314 13:45:42.187576 2125702 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0314 13:45:42.187674 2125702 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
[apiclient] All control plane components are healthy after 12.003729 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0314 13:45:54.192627 2125702 uploadconfig.go:111] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
I0314 13:45:54.337636 2125702 uploadconfig.go:125] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0314 13:45:54.459517 2125702 uploadconfig.go:130] [upload-config] Preserving the CRISocket information for the control-plane node
I0314 13:45:54.459684 2125702 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "master01" as an annotation
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
d3597a9dc5ed2473c637a6f336af14a46d229c066b8317cffb45d356b59dea29
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 9btmem.3ve14961dmv38455
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0314 13:45:55.920206 2125702 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I0314 13:45:55.920790 2125702 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0314 13:45:55.921098 2125702 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0314 13:45:56.018938 2125702 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0314 13:45:56.082950 2125702 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
I0314 13:45:56.084787 2125702 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join master01:6443 --token 9btmem.3ve14961dmv38455 \
--discovery-token-ca-cert-hash sha256:d1bdc801f112b1ef5e51dc4a087623a29b57493c97fde957851e02c076165d67 \
--control-plane --certificate-key d3597a9dc5ed2473c637a6f336af14a46d229c066b8317cffb45d356b59dea29
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join master01:6443 --token 9btmem.3ve14961dmv38455 \
--discovery-token-ca-cert-hash sha256:d1bdc801f112b1ef5e51dc4a087623a29b57493c97fde957851e02c076165d67
kubeuser@master01:~$ ls
calico.yaml calico.yaml_original kubeadm-config.yaml kubeadm-init.out
Master Node only ③: Creating the kubernetes cluster credentials
kubeuser@master01:~$ mkdir -p $HOME/.kube
kubeuser@master01:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubeuser@master01:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeuser@master01:~$ cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1ETXhNakF4TURRd09Gb1hEVE16TURNd09UQXhNRFF3T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSnNxCkFYVGsweVRieUJROE9oSlpIWlFoM3NUSnYwQ0tpZXkwV3owaTR5c2lZMlhjWW02bWFMTnBiNTZRQ0VTYjlTNmEKeGpudi9QZlc0blVncTRvVUR2SEY1d3lvMmlzQ3dmNWh5SFhpNE84RzdSNUE5TTdFVndIMWt6Y2R6by82RWIxQwoyaW84aGcyNDhtUExYcldBQUxFTjZBdmtKemk0aU1FT0hTd1lVVkg5TkdQT3pNWFNneE5vMEZlKzBEVTZma0lkCmxWODkrMjJoOFV4R2JtWk5YTXNUSUhSVVhOU3BMa3pIS0RTWVBjczcyRVBGQU1QcFo2QXIyeFpSaHBUTlhNSDQKclo1eitSeHJHQXFIMWU2OWtRNkJ1NHIvRTJXWTFHTW1Td25Ma2xsNjU1QnJBZXBKelB1MExNUWxmU1dZTHZyaAo5WFVKSFdWSnIySEVLVVBtdVZVQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZFeXJWOVdVNXJNMzU4c0NnKzZ3V2lqRmdBNitNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRmFrOVNhY2ZhZ2tib0dWREt1LwpUZ21yZE9nY1ZNeWdVZS9Sdnd2M3hFQXNsSVhJQW00S3MybTZ1UGVOR3Z6RlBUeUFYaWU0NnJFOTVvMnFvMEQwCkZLeWdjaEhrYWRYQVBmMWliRWFOUUR4bzhLcHBreUd4SjdZOG9TVU13Yk1BZ0t3Zk94L2JtT0FOQzFuSE1VUkwKdzhJZmcvSDlRcE15ZDB4NHo0M1pGdlowMHIvb0NpcVhhQVZ5QWJybWpxUFJXdDZ5emw0Mkh6dUNIdjV2dmMyLwpIQVpYck1wMEpmWW1DRTRJdzNWWERZYnZNVTdMczEvSmVoeklCRVo3Y2Q3bjQ0OU41TUdPZ3BHaStoN2lEOTZnClNCTmVWOVkyWHJXeWdBWEc4bmJ5VVdaSlQvdUtTRmpVZlFNREIzc0ZKUG9xWHYva2YrYzhlZDgyWjE5VXdHbU4KcG1nPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://master01:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJRjQ0T1N4VDdrc013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBek1USXdNVEEwTURoYUZ3MHlOREF6TVRFd01UQTBNVEZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXYraExWK0V4a1ZNT0pNWFYKT0s2ZWRSWitWSFcwRkkxUWFYdEpXd1FiU2FCcFZOWUQ3UzdlaWNidE1iMVNCMFNHYWk5TWlqTHlSR3ZEVzBXagpibXVucWRPNGdOeFdBTUswVEVWdGdVK3dtUUh4RlkrVGNoZ0pFSVRyZ3RKNU1lR09zUkt3T2o1SmJVSUpjeWZMCjdlRGthck5vOTRmMHFGWmplVzVjb0JFZVh2eUxTdlVJallVQk0wTExjL1djd2pJb3M4aGJkQ3FMVkpjZDBlVEkKclgrdTkxS01JNzZNeS9wcDRZRmtPMlBPWHRQb1E0VE0vZVh2aEVVdThpWEVSRXpOdmdObDd1bE5TOGVoYUdrRgpRcmp2M0tRWjNvU2hHVGpDVDFwVllCaVJFdGgycGp3Y0grdS82WWNqMjdkQXduUGlqeHdVdUc1clQ2NTlLVCtTCmZvYUVWUUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JSTXExZlZsT2F6TitmTEFvUHVzRm9veFlBTwp2akFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBUGg0MU1malJObklCelFxeFNCV085U2NETVFFUUVCRkduZkRMCnpFY3NvVUQxRDlPT21LYlV6OVY4UDkyZWYyb2JwNzNJckxUS1VFcUtqYTkxeUtWRVE4aVJQNCtNVzUzTjhzY3EKYTNMZVp6dm9DSnlRd1hoMlk0a3FPMGRVYi8xNlVDYVBwY0xCZVhSZzcva1Z0K08vWFBPZ0JuOUJ1U1cxQU5LNwoycjJrUlR5YW9UbE5lbjhUQklnZ2g3OVlnODJkSzJabHY2VzJxTGdaQlltb2FseGd2NE5INEIxQUkyeWRqNHMzCmp4djl3c0ZsOUREdktvWGhCMFBRQXhrTDNKcWkyYVpWWnRjTWRodnc2THBiM0VDUVFucFQyVkxHcHNsbVVNMnkKZnNGNU9mK1dWcHFBUmVBZm1HY3duZzd5OE12azd3Y01tSzJCc3BCamhWbkJmR2VjdEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBditoTFYrRXhrVk1PSk1YVk9LNmVkUlorVkhXMEZJMVFhWHRKV3dRYlNhQnBWTllECjdTN2VpY2J0TWIxU0IwU0dhaTlNaWpMeVJHdkRXMFdqYm11bnFkTzRnTnhXQU1LMFRFVnRnVSt3bVFIeEZZK1QKY2hnSkVJVHJndEo1TWVHT3NSS3dPajVKYlVJSmN5Zkw3ZURrYXJObzk0ZjBxRlpqZVc1Y29CRWVYdnlMU3ZVSQpqWVVCTTBMTGMvV2N3aklvczhoYmRDcUxWSmNkMGVUSXJYK3U5MUtNSTc2TXkvcHA0WUZrTzJQT1h0UG9RNFRNCi9lWHZoRVV1OGlYRVJFek52Z05sN3VsTlM4ZWhhR2tGUXJqdjNLUVozb1NoR1RqQ1QxcFZZQmlSRXRoMnBqd2MKSCt1LzZZY2oyN2RBd25QaWp4d1V1RzVyVDY1OUtUK1Nmb2FFVlFJREFRQUJBb0lCQUgyMWV2VThESzQzaThKRwozVjlkODJxYnEzRkVFUXlOYlNsTG0wZkZydUpSOCsyZ3E3M0l2L25jbHkvSDVsM2dZM1JYTzNvajJWTThqQ0hUCndqVG96RkdRNFFGNFU5WDN0UWRwUzB3em1Xa0JQcDF6Q1pEcGNiYWllMnVjMThyM0IvT3lYRUlxM3dwMUFaK3YKYUFTUkZzOVdhdUlLNnhjQ1QvTVJlaGRZWDE2MFNRejkzZFVHK3BiamRMT05KTWpnVXVaRlpmcEdnQkIzeTZkeQpsU0Q2aE5KbEVlV1d2cDYwU0txb0RlOXM5dTZmK1pxWWFrU1dvTDJBc20waC9HbmE2bkxYWHlEY1V0VW0zMU5NCkJrcWNoR250VHdEOEgweEM3V25LWGp5N3VqZDE0MXFOYkVPV012ay9heUl2bGo5bUhrYlYxaWNtUkZ0WkF1WHYKbjQ4T000RUNnWUVBMDNPdGtlWjY1SEJrN09hMDMzY05pdUZmaW5OWFhhK2dxMTkxTEIvRnlDTnM3bVJsQnAxVwp4S0ZQUGZmZ1lDYi9wN3NqN2F4bnlLemhvR3k3V0pReDd3cUJCSTdKQ2ZRbGF0Y3lyNmo1SUVoNXlMMnFkU0hICkhmQmE0SXE0NXVwZ3B6akRDODVzOGpSc3hBMGgyZnZkS241aUlEVjU4R1N0UHZtZ3FjU3dKQ1VDZ1lFQTZGYUUKTUJFWWwrRE5PUGkvRXQyQ1Q4S1g0R0RYYVA4TEpvcEFaMGZ5YkFwQ0VWNnh4Q3E3NGptTWNNNklzeXJBbTZLUgpreGM5S3pleXB5bC9IWnRWUzZaMmdIcG5NL3Y0R3F1ZkE4Z2dUUm4xREUreFFDZmFEMEJKNmFWYnRoejRhRjU2CkQ0bC90L0h2N1FxcUdtU3J2MFp1SlNRNG5scUZPQW5NamUwZlVIRUNnWUJxbldUaXI2Yy9EenlVQmk4a2pVNlMKdTlnRVl1dW1IU3VSdk92RGQ3R3RtODhNMUNuc0QrRHorN0dNdVRLMHlIVVhDVkN3UWNHQ2VVaTZMcGkzck9FUQplZWRiZVBMOHhkRW44YUZvMkhYa1JTYkNoSDh4MS9vaHFsTG43SW9XUkE2L3dlcjJSUHJCbEpWU3RKeGc0SkUvCkg4SXlJMFI4WlFiRlBmQTRLU2YyMFFLQmdBQzJhemRlaGczSk1iZndBMTRDY2VqZXR1cUlRWURmNzEvUjRycXUKWE03NkJSUGFqMzhEaG9uK0ZURXZZUG56c3AySGxSeTNZSWVtWnhUZUtyYWppRkp3RTBMM25TTnFyV3NmaGFCVQpWODBFdkZ5cVRlZmRkMnkrakx0N3QxbEtvM1JtZmNkWWE1emIwQm1SQTg2SzZuL3VybDNNeTZPb3NXbm5sY29GCnBTZnhBb0dBRWVHUm9JNUNRbnV1REEvUzh
BQzNtSDFEQWpIeFUwSG5DS1NiWStudUFMaHRYT1VjTG9GN1U0VVAKNDg1R3BYTjRMV1BraDJEQm5yYWRlVVNEa2V1N2Y3MXhacW8vbjJOeXZLeEt3VXBmRFpPMkZRQjNRb3AwNjRxaQpqYWVseWFSeG13bzBSajQxeGV2MUdFcDE4allLTml1QmN0N0Jxa3lMcnFRajRJc0RYT3M9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
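Incidentally, the certificate-authority-data field above is nothing more than the cluster CA certificate in PEM form, base64-encoded. A minimal sketch of decoding it, using a stub kubeconfig with a dummy payload standing in for the real file (so the real CA above is not reproduced here):

```shell
# certificate-authority-data is base64-encoded PEM; awk + base64 -d recover it.
# /tmp/kubeconfig-sample is a stub standing in for ~/.kube/config.
cat > /tmp/kubeconfig-sample <<'EOF'
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://master01:6443
EOF
awk '/certificate-authority-data:/ {print $2}' /tmp/kubeconfig-sample | base64 -d
# → -----BEGIN CERTIFICATE-----
```

Against the real file, piping the decoded PEM into `openssl x509 -noout -dates` is a quick way to check the CA certificate's validity period.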
The kubectl command can now be used.
※master01 is still in STATUS "NotReady" at this point.
kubeuser@master01:~$ kubectl get nodes
NAME       STATUS     ROLES           AGE    VERSION
master01   NotReady   control-plane   113s   v1.26.0
kubeuser@master01:~$ kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-787d4945fb-67vzt           0/1     Pending   0          3m3s
coredns-787d4945fb-jhwwk           0/1     Pending   0          3m4s
etcd-master01                      1/1     Running   0          3m17s
kube-apiserver-master01            1/1     Running   0          3m17s
kube-controller-manager-master01   1/1     Running   0          3m22s
kube-proxy-c9bq2                   1/1     Running   0          3m4s
kube-scheduler-master01            1/1     Running   0          3m26s
Master Node only ④: Starting Calico
Apply the calico.yaml prepared earlier.
※It takes a few minutes for the Pods to reach "Running".
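Before the apply, it does not hurt to double-check the dual-stack edits made to calico.yaml earlier in this series. A grep sketch, with a heredoc excerpt standing in for the real file; the env var names are Calico's documented ones, and the values match this article's Pod networks:

```shell
# Excerpt standing in for the edited calico.yaml.
cat > /tmp/calico-excerpt.yaml <<'EOF'
            - name: CALICO_IPV4POOL_CIDR
              value: "10.0.0.0/16"
            - name: CALICO_IPV6POOL_CIDR
              value: "fd12:b5e0:383e::/64"
            - name: FELIX_IPV6SUPPORT
              value: "true"
EOF
# Each -A1 match prints the variable name followed by its value line.
grep -A1 -E 'CALICO_IPV6POOL_CIDR|FELIX_IPV6SUPPORT' /tmp/calico-excerpt.yaml
```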
kubeuser@master01:~$ kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
kubeuser@master01:~$ kubectl get pods -n kube-system -w
NAME                                      READY   STATUS              RESTARTS      AGE
calico-kube-controllers-57b57c56f-lnxzn   0/1     ContainerCreating   0             109s
calico-node-q95hr                         1/1     Running             0             109s
coredns-787d4945fb-67vzt                  1/1     Running             0             5m15s
coredns-787d4945fb-jhwwk                  1/1     Running             0             5m16s
etcd-master01                             1/1     Running             0             5m29s
kube-apiserver-master01                   1/1     Running             0             5m29s
kube-controller-manager-master01          1/1     Running             1 (39s ago)   5m34s
kube-proxy-c9bq2                          1/1     Running             0             5m16s
kube-scheduler-master01                   1/1     Running             0             5m38s
calico-kube-controllers-57b57c56f-lnxzn   0/1     Running             0             112s
calico-kube-controllers-57b57c56f-lnxzn   1/1     Running             0             112s
master01 is now "Ready".
kubeuser@master01:~$ kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   6m16s   v1.26.0
kubeuser@master01:~$ kubectl get nodes master01 -o yaml
apiVersion: v1
kind: Node
metadata:
annotations:
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: "0"
projectcalico.org/IPv4Address: 192.168.1.41/24
projectcalico.org/IPv4IPIPTunnelAddr: 10.0.241.64
projectcalico.org/IPv6Address: 240f:32:57b8:1:5054:ff:fe8e:5428/64
volumes.kubernetes.io/controller-managed-attach-detach: "true"
creationTimestamp: "2023-03-12T01:04:25Z"
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/os: linux
kubernetes.io/arch: amd64
kubernetes.io/hostname: master01
kubernetes.io/os: linux
node-role.kubernetes.io/control-plane: ""
node.kubernetes.io/exclude-from-external-load-balancers: ""
name: master01
resourceVersion: "989"
uid: 953fefb1-d434-47f9-8dda-f09150140b8d
spec:
podCIDR: 10.0.0.0/24
podCIDRs:
- 10.0.0.0/24
- fd12:b5e0:383e::/64
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
status:
addresses:
- address: 192.168.1.41
type: InternalIP
- address: master01
type: Hostname
allocatable:
cpu: "2"
ephemeral-storage: "35855017516"
hugepages-2Mi: "0"
memory: 3915728Ki
pods: "110"
capacity:
cpu: "2"
ephemeral-storage: 38905184Ki
hugepages-2Mi: "0"
memory: 4018128Ki
pods: "110"
conditions:
- lastHeartbeatTime: "2023-03-12T01:09:30Z"
lastTransitionTime: "2023-03-12T01:09:30Z"
message: Calico is running on this node
reason: CalicoIsUp
status: "False"
type: NetworkUnavailable
- lastHeartbeatTime: "2023-03-12T01:10:16Z"
lastTransitionTime: "2023-03-12T01:04:25Z"
message: kubelet has sufficient memory available
reason: KubeletHasSufficientMemory
status: "False"
type: MemoryPressure
- lastHeartbeatTime: "2023-03-12T01:10:16Z"
lastTransitionTime: "2023-03-12T01:04:25Z"
message: kubelet has no disk pressure
reason: KubeletHasNoDiskPressure
status: "False"
type: DiskPressure
- lastHeartbeatTime: "2023-03-12T01:10:16Z"
lastTransitionTime: "2023-03-12T01:04:25Z"
message: kubelet has sufficient PID available
reason: KubeletHasSufficientPID
status: "False"
type: PIDPressure
- lastHeartbeatTime: "2023-03-12T01:10:16Z"
lastTransitionTime: "2023-03-12T01:08:54Z"
message: kubelet is posting ready status. AppArmor enabled
reason: KubeletReady
status: "True"
type: Ready
daemonEndpoints:
kubeletEndpoint:
Port: 10250
images:
- names:
- registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c
- registry.k8s.io/etcd:3.5.6-0
sizeBytes: 102542580
- names:
- docker.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977
- docker.io/calico/cni:v3.25.0
sizeBytes: 87984941
- names:
- docker.io/calico/node@sha256:a85123d1882832af6c45b5e289c6bb99820646cb7d4f6006f98095168808b1e6
- docker.io/calico/node:v3.25.0
sizeBytes: 87185935
- names:
- registry.k8s.io/kube-apiserver@sha256:d230a0b88a3daf14e4cce03b906b992c8153f37da878677f434b1af8c4e8cc75
- registry.k8s.io/kube-apiserver:v1.26.0
sizeBytes: 35317868
- names:
- registry.k8s.io/kube-controller-manager@sha256:26e260b50ec46bd1da7352565cb8b34b6dd2cb006cebbd2f35170d50935fb9ec
- registry.k8s.io/kube-controller-manager:v1.26.0
sizeBytes: 32244989
- names:
- docker.io/calico/kube-controllers@sha256:c45af3a9692d87a527451cf544557138fedf86f92b6e39bf2003e2fdb848dce3
- docker.io/calico/kube-controllers:v3.25.0
sizeBytes: 31271800
- names:
- registry.k8s.io/kube-proxy@sha256:1e9bbe429e4e2b2ad32681c91deb98a334f1bf4135137df5f84f9d03689060fe
- registry.k8s.io/kube-proxy:v1.26.0
sizeBytes: 21536465
- names:
- registry.k8s.io/kube-scheduler@sha256:34a142549f94312b41d4a6cd98e7fddabff484767a199333acb7503bf46d7410
- registry.k8s.io/kube-scheduler:v1.26.0
sizeBytes: 17484038
- names:
- registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a
- registry.k8s.io/coredns/coredns:v1.9.3
sizeBytes: 14837849
- names:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause:3.9
sizeBytes: 321520
- names:
- registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db
- registry.k8s.io/pause:3.6
sizeBytes: 301773
nodeInfo:
architecture: amd64
bootID: 12876a8f-6572-45b6-bc20-3004c2e26ba5
containerRuntimeVersion: containerd://1.6.18
kernelVersion: 5.15.0-67-generic
kubeProxyVersion: v1.26.0
kubeletVersion: v1.26.0
machineID: 3539d58920f24c1b9d3236a97d5bc5e7
operatingSystem: linux
osImage: Ubuntu 22.04.2 LTS
systemUUID: 3539d589-20f2-4c1b-9d32-36a97d5bc5e7
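The spec.podCIDRs list above is the quickest place to confirm the node really received both an IPv4 and an IPv6 Pod range. On a live cluster, `kubectl get node master01 -o jsonpath='{.spec.podCIDRs}'` returns just that list; as a self-contained sketch, the awk below pulls it from a trimmed copy of the YAML above:

```shell
# Trimmed copy of the node manifest shown above.
cat > /tmp/node-excerpt.yaml <<'EOF'
spec:
  podCIDR: 10.0.0.0/24
  podCIDRs:
  - 10.0.0.0/24
  - fd12:b5e0:383e::/64
EOF
# Print each list entry under podCIDRs:, stopping at the first non-list line.
awk '/podCIDRs:/ {f=1; next} f && /^ *- / {print $2; next} {f=0}' /tmp/node-excerpt.yaml
# → 10.0.0.0/24
# → fd12:b5e0:383e::/64
```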
Worker Node only: Joining the kubernetes cluster (join)
Run this on both Worker Nodes.
kubeuser@worker01:~$ sudo kubeadm join master01:6443 --token 9btmem.3ve14961dmv38455 \
--discovery-token-ca-cert-hash sha256:d1bdc801f112b1ef5e51dc4a087623a29b57493c97fde957851e02c076165d67
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Checking from the Master Node CLI, you can see the nodes have joined the kubernetes cluster.
kubeuser@master01:~$ kubectl get nodes -w
NAME       STATUS     ROLES           AGE     VERSION
master01   Ready      control-plane   8m22s   v1.26.0
worker01   NotReady   <none>          0s      v1.26.0
worker01   NotReady   <none>          0s      v1.26.0
worker01   NotReady   <none>          0s      v1.26.0
worker01   NotReady   <none>          1s      v1.26.0
worker01   NotReady   <none>          4s      v1.26.0
worker01   NotReady   <none>          10s     v1.26.0
worker01   NotReady   <none>          21s     v1.26.0
worker01   NotReady   <none>          52s     v1.26.0
worker01   NotReady   <none>          82s     v1.26.0
worker01   Ready      <none>          99s     v1.26.0
worker01   Ready      <none>          100s    v1.26.0
worker01   Ready      <none>          100s    v1.26.0
worker02   NotReady   <none>          0s      v1.26.0
worker02   NotReady   <none>          2s      v1.26.0
worker02   NotReady   <none>          3s      v1.26.0
worker02   NotReady   <none>          3s      v1.26.0
master01   Ready      control-plane   10m     v1.26.0
worker02   NotReady   <none>          4s      v1.26.0
worker01   Ready      <none>          2m21s   v1.26.0
worker01   Ready      <none>          2m21s   v1.26.0
worker01   Ready      <none>          2m22s   v1.26.0
worker01   Ready      <none>          2m23s   v1.26.0
worker02   NotReady   <none>          13s     v1.26.0
worker02   NotReady   <none>          23s     v1.26.0
worker02   NotReady   <none>          54s     v1.26.0
worker02   Ready      <none>          69s     v1.26.0
worker02   Ready      <none>          69s     v1.26.0
worker02   Ready      <none>          74s     v1.26.0
kubeuser@master01:~$ kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   12m     v1.26.0
worker01   Ready    <none>          3m31s   v1.26.0
worker02   Ready    <none>          80s     v1.26.0
kubeuser@master01:~$ kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master01   Ready    control-plane   12m     v1.26.0   192.168.1.41   <none>        Ubuntu 22.04.2 LTS   5.15.0-67-generic   containerd://1.6.18
worker01   Ready    <none>          3m35s   v1.26.0   192.168.1.45   <none>        Ubuntu 22.04.2 LTS   5.15.0-67-generic   containerd://1.6.18
worker02   Ready    <none>          84s     v1.26.0   192.168.1.46   <none>        Ubuntu 22.04.2 LTS   5.15.0-67-generic   containerd://1.6.18
Master Node only: Starting Pods (containers)
To confirm that containers start correctly and that IPv6 addresses are actually usable,
let's test with an nginx container.
kubeuser@master01:~$ kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx.yaml
kubeuser@master01:~$ cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: nginx
name: nginx
spec:
containers:
- image: nginx
name: nginx
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
kubeuser@master01:~$ kubectl apply -f nginx.yaml
You can confirm that the container has been assigned both IPv4 & IPv6 addresses.
kubeuser@master01:~$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 0 5s
nginx 1/1 Running 0 23s
kubeuser@master01:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 60s 10.0.30.65 worker02 <none> <none>
kubeuser@master01:~$ kubectl describe pods nginx
Name: nginx
Namespace: default
Priority: 0
Service Account: default
Node: worker02/192.168.1.46
Start Time: Sun, 12 Mar 2023 01:20:07 +0000
Labels: run=nginx
Annotations: cni.projectcalico.org/containerID: 7dfc69715c957c400b429cba116fe80ee2e661423762f53c3703ca967321a430
cni.projectcalico.org/podIP: 10.0.30.65/32
cni.projectcalico.org/podIPs: 10.0.30.65/32,fd12:b5e0:383e:0:7bf:50a7:b256:1e40/128
Status: Running
IP: 10.0.30.65
IPs:
IP: 10.0.30.65
IP: fd12:b5e0:383e:0:7bf:50a7:b256:1e40
Containers:
nginx:
Container ID: containerd://f99a9cf182393a9795abe704abc7cafb0d9f48ebd6946b043eef22ad63c80ac8
Image: nginx
Image ID: docker.io/library/nginx@sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2
Port: <none>
Host Port: <none>
State: Running
Started: Sun, 12 Mar 2023 01:20:29 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-87kxb (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-87kxb:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 71s default-scheduler Successfully assigned default/nginx to worker02
Normal Pulling 42s kubelet Pulling image "nginx"
Normal Pulled 23s kubelet Successfully pulled image "nginx" in 18.833976993s (18.83398486s including waiting)
Normal Created 22s kubelet Created container nginx
Normal Started 22s kubelet Started container nginx
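On a live cluster, the two Pod addresses in the describe output can be read directly with a jsonpath query: `kubectl get pod nginx -o jsonpath='{.status.podIPs[*].ip}'`. As a self-contained sketch, the awk below extracts them from a status excerpt like the one above:

```shell
# Excerpt of the Pod status shown in the describe output above.
cat > /tmp/pod-status-excerpt.yaml <<'EOF'
status:
  podIPs:
  - ip: 10.0.30.65
  - ip: fd12:b5e0:383e:0:7bf:50a7:b256:1e40
EOF
# One address per "- ip:" line; the third field is the address itself.
awk '/- ip:/ {print $3}' /tmp/pod-status-excerpt.yaml
# → 10.0.30.65
# → fd12:b5e0:383e:0:7bf:50a7:b256:1e40
```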
Successfully accessed both the IPv4 & IPv6 addresses assigned to the Pod.
kubeuser@master01:~$ curl http://10.0.30.65
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
kubeuser@master01:~$ curl http://[fd12:b5e0:383e:0:7bf:50a7:b256:1e40]
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Takeaways from this exercise
I built a Dual Stack kubernetes cluster this time,
and learned a lot along the way.
IPv6
I had a rough understanding before, but deciding which IPv6 address ranges to use for this build required a proper understanding of the IPv6 address architecture.
・Global unicast addresses (the equivalent of IPv4 global addresses)
・Unique local unicast addresses (the equivalent of IPv4 private addresses)
・Link-local unicast addresses (reachable only within the same segment)
・Others (multicast / anycast addresses)
Until now, my understanding went no further than:
a vast address space of 128 bits, four times the width of IPv4's 32 bits;
because of that, address exhaustion is not a concern;
and, like IPv4, there is a global/private distinction.
Actually using IPv6 deepened that understanding considerably.
The benefits of Dual Stack
Thanks to the vast address space, carriers, ISPs, and large enterprises
no longer need to worry about IPv4's ceiling (even the /8 private range tops out at about 16.77 million addresses).
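That /8 ceiling can be checked with shell arithmetic: a 10.0.0.0/8 network leaves 32 - 8 = 24 host bits.

```shell
# 24 host bits in a /8 → about 16.77 million addresses,
# versus 2^64 interface IDs in a single IPv6 /64 (too big for shell math).
echo $((2 ** (32 - 8)))   # → 16777216
```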
Specifying IPv6 addresses in http URLs
Use the form "http://[IPv6 address]", with the address in square brackets.
IPv6 settings in kubernetes and related software
Quite a few of them are disabled by default
and have to be enabled explicitly.
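One concrete example from this build: Linux itself ships with IPv6 forwarding between interfaces turned off, and dual-stack CNI setups need it enabled on every node. A sketch of checking it (the sysctl name is the standard kernel one; the fallback is only so the sketch also runs where IPv6 is unavailable):

```shell
# 0 = forwarding off (the default), 1 = on.
cat /proc/sys/net/ipv6/conf/all/forwarding 2>/dev/null || echo "IPv6 not available"
# To enable it persistently (root required):
#   echo 'net.ipv6.conf.all.forwarding=1' | sudo tee /etc/sysctl.d/99-k8s-ipv6.conf
#   sudo sysctl --system
```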
Closing
From here on, I plan to use the kubernetes cluster built in this article
for all sorts of experiments ♪
It is something I feel every time, but actually trying things yourself is by far the best way to gain experience!