This is Takahiro Kujirai (@opensourcetech), LinuC Evangelist.
Introduction
This time, we will make the kubectl top command usable in a Kubernetes environment.
The work is performed on the Kubernetes cluster (v1.26.0) built in this earlier article.
What is kubectl top?
As described in kubectl top, it is a command that displays the CPU and memory usage of Pods and Nodes.
※https://kubernetes.io/ja/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/
kubeuser@master01:~/metrics_server$ kubectl top --help
Display Resource (CPU/Memory) usage.

 The top command allows you to see the resource consumption for nodes or pods.

 This command requires Metrics Server to be correctly configured and working on the server.

Available Commands:
  node        Display resource (CPU/memory) usage of nodes
  pod         Display resource (CPU/memory) usage of pods

Usage:
  kubectl top [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
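Once it is working, a few options are handy. The following commands are not taken from the session above, but the flags shown here exist in recent kubectl releases:

# Sort pods by CPU or memory consumption, across all namespaces
kubectl top pod -A --sort-by=cpu
kubectl top pod -n kube-system --sort-by=memory

# Show per-container usage instead of per-pod totals
kubectl top pod -n kube-system --containers

# Sort nodes by memory usage
kubectl top node --sort-by=memory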
Note that if metrics-server is not installed, the command fails with the following error and no usage data is shown.
kubeuser@master01:~/metrics_server$ kubectl top pod
error: Metrics API not available
kubeuser@master01:~/metrics_server$ kubectl top node
error: Metrics API not available
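This error simply means that nothing is registered to serve the metrics.k8s.io API group yet. A quick way to confirm that (standard kubectl commands, not from the session above):

# Before metrics-server is installed this returns NotFound
kubectl get apiservice v1beta1.metrics.k8s.io

# The raw API call fails for the same reason
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes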
Installing metrics-server
Install it using the YAML file available here, applying it either directly or after downloading it.
kubeuser@master01:~/metrics_server$ wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubeuser@master01:~/metrics_server$ kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
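If you prefer not to keep a local copy, the same manifest can be applied straight from the release URL used for wget above; note that you will still hit the certificate problem described below until the extra flag is added:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml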
However, it cannot start as-is; one small extra step is required.
kubeuser@master01:~/metrics_server$ kubectl get all -n kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
pod/calico-kube-controllers-57b57c56f-p6xds   1/1     Running   0          11d
pod/calico-node-phmkb                         1/1     Running   0          11d
pod/calico-node-wjdqx                         1/1     Running   0          11d
pod/calico-node-xdkfv                         1/1     Running   0          11d
pod/coredns-787d4945fb-6n79l                  1/1     Running   0          11d
pod/coredns-787d4945fb-dfplr                  1/1     Running   0          11d
pod/etcd-master01                             1/1     Running   3          11d
pod/kube-apiserver-master01                   1/1     Running   2          11d
pod/kube-controller-manager-master01          1/1     Running   0          11d
pod/kube-proxy-2n7b2                          1/1     Running   0          11d
pod/kube-proxy-7k425                          1/1     Running   0          11d
pod/kube-proxy-c5pkt                          1/1     Running   0          11d
pod/kube-scheduler-master01                   1/1     Running   3          11d
pod/metrics-server-6f6cdbf67d-8gh4v           0/1     Running   0          26s

NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns         ClusterIP   10.1.0.10    <none>        53/UDP,53/TCP,9153/TCP   11d
service/metrics-server   ClusterIP   10.1.34.39   <none>        443/TCP                  26s

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/calico-node   3         3         3       3            3           kubernetes.io/os=linux   11d
daemonset.apps/kube-proxy    3         3         3       3            3           kubernetes.io/os=linux   11d

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/calico-kube-controllers   1/1     1            1           11d
deployment.apps/coredns                   2/2     2            2           11d
deployment.apps/metrics-server            0/1     1            0           26s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/calico-kube-controllers-57b57c56f   1         1         1       11d
replicaset.apps/coredns-787d4945fb                  2         2         2       11d
replicaset.apps/metrics-server-6f6cdbf67d           1         1         0       26s

kubeuser@master01:~/metrics_server$ kubectl describe pods metrics-server-6f6cdbf67d-8gh4v -n kube-system
Name:                 metrics-server-6f6cdbf67d-8gh4v
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      metrics-server
Node:                 worker02/192.168.1.46
Start Time:           Sun, 26 Mar 2023 11:37:04 +0000
Labels:               k8s-app=metrics-server
                      pod-template-hash=6f6cdbf67d
Annotations:          cni.projectcalico.org/containerID: 38d980c6e10661588b5afd88dc75d37719bda93875384e1c26f7346cd6d4eaf8
                      cni.projectcalico.org/podIP: 10.0.30.110/32
                      cni.projectcalico.org/podIPs: 10.0.30.110/32,fd12:b5e0:383e:0:7bf:50a7:b256:1e4b/128
Status:               Running
IP:                   10.0.30.110
IPs:
  IP:           10.0.30.110
  IP:           fd12:b5e0:383e:0:7bf:50a7:b256:1e4b
Controlled By:  ReplicaSet/metrics-server-6f6cdbf67d
Containers:
  metrics-server:
    Container ID:  containerd://d164819a2d84f606b231515cb7730b74fa717efa2b83d4919e29bf751fbb1bf8
    Image:         registry.k8s.io/metrics-server/metrics-server:v0.6.3
    Image ID:      registry.k8s.io/metrics-server/metrics-server@sha256:c60778fa1c44d0c5a0c4530ebe83f9243ee6fc02f4c3dc59226c201931350b10
    Port:          4443/TCP
    Host Port:     0/TCP
    Args:
      --cert-dir=/tmp
      --secure-port=4443
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --kubelet-use-node-status-port
      --metric-resolution=15s
    State:          Running
      Started:      Sun, 26 Mar 2023 11:37:06 +0000
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     200Mi
    Liveness:     http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-th6g7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  tmp-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-th6g7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  77s               default-scheduler  Successfully assigned kube-system/metrics-server-6f6cdbf67d-8gh4v to worker02
  Normal   Pulled     75s               kubelet            Container image "registry.k8s.io/metrics-server/metrics-server:v0.6.3" already present on machine
  Normal   Created    75s               kubelet            Created container metrics-server
  Normal   Started    75s               kubelet            Started container metrics-server
  Warning  Unhealthy  7s (x5 over 47s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500

kubeuser@master01:~/metrics_server$ kubectl logs metrics-server-6f6cdbf67d-8gh4v -n kube-system
I0326 11:37:07.335451       1 serving.go:342] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
E0326 11:37:07.777411       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.41:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.41 because it doesn't contain any IP SANs" node="master01"
E0326 11:37:07.784121       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.46:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.46 because it doesn't contain any IP SANs" node="worker02"
E0326 11:37:07.797663       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.45:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.45 because it doesn't contain any IP SANs" node="worker01"
I0326 11:37:07.863230       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0326 11:37:07.863425       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0326 11:37:07.863746       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0326 11:37:07.863854       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0326 11:37:07.864002       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0326 11:37:07.864094       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0326 11:37:07.865437       1 secure_serving.go:267] Serving securely on [::]:4443
W0326 11:37:07.865738       1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed
I0326 11:37:07.865844       1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
I0326 11:37:07.866179       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0326 11:37:07.964353       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I0326 11:37:07.964360       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0326 11:37:07.964930       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0326 11:37:22.790510       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.41:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.41 because it doesn't contain any IP SANs" node="master01"
E0326 11:37:22.790589       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.46:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.46 because it doesn't contain any IP SANs" node="worker02"
E0326 11:37:22.795875       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.45:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.45 because it doesn't contain any IP SANs" node="worker01"
I0326 11:37:34.586548       1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0326 11:37:37.786964       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.46:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.46 because it doesn't contain any IP SANs" node="worker02"
E0326 11:37:37.796824       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.45:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.45 because it doesn't contain any IP SANs" node="worker01"
E0326 11:37:37.797289       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.41:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.41 because it doesn't contain any IP SANs" node="master01"
I0326 11:37:44.587322       1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0326 11:37:52.779593       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.45:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.45 because it doesn't contain any IP SANs" node="worker01"
E0326 11:37:52.780493       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.46:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.46 because it doesn't contain any IP SANs" node="worker02"
E0326 11:37:52.791260       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.41:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.41 because it doesn't contain any IP SANs" node="master01"
I0326 11:37:54.586488       1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0326 11:38:04.587930       1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0326 11:38:07.778400       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.45:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.45 because it doesn't contain any IP SANs" node="worker01"
E0326 11:38:07.779877       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.46:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.46 because it doesn't contain any IP SANs" node="worker02"
E0326 11:38:07.780601       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.41:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.41 because it doesn't contain any IP SANs" node="master01"
I0326 11:38:14.587349       1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0326 11:38:22.785367       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.46:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.46 because it doesn't contain any IP SANs" node="worker02"
E0326 11:38:22.795234       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.45:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.45 because it doesn't contain any IP SANs" node="worker01"
E0326 11:38:22.798048       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.41:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.41 because it doesn't contain any IP SANs" node="master01"
I0326 11:38:24.587266       1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0326 11:38:30.767041       1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0326 11:38:34.586619       1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0326 11:38:37.783809       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.41:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.41 because it doesn't contain any IP SANs" node="master01"
E0326 11:38:37.789472       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.45:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.45 because it doesn't contain any IP SANs" node="worker01"
E0326 11:38:37.791841       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.46:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.46 because it doesn't contain any IP SANs" node="worker02"
I0326 11:38:44.585860       1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0326 11:38:52.780580       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.46:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.46 because it doesn't contain any IP SANs" node="worker02"
E0326 11:38:52.786490       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.45:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.45 because it doesn't contain any IP SANs" node="worker01"
E0326 11:38:52.798278       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.41:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.41 because it doesn't contain any IP SANs" node="master01"
I0326 11:38:54.586623       1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
I0326 11:39:04.586387       1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0326 11:39:07.774446       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.46:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.46 because it doesn't contain any IP SANs" node="worker02"
E0326 11:39:07.788372       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.45:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.45 because it doesn't contain any IP SANs" node="worker01"
E0326 11:39:07.794935       1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.41:10250/metrics/resource\": x509: cannot validate certificate for 192.168.1.41 because it doesn't contain any IP SANs" node="master01"
Why metrics-server fails to start
As described here,
the kubelet serving certificates are not signed by the cluster certificate authority, so certificate validation fails when metrics-server scrapes the kubelets.
This can be worked around by disabling certificate validation, i.e. adding --kubelet-insecure-tls to the metrics-server startup options.
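If you want to see the problem for yourself, you can inspect the certificate the kubelet serves on port 10250. A minimal check, using the master01 address from this cluster (not part of the session above), might look like this:

# Dump the kubelet serving certificate and look for Subject Alternative Names;
# the scraper error above tells us it contains no IP SANs
openssl s_client -connect 192.168.1.41:10250 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"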
Fixing metrics-server and reinstalling
The change made to components.yaml is as follows.
kubeuser@master01:~/metrics_server$ cp components.yaml components_original.yaml
kubeuser@master01:~/metrics_server$ ls
components.yaml  components_original.yaml
kubeuser@master01:~/metrics_server$ vi components.yaml
kubeuser@master01:~/metrics_server$ diff components.yaml components_original.yaml
140d139
< - --kubelet-insecure-tls
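For reference, after the edit the args of the metrics-server container should look roughly like this (reconstructed from the Args shown in the kubectl describe output above plus the added flag; the exact line number in the diff will vary between releases):

    spec:
      containers:
      - name: metrics-server
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls        # added: skip verification of the kubelet serving certificate
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s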
Then apply the YAML file again.
kubeuser@master01:~/metrics_server$ kubectl apply -f components.yaml
serviceaccount/metrics-server unchanged
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged
clusterrole.rbac.authorization.k8s.io/system:metrics-server unchanged
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server unchanged
service/metrics-server unchanged
deployment.apps/metrics-server configured
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
kubeuser@master01:~/metrics_server$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-57b57c56f-p6xds   1/1     Running   0          11d
calico-node-phmkb                         1/1     Running   0          11d
calico-node-wjdqx                         1/1     Running   0          11d
calico-node-xdkfv                         1/1     Running   0          11d
coredns-787d4945fb-6n79l                  1/1     Running   0          11d
coredns-787d4945fb-dfplr                  1/1     Running   0          11d
etcd-master01                             1/1     Running   3          11d
kube-apiserver-master01                   1/1     Running   2          11d
kube-controller-manager-master01          1/1     Running   0          11d
kube-proxy-2n7b2                          1/1     Running   0          11d
kube-proxy-7k425                          1/1     Running   0          11d
kube-proxy-c5pkt                          1/1     Running   0          11d
kube-scheduler-master01                   1/1     Running   3          11d
metrics-server-6b6f9ccc7-qmtb9            1/1     Running   0          38s
kubeuser@master01:~/metrics_server$ kubectl get pods -n kube-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
calico-kube-controllers-57b57c56f-p6xds   1/1     Running   0          11d   10.0.241.65    master01   <none>           <none>
calico-node-phmkb                         1/1     Running   0          11d   192.168.1.46   worker02   <none>           <none>
calico-node-wjdqx                         1/1     Running   0          11d   192.168.1.45   worker01   <none>           <none>
calico-node-xdkfv                         1/1     Running   0          11d   192.168.1.41   master01   <none>           <none>
coredns-787d4945fb-6n79l                  1/1     Running   0          11d   10.0.241.66    master01   <none>           <none>
coredns-787d4945fb-dfplr                  1/1     Running   0          11d   10.0.241.67    master01   <none>           <none>
etcd-master01                             1/1     Running   3          11d   192.168.1.41   master01   <none>           <none>
kube-apiserver-master01                   1/1     Running   2          11d   192.168.1.41   master01   <none>           <none>
kube-controller-manager-master01          1/1     Running   0          11d   192.168.1.41   master01   <none>           <none>
kube-proxy-2n7b2                          1/1     Running   0          11d   192.168.1.45   worker01   <none>           <none>
kube-proxy-7k425                          1/1     Running   0          11d   192.168.1.46   worker02   <none>           <none>
kube-proxy-c5pkt                          1/1     Running   0          11d   192.168.1.41   master01   <none>           <none>
kube-scheduler-master01                   1/1     Running   3          11d   192.168.1.41   master01   <none>           <none>
metrics-server-6b6f9ccc7-qmtb9            1/1     Running   0          42s   10.0.30.111    worker02   <none>           <none>
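To double-check that the Metrics API is now actually being served, the following standard kubectl commands (not part of the session above) can be used; the APIService should report Available=True once the Pod is Ready:

kubectl -n kube-system rollout status deployment/metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io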
It started successfully!
With that in place, running kubectl top now prints CPU and memory usage.
kubeuser@master01:~/metrics_server$ kubectl top pod
NAME                     CPU(cores)   MEMORY(bytes)
nginx-6cbc9bb4f5-jm8mr   0m           2Mi
nginx-6cbc9bb4f5-pfhfz   0m           2Mi
nginx2-f96cfc57b-vnsfm   0m           2Mi
kubeuser@master01:~/metrics_server$ kubectl top pod -n kube-system
NAME                                      CPU(cores)   MEMORY(bytes)
calico-kube-controllers-57b57c56f-p6xds   3m           25Mi
calico-node-phmkb                         31m          79Mi
calico-node-wjdqx                         40m          78Mi
calico-node-xdkfv                         30m          78Mi
coredns-787d4945fb-6n79l                  2m           12Mi
coredns-787d4945fb-dfplr                  2m           12Mi
etcd-master01                             35m          64Mi
kube-apiserver-master01                   67m          418Mi
kube-controller-manager-master01          23m          52Mi
kube-proxy-2n7b2                          1m           14Mi
kube-proxy-7k425                          1m           15Mi
kube-proxy-c5pkt                          1m           12Mi
kube-scheduler-master01                   5m           19Mi
metrics-server-6b6f9ccc7-qmtb9            5m           15Mi
kubeuser@master01:~/metrics_server$ kubectl top node
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master01   206m         10%    2973Mi          77%
worker01   79m          3%     1512Mi          80%
worker02   71m          3%     1576Mi          84%
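Incidentally, kubectl top is just a client of the Metrics API, so the same data can be fetched directly from the aggregated API as raw JSON (standard kubectl, not from the session above):

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods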
Conclusion
On Linux you can check resource usage simply by running the top command, with no special setup; with Kubernetes, however, you need to use metrics-server.
Kubernetes itself is, at its core, a mechanism for running containers flexibly; other capabilities have to be installed as add-ons. For reference, here is what the ordinary Linux top command shows on one of the nodes:
top - 12:06:43 up 16 days,  4:12,  2 users,  load average: 0.36, 0.38, 0.43
Tasks: 172 total,   1 running, 171 sleeping,   0 stopped,   0 zombie
%Cpu(s):  4.2 us,  2.6 sy,  0.0 ni, 86.0 id,  7.1 wa,  0.0 hi,  0.2 si,  0.0 st
MiB Mem :   3924.0 total,    190.7 free,   1051.2 used,   2682.1 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   2586.7 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
2126138 root      20   0 1264992 447892  82240 S   5.6  11.1   1046:43 kube-apiserver
2126053 root      20   0   10.7g  68980  25488 S   2.7   1.7 595:35.65 etcd
2126203 root      20   0 1650012 117220  71632 S   2.7   2.9 466:50.78 kubelet
2126042 root      20   0  833468 118364  67168 S   2.3   2.9 363:08.39 kube-controller
2127618 root      20   0 1673244  62092  46320 S   1.7   1.5 316:19.14 calico-node
2126103 root      20   0  765768  57904  39276 S   1.0   1.4  72:55.90 kube-scheduler
2847531 root      20   0  755480  47772  31188 S   0.7   1.2  86:14.33 speaker
     13 root      20   0       0      0      0 S   0.3   0.0   3:19.28 ksoftirqd/0
   1792 kubeuser  20   0   17568   9924   7800 S   0.3   0.2  34:38.85 systemd
.
.
.