This is Takahiro Kujirai (@opensourcetech), LinuC Evangelist.
Introduction
This time, we'll use helm to deploy a service onto a Kubernetes cluster.
What is helm?
As the name suggests, it is a package manager for Kubernetes.
A familiar analogy: it plays the same role for Kubernetes that yum plays on RPM-based distributions and apt on Debian-based ones.
https://helm.sh/ja/
To add a bit of context, it is one of the Graduated (mature) projects of the CNCF (Cloud Native Computing Foundation).
Installing helm
Just download the binary and place it somewhere on the system (for example, /usr/local/bin).
kubeuser@kubemaster1:/tmp$ wget https://get.helm.sh/helm-v3.8.0-linux-amd64.tar.gz
--2022-01-29 14:49:14--  https://get.helm.sh/helm-v3.8.0-linux-amd64.tar.gz
Resolving get.helm.sh (get.helm.sh)... 2606:2800:247:1cb7:261b:1f9c:2074:3c, 152.199.39.108
Connecting to get.helm.sh (get.helm.sh)|2606:2800:247:1cb7:261b:1f9c:2074:3c|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13626774 (13M) [application/x-tar]
Saving to: ‘helm-v3.8.0-linux-amd64.tar.gz’

helm-v3.8.0-linux-amd64.tar.gz 100%[===========================================================================>]  13.00M  7.02MB/s    in 1.9s

2022-01-29 14:49:16 (7.02 MB/s) - ‘helm-v3.8.0-linux-amd64.tar.gz’ saved [13626774/13626774]

kubeuser@kubemaster1:/tmp$ ls -l
total 13324
-rw-rw-r-- 1 kubeuser kubeuser 13626774 Jan 24 16:30 helm-v3.8.0-linux-amd64.tar.gz
kubeuser@kubemaster1:/tmp$ tar zxvf helm-v3.8.0-linux-amd64.tar.gz
linux-amd64/
linux-amd64/helm
linux-amd64/LICENSE
linux-amd64/README.md
kubeuser@kubemaster1:/tmp$ ls
helm-v3.8.0-linux-amd64.tar.gz  systemd-private-f53571ff57534875ae3f7ac2c4728949-systemd-logind.service-0v59Yi
linux-amd64                     systemd-private-f53571ff57534875ae3f7ac2c4728949-systemd-resolved.service-xGFEih
snap.lxd                        systemd-private-f53571ff57534875ae3f7ac2c4728949-systemd-timesyncd.service-hFGQQg
kubeuser@kubemaster1:/tmp$ ls linux-amd64/
LICENSE  README.md  helm
kubeuser@kubemaster1:/tmp$ ls linux-amd64/helm
linux-amd64/helm
kubeuser@kubemaster1:/tmp$ ls linux-amd64/helm -l
-rwxr-xr-x 1 kubeuser kubeuser 45068288 Jan 24 16:18 linux-amd64/helm
kubeuser@kubemaster1:/tmp$ sudo cp -p ./linux-amd64/helm /usr/local/bin/
[sudo] password for kubeuser:
kubeuser@kubemaster1:/tmp$ ls /usr/local/bin/
helm
kubeuser@kubemaster1:/tmp$ helm --help
The Kubernetes package manager

Common actions for Helm:

- helm search:    search for charts
- helm pull:      download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts

Environment variables:

| Name                               | Description                                                                 |
|------------------------------------|-----------------------------------------------------------------------------|
| $HELM_CACHE_HOME                   | set an alternative location for storing cached files.                       |
| $HELM_CONFIG_HOME                  | set an alternative location for storing Helm configuration.                 |
| $HELM_DATA_HOME                    | set an alternative location for storing Helm data.                          |
| $HELM_DEBUG                        | indicate whether or not Helm is running in Debug mode                       |
| $HELM_DRIVER                       | set the backend storage driver. Values are: configmap, secret, memory, sql. |
| $HELM_DRIVER_SQL_CONNECTION_STRING | set the connection string the SQL storage driver should use.                |
| $HELM_MAX_HISTORY                  | set the maximum number of helm release history.                             |
| $HELM_NAMESPACE                    | set the namespace used for the helm operations.                             |
| $HELM_NO_PLUGINS                   | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.                  |
| $HELM_PLUGINS                      | set the path to the plugins directory                                       |
| $HELM_REGISTRY_CONFIG              | set the path to the registry config file.                                   |
| $HELM_REPOSITORY_CACHE             | set the path to the repository cache directory                              |
| $HELM_REPOSITORY_CONFIG            | set the path to the repositories file.                                      |
| $KUBECONFIG                        | set an alternative Kubernetes configuration file (default "~/.kube/config") |
| $HELM_KUBEAPISERVER                | set the Kubernetes API Server Endpoint for authentication                   |
| $HELM_KUBECAFILE                   | set the Kubernetes certificate authority file.                              |
| $HELM_KUBEASGROUPS                 | set the Groups to use for impersonation using a comma-separated list.       |
| $HELM_KUBEASUSER                   | set the Username to impersonate for the operation.                          |
| $HELM_KUBECONTEXT                  | set the name of the kubeconfig context.                                     |
| $HELM_KUBETOKEN                    | set the Bearer KubeToken used for authentication.                           |

Helm stores cache, configuration, and data based on the following configuration order:

- If a HELM_*_HOME environment variable is set, it will be used
- Otherwise, on systems supporting the XDG base directory specification, the XDG variables will be used
- When no other location is set a default location will be used based on the operating system

By default, the default directories depend on the Operating System. The defaults are listed below:

| Operating System | Cache Path                | Configuration Path             | Data Path               |
|------------------|---------------------------|--------------------------------|-------------------------|
| Linux            | $HOME/.cache/helm         | $HOME/.config/helm             | $HOME/.local/share/helm |
| macOS            | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm      |
| Windows          | %TEMP%\helm               | %APPDATA%\helm                 | %APPDATA%\helm          |

Usage:
  helm [command]

Available Commands:
  completion  generate autocompletion scripts for the specified shell
  create      create a new chart with the given name
  dependency  manage a chart's dependencies
  env         helm client environment information
  get         download extended information of a named release
  help        Help about any command
  history     fetch release history
  install     install a chart
  lint        examine a chart for possible issues
  list        list releases
  package     package a chart directory into a chart archive
  plugin      install, list, or uninstall Helm plugins
  pull        download a chart from a repository and (optionally) unpack it in local directory
  push        push a chart to remote
  registry    login to or logout from a registry
  repo        add, list, remove, update, and index chart repositories
  rollback    roll back a release to a previous revision
  search      search for a keyword in charts
  show        show information of a chart
  status      display the status of the named release
  template    locally render templates
  test        run tests for a release
  uninstall   uninstall a release
  upgrade     upgrade a release
  verify      verify that a chart at the given path has been signed and is valid
  version     print the client version information

Flags:
      --debug                       enable verbose output
  -h, --help                        help for helm
      --kube-apiserver string       the address and the port for the Kubernetes API server
      --kube-as-group stringArray   group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --kube-as-user string         username to impersonate for the operation
      --kube-ca-file string         the certificate authority file for the Kubernetes API server connection
      --kube-context string         name of the kubeconfig context to use
      --kube-token string           bearer token used for authentication
      --kubeconfig string           path to the kubeconfig file
  -n, --namespace string            namespace scope for this request
      --registry-config string      path to the registry config file (default "/home/kubeuser/.config/helm/registry/config.json")
      --repository-cache string     path to the file containing cached repository indexes (default "/home/kubeuser/.cache/helm/repository")
      --repository-config string    path to the file containing repository names and URLs (default "/home/kubeuser/.config/helm/repositories.yaml")

Use "helm [command] --help" for more information about a command.
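The configuration-order rules in the help text above can be checked directly from the shell. A minimal sketch (the /tmp/helm-cache path is just an illustrative value, not something Helm requires):

```shell
# If HELM_CACHE_HOME is set, it wins; otherwise Helm falls back to the
# Linux default from the table above ($HOME/.cache/helm).
export HELM_CACHE_HOME=/tmp/helm-cache
echo "${HELM_CACHE_HOME:-$HOME/.cache/helm}"
```

Unsetting the variable and running the same `echo` again would print the `$HOME/.cache/helm` default instead.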
Adding a chart repository
Register the repositories that packages (charts) will be fetched from.
Note: the stable repository appears to have moved from https://kubernetes-charts.storage.googleapis.com/ to https://charts.helm.sh/stable.
kubeuser@kubemaster1:/tmp$ helm repo list
NAME      URL
newrelic  https://helm-charts.newrelic.com
kubeuser@kubemaster1:/tmp$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
kubeuser@kubemaster1:/tmp$ helm repo list
NAME      URL
newrelic  https://helm-charts.newrelic.com
bitnami   https://charts.bitnami.com/bitnami
kubeuser@kubemaster1:/tmp$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
Error: repo "https://kubernetes-charts.storage.googleapis.com/" is no longer available; try "https://charts.helm.sh/stable" instead
kubeuser@kubemaster1:/tmp$ helm repo add stable https://charts.helm.sh/stable
"stable" has been added to your repositories
kubeuser@kubemaster1:/tmp$ helm repo list
NAME      URL
newrelic  https://helm-charts.newrelic.com
bitnami   https://charts.bitnami.com/bitnami
stable    https://charts.helm.sh/stable
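As the --repository-config default in the help output suggests, the entries added above are just stored in a plain repositories.yaml file. A self-contained sketch of what such a file looks like (the /tmp/demo-helm path is hypothetical; the real file lives under ~/.config/helm/ by default, and the real file carries extra fields):

```shell
# Build a stand-in repositories.yaml to illustrate the structure
mkdir -p /tmp/demo-helm
cat > /tmp/demo-helm/repositories.yaml <<'EOF'
repositories:
- name: bitnami
  url: https://charts.bitnami.com/bitnami
- name: stable
  url: https://charts.helm.sh/stable
EOF
# `helm repo list` is essentially a pretty-print of these name/url pairs
grep 'name:' /tmp/demo-helm/repositories.yaml
```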
Searching for charts
In helm, the resource definitions (containers and so on) to be deployed to a Kubernetes cluster are packaged as data called a chart.
You can search the repositories you have registered (`helm search repo`), or Artifact Hub, formerly Helm Hub (`helm search hub`).
kubeuser@kubemaster1:/tmp$ helm search repo nginx
NAME                              CHART VERSION  APP VERSION  DESCRIPTION
bitnami/nginx                     9.7.5          1.21.6       NGINX Open Source is a web server that can be a...
bitnami/nginx-ingress-controller  9.1.5          1.1.1        NGINX Ingress Controller is an Ingress controll...
bitnami/nginx-intel               0.1.1          0.4.7        NGINX Open Source for Intel is a lightweight se...
newrelic/simple-nginx             1.1.1          1.1          A Helm chart for installing a simple nginx
stable/nginx-ingress              1.41.3         v0.34.1      DEPRECATED! An nginx Ingress controller that us...
stable/nginx-ldapauth-proxy       0.1.6          1.13.5       DEPRECATED - nginx proxy with ldapauth
stable/nginx-lego                 0.3.1                       Chart for nginx-ingress-controller and kube-lego
bitnami/kong                      5.0.2          2.7.0        Kong is a scalable, open source API layer (aka ...
stable/gcloud-endpoints           0.1.2          1            DEPRECATED Develop, deploy, protect and monitor...
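The CHART VERSION and APP VERSION columns above come from each chart's own metadata. A chart is just a directory with a fixed layout; here is a self-contained sketch (the chart name "demo" and the /tmp path are made up for illustration):

```shell
# Minimal skeleton of a chart directory
mkdir -p /tmp/demo-chart/templates
cat > /tmp/demo-chart/Chart.yaml <<'EOF'
apiVersion: v2
name: demo
version: 0.1.0        # -> the CHART VERSION column
appVersion: "1.21.6"  # -> the APP VERSION column
EOF
: > /tmp/demo-chart/values.yaml                # defaults shown by `helm show values`
: > /tmp/demo-chart/templates/deployment.yaml  # manifests rendered at install time
ls /tmp/demo-chart
```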
You can also inspect a chart's contents with the show subcommand.
kubeuser@kubemaster1:/tmp$ helm show values bitnami/nginx
## @section Global parameters
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
##
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []

## @section Common parameters

## @param nameOverride String to partially override nginx.fullname template (will maintain the release name)
##
nameOverride: ""
## @param fullnameOverride String to fully override nginx.fullname template
##
fullnameOverride: ""
## @param kubeVersion Force target Kubernetes version (using Helm capabilities if not set)
##
kubeVersion: ""
## @param clusterDomain Kubernetes Cluster Domain
##
clusterDomain: cluster.local
## @param extraDeploy Extra objects to deploy (value evaluated as a template)
##
extraDeploy: []
## @param commonLabels Add labels to all the deployed resources
##
commonLabels: {}
## @param commonAnnotations Add annotations to all the deployed resources
##
commonAnnotations: {}

## @section NGINX parameters

## Bitnami NGINX image version
## ref: https://hub.docker.com/r/bitnami/nginx/tags/
## @param image.registry NGINX image registry
## @param image.repository NGINX image repository
## @param image.tag NGINX image tag (immutable tags are recommended)
## @param image.pullPolicy NGINX image pull policy
## @param image.pullSecrets Specify docker-registry secret names as an array
## @param image.debug Set to true if you would like to see extra information on logs
##
image:
  registry: docker.io
  repository: bitnami/nginx
  tag: 1.21.6-debian-10-r0
  ## Specify a imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ## E.g.:
  ## pullSecrets:
  ##   - myRegistryKeySecretName
  ##
  pullSecrets: []
  ## Set to true if you would like to see extra information on logs
  ##
  debug: false
## @param hostAliases Deployment pod host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
##
hostAliases: []
## Command and args for running the container (set to default if not set). Use array form
## @param command Override default container command (useful when using custom images)
## @param args Override default container args (useful when using custom images)
##
command: []
args: []
## @param extraEnvVars Extra environment variables to be set on NGINX containers
## E.g:
## extraEnvVars:
##   - name: FOO
##     value: BAR
##
extraEnvVars: []
## @param extraEnvVarsCM ConfigMap with extra environment variables
##
extraEnvVarsCM: ""
## @param extraEnvVarsSecret Secret with extra environment variables
##
extraEnvVarsSecret: ""

## @section NGINX deployment parameters

## @param replicaCount Number of NGINX replicas to deploy
##
replicaCount: 1
## @param podLabels Additional labels for NGINX pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## @param podAnnotations Annotations for NGINX pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## @param podAffinityPreset Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAffinityPreset: ""
## @param podAntiAffinityPreset Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAntiAffinityPreset: soft
## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
##
nodeAffinityPreset:
  ## @param nodeAffinityPreset.type Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
  ##
  type: ""
  ## @param nodeAffinityPreset.key Node label key to match Ignored if `affinity` is set.
  ## E.g.
  ## key: "kubernetes.io/e2e-az-name"
  ##
  key: ""
  ## @param nodeAffinityPreset.values Node label values to match. Ignored if `affinity` is set.
  ## E.g.
  ## values:
  ##   - e2e-az1
  ##   - e2e-az2
  ##
  values: []
## @param affinity Affinity for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
##
affinity: {}
## @param nodeSelector Node labels for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## @param tolerations Tolerations for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: {}
## @param priorityClassName Priority class name
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
##
priorityClassName: ""
## NGINX pods' Security Context.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enabled NGINX pods' Security Context
## @param podSecurityContext.fsGroup Set NGINX pod's Security Context fsGroup
## @param podSecurityContext.sysctls sysctl settings of the NGINX pods
##
podSecurityContext:
  enabled: false
  fsGroup: 1001
  ## sysctl settings
  ## Example:
  ## sysctls:
  ##   - name: net.core.somaxconn
  ##     value: "10000"
  ##
  sysctls: []
## NGINX containers' Security Context.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## @param containerSecurityContext.enabled Enabled NGINX containers' Security Context
## @param containerSecurityContext.runAsUser Set NGINX container's Security Context runAsUser
## @param containerSecurityContext.runAsNonRoot Set NGINX container's Security Context runAsNonRoot
##
containerSecurityContext:
  enabled: false
  runAsUser: 1001
  runAsNonRoot: true
## Configures the ports NGINX listens on
## @param containerPorts.http Sets http port inside NGINX container
## @param containerPorts.https Sets https port inside NGINX container
##
containerPorts:
  http: 8080
  https: ""
## NGINX containers' resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## We usually recommend not to specify default resources and to leave this as a conscious
## choice for the user. This also increases chances charts run on environments with little
## resources, such as Minikube. If you do want to specify resources, uncomment the following
## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
## @param resources.limits The resources limits for the NGINX container
## @param resources.requests The requested resources for the NGINX container
##
resources:
  ## Example:
  ## limits:
  ##    cpu: 100m
  ##    memory: 128Mi
  limits: {}
  ## Examples:
  ## requests:
  ##    cpu: 100m
  ##    memory: 128Mi
  requests: {}
## NGINX containers' liveness probe.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
## @param livenessProbe.enabled Enable livenessProbe
## @param livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
## @param livenessProbe.periodSeconds Period seconds for livenessProbe
## @param livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
## @param livenessProbe.failureThreshold Failure threshold for livenessProbe
## @param livenessProbe.successThreshold Success threshold for livenessProbe
##
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  timeoutSeconds: 5
  periodSeconds: 10
  failureThreshold: 6
  successThreshold: 1
## NGINX containers' readiness probe.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
## @param readinessProbe.enabled Enable readinessProbe
## @param readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
## @param readinessProbe.periodSeconds Period seconds for readinessProbe
## @param readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
## @param readinessProbe.failureThreshold Failure threshold for readinessProbe
## @param readinessProbe.successThreshold Success threshold for readinessProbe
##
readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  timeoutSeconds: 3
  periodSeconds: 5
  failureThreshold: 3
  successThreshold: 1
## @param customLivenessProbe Override default liveness probe
##
customLivenessProbe: {}
## @param customReadinessProbe Override default readiness probe
##
customReadinessProbe: {}
## Autoscaling parameters
## @param autoscaling.enabled Enable autoscaling for NGINX deployment
## @param autoscaling.minReplicas Minimum number of replicas to scale back
## @param autoscaling.maxReplicas Maximum number of replicas to scale out
## @param autoscaling.targetCPU Target CPU utilization percentage
## @param autoscaling.targetMemory Target Memory utilization percentage
##
autoscaling:
  enabled: false
  minReplicas: ""
  maxReplicas: ""
  targetCPU: ""
  targetMemory: ""
## @param extraVolumes Array to add extra volumes
##
extraVolumes: []
## @param extraVolumeMounts Array to add extra mount
##
extraVolumeMounts: []
## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
  ## @param serviceAccount.create Enable creation of ServiceAccount for nginx pod
  ##
  create: false
  ## @param serviceAccount.name The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the `common.names.fullname` template
  name: ""
  ## @param serviceAccount.annotations Annotations for service account. Evaluated as a template.
  ## Only used if `create` is `true`.
  ##
  annotations: {}
  ## @param serviceAccount.autoMount Auto-mount the service account token in the pod
  ##
  autoMount: false
## @param sidecars Sidecar parameters
## e.g:
## sidecars:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
sidecars: []
## @param sidecarSingleProcessNamespace Enable sharing the process namespace with sidecars
## This will switch pod.spec.shareProcessNamespace parameter
##
sidecarSingleProcessNamespace: false
## @param initContainers Extra init containers
##
initContainers: []
## Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
  ## @param pdb.create Created a PodDisruptionBudget
  ##
  create: false
  ## @param pdb.minAvailable Min number of pods that must still be available after the eviction
  ##
  minAvailable: 1
  ## @param pdb.maxUnavailable Max number of pods that can be unavailable after the eviction
  ##
  maxUnavailable: 0

## @section Custom NGINX application parameters

## Get the server static content from a git repository
## NOTE: This will override staticSiteConfigmap and staticSitePVC
##
cloneStaticSiteFromGit:
  ## @param cloneStaticSiteFromGit.enabled Get the server static content from a Git repository
  ##
  enabled: false
  ## Bitnami Git image version
  ## ref: https://hub.docker.com/r/bitnami/git/tags/
  ## @param cloneStaticSiteFromGit.image.registry Git image registry
  ## @param cloneStaticSiteFromGit.image.repository Git image repository
  ## @param cloneStaticSiteFromGit.image.tag Git image tag (immutable tags are recommended)
  ## @param cloneStaticSiteFromGit.image.pullPolicy Git image pull policy
  ## @param cloneStaticSiteFromGit.image.pullSecrets Specify docker-registry secret names as an array
  ##
  image:
    registry: docker.io
    repository: bitnami/git
    tag: 2.35.0-debian-10-r1
    ## Specify a imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## @param cloneStaticSiteFromGit.repository Git Repository to clone static content from
  ##
  repository: ""
  ## @param cloneStaticSiteFromGit.branch Git branch to checkout
  ##
  branch: ""
  ## @param cloneStaticSiteFromGit.interval Interval for sidecar container pull from the Git repository
  ##
  interval: 60
  ## Additional configuration for git-clone-repository initContainer
  ##
  gitClone:
    ## @param cloneStaticSiteFromGit.gitClone.command Override default container command for git-clone-repository
    ##
    command: []
    ## @param cloneStaticSiteFromGit.gitClone.args Override default container args for git-clone-repository
    ##
    args: []
  ## Additional configuration for the git-repo-syncer container
  ##
  gitSync:
    ## @param cloneStaticSiteFromGit.gitSync.command Override default container command for git-repo-syncer
    ##
    command: []
    ## @param cloneStaticSiteFromGit.gitSync.args Override default container args for git-repo-syncer
    ##
    args: []
  ## @param cloneStaticSiteFromGit.extraEnvVars Additional environment variables to set for the in the containers that clone static site from git
  ## E.g:
  ## extraEnvVars:
  ##   - name: FOO
  ##     value: BAR
  ##
  extraEnvVars: []
  ## @param cloneStaticSiteFromGit.extraVolumeMounts Add extra volume mounts for the Git containers
  ## Useful to mount keys to connect through ssh. (normally used with extraVolumes)
  ## E.g:
  ## extraVolumeMounts:
  ##   - name: ssh-dir
  ##     mountPath: /root/.ssh/
  ##
  extraVolumeMounts: []
## @param serverBlock Custom server block to be added to NGINX configuration
## PHP-FPM example server block:
## serverBlock: |-
##   server {
##     listen 0.0.0.0:8080;
##     root /app;
##     location / {
##       index index.html index.php;
##     }
##     location ~ \.php$ {
##       fastcgi_pass phpfpm-server:9000;
##       fastcgi_index index.php;
##       include fastcgi.conf;
##     }
##   }
##
serverBlock: ""
## @param existingServerBlockConfigmap ConfigMap with custom server block to be added to NGINX configuration
## NOTE: This will override serverBlock
##
existingServerBlockConfigmap: ""
## @param staticSiteConfigmap Name of existing ConfigMap with the server static site content
##
staticSiteConfigmap: ""
## @param staticSitePVC Name of existing PVC with the server static site content
## NOTE: This will override staticSiteConfigmap
##
staticSitePVC: ""

## @section LDAP parameters

## LDAP Auth Daemon Properties
## Daemon that will proxy LDAP requests between NGINX and a given LDAP Server
##
ldapDaemon:
  ## @param ldapDaemon.enabled Enable LDAP Auth Daemon proxy
  ##
  enabled: false
  ## Bitnami NGINX LDAP Auth Daemon image
  ## ref: https://hub.docker.com/r/bitnami/nginx-ldap-auth-daemon/tags/
  ## @param ldapDaemon.image.registry LDAP AUth Daemon Image registry
  ## @param ldapDaemon.image.repository LDAP Auth Daemon Image repository
  ## @param ldapDaemon.image.tag LDAP Auth Daemon Image tag (immutable tags are recommended)
  ## @param ldapDaemon.image.pullPolicy LDAP Auth Daemon Image pull policy
  ##
  image:
    registry: docker.io
    repository: bitnami/nginx-ldap-auth-daemon
    tag: 0.20200116.0-debian-10-r581
    pullPolicy: IfNotPresent
  ## @param ldapDaemon.port LDAP Auth Daemon port
  ##
  port: 8888
  ## LDAP Auth Daemon Configuration
  ##
  ## These different properties define the form of requests performed
  ## against the given LDAP server
  ##
  ## BEWARE THAT THESE VALUES WILL BE IGNORED IF A CUSTOM LDAP SERVER BLOCK
  ## ALREADY SPECIFIES THEM.
  ##
  ##
  ldapConfig:
    ## @param ldapDaemon.ldapConfig.uri LDAP Server URI, `ldap[s]:/<hostname>:<port>`
    ## Must follow the pattern -> ldap[s]:/<hostname>:<port>
    ##
    uri: ""
    ## @param ldapDaemon.ldapConfig.baseDN LDAP root DN to begin the search for the user
    ##
    baseDN: ""
    ## @param ldapDaemon.ldapConfig.bindDN DN of user to bind to LDAP
    ##
    bindDN: ""
    ## @param ldapDaemon.ldapConfig.bindPassword Password for the user to bind to LDAP
    ##
    bindPassword: ""
    ## @param ldapDaemon.ldapConfig.filter LDAP search filter for search
    ##
    filter: ""
    ## @param ldapDaemon.ldapConfig.httpRealm LDAP HTTP auth realm
    ##
    httpRealm: ""
    ## @param ldapDaemon.ldapConfig.httpCookieName HTTP cookie name to be used in LDAP Auth
    ##
    httpCookieName: ""
  ## @param ldapDaemon.nginxServerBlock [string] NGINX server block that configures LDAP communication. Overrides `ldapDaemon.ldapConfig`
  ## NGINX Configuration File containing the directives (that define how LDAP requests are performed) and tells NGINX to
  ## use the LDAP Daemon as proxy. Besides, it defines the routes that will require of LDAP auth
  ## in order to be accessed.
  ##
  ## If LDAP directives are provided, they will take precedence over
  ## the ones specified in ldapConfig.
  ##
  ## This will be evaluated as a template.
  ##
  nginxServerBlock: |-
    server {
    listen 0.0.0.0:{{ .Values.containerPorts.http }};

    # You can provide a special subPath or the root
    location = / {
        auth_request /auth-proxy;
    }

    location = /auth-proxy {
        internal;

        proxy_pass http://127.0.0.1:{{ .Values.ldapDaemon.port }};

        ###############################################################
        # YOU SHOULD CHANGE THE FOLLOWING TO YOUR LDAP CONFIGURATION  #
        ###############################################################

        # URL and port for connecting to the LDAP server
        # proxy_set_header X-Ldap-URL "ldap://YOUR_LDAP_SERVER_IP:YOUR_LDAP_SERVER_PORT";

        # Base DN
        # proxy_set_header X-Ldap-BaseDN "dc=example,dc=org";

        # Bind DN
        # proxy_set_header X-Ldap-BindDN "cn=admin,dc=example,dc=org";

        # Bind password
        # proxy_set_header X-Ldap-BindPass "adminpassword";
    }
    }
  ## @param ldapDaemon.existingNginxServerBlockSecret Name of existing Secret with a NGINX server block to use for LDAP communication
  ## Use an existing Secret holding an NGINX Configuration file that configures LDAP requests
  ## If provided, both nginxServerBlock and ldapConfig properties are ignored.
  ##
  existingNginxServerBlockSecret: ""
  ## LDAP Auth Daemon containers' liveness probe.
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ## @param ldapDaemon.livenessProbe.enabled Enable livenessProbe
  ## @param ldapDaemon.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
  ## @param ldapDaemon.livenessProbe.periodSeconds Period seconds for livenessProbe
  ## @param ldapDaemon.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
  ## @param ldapDaemon.livenessProbe.failureThreshold Failure threshold for livenessProbe
  ## @param ldapDaemon.livenessProbe.successThreshold Success threshold for livenessProbe
  ##
  livenessProbe:
    enabled: true
    initialDelaySeconds: 30
    timeoutSeconds: 5
    periodSeconds: 10
    failureThreshold: 6
    successThreshold: 1
  ## LDAP Auth Daemon containers' readiness probe.
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ## @param ldapDaemon.readinessProbe.enabled Enable readinessProbe
  ## @param ldapDaemon.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
  ## @param ldapDaemon.readinessProbe.periodSeconds Period seconds for readinessProbe
  ## @param ldapDaemon.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
  ## @param ldapDaemon.readinessProbe.failureThreshold Failure threshold for readinessProbe
  ## @param ldapDaemon.readinessProbe.successThreshold Success threshold for readinessProbe
  ##
  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    timeoutSeconds: 3
    periodSeconds: 5
    failureThreshold: 3
    successThreshold: 1
  ## @param ldapDaemon.customLivenessProbe Custom Liveness probe
  ##
  customLivenessProbe: {}
  ## @param ldapDaemon.customReadinessProbe Custom Rediness probe
  ##
  customReadinessProbe: {}

## @section Traffic Exposure parameters

## NGINX Service properties
##
service:
  ## @param service.type Service type
  ##
  type: LoadBalancer
  ## @param service.port Service HTTP port
  ##
  port: 80
  ## @param service.httpsPort Service HTTPS port
  ##
  httpsPort: 443
  ## @param service.nodePorts [object] Specify the nodePort(s) value(s) for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  nodePorts:
    http: ""
    https: ""
  ## @param service.targetPort [object] Target port reference value for the Loadbalancer service types can be specified explicitly.
  ## Listeners for the Loadbalancer can be custom mapped to the http or https service.
  ## Example: Mapping the https listener to targetPort http [http: https]
  ##
  targetPort:
    http: http
    https: https
  ## @param service.loadBalancerIP LoadBalancer service IP address
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  loadBalancerIP: ""
  ## @param service.annotations Service annotations
  ## This can be used to set the LoadBalancer service type to internal only.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  annotations: {}
  ## @param service.externalTrafficPolicy Enable client source IP preservation
  ## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
  ##
  externalTrafficPolicy: Cluster
## Configure the ingress resource that allows you to access the
## Nginx installation. Set up the URL
## ref: https://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## @param ingress.enabled Set to true to enable ingress record generation
  ##
  enabled: false
  ## DEPRECATED: Use ingress.annotations instead of ingress.certManager
  ## certManager: false
  ##
  ## @param ingress.pathType Ingress path type
  ##
  pathType: ImplementationSpecific
  ## @param ingress.apiVersion Force Ingress API version (automatically detected if not set)
  ##
  apiVersion: ""
  ## @param ingress.hostname Default host for the ingress resource
  ##
  hostname: nginx.local
  ## @param ingress.path The Path to Nginx. You may need to set this to '/*' in order to use this with ALB ingress controllers.
  ##
  path: /
  ## @param ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations.
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
  ## Use this parameter to set the required annotations for cert-manager, see
  ## ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations
  ##
  ## e.g:
  ## annotations:
  ##   kubernetes.io/ingress.class: nginx
  ##   cert-manager.io/cluster-issuer: cluster-issuer-name
  ##
  annotations: {}
  ## @param ingress.tls Create TLS Secret
  ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
  ## You can use the ingress.secrets parameter to create this TLS secret or relay on cert-manager to create it
  ##
  tls: false
  ## @param ingress.extraHosts The list of additional hostnames to be covered with this ingress record.
  ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
  ## extraHosts:
  ## - name: nginx.local
  ##   path: /
  ##
  extraHosts: []
  ## @param ingress.extraPaths Any additional arbitrary paths that may need to be added to the ingress under the main host.
  ## For example: The ALB ingress controller requires a special rule for handling SSL redirection.
  ## extraPaths:
  ## - path: /*
  ##   backend:
  ##     serviceName: ssl-redirect
  ##     servicePort: use-annotation
  ##
  extraPaths: []
  ## @param ingress.extraTls The tls configuration for additional hostnames to be covered with this ingress record.
## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls ## extraTls: ## - hosts: ## - nginx.local ## secretName: nginx.local-tls ## extraTls: [] ## @param ingress.secrets If you're providing your own certificates, please use this to add the certificates as secrets ## key and certificate should start with -----BEGIN CERTIFICATE----- or ## -----BEGIN RSA PRIVATE KEY----- ## ## name should line up with a tlsSecret set further up ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set ## ## It is also possible to create and manage the certificates outside of this helm chart ## Please see README.md for more information ## e.g: ## - name: nginx.local-tls ## key: ## certificate: ## secrets: [] ## Health Ingress parameters ## healthIngress: ## @param healthIngress.enabled Set to true to enable health ingress record generation ## enabled: false ## DEPRECATED: Use ingress.annotations instead of ingress.certManager ## certManager: false ## ## @param healthIngress.pathType Ingress path type ## pathType: ImplementationSpecific ## @param healthIngress.hostname When the health ingress is enabled, a host pointing to this will be created ## hostname: example.local ## @param healthIngress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. 
## For a full list of possible ingress annotations, please see ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md ## Use this parameter to set the required annotations for cert-manager, see ## ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations ## ## e.g: ## annotations: ## kubernetes.io/ingress.class: nginx ## cert-manager.io/cluster-issuer: cluster-issuer-name ## annotations: {} ## @param healthIngress.tls Enable TLS configuration for the hostname defined at `healthIngress.hostname` parameter ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.healthIngress.hostname }} ## You can use the healthIngress.secrets parameter to create this TLS secret, relay on cert-manager to create it, or ## let the chart create self-signed certificates for you ## tls: false ## @param healthIngress.extraHosts The list of additional hostnames to be covered with this health ingress record ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array ## E.g. ## extraHosts: ## - name: example.local ## path: / ## extraHosts: [] ## @param healthIngress.extraTls TLS configuration for additional hostnames to be covered ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls ## E.g. 
## extraTls: ## - hosts: ## - example.local ## secretName: example.local-tls ## extraTls: [] ## @param healthIngress.secrets TLS Secret configuration ## If you're providing your own certificates, please use this to add the certificates as secrets ## key and certificate should start with -----BEGIN CERTIFICATE----- or -----BEGIN RSA PRIVATE KEY----- ## name should line up with a secretName set further up ## If it is not set and you're using cert-manager, this is unneeded, as it will create the secret for you ## If it is not set and you're NOT using cert-manager either, self-signed certificates will be created ## It is also possible to create and manage the certificates outside of this helm chart ## Please see README.md for more information ## ## E.g. ## secrets: ## - name: example.local-tls ## key: ## certificate: ## secrets: [] ## @section Metrics parameters ## Prometheus Exporter / Metrics ## metrics: ## @param metrics.enabled Start a Prometheus exporter sidecar container ## enabled: false ## @param metrics.port NGINX Container Status Port scraped by Prometheus Exporter ## Defaults to specified http port port: "" ## Bitnami NGINX Prometheus Exporter image ## ref: https://hub.docker.com/r/bitnami/nginx-exporter/tags/ ## @param metrics.image.registry NGINX Prometheus exporter image registry ## @param metrics.image.repository NGINX Prometheus exporter image repository ## @param metrics.image.tag NGINX Prometheus exporter image tag (immutable tags are recommended) ## @param metrics.image.pullPolicy NGINX Prometheus exporter image pull policy ## @param metrics.image.pullSecrets Specify docker-registry secret names as an array ## image: registry: docker.io repository: bitnami/nginx-exporter tag: 0.10.0-debian-10-r34 pullPolicy: IfNotPresent ## Optionally specify an array of imagePullSecrets. ## Secrets must be manually created in the namespace. 
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ ## e.g: ## pullSecrets: ## - myRegistryKeySecretName ## pullSecrets: [] ## @param metrics.podAnnotations Additional annotations for NGINX Prometheus exporter pod(s) ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ ## podAnnotations: {} ## Container Security Context ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ ## @param metrics.securityContext.enabled Enabled NGINX Exporter containers' Security Context ## @param metrics.securityContext.runAsUser Set NGINX Exporter container's Security Context runAsUser ## securityContext: enabled: false runAsUser: 1001 ## Prometheus exporter service parameters ## service: ## @param metrics.service.port NGINX Prometheus exporter service port ## port: 9113 ## @param metrics.service.annotations [object] Annotations for the Prometheus exporter service ## annotations: prometheus.io/scrape: "true" prometheus.io/port: "{{ .Values.metrics.service.port }}" ## NGINX Prometheus exporter resource requests and limits ## ref: https://kubernetes.io/docs/user-guide/compute-resources/ ## We usually recommend not to specify default resources and to leave this as a conscious ## choice for the user. This also increases chances charts run on environments with little ## resources, such as Minikube. If you do want to specify resources, uncomment the following ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
## @param metrics.resources.limits The resources limits for the NGINX Prometheus exporter container ## @param metrics.resources.requests The requested resources for the NGINX Prometheus exporter container ## resources: ## Example: ## limits: ## cpu: 100m ## memory: 128Mi limits: {} ## Examples: ## requests: ## cpu: 100m ## memory: 128Mi requests: {} ## Prometheus Operator ServiceMonitor configuration ## serviceMonitor: ## @param metrics.serviceMonitor.enabled Creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`) ## enabled: false ## @param metrics.serviceMonitor.namespace Namespace in which Prometheus is running ## namespace: "" ## @param metrics.serviceMonitor.interval Interval at which metrics should be scraped. ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint ## e.g: ## interval: 10s ## interval: "" ## @param metrics.serviceMonitor.scrapeTimeout Timeout after which the scrape is ended ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint ## e.g: ## scrapeTimeout: 10s ## scrapeTimeout: "" ## @param metrics.serviceMonitor.selector Prometheus instance selector labels ## ref: https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#prometheus-configuration ## ## selector: ## prometheus: my-prometheus ## selector: {} ## @param metrics.serviceMonitor.additionalLabels Additional labels that can be used so PodMonitor will be discovered by Prometheus ## additionalLabels: {} ## @param metrics.serviceMonitor.relabelings RelabelConfigs to apply to samples before scraping ## relabelings: [] ## @param metrics.serviceMonitor.metricRelabelings MetricRelabelConfigs to apply to samples before ingestion ## metricRelabelings: [] ## Prometheus Operator PrometheusRule configuration ## prometheusRule: ## @param metrics.prometheusRule.enabled if `true`, creates a Prometheus Operator PrometheusRule (also requires `metrics.enabled` to be `true` 
and `metrics.prometheusRule.rules`) ## enabled: false ## @param metrics.prometheusRule.namespace Namespace for the PrometheusRule Resource (defaults to the Release Namespace) ## namespace: "" ## @param metrics.prometheusRule.additionalLabels Additional labels that can be used so PrometheusRule will be discovered by Prometheus ## additionalLabels: {} ## @param metrics.prometheusRule.rules Prometheus Rule definitions ## - alert: LowInstance ## expr: up{service="{{ template "common.names.fullname" . }}"} < 1 ## for: 1m ## labels: ## severity: critical ## annotations: ## description: Service {{ template "common.names.fullname" . }} Tomcat is down since 1m. ## summary: Tomcat instance is down. ## rules: []
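In practice, only a handful of these parameters need to be changed. As a minimal sketch (the file name and the chosen override values here are illustrative, not recommendations), a custom values file that overrides some of the parameters above might look like this:

```yaml
# custom-values.yaml -- illustrative overrides for the bitnami/nginx chart
service:
  type: NodePort          # instead of the default LoadBalancer
ingress:
  enabled: true           # generate an Ingress record
  hostname: nginx.local   # the hostname parameter shown above
metrics:
  enabled: true           # start the Prometheus exporter sidecar
```

Passing this with `helm install test-nginx bitnami/nginx -f custom-values.yaml` (or using `--set service.type=NodePort` for one-off changes) merges the overrides on top of the chart defaults.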
Deploying an application to the Kubernetes cluster with a Helm chart
Now let's deploy an application (an nginx Deployment) using helm.
kubeuser@kubemaster1:/tmp$ helm install test-nginx bitnami/nginx
NAME: test-nginx
LAST DEPLOYED: Sat Jan 29 15:00:02 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 9.7.5
APP VERSION: 1.21.6

** Please be patient while the chart is being deployed **

NGINX can be accessed through the following DNS name from within your cluster:

    test-nginx.default.svc.cluster.local (port 80)

To access NGINX from outside the cluster, follow the steps below:

1. Get the NGINX URL by running these commands:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        Watch the status with: 'kubectl get svc --namespace default -w test-nginx'

    export SERVICE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].port}" services test-nginx)
    export SERVICE_IP=$(kubectl get svc --namespace default test-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo "http://${SERVICE_IP}:${SERVICE_PORT}"
Checking the deployment
You can verify the deployment with either helm list or kubectl.
kubeuser@kubemaster1:/tmp$ helm list
NAME        NAMESPACE  REVISION  UPDATED                                  STATUS    CHART        APP VERSION
test-nginx  default    1         2022-01-29 15:00:02.831501906 +0000 UTC  deployed  nginx-9.7.5  1.21.6

kubeuser@kubemaster1:/tmp$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-78c485856b-m9kgl   1/1     Running   0          6h16m
test-nginx-6c4b694447-b2cff         1/1     Running   0          4m16s

kubeuser@kubemaster1:/tmp$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           6d6h
test-nginx         1/1     1            1           4m24s

kubeuser@kubemaster1:/tmp$ kubectl get svc
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes             ClusterIP      10.96.0.1        <none>        443/TCP        59d
nginx-deployment-svc   NodePort       10.106.207.100   <none>        80:32222/TCP   6d
test-nginx             LoadBalancer   10.105.123.34    <pending>     80:32168/TCP   32m
As shown above, a Deployment (with its Pod) and a Service have been deployed. Note that the EXTERNAL-IP of the test-nginx LoadBalancer Service stays <pending> because this cluster has no LoadBalancer implementation (such as a cloud provider integration or MetalLB); until one is available, the Service can still be reached through the node port shown (32168).
With helm template, the chart can be rendered as plain YAML manifests.
Using ">" (shell redirection), the output can also be saved to a file.
kubeuser@kubemaster1:/tmp$ helm template test-nginx bitnami/nginx
---
# Source: nginx/templates/server-block-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-nginx-server-block
  labels:
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-9.7.5
    app.kubernetes.io/instance: test-nginx
    app.kubernetes.io/managed-by: Helm
data:
  server-blocks-paths.conf: |-
    include "/opt/bitnami/nginx/conf/server_blocks/ldap/*.conf";
    include "/opt/bitnami/nginx/conf/server_blocks/common/*.conf";
---
# Source: nginx/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: test-nginx
  labels:
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-9.7.5
    app.kubernetes.io/instance: test-nginx
    app.kubernetes.io/managed-by: Helm
spec:
  type: LoadBalancer
  externalTrafficPolicy: "Cluster"
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app.kubernetes.io/name: nginx
    app.kubernetes.io/instance: test-nginx
---
# Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx
  labels:
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-9.7.5
    app.kubernetes.io/instance: test-nginx
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
      app.kubernetes.io/instance: test-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx
        helm.sh/chart: nginx-9.7.5
        app.kubernetes.io/instance: test-nginx
        app.kubernetes.io/managed-by: Helm
    spec:
      automountServiceAccountToken: false
      shareProcessNamespace: false
      serviceAccountName: default
      affinity:
        podAffinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: nginx
                    app.kubernetes.io/instance: test-nginx
                namespaces:
                  - "default"
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
      containers:
        - name: nginx
          image: docker.io/bitnami/nginx:1.21.6-debian-10-r0
          imagePullPolicy: "IfNotPresent"
          env:
            - name: BITNAMI_DEBUG
              value: "false"
          ports:
            - name: http
              containerPort: 8080
          livenessProbe:
            tcpSocket:
              port: http
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            tcpSocket:
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 3
            successThreshold: 1
            failureThreshold: 3
          resources:
            limits: {}
            requests: {}
          volumeMounts:
      volumes:
        - name: nginx-server-block-paths
          configMap:
            name: test-nginx-server-block
            items:
              - key: server-blocks-paths.conf
                path: server-blocks-paths.conf
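The redirected file is a single multi-document YAML stream (documents separated by "---"), so individual manifests can be pulled back out with standard text tools. A self-contained sketch (using a tiny stand-in file here instead of the real redirected output):

```shell
# Tiny stand-in for the redirected "helm template ... > file" output.
cat > sample.yaml <<'EOF'
---
kind: ConfigMap
---
kind: Service
---
kind: Deployment
EOF

# Print only the 2nd document (the Service) from the multi-document stream.
awk '/^---$/{n++; next} n==2' sample.yaml
```

The same awk one-liner works on the real saved file; change n==2 to pick a different document.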
Conclusion
My impression after trying it:
It is an even simpler and easier-to-use tool than I expected.
(Having experienced the effort of hand-writing YAML and deploying it, perhaps I appreciate its value all the more...)