From 8ec4706285f974d8316236049f8e747b1c1c9293 Mon Sep 17 00:00:00 2001 From: thekatstevens Date: Mon, 29 Jul 2024 13:59:23 +0930 Subject: [PATCH] Update k8s target docs with new copies, and redirect old locations --- .../deployment-targets/kubernetes/index.md | 33 +- .../automated-installation.md | 218 +--------- .../kubernetes-agent/ha-cluster-support.md | 62 +-- .../kubernetes/kubernetes-agent/index.md | 261 +----------- .../kubernetes-agent/permissions.md | 125 +----- .../kubernetes/kubernetes-agent/storage.md | 99 +---- .../kubernetes-agent/troubleshooting.md | 69 +-- .../kubernetes/kubernetes-api/index.md | 400 +----------------- .../kubernetes-api/openshift/index.md | 73 +--- .../kubernetes-api/rancher/index.md | 62 +-- src/pages/docs/kubernetes/targets/index.md | 8 +- .../automated-installation.md | 213 ++++++++++ .../kubernetes-agent/ha-cluster-support.md | 57 +++ .../targets/kubernetes-agent/index.md | 158 ++++++- .../{permissions/index.md => permissions.md} | 6 +- .../{storage/index.md => storage.md} | 48 ++- .../kubernetes-agent/troubleshooting.md | 64 +++ .../targets/kubernetes-api/index.md | 21 +- 18 files changed, 604 insertions(+), 1373 deletions(-) create mode 100644 src/pages/docs/kubernetes/targets/kubernetes-agent/automated-installation.md create mode 100644 src/pages/docs/kubernetes/targets/kubernetes-agent/ha-cluster-support.md rename src/pages/docs/kubernetes/targets/kubernetes-agent/{permissions/index.md => permissions.md} (99%) rename src/pages/docs/kubernetes/targets/kubernetes-agent/{storage/index.md => storage.md} (59%) create mode 100644 src/pages/docs/kubernetes/targets/kubernetes-agent/troubleshooting.md diff --git a/src/pages/docs/infrastructure/deployment-targets/kubernetes/index.md b/src/pages/docs/infrastructure/deployment-targets/kubernetes/index.md index ce25746b98..f314c9b12b 100644 --- a/src/pages/docs/infrastructure/deployment-targets/kubernetes/index.md +++ b/src/pages/docs/infrastructure/deployment-targets/kubernetes/index.md @@ -1,28 +1,9 @@ --- -layout: src/layouts/Default.astro -pubDate: 2023-01-01 -modDate: 2024-04-24 -title: Kubernetes -navTitle: Overview -navSection: Kubernetes -description: Kubernetes deployment targets -navOrder: 50 +layout: src/layouts/Redirect.astro +title: Redirect +redirect: docs/kubernetes/targets +pubDate: 2024-07-29 +navSearch: false +navSitemap: false +navMenu: false --- - -There are two different deployment targets for deploying to Kubernetes using Octopus Deploy, the [Kubernetes Agent](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent) and the [Kubernetes API](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api) targets. - -The following table summarizes the key differences between the two targets. - -| | [Kubernetes Agent](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent) | [Kubernetes API](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api) | -| :--------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------- | -| Connection method | [Polling agent](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication#polling-tentacles) in cluster | Direct API communication | -| Setup complexity | Generally simpler | Requires more setup | -| Security | No need to configure firewall
No need to provide external access to cluster | Depends on the cluster configuration | -| Requires workers | No | Yes | -| Requires public IP | No | Yes | -| Requires service account in Octopus | No | Yes | -| Limit deployments to a namespace | Yes | No | -| Planned support for upcoming observability features | Yes | No | -| Recommended usage scenario | | If you cannot install an agent on a cluster | -| Step configuration | Simple (you need to specify target tag) | More complex (requires target tags, workers, execution container images) | -| Maintenance | credentials | | diff --git a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/automated-installation.md b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/automated-installation.md index 4b8a423ed3..56a189bb5c 100644 --- a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/automated-installation.md +++ b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/automated-installation.md @@ -1,213 +1,9 @@ --- -layout: src/layouts/Default.astro -pubDate: 2024-05-14 -modDate: 2024-05-14 -title: Automated Installation -description: How to automate the installation and management of the Kubernetes agent -navOrder: 50 +layout: src/layouts/Redirect.astro +title: Redirect +redirect: docs/kubernetes/targets/kubernetes-agent/automated-installation +pubDate: 2024-07-29 +navSearch: false +navSitemap: false +navMenu: false --- - -## Automated installation via Terraform -The Kubernetes agent can be installed and managed using a combination of the Kubernetes agent [Helm chart <= v1.1.0](https://hub.docker.com/r/octopusdeploy/kubernetes-agent), [Octopus Deploy <= v0.20.0 Terraform provider](https://registry.terraform.io/providers/OctopusDeployLabs/octopusdeploy/latest) and/or [Helm Terraform provider](https://registry.terraform.io/providers/hashicorp/helm). - -### Octopus Deploy & Helm -Using a combination of the Octopus Deploy and Helm providers you can completely manage the Kubernetes agent via Terraform. - -:::div{.warning} -To ensure that the Kubernetes agent and the deployment target within Octopus associate with each other correctly the some of the Helm chart values and deployment target properties must meet the following criteria: -`octopusdeploy_kubernetes_agent_deployment_target.name` and `agent.targetName` have the same values. -`octopusdeploy_kubernetes_agent_deployment_target.uri` and `agent.serverSubscriptionId` have the same values. -`octopusdeploy_kubernetes_agent_deployment_target.thumbprint` is the thumbprint calculated from the certificate used in `agent.certificate`. 
-::: - -```hcl -terraform { - required_providers { - octopusdeploy = { - source = "octopus.com/com/octopusdeploy" - version = "0.20.0" - } - - helm = { - source = "hashicorp/helm" - version = "2.13.2" - } - } -} - -locals { - octopus_api_key = "API-XXXXXXXXXXXXXXXX" - octopus_address = "https://myinstance.octopus.app" - octopus_polling_address = "https://polling.myinstance.octopus.app" -} - -provider "helm" { - kubernetes { - # Configure authentication for me - } -} - -provider "octopusdeploy" { - address = local.octopus_address - api_key = local.octopus_api_key -} - -resource "octopusdeploy_space" "agent_space" { - name = "agent space" - space_managers_teams = ["teams-everyone"] -} - -resource "octopusdeploy_environment" "dev_env" { - name = "Development" - space_id = octopusdeploy_space.agent_space.id -} - -resource "octopusdeploy_polling_subscription_id" "agent_subscription_id" {} -resource "octopusdeploy_tentacle_certificate" "agent_cert" {} - -resource "octopusdeploy_kubernetes_agent_deployment_target" "agent" { - name = "agent-one" - space_id = octopusdeploy_space.agent_space.id - environments = [octopusdeploy_environment.dev_env.id] - roles = ["role-1", "role-2", "role-3"] - thumbprint = octopusdeploy_tentacle_certificate.agent_cert.thumbprint - uri = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri -} - -resource "helm_release" "octopus_agent" { - name = "octopus-agent-release" - repository = "oci://registry-1.docker.io" - chart = "octopusdeploy/kubernetes-agent" - version = "1.*.*" - atomic = true - create_namespace = true - namespace = "octopus-agent-target" - - set { - name = "agent.acceptEula" - value = "Y" - } - - set { - name = "agent.targetName" - value = octopusdeploy_kubernetes_agent_deployment_target.agent.name - } - - set_sensitive { - name = "agent.serverApiKey" - value = local.octopus_api_key - } - - set { - name = "agent.serverUrl" - value = local.octopus_address - } - - set { - name = "agent.serverCommsAddress" - value = local.octopus_polling_address - } - - set { - name = "agent.serverSubscriptionId" - value = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri - } - - set_sensitive { - name = "agent.certificate" - value = octopusdeploy_tentacle_certificate.agent_cert.base64 - } - - set { - name = "agent.space" - value = octopusdeploy_space.agent_space.name - } - - set_list { - name = "agent.targetEnvironments" - value = octopusdeploy_kubernetes_agent_deployment_target.agent.environments - } - - set_list { - name = "agent.targetRoles" - value = octopusdeploy_kubernetes_agent_deployment_target.agent.roles - } -} -``` - -### Helm -The Kubernetes agent can be installed using just the Helm provider but the associated deployment target that is created in Octopus when the agent registers itself cannot be managed solely using the Helm provider, as the Helm chart values relating to the deployment target are only used on initial installation and any modifications to them will not trigger an update to the deployment target unless you perform a complete reinstall of the agent. This option is useful if you plan on managing the configuration of the deployment target via means such as the Portal or API. 
- -```hcl -terraform { - required_providers { - helm = { - source = "hashicorp/helm" - version = "2.13.2" - } - } -} - -provider "helm" { - kubernetes { - # Configure authentication for me - } -} - -locals { - octopus_api_key = "API-XXXXXXXXXXXXXXXX" - octopus_address = "https://myinstance.octopus.app" - octopus_polling_address = "https://polling.myinstance.octopus.app" -} - -resource "helm_release" "octopus_agent" { - name = "octopus-agent-release" - repository = "oci://registry-1.docker.io" - chart = "octopusdeploy/kubernetes-agent" - version = "1.*.*" - atomic = true - create_namespace = true - namespace = "octopus-agent-target" - - set { - name = "agent.acceptEula" - value = "Y" - } - - set { - name = "agent.targetName" - value = "octopus-agent" - } - - set_sensitive { - name = "agent.serverApiKey" - value = local.octopus_api_key - } - - set { - name = "agent.serverUrl" - value = local.octopus_address - } - - set { - name = "agent.serverCommsAddress" - value = local.octopus_polling_address - } - - set { - name = "agent.space" - value = "Default" - } - - set_list { - name = "agent.targetEnvironments" - value = ["Development"] - } - - - set_list { - name = "agent.targetRoles" - value = ["Role-1"] - } -} -``` \ No newline at end of file diff --git a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/ha-cluster-support.md b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/ha-cluster-support.md index cb73394a9b..3e35b5d95b 100644 --- a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/ha-cluster-support.md +++ b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/ha-cluster-support.md @@ -1,57 +1,9 @@ --- -layout: src/layouts/Default.astro -pubDate: 2024-05-14 -modDate: 2024-05-14 -title: HA Cluster Support -description: How to install/update the agent when running Octopus in an HA Cluster -navOrder: 60 +layout: src/layouts/Redirect.astro +title: Redirect +redirect: docs/kubernetes/targets/kubernetes-agent/ha-cluster-support +pubDate: 2024-07-29 +navSearch: false +navSitemap: false +navMenu: false --- - -## Octopus Deploy HA Cluster - -Similarly to Polling Tentacles, the Kubernetes agent must have a URL for each individual node in the HA Cluster so that it receive commands from all clusters. These URLs must be provided when registering the agent or some deployments may fail depending on which node the tasks are executing. - -To read more about selecting the right URL for your nodes, see [Polling Tentacles and Kubernetes agents with HA](/docs/administration/high-availability/maintain/polling-tentacles-with-ha). - -## Agent Installation on an HA Cluster - -### Octopus Deploy 2024.3+ - -To make things easier, Octopus will detect when it's running HA and show an extra configuration page in the Kubernetes agent creation wizard which asks you to give a unique URL for each cluster node. - -:::figure -![Kubernetes Agent HA Cluster Configuration Page](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-ha-cluster-configuration-page.png) -::: - -Once these values are provided the generated helm upgrade command will configure your new agent to receive commands from all nodes. - -### Octopus Deploy 2024.2 - -To install the agent with Octopus Deploy 2024.2 you need to adjust the Helm command produced by the wizard before running it. - -1. Use the wizard to produce the Helm command to install the agent. - 1. 
You may need to provide a ServerCommsAddress: you can just provide any valid URL to progress the wizard. -2. Replace the `--set agent.serverCommsAddress="..."` property with -``` ---set agent.serverCommsAddresses="{https://:/,https://:/,https://:/}" -``` -where each `:` is a unique address for an individual node. - -3. Execute the Helm command in a terminal connected to the target cluster. - -:::div{.warning} -The new property name is `agent.serverCommsAddresses`. Note that "Addresses" is plural. -::: - -## Upgrading the Agent after Adding/Removing Cluster nodes - -If you add or remove cluster nodes, you need to update your agent's configuration so that it continues to connect to all nodes in the cluster. To do this, you can simply run a helm upgrade command with the urls of all current cluster nodes. The agent will take remove any old urls and replace them with the provided ones. - -```bash -helm upgrade --atomic \ ---reuse-values \ ---set agent.serverCommsAddresses="{https://:/,https://:/,https://:/}" \ ---namespace \ - \ -oci://registry-1.docker.io/octopusdeploy/kubernetes-agent -``` diff --git a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/index.md b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/index.md index 684a374b9d..7c33ad4ae7 100644 --- a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/index.md +++ b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/index.md @@ -1,256 +1,9 @@ --- -layout: src/layouts/Default.astro -pubDate: 2024-04-22 -modDate: 2024-07-02 -title: Kubernetes agent -navTitle: Overview -navSection: Kubernetes agent -description: How to configure a Kubernetes agent as a deployment target in Octopus -navOrder: 10 +layout: src/layouts/Redirect.astro +title: Redirect +redirect: docs/kubernetes/targets/kubernetes-agent +pubDate: 2024-07-29 +navSearch: false +navSitemap: false +navMenu: false --- - -Kubernetes agent targets are a mechanism for executing [Kubernetes steps](/docs/deployments/kubernetes) from inside the target Kubernetes cluster, rather than via an external API connection. - -Similar to the [Octopus Tentacle](/docs/infrastructure/deployment-targets/tentacle), the Kubernetes agent is a small, lightweight application that is installed into the target Kubernetes cluster. - -## Benefits of the Kubernetes agent - -The Kubernetes agent provides a number of improvements over the [Kubernetes API](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api) target: - -### Polling communication - -The agent uses the same [polling communication](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication#polling-tentacles) protocol as [Octopus Tentacle](/docs/infrastructure/deployment-targets/tentacle). It lets the agent initiate the connection from the cluster to Octopus Server, solving network access issues such as publicly addressable clusters. - -### In-cluster authentication - -As the agent is already running inside the target cluster, Octopus Server no longer needs authentication credentials to the cluster to perform deployments. It can use the in-cluster authentication support of Kubernetes to run deployments using Kubernetes Service Accounts and Kubernetes RBAC local to the cluster. - -### Cluster-aware tooling - -As the agent is running in the cluster, it can retrieve the cluster's version and correctly use tooling that's specific to that version. 
You also need a lot less tooling as there are no longer any requirements for custom authentication plugins. See the [agent tooling](#agent-tooling) section for more details. - -## Requirements - -The Kubernetes agent follows [semantic versioning](https://semver.org/), so a major agent version is locked to a Octopus Server version range. Updating to the latest major agent version requires updating to a supported Octopus Server. The supported versions for each agent major version are: - -| Kubernetes agent | Octopus Server | Kubernetes cluster | -| ---------------- | ------------------------ | -------------------- | -| 1.\*.\* | **2024.2.6580** or newer | **1.26** to **1.29** | - -Additionally, the Kubernetes agent only supports **Linux AMD64** and **Linux ARM64** Kubernetes nodes. - -## Installing the Kubernetes agent - -The Kubernetes agent is installed using [Helm](https://helm.sh) via the [octopusdeploy/kubernetes-agent](https://hub.docker.com/r/octopusdeploy/kubernetes-agent) chart. - -To simplify this, there is an installation wizard in Octopus to generate the required values. - -:::div{.warning} -Helm will use your current kubectl config, so make sure your kubectl config is pointing to the correct cluster before executing the following helm commands. -You can see the current kubectl config by executing: -```bash -kubectl config view -``` -::: - -### Configuration - -1. Navigate to **Infrastructure ➜ Deployment Targets**, and click **Add Deployment Target**. -2. Select **KUBERNETES** and click **ADD** on the Kubernetes Agent card. -3. This launches the Add New Kubernetes Agent dialog - -:::figure -![Kubernetes Agent Wizard Config Page](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png) -::: - -1. Enter a unique display name for the target. This name is used to generate the Kubernetes namespace, as well as the Helm release name -2. Select at least one [environment](/docs/infrastructure/environments) for the target. -3. Select at least one [target tag](/docs/infrastructure/deployment-targets/target-tags) for the target. -4. Optionally, add the name of an existing [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for the agent to use. The storage class must support the ReadWriteMany [access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). -If no storage class name is added, the default Network File System (NFS) storage will be used. - -#### Advanced options - -:::figure -![Kubernetes Agent default namespace](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-default-namespace.png) -::: - -You can choose a default Kubernetes namespace that resources are deployed to. This is only used if the step configuration or Kubernetes manifests don’t specify a namespace. - -### NFS CSI driver - -If no [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/) name is set, the default NFS storage pod will be used. This runs a small NFS pod next to the agent pod and provides shared storage to the agent and script pods. - -A requirement of using the NFS pod is the installation of the [NFS CSI Driver](https://github.com/kubernetes-csi/csi-driver-nfs). This can be achieved by executing the presented helm command in a terminal connected to the target Kubernetes cluster. 
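The wizard (shown in the screenshot below) presents the exact command to run for your cluster. As a rough sketch of what that command does, the driver is installed from its public Helm chart into the `kube-system` namespace; the repository URL and release name here follow the NFS CSI Driver's own install instructions and may differ slightly from the command the wizard generates:

```bash
# Representative only - prefer the exact command presented by the wizard.
# Adds the upstream NFS CSI driver chart repository and installs the driver
# into the kube-system namespace of the current kubectl context.
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system
```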
- -:::figure -![Kubernetes Agent Wizard NFS CSI Page](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-nfs.png) -::: - -:::div{.warning} -If you receive an error with the text `failed to download` or `no cached repo found` when attempting to install the NFS CSI driver via helm, try executing the following command and then retrying the install command: -```bash -helm repo update -``` -::: - -### Installation helm command - -At the end of the wizard, Octopus generates a Helm command that you copy and paste into a terminal connected to the target cluster. After it's executed, Helm installs all the required resources and starts the agent. - -:::figure -![Kubernetes Agent Wizard Helm command Page](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png) -::: - -:::div{.hint} -The helm command includes a 1 hour bearer token that is used when the agent first initializes, to register itself with Octopus Server. -::: - -:::div{.hint} -The terminal Kubernetes context must have enough permissions to create namespaces and install resources into that namespace. If you wish to install the agent into an existing namespace, remove the `--create-namespace` flag and change the value after `--namespace` -::: - -If left open, the installation dialog waits for the agent to establish a connection and run a health check. Once successful, the Kubernetes agent target is ready for use! - -:::figure -![Kubernetes Agent Wizard successful installation](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-success.png) -::: - -:::div{.hint} -A successful health check indicates that deployments can successfully be executed. -::: - -## Configuring the agent with Tenants - -While the wizard doesn't support selecting Tenants or Tenant tags, the agent can be configured for tenanted deployments in two ways: - -1. Use the Deployment Target settings UI at **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Settings** to add a Tenant and set the Tenanted Deployment Participation as required. This is done after the agent has successfully installed and registered. - -:::figure -![Kubernetes Agent ](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-settings-page-tenants.png) -::: - -2. Set additional variables in the helm command to allow the agent to register itself with associated Tenants or Tenant tags. You also need to provider a value for the `TenantedDeploymentParticipation` value. Possible values are `Untenanted` (default), `Tenanted`, and `TenantedOrUntenanted`. - -example to add these values: -```bash ---set agent.tenants="{,}" \ ---set agent.tenantTags="{,}" \ ---set agent.tenantedDeploymentParticipation="TenantedOrUntenanted" \ -``` - -:::div{.hint} -You don't need to provide both Tenants and Tenant Tags, but you do need to provider the tenanted deployment participation value. 
-::: - -In a full command: -```bash -helm upgrade --install --atomic \ ---set agent.acceptEula="Y" \ ---set agent.targetName="" \ ---set agent.serverUrl="" \ ---set agent.serverCommsAddress="" \ ---set agent.space="Default" \ ---set agent.targetEnvironments="{,}" \ ---set agent.targetRoles="{,}" \ ---set agent.tenants="{,}" \ ---set agent.tenantTags="{,}" \ ---set agent.tenantedDeploymentParticipation="TenantedOrUntenanted" \ ---set agent.bearerToken="" \ ---version "1.*.*" \ ---create-namespace --namespace \ - \ -oci://registry-1.docker.io/octopusdeploy/kubernetes-agent -``` - -## Trusting custom/internal Octopus Server certificates - -:::div{.hint} -Server certificate support was added in Kubernetes agent 1.7.0 -::: - -It is common for organizations to have their Octopus Deploy server hosted in an environment where it has an SSL/TLS certificate that is not part of the global certificate trust chain. As a result, the Kubernetes agent will fail to register with the target server due to certificate errors. A typical error looks like this: - -``` -2024-06-21 04:12:01.4189 | ERROR | The following certificate errors were encountered when establishing the HTTPS connection to the server: RemoteCertificateNameMismatch, RemoteCertificateChainErrors -Certificate subject name: CN=octopus.corp.domain -Certificate thumbprint: 42983C1D517D597B74CDF23F054BBC106F4BB32F -``` - -To resolve this, you need to provide the Kubernetes agent with a base64-encoded string of the public key of the certificate in either `.pem` or `.crt` format. When viewed as text, this will look similar to this: - -``` ------BEGIN CERTIFICATE----- -MII... ------END CERTIFICATE----- -``` - -Once encoded, this string can be provided as part of the agent installation helm command via the `agent.serverCertificate` helm value. - -To include this in the installation command, add the following to the generated installation command: - -```bash ---set agent.serverCertificate="" -``` - -## Agent tooling - -By default, the agent will look for a [container image](/docs/projects/steps/execution-containers-for-workers) for the workload it's executing against the cluster. If one isn't specified, Octopus will execute the Kubernetes workload using the `octopusdeploy/kubernetes-agent-tools-base` container. It will correctly select the version of the image that's specific to the cluster's version. - -This image contains the minimum required tooling to run Kubernetes workloads for Octopus Deploy, namely: - -- `kubectl` -- `helm` -- `powershell` - -## Upgrading the Kubernetes agent - -The Kubernetes agent can be upgraded automatically by Octopus Server, manually in the Octopus portal or via a `helm` command. - -### Automatic updates - -:::div{.hint} -Automatic updating was added in 2024.2.8584 -::: - -By default, the Kubernetes agent is automatically updated by Octopus Server when a new version is released. These version checks typically occur after a health check. When an update is required, Octopus will start a task to update the agent to the latest version. - -This behavior is controlled by the [Machine Policy](/docs/infrastructure/deployment-targets/machine-policies) associated with the agent. You can change this behavior to **Manually** in the [Machine policy settings](/docs/infrastructure/deployment-targets/machine-policies#configure-machine-updates). 
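If you want to confirm which chart version is currently installed in the cluster, for example before or after an automatic update, a standard Helm query is enough. The namespace below is a placeholder; use the namespace the agent was installed into:

```bash
# Lists the agent release along with its chart and app versions.
# Replace the namespace with the one used in the installation command.
helm list --namespace octopus-agent-<target-name>
```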
- -### Manual updating via Octopus portal - -To check if a Kubernetes agent can be manually upgraded, navigate to the **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Connectivity** page. If the agent can be upgraded, there will be an *Upgrade available* banner. Clicking **Upgrade to latest** button will trigger the upgrade via a new task. If the upgrade fails, the previous version of the agent is restored. - -:::figure -![Kubernetes Agent updated interface](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-upgrade-portal.png) -::: - -### Helm upgrade command - -To upgrade a Kubernetes agent via `helm`, note the following fields from the **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Connectivity** page: -* Helm Release Name -* Namespace - -Then, from a terminal connected to the cluster containing the instance, execute the following command: - -```bash -helm upgrade --atomic --namespace NAMESPACE HELM_RELEASE_NAME oci://registry-1.docker.io/octopusdeploy/kubernetes-agent -``` -__Replace NAMESPACE and HELM_RELEASE_NAME with the values noted__ - -If after the upgrade command has executed, you find that there is issues with the agent, you can rollback to the previous helm release by executing: - -```bash -helm rollback --namespace NAMESPACE HELM_RELEASE_NAME -``` - - -## Uninstalling the Kubernetes agent - -To fully remove the Kubernetes agent, you need to delete the agent from the Kubernetes cluster as well as delete the deployment target from Octopus Deploy - -The deployment target deletion confirmation dialog will provide you with the commands to delete the agent from the cluster.Once these have been successfully executed, you can then click **Delete** and delete the deployment target. - -:::figure -![Kubernetes Agent delete dialog](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-delete-dialog.png) -::: diff --git a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/permissions.md b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/permissions.md index 5a31e5511a..32a94673f4 100644 --- a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/permissions.md +++ b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/permissions.md @@ -1,120 +1,9 @@ --- -layout: src/layouts/Default.astro -pubDate: 2024-04-29 -modDate: 2024-04-29 -title: Permissions -description: Information about what permissions are required and how to adjust them -navOrder: 20 +layout: src/layouts/Redirect.astro +title: Redirect +redirect: docs/kubernetes/targets/kubernetes-agent/permissions +pubDate: 2024-07-29 +navSearch: false +navSitemap: false +navMenu: false --- - -The Kubernetes agent uses service accounts to manage access to cluster objects. - -There are 3 main components that run with different permissions in the Kubernetes agent: -- **Agent Pod** - This is the main component and is responsible for receiving work from Octopus Server and scheduling it in the cluster. -- **Script Pods** - These are run to execute work on the cluster. When Octopus issues work to the agent, the Tentacle will schedule a pod to run the script to execute the required work. These are short-lived, single-use pods which are removed by Tentacle when they are complete. -- **NFS Server Pod** - This optional component is used if no StorageClass is specified during installation. 
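Each of these components runs under its own service account in the agent's namespace. To see the accounts the Helm chart created, a quick kubectl check works; the namespace below is a placeholder:

```bash
# Lists the service accounts created by the Kubernetes agent Helm chart.
kubectl get serviceaccounts --namespace octopus-agent-<target-name>
```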
- -# Agent Pod Permissions - -The agent pod uses a service account which only allows the agent to create, view and modify pods, pod logs, config maps, and secrets in the agent namespace. Adjusting these permissions is not supported. - -| Variable Name | Description | Default Value | -|:-----------------------------------|:-----------------------------------------|:-------------------------| -| `agent.serviceAccount.name` | The name of the agent service account | `-tentacle` | -| `agent.serviceAccount.annotations` | Annotations given to the service account | `[]` | - -# Script Pod Permissions - -By default, the script pods (the pods which run your deployment steps) are given cluster wide admin access to deploy any and all cluster objects in any namespaces as configured in your deployment processes. - -The service account for script pods can be customized in a few ways: - -| Variable Name | Description | Default Value | -|:----------------------------------------------|:-----------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `scriptPods.serviceAccount.targetNamespaces` | Limit the namespaces that the service account can interact with. | `[]`
(When empty, all namespaces are allowed.) |
| `scriptPods.serviceAccount.clusterRole.rules` | Give the service account custom rules | - apiGroups:<br>&nbsp;&nbsp;- '\*'<br>&nbsp;&nbsp;resources:<br>&nbsp;&nbsp;- '\*'<br>&nbsp;&nbsp;verbs:<br>&nbsp;&nbsp;- '\*'<br>- nonResourceURLs:<br>&nbsp;&nbsp;- '\*'<br>&nbsp;&nbsp;verbs:<br>&nbsp;&nbsp;- '\*' |
| `scriptPods.serviceAccount.name` | The name of the scriptPods service account | `-tentacle` |
| `scriptPods.serviceAccount.annotations` | Annotations given to the service account | `[]` |

### Examples
-Target Namespaces - -`scriptPods.serviceAccount.targetNamespaces` - -
- -**command:** -```bash -helm upgrade --install --atomic \ ---set scriptPods.serviceAccount.targetNamespaces="{development,preproduction}" \ ---set agent.acceptEula="Y" \ ---set agent.targetName="Nonproduction Agent" \ ---set agent.serverUrl="http://localhost:5000/" \ ---set agent.serverCommsAddress="http://localhost:10943/" \ ---set agent.space="Default" \ ---set agent.targetEnvironments="{Development,Preproduction}" \ ---set agent.targetRoles="{k8s-cluster-tag}" \ ---set agent.bearerToken="XXXX" \ ---version "1.*.*" \ ---create-namespace --namespace octopus-agent-my-agent \ -my-agent\ -oci://registry-1.docker.io/octopusdeploy/kubernetes-agent -``` -
- -
-Cluster Role Rules - -`scriptPods.serviceAccount.clusterRole.rules` - -
- -**values.yaml:** -```yaml -scriptPods: - serviceAccount: - clusterRole: - rules: - - apiGroups: - - '*' - resources: - - 'configmaps' - - 'deployments' - - 'services' - verbs: - - '*' - - nonResourceURLs: - - '*' - verbs: - - '*' - -agent: - acceptEula: 'Y' - targetName: 'No Secret Access Production Agent' - serverUrl: 'http://localhost:5000/' - serverCommsAddress: 'http://localhost:10943/' - space: 'Default' - targetEnvironments: - - 'Production' - targetRoles: - - 'k8s-cluster-tag' - bearerToken: 'XXXX' -``` -
- -**command:** -```bash -helm upgrade --install --atomic \ ---values values.yaml \ ---version "1.*.*" \ ---create-namespace --namespace octopus-agent-my-agent\ -my-agent \ -oci://registry-1.docker.io/octopusdeploy/kubernetes-agent -``` -
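One way to confirm that a custom rule set behaves as intended is to impersonate the script pod service account with `kubectl auth can-i`. The service account name below is a placeholder; it is controlled by `scriptPods.serviceAccount.name` and can be read back with `kubectl get serviceaccounts` in the agent namespace:

```bash
# Expected to return "yes": deployments are included in the custom rules above.
kubectl auth can-i create deployments \
  --as=system:serviceaccount:octopus-agent-my-agent:<script-pods-service-account>

# Expected to return "no": secrets are not listed in the custom rules above.
kubectl auth can-i get secrets \
  --as=system:serviceaccount:octopus-agent-my-agent:<script-pods-service-account>
```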
- - -# NFS Server Pod Permissions - -If you have not provided a predefined storageClassName for persistence, an NFS pod will be used. This NFS Server pod requires `privileged` access. For more information see [Kubernetes agent Storage](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/storage#nfs-storage). \ No newline at end of file diff --git a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/storage.md b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/storage.md index a33fbaa97c..fb8e924a05 100644 --- a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/storage.md +++ b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/storage.md @@ -1,94 +1,9 @@ --- -layout: src/layouts/Default.astro -pubDate: 2024-04-29 -modDate: 2024-04-29 -title: Storage -description: How to configure storage for a Kubernetes agent -navOrder: 30 +layout: src/layouts/Redirect.astro +title: Redirect +redirect: docs/kubernetes/targets/kubernetes-agent/storage +pubDate: 2024-07-29 +navSearch: false +navSitemap: false +navMenu: false --- - -During a deployment, Octopus Server first sends any required scripts and packages to [Tentacle](https://octopus.com/docs/infrastructure/deployment-targets/tentacle) which writes them to the file system. The actual script execution then takes place in a different process called [Calamari](https://github.com/OctopusDeploy/Calamari), which retrieves the scripts and packages directly from the file system. - -On a Kubernetes agent, scripts are executed in separate Kubernetes pods (script pod) as opposed to in a local shell (Bash/Powershell). This means the Tentacle pod and script pods don’t automatically share a common file system. - -Since the Kubernetes agent is built on the Tentacle codebase, it is necessary to configure shared storage so that the Tentacle Pod can write the files in a place that the script pods can read from. - -We offer two options for configuring the shared storage - you can use either the default NFS storage or specify a Storage Class name during setup: - -:::figure -![Kubernetes Agent Wizard Config Page](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png) -::: - - -## NFS storage - -By default, the Kubernetes agent Helm chart will set up an NFS server suitable for use by the agent inside your cluster. The server runs as a `StatefulSet` in the same namespace as the Kubernetes agent, and uses `EmptyDir` storage, as the working files of the agent are not required to be long-lived. - -This NFS server is referenced in the `StorageClass` that the Kubernetes agent and the script pod use. This `StorageClass` will then instruct the `NFS CSI Driver` to mount the server as directed. - -This default implementation is made to let you try the Kubernetes agent without worrying about installing a `ReadWriteMany` compatible `StorageClass` yourself. There are some drawbacks to this approach: - -### Privileges -The NFS server requires `privileged` access when running as a container, which may not be permitted depending on the cluster configuration. Access to the NFS pod should be kept to a minimum since it enables access to the host. - -:::div{.warning} -Red Hat OpenShift does not enable `privileged` access by default. When enabled, we have also encountered inconsistent file access issues using the NFS storage. 
We highly recommend the use of a [custom storage class](#custom-storage-class) when using Red Hat OpenShift. -::: - -### Reliability -Since the NFS server runs inside your Kubernetes cluster, upgrades and other cluster operations can cause the NFS server to restart. Due to how NFS stores and allows access to shared data, script pods will not be able to recover cleanly from an NFS server restart. This causes running deployments to fail when the NFS server is restarted. - -If you have a use case that can’t tolerate occasional deployment failures, it’s recommended to provide your own `StorageClass` instead of using the default NFS implementation. - -## Custom StorageClass \{#custom-storage-class} - -If you need a more reliable storage solution, then you can specify your own `StorageClass`. This `StorageClass` must be capable of `ReadWriteMany` (also known as `RWX`) access mode. - -Many managed Kubernetes offerings will provide storage that require little effort to set up. These will be a “provisioner” (named as such as they “provision” storage for a `StorageClass`), which you can then tie to a `StorageClass`. Some examples are listed below: - -|**Offering** |**Provisioner** |**Default StorageClass name** | -|----------------------------------|-----------------------------------|------------------------------------| -|[Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/concepts-storage) |`file.csi.azure.com` |`azurefile` | -|[Elastic Kubernetes Service (EKS)](https://docs.aws.amazon.com/eks/latest/userguide/storage.html) |`efs.csi.aws.com` |`efs-sc` | -|[Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/docs/concepts/storage-overview) |`filestore.csi.storage.gke.io` |`standard-rwx` | - -If you manage your own cluster and don’t have offerings from cloud providers available, there are some in-cluster options you could explore: -- [Longhorn](https://longhorn.io/) -- [Rook (CephFS)](https://rook.io/) -- [GlusterFS](https://www.gluster.org/) - -## Migrating from NFS storage to a custom StorageClass - -If you installed the Kubernetes agent using the default NFS storage, and want to change to a custom `StorageClass` instead, simply rerun the installation Helm command with specified values for `persistence.storageClassName`. - -The following steps assume your Kubernetes agent is in the `octopus-agent-nfs-to-pv` namespace: - -### Step 1: Find your Helm release {#KubernetesAgentStorage-Step1-FindYourHelmRelease} - -Take note of the current Helm release name and Chart version for your Kubernetes agent by running the following command: -```bash -helm list --namespace octopus-agent-nfs-to-pv -``` - -The output should look like this: -:::figure -![Helm list command](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-helm-list.png) -::: - -In this example, the release name is `nfs-to-pv` while the chart version is `1.0.1`. 
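Before changing persistence, it can also help to confirm which values are currently applied to the release. This is a standard Helm query; the release name and namespace follow the example above:

```bash
# Shows the user-supplied values for the release, including any existing
# persistence.storageClassName setting.
helm get values nfs-to-pv --namespace octopus-agent-nfs-to-pv
```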
- -### Step 2: Change Persistence {#KubernetesAgentStorage-Step2-ChangePersistence} - -Run the following command (substitute the placeholders with your own values): -```bash -helm upgrade --reuse-values --atomic --set persistence.storageClassName="" --namespace --version "" oci://registry-1.docker.io/octopusdeploy/kubernetes-agent` -``` - -Here is an example to convert the `nfs-to-pv` Helm release in the `octopus-agent-nfs-to-pv` namespace to use the `octopus-agent-nfs-migration` `StorageClass`: -```bash -helm upgrade --reuse-values --atomic --set persistence.storageClassName="octopus-agent-nfs-migration" --namespace octopus-agent-nfs-to-pv --version "1.0.1" nfs-to-pv oci://registry-1.docker.io/octopusdeploy/kubernetes-agent` -``` - -:::div{.warning} -If you are using an existing `PersistentVolume` via its `StorageClassName`, then you must set the `persistence.size` value in the Helm command to match the capacity of the `PersistentVolume` for the `PersistentVolume` to bind. -::: diff --git a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/troubleshooting.md b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/troubleshooting.md index bc8d20ee2a..8e6c3009f5 100644 --- a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/troubleshooting.md +++ b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/troubleshooting.md @@ -1,64 +1,9 @@ --- -layout: src/layouts/Default.astro -pubDate: 2024-05-08 -modDate: 2024-05-30 -title: Troubleshooting -description: How to troubleshoot common Kubernetes Agent issues -navOrder: 40 +layout: src/layouts/Redirect.astro +title: Redirect +redirect: docs/kubernetes/targets/kubernetes-agent/troubleshooting +pubDate: 2024-07-29 +navSearch: false +navSitemap: false +navMenu: false --- - -This page will help you diagnose and solve issues with the Kubernetes agent. - -## Installation Issues - -### Helm command fails with `context deadline exceeded` - -The generated helm commands use the [`--atomic`](https://helm.sh/docs/helm/helm_upgrade/#options) flag, which automatically rollbacks the changes if it fails to execute within a specified timeout (default 5 min). - -If the helm command fails, then it may print an error message containing context deadline exceeded -This indicates that the timeout was exceeded and the Kubernetes resources did not correctly start. - -To help diagnose these issues, the `kubectl` commands [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/) and [`logs`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/) can be used _while the helm command is executing_ to help debug any issues. 
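Before the driver and agent specific checks below, listing recent events in the target namespace is a quick, generic way to spot scheduling, image pull, or storage problems:

```
# Shows recent events in the namespace, newest last.
kubectl get events --namespace [NAMESPACE] --sort-by=.lastTimestamp
```
_Replace `[NAMESPACE]` with the namespace in the agent installation command_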
- -#### NFS CSI driver install command - -``` -kubectl describe pods -l app.kubernetes.io/name=csi-driver-nfs -n kube-system -``` - -#### Agent install command - -``` -# To get pod information -kubectl describe pods -l app.kubernetes.io/name=octopus-agent -n [NAMESPACE] -# To get pod logs -kubectl logs -l app.kubernetes.io/name=octopus-agent -n [NAMESPACE] -``` -_Replace `[NAMESPACE]` with the namespace in the agent installation command_ - -If the Agent install command fails with a timeout error, it could be that: - -- There is an error in the connection information provided -- The bearer token or API Key has expired or has been revoked -- The agent is unable to connect to Octopus Server due to a networking issue -- (if using the NFS storage solution) The NFS CSI driver has not been installed -- (if using a custom Storage Class) the Storage Class name doesn't match - -## Script Execution Issues - -### `Unexpected Script Pod log line number, expected: expected-line-no, actual: actual-line-no` - -This error indicates that the logs from the script pods are incomplete or malformed. - -When scripts are executed, any outputs or logs are stored in the script pod's container logs. The Tentacle pod then reads from the container logs to feed back to Octopus Server. - -There's a limit to the size of logs kept before they are [rotated](https://kubernetes.io/docs/concepts/cluster-administration/logging/#log-rotation) out. If a particular log line is rotated before Octopus Server reads it, then it means log lines are missing - hence we fail the deployment prevent unexpected changes from being hidden. - -### `The Script Pod 'octopus-script-xyz' could not be found` - -This error indicates that the script pods were deleted unexpectedly - typically being evicted/terminated by Kubernetes. - -If you are using the default NFS storage however, then the script pod would be deleted if the NFS server pod is restarted. Some possible causes are: - -- being evicted due to exceeding its storage quota -- being moved or restarted as part of routine cluster operation diff --git a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/index.md b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/index.md index d8feef3c71..3c6c7ab327 100644 --- a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/index.md +++ b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/index.md @@ -1,395 +1,9 @@ --- -layout: src/layouts/Default.astro -pubDate: 2023-01-01 -modDate: 2024-06-27 -title: Kubernetes API -description: How to configure a Kubernetes cluster as a deployment target in Octopus -navOrder: 20 +layout: src/layouts/Redirect.astro +title: Redirect +redirect: docs/kubernetes/targets/kubernetes-api +pubDate: 2024-07-29 +navSearch: false +navSitemap: false +navMenu: false --- -Kubernetes API targets are used by the [Kubernetes steps](/docs/deployments/kubernetes) to define the context in which deployments and scripts are run. - -Conceptually, a Kubernetes API target represent a permission boundary and an endpoint. Kubernetes [permissions](https://oc.to/KubernetesRBAC) and [quotas](https://oc.to/KubernetesQuotas) are defined against a namespace, and both the account and namespace are captured as a Kubernetes API target, along with the cluster endpoint URL. A namespace is required when registering the Kubernetes API target with Octopus Deploy. By default, the namespace used in the registration is used in health checks and deployments. 
The namespace can be overwritten in the deployment process. - -:::div{.hint} -From **Octopus 2022.2**, AKS target discovery has been added to the -Kubernetes Target Discovery Early Access Preview and is enabled via **Configuration ➜ Features**. - -From **Octopus 2022.3** will include EKS cluster support. -::: - -## Discovering Kubernetes targets - -Octopus can discover Kubernetes API targets in _Azure Kubernetes Service_ (AKS) or _Amazon Elastic Container Service for Kubernetes_ (EKS) as part of your deployment using tags on your AKS or EKS resource. - -:::div{.hint} -From **Octopus 2022.3**, you can configure the well-known variables used to discover Kubernetes targets when editing your deployment process in the Web Portal. See [cloud target discovery](/docs/infrastructure/deployment-targets/cloud-target-discovery) for more information. -::: - -To discover targets use the following steps: - -- Add an Azure account variable named **Octopus.Azure.Account** or the appropriate AWS authentication variables ([more info here](/docs/infrastructure/deployment-targets/cloud-target-discovery/#aws)) to your project. -- [Add cloud resource template tags](/docs/infrastructure/deployment-targets/cloud-target-discovery/#tag-cloud-resources) to your cluster so that Octopus can match it to your deployment step and environment. -- Add any of the Kubernetes built-in steps to your deployment process. During deployment, the target tag on the step will be used along with the environment being deployed to, to discover Kubernetes targets to deploy to. - -Kubernetes targets discovered will not have a namespace set, the namespace on the step will be used during deployment (or the default namespace in the cluster if no namespace is set on the step). - -See [cloud target discovery](/docs/infrastructure/deployment-targets/cloud-target-discovery) for more information. - -## A sample config file - -The YAML file below shows a sample **kubectl** configuration file. Existing Kubernetes users will likely have a similar configuration file. - -A number of the fields in this configuration file map directly to the fields in an Octopus Kubernetes API target, as noted in the next section. - -```yaml -apiVersion: v1 -clusters: -- cluster: - certificate-authority-data: XXXXXXXXXXXXXXXX... - server: https://kubernetes.example.org:443 - name: k8s-cluster -contexts: -- context: - cluster: k8s-cluster - user: k8s_user - name: k8s_user -current-context: k8s-cluster -kind: Config -preferences: {} -users: -- name: k8s_user - user: - client-certificate-data: XXXXXXXXXXXXXXXX... - client-key-data: XXXXXXXXXXXXXXXX... - token: 1234567890xxxxxxxxxxxxx -- name: k8s_user2 - user: - password: some-password - username: exp -- name: k8s_user3 - user: - token: 1234567890xxxxxxxxxxxxx -``` - -## Add a Kubernetes target - -1. Navigate to **Infrastructure ➜ Deployment Targets**, and click **Add Deployment Target**. -2. Select **KUBERNETES** and click **ADD** on the Kubernetes API card. -3. Enter a display name for the Kubernetes API target. -4. Select at least one [environment](/docs/infrastructure/environments) for the target. -5. Select at least one [target tag](/docs/infrastructure/deployment-targets/target-tags) for the target. -6. Select the authentication method. Kubernetes targets support multiple [account types](https://oc.to/KubernetesAuthentication): - - **Usernames/Password**: In the example YAML above, the user name is found in the `username` field, and the password is found in the `password` field. 
These values can be added as an Octopus [Username and Password](/docs/infrastructure/accounts/username-and-password) account. - - **Tokens**: In the example YAML above, the token is defined in the `token` field. This value can be added as an Octopus [Token](/docs/infrastructure/accounts/tokens) account. - - **Azure Service Principal**: When using an AKS cluster, [Azure Service Principal accounts](/docs/infrastructure/accounts/azure) allow Azure Active Directory accounts to be used. - - The Azure Service Principal is only used with AKS clusters. To log into ACS or ACS-Engine clusters, standard Kubernetes credentials like certificates or service account tokens must be used. - - :::div{.hint} - From Kubernetes 1.26, [the default azure auth plugin has been removed from kubectl](https://github.com/kubernetes/kubernetes/blob/ad18954259eae3db51bac2274ed4ca7304b923c4/CHANGELOG/CHANGELOG-1.26.md#deprecation) so clusters targeting Kubernetes 1.26+ that have [Local Account Access disabled](https://oc.to/AKSDisableLocalAccount) in Azure, will require the worker or execution container to have access to the [kubelogin](https://oc.to/Kubelogin) CLI tool, as well as the Octopus Deployment Target setting **Login with administrator credentials** disabled. This requires **Octopus 2023.3*. - - If Local Account access is enabled on the AKS cluster, the Octopus Deployment Target setting Login with administrator credentials will also need to be enabled so that the Local Accounts are used instead of the default auth plugin. - ::: - - - **AWS Account**: When using an EKS cluster, [AWS accounts](/docs/infrastructure/accounts/aws) allow IAM accounts and roles to be used. - - The interaction between AWS IAM and Kubernetes Role Based Access Control (RBAC) can be tricky. We highly recommend reading the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/managing-auth.html). - - :::div{.hint} - **Common issues:** - From **Octopus 2022.4**, you can use the `aws cli` to authenticate to an EKS cluster, earlier versions rely on the `aws-iam-authenticator`. If using the AWS account type, the Octopus Server or worker must have either the `aws cli` (1.16.156 or later) or `aws-iam-authenticator` executable on the path. If both are present the `aws cli` will be used. The EKS api version is selected based on the kubectl version. For Octopus 2022.3 and earlier `kubectl` `1.23.6` and `aws-iam-authenticator` version `0.5.3` or earlier must be used, these target `v1alpha1` endpoints. For `kubectl` `1.24.0` and later `v1beta1` endpoints are used and versions `0.5.5` and later of the `aws-iam-authenticator` are required. See the [AWS documentation](https://oc.to/AWSEKSKubectl) for download links. - - The error `You must be logged into the server (the server has asked for the client to provide credentials)` generally indicates the AWS account does not have permissions in the Kubernetes cluster. - - When you create an Amazon EKS cluster, the IAM entity user or role that creates the cluster is automatically granted `system:master` permissions in the cluster's RBAC configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must edit the `aws-auth` ConfigMap within Kubernetes. See the [Managing Users or IAM Roles for your Cluster](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html). 
- ::: - - **Google Cloud Account**: When using a GKE cluster, [Google Cloud accounts](/docs/infrastructure/accounts/google-cloud) allow you to authenticate using a Google Cloud IAM service account. - - :::div{.hint} - From `kubectl` version `1.26`, authentication against a GKE cluster [requires an additional plugin called `gke-cloud-auth-plugin` to be available](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke) on the PATH where your step is executing. If you manage your own execution environment (eg self-hosted workers, custom execution containers etc), you will need to ensure the auth plugin is available alongside `kubectl` - ::: - - **Client Certificate**: When authenticating with certificates, both the certificate and private key must be provided. - - In the example YAML above, the `client-certificate-data` field is a base 64 encoded certificate, and the `client-key-data` field is a base 64 encoded private key (both have been truncated for readability in this example). - - The certificate and private key can be combined and saved in a single pfx file. The script below accepts the base 64 encoded certificate and private key and uses the [Windows OpenSSL binary from Shining Light Productions](https://oc.to/OpenSSLWindows) to save them in a single pfx file. - - ```powershell - param ( - [Parameter(Mandatory = $true)] - [string]$Certificate, - [Parameter(Mandatory = $true)] - [string]$PrivateKey - ) - - [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($Certificate)) | ` - Set-Content -Path certificate.crt - [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($PrivateKey)) | ` - Set-Content -Path private.key - C:\OpenSSL-Win32\bin\openssl pkcs12 ` - -passout pass: ` - -export ` - -out certificate_and_key.pfx ` - -in certificate.crt ` - -inkey private.key - ``` - ```bash - #!/bin/bash - echo $1 | base64 --decode > certificate.crt - echo $2 | base64 --decode > private.key - openssl pkcs12 \ - -passout pass: \ - -export \ - -out certificate_and_key.pfx \ - -in certificate.crt \ - -inkey private.key - ``` - - This file can then be uploaded to the [Octopus certificate management area](/docs/deployments/certificates), after which, it will be made available to the Kubernetes target. - - The Certificates Library can be accessed via **Library ➜ Certificates**. - -7. Enter the Kubernetes cluster URL. Each Kubernetes target requires the cluster URL, which is defined in the `Kubernetes cluster URL` field. In the example YAML about, this is defined in the `server` field. -8. Optionally, select the certificate authority if you've added one. Kubernetes clusters are often protected with self-signed certificates. In the YAML example above the certificate is saved as a base 64 encoded string in the `certificate-authority-data` field. - -To communicate with a Kubernetes cluster with a self signed certificate over HTTPS, you can either select the **Skip TLS verification** option, or supply the certificate in `The optional cluster certificate authority` field. - -Decoding the `certificate-authority-data` field results in a string that looks something like this (the example has been truncated for readability): - -``` ------BEGIN CERTIFICATE----- -XXXXXXXXXXXXXXXX... ------END CERTIFICATE----- -``` - -Save this text to a file called `ca.pem`, and upload it to the [Octopus certificate management area](https://oc.to/CertificatesDocumentation). The certificate can then be selected in the `cluster certificate authority` field. 
- -9. Enter the Kubernetes Namespace. -When a single Kubernetes cluster is shared across environments, resources deployed to the cluster will often be separated by environment and by application, team, or service. In this situation, the recommended approach is to create a namespace for each application and environment (e.g., `my-application-development` and `my-application-production`), and create a Kubernetes service account that has permissions to just that namespace. - -Where each environment has its own Kubernetes cluster, namespaces can be assigned to each application, team or service (e.g. `my-application`). - -In both scenarios, a target is then created for each Kubernetes cluster and namespace. The `Target Role` tag is set to the application name (e.g. `my-application`), and the `Environments` are set to the matching environment. - -When a Kubernetes target is used, the namespace it references is created automatically if it does not already exist. - -10. Select a worker pool for the target. -To make use of the Kubernetes steps, the Octopus Server or workers that will run the steps need to have the `kubectl` executable installed. Linux workers also need to have the `jq`, `xargs` and `base64` applications installed. -11. Click **SAVE**. - -:::div{.warning} -Setting the Worker Pool in a Deployment Process will override the Worker Pool defined directly on the Deployment Target. -::: - -## Create service accounts - -The recommended approach to configuring a Kubernetes target is to have a service account for each application and namespace. - -In the example below, a service account called `jenkins-development` is created to represent the deployment of an application called `jenkins` to an environment called `development`. This service account has permissions to perform all operations (i.e. `get`, `list`, `watch`, `create`, `update`, `patch`, `delete`) on the resources created by the `Deploy kubernetes containers` step (i.e. `deployments`, `replicasets`, `pods`, `services`, `ingresses`, `secrets`, `configmaps`). - -```yaml ---- -kind: Namespace -apiVersion: v1 -metadata: - name: jenkins-development ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: jenkins-deployer - namespace: jenkins-development ---- -kind: Role -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - namespace: jenkins-development - name: jenkins-deployer-role -rules: -- apiGroups: ["", "extensions", "apps"] - resources: ["deployments", "replicasets", "pods", "services", "ingresses", "secrets", "configmaps"] - verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] -- apiGroups: [""] - resources: ["namespaces"] - verbs: ["get"] ---- -kind: RoleBinding -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: jenkins-deployer-binding - namespace: jenkins-development -subjects: -- kind: ServiceAccount - name: jenkins-deployer - apiGroup: "" -roleRef: - kind: Role - name: jenkins-deployer-role - apiGroup: "" -``` - -In cases where it is necessary to have an administrative service account created (for example, when using AWS EKS because the initial admin account is tied to an IAM role), the following YAML can be used. 
- -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: octopus-administrator - namespace: default ---- -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: octopus-administrator-binding - namespace: default -subjects: -- kind: ServiceAccount - name: octopus-administrator - namespace: default - apiGroup: "" -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin -``` - -Creating service accounts automatically results in a token being generated. The PowerShell snippet below returns the token for the `jenkins-deployer` account. - -```powershell -$user="jenkins-deployer" -$namespace="jenkins-development" -$data = kubectl get secret $(kubectl get serviceaccount $user -o jsonpath="{.secrets[0].name}" --namespace=$namespace) -o jsonpath="{.data.token}" --namespace=$namespace -[System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($data)) -``` - -This bash snippet also returns the token value. - -```bash -kubectl get secret $(kubectl get serviceaccount jenkins-deployer -o jsonpath="{.secrets[0].name}" --namespace=jenkins-development) -o jsonpath="{.data.token}" --namespace=jenkins-development | base64 --decode -``` - -The token can then be saved as a Token Octopus account, and assigned to the Kubernetes target. - -:::div{.warning} -Kubernetes versions 1.24+ no longer automatically create tokens for service accounts and they need to be manually created using the **create token** command: -```bash -kubectl create token jenkins-deployer -``` - -From Kubernetes version 1.29, a warning will be displayed when using automatically created Tokens. Make sure to rotate any Octopus Token Accounts to use manually created tokens via **create token** instead. -::: - -## Kubectl - -Kubernetes targets use the `kubectl` executable to communicate with the Kubernetes cluster. This executable must be available on the path on the target where the step is run. When using workers, this means the `kubectl` executable must be in the path on the worker that is executing the step. Otherwise the `kubectl` executable must be in the path on the Octopus Server itself. - -## Vendor Authentication Plugins {#vendor-authentication-plugins} -Prior to `kubectl` version 1.26, the logic for authenticating against various cloud providers (eg Azure Kubernetes Services, Google Kubernetes Engine) was included "in-tree" in `kubectl`. From version 1.26 onward, the cloud-vendor specific authentication code has been removed from `kubectl`, in favor of a plugin approach. - -What this means for your deployments: - -* Amazon Elastic Container Services (ECS): No change required. Octopus already supports using either the AWS CLI or the `aws-iam-authenticator` plugin. -* Azure Kubernetes Services (AKS): No change required. The way Octopus authenticates against AKS clusters never used the in-tree Azure authentication code, and will continue to function as normal. - - From **Octopus 2023.3**, you will need to ensure that the [kubelogin](https://oc.to/Kubelogin) CLI tool is also available if you have disabled local Kubernetes accounts. -* Google Kubernetes Engine (GKE): If you upgrade to `kubectl` 1.26 or higher, you will need to ensure that the `gke-gcloud-auth-plugin` tool is also available. More information can be found on [Google's announcement about this change](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke). 
- -## Helm - -When a Kubernetes target is used with a Helm step, the `helm` executable must be on the target where the step is run. - -## Dynamic targets - -Kubernetes targets can be created dynamically at deploy time with the PowerShell function `New-OctopusKubernetesTarget`. - -See [Create Kubernetes Target Command](/docs/infrastructure/deployment-targets/dynamic-infrastructure/kubernetes-target) for more information. - -## Troubleshooting - -If you're running into issues with your Kubernetes targets, it's possible you'll be able to resolve the issue using some of these troubleshooting tips. If this section doesn't help, please [get in touch](https://octopus.com/support). - -### Debugging - -Setting the Octopus variable `Octopus.Action.Kubernetes.OutputKubeConfig` to `True` for any deployment or runbook using a Kubernetes target will cause the generated kube config file to be printed into the logs (with passwords masked). This can be used to verify the configuration file used to connect to the Kubernetes cluster. - -If Kubernetes targets fail their health checks, the best way to diagnose the issue to to run a `Run a kubectl CLI Script` step with a script that can inspect the various settings that must be in place for a Kubernetes target to function correctly. Octopus deployments will run against unhealthy targets by default, so the fact that the target failed its health check does not prevent these kinds of debugging steps from running. - -An example script for debugging a Kubernetes target is shown below: - -```powershell -$ErrorActionPreference = 'SilentlyContinue' - -# The details of the AWS Account. This will be populated for EKS clusters using the AWS authentication scheme. -# AWS_SECRET_ACCESS_KEY will be redacted, but that means it was populated successfully. -Write-Host "Getting the AWS user" -Write-Host "AWS_ACCESS_KEY_ID: $($env:AWS_ACCESS_KEY_ID)" -Write-Host "AWS_SECRET_ACCESS_KEY: $($env:AWS_SECRET_ACCESS_KEY)" - -# The details of the Azure Account. This will be populated for an AKS cluster using the Azure authentication scheme. -Write-Host "Getting the Azure user" -cat azure-cli/azureProfile.json - -# View the generated config. kubectl will redact any secrets from this output. -Write-Host "kubectl config view" -kubectl config view - -# View the environment variable that defines the kube config path -Write-Host "KUBECONFIG is $($env:KUBECONFIG)" - -# Save kube config as artifact (will expose credentials in log). This is useful to take the generated config file -# and run it outside of octopus. -# New-OctopusArtifact $env:KUBECONFIG - -# List any proxies. Failure to connect to the cluster when a proxy is configured may be caused by the proxy. -Write-Host "HTTP_PROXY: $($env:HTTP_PROXY)" -Write-Host "HTTPS_PROXY: $($env:HTTPS_PROXY)" -Write-Host "NO_PROXY: $($env:NO_PROXY)" - -# Execute the same command that the target health check runs. -Write-Host "Simulating a health check" -kubectl version --client --output=yaml - -# Write a custom kube config. This is useful when you have a config that works, and you want to confirm it works in Octopus. 
-Write-Host "Health check with custom config file" -Set-Content -Path "my-config.yml" -Value @" -apiVersion: v1 -clusters: -- cluster: - certificate-authority-data: ca-cert-goes-here - server: https://myk8scluster - name: test -contexts: -- context: - cluster: test - user: test_admin - name: test_admin -- context: - cluster: test - user: test - name: test -current-context: test -kind: Config -preferences: {} -users: -- name: test_admin - user: - token: auth-token-goes-here -- name: test - user: - client-certificate-data: certificate-data-goes-here - client-key-data: certificate-key-goes-here -"@ - -kubectl version --short --kubeconfig my-config.yml - -exit 0 - -``` - -### API calls failing - -If you are finding that certain API calls are failing, for example `https://your-octopus-url/api/users/Users-1/apikeys?take=2147483647`, it's possible that your WAF is blocking the traffic. To confirm this you should investigate your WAF logs to determine why the API call is being blocked and make the necessary adjustments to your WAF rules. - -## Learn more - -- [Kubernetes Deployment](/docs/deployments/kubernetes) -- [Kubernetes blog posts](https://octopus.com/blog/tag/kubernetes) diff --git a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/openshift/index.md b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/openshift/index.md index 64550fc38c..bb7cd29bb3 100644 --- a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/openshift/index.md +++ b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/openshift/index.md @@ -1,68 +1,9 @@ --- -layout: src/layouts/Default.astro -pubDate: 2023-01-01 -modDate: 2023-10-04 -title: OpenShift Kubernetes cluster -description: How to configure an OpenShift Kubernetes cluster as a deployment target in Octopus. -navOrder: 40 +layout: src/layouts/Redirect.astro +title: Redirect +redirect: docs/kubernetes/targets/kubernetes-api/openshift +pubDate: 2024-07-29 +navSearch: false +navSitemap: false +navMenu: false --- - -[OpenShift](https://www.openshift.com/) is a popular Kubernetes (K8s) management platform by Red Hat. OpenShift provides an interface to manage and deploy containers to your K8s cluster as well as centrally manage security. The OpenShift Container Platform rides on top of standard Kubernetes, this means that it can easily be integrated with Octopus Deploy as a deployment target. - -## Authentication - -To connect your OpenShift K8s cluster to Octopus Deploy, you must first create a means to authenticate with. We recommend that you create a [Service Account](https://docs.openshift.com/container-platform/4.4/authentication/understanding-and-creating-service-accounts.html) for Octopus Deploy to use. - -:::div{.hint} -Service Accounts in OpenShift are project specific. You will need to create a Service Account per project (namespace) for Octopus Deploy in OpenShift. -::: - -### Create service account - -Each project within OpenShift has a section where you can define service accounts. After your project has been created: - -- Expand **User Management**. -- Click **Service Accounts**. -- Click **Create Service Account**. - -### Create role binding - -The Service Account will need to have a role so it can create resources on the cluster. 
- -In this example, the Service Account `octopusdeploy` is granted the role `cluster-admin` for the currently logged in project: - -``` -C:\Users\Shawn.Sesna\.kube>oc.exe policy add-role-to-user cluster-admin -z octopusdeploy -``` - -### Service Account Token - -OpenShift will automatically create a Token for your Service Account. This Token is how the Service Account authenticates to OpenShift from Octopus Deploy. To retrieve the value of the token: - -- Click Service Accounts. -- Click octopusdeploy (or whatever you named yours). -- Scroll down to the Secrets section. -- Click on the entry that has the `type` of `kubernetes.io/service-account-token`. - -:::figure -![OpenShift Service Account](/docs/infrastructure/deployment-targets/kubernetes-target/openshift/openshift-service-account-secrets.png) -::: - -Copy the Token value by clicking on the copy to clipboard icon on the right hand side. - -#### Getting the cluster URL - -To add OpenShift as a deployment target, you need the URL to the cluster. The `status` argument for the `oc.exe` command-line tool will display the URL of the OpenShift K8s cluster: - -``` -C:\Users\Shawn.Sesna\.kube>oc.exe status -In project testproject on server https://api.crc.testing:6443 -``` - -#### Project names are Namespaces - -When you create projects within OpenShift, you are creating Namespaces in the K8s cluster. The project name of your project is the Namespace within the K8s cluster. - -## Connecting an OpenShift Kubernetes Deployment Target - -Adding an OpenShift K8s target is done in exactly the same way you would add any other [Kubernetes target](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/#add-a-kubernetes-target). \ No newline at end of file diff --git a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/rancher/index.md b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/rancher/index.md index 1764197d64..1e552fbfe6 100644 --- a/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/rancher/index.md +++ b/src/pages/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/rancher/index.md @@ -1,57 +1,9 @@ --- -layout: src/layouts/Default.astro -pubDate: 2023-01-01 -modDate: 2023-01-01 -title: Rancher Kubernetes cluster -description: How to configure a Rancher Kubernetes cluster as a deployment target in Octopus -navOrder: 40 +layout: src/layouts/Redirect.astro +title: Redirect +redirect: docs/kubernetes/targets/kubernetes-api/rancher +pubDate: 2024-07-29 +navSearch: false +navSitemap: false +navMenu: false --- -[Rancher](http://www.rancher.com) is a Kubernetes (K8s) cluster management tool that can be used to manage K8s clusters on local infrastructure, cloud infrastructure, and even cloud managed K8s services. Not only can Rancher be used to centrally manage all of your K8s clusters, it can also be used to provide a central point for deployment, proxying commands through Rancher to the K8s clusters it manages. This provides the advantage of managing access to your K8s clusters without having to add users to the clusters individually. - -## Authentication - -Before you can add your Rancher managed cluster, you must first create a means of authenticating to it. This can be accomplished using the Rancher UI to create an access key. - -1. Log into Rancher, then click on your profile in the upper right-hand corner. -1. Select **API & Keys**. -1. Click on **Add Key**. -1. Give the API Key an expiration and a scope. -1. 
We recommend adding a description so you know what this key will be used for, then click **Create**. - -After you click create, you will be shown the API Key information: -- Access Key (username): Used for Username/Password accounts in Octopus Deploy. -- Secret Key (password): Used for Username/Password accounts in Octopus Deploy. -- Bearer Token: Used for Token accounts in Octopus Deploy. - -**Save this information, you will not be able to retrieve it later.** - -## Rancher cluster endpoints - -As previously mentioned, you can proxy communication to your clusters through Rancher. Instead of connecting to the individual K8s API endpoints directly, you can use API endpoints within Rancher to issue commands. The format of the URL is as follows: `https:///k8s/clusters/`. - -A quick way to find the correct URL is to grab it from the provided Kubeconfig file information. For each cluster you define, Rancher provides a *Kubeconfig file* that can be downloaded directly from the UI. To find it, select the cluster you need from the Global dashboard, and click the **Kubeconfig File** button: - -:::figure -![Rancher Kubeconfig file](/docs/infrastructure/deployment-targets/kubernetes-target/rancher/rancher-kubeconfig-file.png) -::: - -The next screen has the Kubeconfig file which contains the specific URL you need to use to connect your cluster to Octopus Deploy: - -:::figure -![Rancher cluster URL](/docs/infrastructure/deployment-targets/kubernetes-target/rancher/rancher-cluster-url.png) -::: - -## Add the account to Octopus Deploy - -In order for Octopus Deploy to deploy to the cluster, it needs credentials to log in with. In the Octopus Web Portal, navigate to the **Infrastructure** tab and click **Accounts**. - -Use one of the methods Rancher provided you with, *Username and Password* or *Token*. - -1. Click **ADD ACCOUNT**. -1. Select which account type you want to create. -1. Enter the values for your selection, then click **SAVE**. - - -## Connecting a Rancher Kubernetes Deployment Target - -Adding a Rancher K8s target is done in exactly the same way you would add any other [Kubernetes target](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api/#add-a-kubernetes-target). The only Rancher specific component is the URL. Other than that, the process is exactly the same. \ No newline at end of file diff --git a/src/pages/docs/kubernetes/targets/index.md b/src/pages/docs/kubernetes/targets/index.md index a4a787aa2c..ce25746b98 100644 --- a/src/pages/docs/kubernetes/targets/index.md +++ b/src/pages/docs/kubernetes/targets/index.md @@ -2,15 +2,13 @@ layout: src/layouts/Default.astro pubDate: 2023-01-01 modDate: 2024-04-24 -title: Kubernetes Targets +title: Kubernetes navTitle: Overview -navSection: Targets +navSection: Kubernetes description: Kubernetes deployment targets -navOrder: 30 -hideInThisSectionHeader: true +navOrder: 50 --- - There are two different deployment targets for deploying to Kubernetes using Octopus Deploy, the [Kubernetes Agent](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent) and the [Kubernetes API](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-api) targets. The following table summarizes the key differences between the two targets. 
diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/automated-installation.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/automated-installation.md new file mode 100644 index 0000000000..4b8a423ed3 --- /dev/null +++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/automated-installation.md @@ -0,0 +1,213 @@ +--- +layout: src/layouts/Default.astro +pubDate: 2024-05-14 +modDate: 2024-05-14 +title: Automated Installation +description: How to automate the installation and management of the Kubernetes agent +navOrder: 50 +--- + +## Automated installation via Terraform +The Kubernetes agent can be installed and managed using a combination of the Kubernetes agent [Helm chart <= v1.1.0](https://hub.docker.com/r/octopusdeploy/kubernetes-agent), [Octopus Deploy <= v0.20.0 Terraform provider](https://registry.terraform.io/providers/OctopusDeployLabs/octopusdeploy/latest) and/or [Helm Terraform provider](https://registry.terraform.io/providers/hashicorp/helm). + +### Octopus Deploy & Helm +Using a combination of the Octopus Deploy and Helm providers you can completely manage the Kubernetes agent via Terraform. + +:::div{.warning} +To ensure that the Kubernetes agent and the deployment target within Octopus associate with each other correctly the some of the Helm chart values and deployment target properties must meet the following criteria: +`octopusdeploy_kubernetes_agent_deployment_target.name` and `agent.targetName` have the same values. +`octopusdeploy_kubernetes_agent_deployment_target.uri` and `agent.serverSubscriptionId` have the same values. +`octopusdeploy_kubernetes_agent_deployment_target.thumbprint` is the thumbprint calculated from the certificate used in `agent.certificate`. +::: + +```hcl +terraform { + required_providers { + octopusdeploy = { + source = "octopus.com/com/octopusdeploy" + version = "0.20.0" + } + + helm = { + source = "hashicorp/helm" + version = "2.13.2" + } + } +} + +locals { + octopus_api_key = "API-XXXXXXXXXXXXXXXX" + octopus_address = "https://myinstance.octopus.app" + octopus_polling_address = "https://polling.myinstance.octopus.app" +} + +provider "helm" { + kubernetes { + # Configure authentication for me + } +} + +provider "octopusdeploy" { + address = local.octopus_address + api_key = local.octopus_api_key +} + +resource "octopusdeploy_space" "agent_space" { + name = "agent space" + space_managers_teams = ["teams-everyone"] +} + +resource "octopusdeploy_environment" "dev_env" { + name = "Development" + space_id = octopusdeploy_space.agent_space.id +} + +resource "octopusdeploy_polling_subscription_id" "agent_subscription_id" {} +resource "octopusdeploy_tentacle_certificate" "agent_cert" {} + +resource "octopusdeploy_kubernetes_agent_deployment_target" "agent" { + name = "agent-one" + space_id = octopusdeploy_space.agent_space.id + environments = [octopusdeploy_environment.dev_env.id] + roles = ["role-1", "role-2", "role-3"] + thumbprint = octopusdeploy_tentacle_certificate.agent_cert.thumbprint + uri = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri +} + +resource "helm_release" "octopus_agent" { + name = "octopus-agent-release" + repository = "oci://registry-1.docker.io" + chart = "octopusdeploy/kubernetes-agent" + version = "1.*.*" + atomic = true + create_namespace = true + namespace = "octopus-agent-target" + + set { + name = "agent.acceptEula" + value = "Y" + } + + set { + name = "agent.targetName" + value = octopusdeploy_kubernetes_agent_deployment_target.agent.name + } + + set_sensitive { + name = 
"agent.serverApiKey" + value = local.octopus_api_key + } + + set { + name = "agent.serverUrl" + value = local.octopus_address + } + + set { + name = "agent.serverCommsAddress" + value = local.octopus_polling_address + } + + set { + name = "agent.serverSubscriptionId" + value = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri + } + + set_sensitive { + name = "agent.certificate" + value = octopusdeploy_tentacle_certificate.agent_cert.base64 + } + + set { + name = "agent.space" + value = octopusdeploy_space.agent_space.name + } + + set_list { + name = "agent.targetEnvironments" + value = octopusdeploy_kubernetes_agent_deployment_target.agent.environments + } + + set_list { + name = "agent.targetRoles" + value = octopusdeploy_kubernetes_agent_deployment_target.agent.roles + } +} +``` + +### Helm +The Kubernetes agent can be installed using just the Helm provider but the associated deployment target that is created in Octopus when the agent registers itself cannot be managed solely using the Helm provider, as the Helm chart values relating to the deployment target are only used on initial installation and any modifications to them will not trigger an update to the deployment target unless you perform a complete reinstall of the agent. This option is useful if you plan on managing the configuration of the deployment target via means such as the Portal or API. + +```hcl +terraform { + required_providers { + helm = { + source = "hashicorp/helm" + version = "2.13.2" + } + } +} + +provider "helm" { + kubernetes { + # Configure authentication for me + } +} + +locals { + octopus_api_key = "API-XXXXXXXXXXXXXXXX" + octopus_address = "https://myinstance.octopus.app" + octopus_polling_address = "https://polling.myinstance.octopus.app" +} + +resource "helm_release" "octopus_agent" { + name = "octopus-agent-release" + repository = "oci://registry-1.docker.io" + chart = "octopusdeploy/kubernetes-agent" + version = "1.*.*" + atomic = true + create_namespace = true + namespace = "octopus-agent-target" + + set { + name = "agent.acceptEula" + value = "Y" + } + + set { + name = "agent.targetName" + value = "octopus-agent" + } + + set_sensitive { + name = "agent.serverApiKey" + value = local.octopus_api_key + } + + set { + name = "agent.serverUrl" + value = local.octopus_address + } + + set { + name = "agent.serverCommsAddress" + value = local.octopus_polling_address + } + + set { + name = "agent.space" + value = "Default" + } + + set_list { + name = "agent.targetEnvironments" + value = ["Development"] + } + + + set_list { + name = "agent.targetRoles" + value = ["Role-1"] + } +} +``` \ No newline at end of file diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/ha-cluster-support.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/ha-cluster-support.md new file mode 100644 index 0000000000..cb73394a9b --- /dev/null +++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/ha-cluster-support.md @@ -0,0 +1,57 @@ +--- +layout: src/layouts/Default.astro +pubDate: 2024-05-14 +modDate: 2024-05-14 +title: HA Cluster Support +description: How to install/update the agent when running Octopus in an HA Cluster +navOrder: 60 +--- + +## Octopus Deploy HA Cluster + +Similarly to Polling Tentacles, the Kubernetes agent must have a URL for each individual node in the HA Cluster so that it receive commands from all clusters. These URLs must be provided when registering the agent or some deployments may fail depending on which node the tasks are executing. 
+
+To read more about selecting the right URL for your nodes, see [Polling Tentacles and Kubernetes agents with HA](/docs/administration/high-availability/maintain/polling-tentacles-with-ha).
+
+## Agent Installation on an HA Cluster
+
+### Octopus Deploy 2024.3+
+
+To make things easier, Octopus will detect when it's running as an HA cluster and show an extra configuration page in the Kubernetes agent creation wizard, which asks you to give a unique URL for each cluster node.
+
+:::figure
+![Kubernetes Agent HA Cluster Configuration Page](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-ha-cluster-configuration-page.png)
+:::
+
+Once these values are provided, the generated helm upgrade command will configure your new agent to receive commands from all nodes.
+
+### Octopus Deploy 2024.2
+
+To install the agent with Octopus Deploy 2024.2, you need to adjust the Helm command produced by the wizard before running it.
+
+1. Use the wizard to produce the Helm command to install the agent.
+    1. You may need to provide a ServerCommsAddress: you can provide any valid URL to progress the wizard.
+2. Replace the `--set agent.serverCommsAddress="..."` property with
+```
+--set agent.serverCommsAddresses="{https://:/,https://:/,https://:/}"
+```
+where each entry is a unique address for an individual node.
+
+3. Execute the Helm command in a terminal connected to the target cluster.
+
+:::div{.warning}
+The new property name is `agent.serverCommsAddresses`. Note that "Addresses" is plural.
+:::
+
+## Upgrading the Agent after Adding/Removing Cluster nodes
+
+If you add or remove cluster nodes, you need to update your agent's configuration so that it continues to connect to all nodes in the cluster. To do this, run a helm upgrade command with the URLs of all current cluster nodes. The agent will remove any old URLs and replace them with the provided ones.
+
+```bash
+helm upgrade --atomic \
+--reuse-values \
+--set agent.serverCommsAddresses="{https://:/,https://:/,https://:/}" \
+--namespace \
+ \
+oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
+```
diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/index.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/index.md
index 498c5fc534..684a374b9d 100644
--- a/src/pages/docs/kubernetes/targets/kubernetes-agent/index.md
+++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/index.md
@@ -1,8 +1,10 @@
 ---
 layout: src/layouts/Default.astro
 pubDate: 2024-04-22
-modDate: 2024-04-22
+modDate: 2024-07-02
 title: Kubernetes agent
+navTitle: Overview
+navSection: Kubernetes agent
 description: How to configure a Kubernetes agent as a deployment target in Octopus
 navOrder: 10
 ---
@@ -17,7 +19,7 @@ The Kubernetes agent provides a number of improvements over the [Kubernetes API]
 
 ### Polling communication
 
-The agent uses the same [polling communication](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication#polling-tentacles) protocol as [Octopus Tentacle](/docs/infrastructure/deployment-targets/tentacle). It lets the agent initiate the connection from the cluster to Octopus Server, solving network access issues such as publicly addressable clusters..
+The agent uses the same [polling communication](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication#polling-tentacles) protocol as [Octopus Tentacle](/docs/infrastructure/deployment-targets/tentacle).
It lets the agent initiate the connection from the cluster to Octopus Server, solving network access issues such as publicly addressable clusters. ### In-cluster authentication @@ -25,13 +27,15 @@ As the agent is already running inside the target cluster, Octopus Server no lon ### Cluster-aware tooling -As the agent is running in the cluster, it can retrieve the cluster's version and correctly use tooling that's specific to that version. You also need a lot less tooling as there are no longer any requirements for custom authentication plugins. +As the agent is running in the cluster, it can retrieve the cluster's version and correctly use tooling that's specific to that version. You also need a lot less tooling as there are no longer any requirements for custom authentication plugins. See the [agent tooling](#agent-tooling) section for more details. ## Requirements -The Kubernetes agent is supported on the following versions: -* Octopus Server **2024.2.6580** or newer -* Kubernetes **1.26** to **1.29** (inclusive) +The Kubernetes agent follows [semantic versioning](https://semver.org/), so a major agent version is locked to a Octopus Server version range. Updating to the latest major agent version requires updating to a supported Octopus Server. The supported versions for each agent major version are: + +| Kubernetes agent | Octopus Server | Kubernetes cluster | +| ---------------- | ------------------------ | -------------------- | +| 1.\*.\* | **2024.2.6580** or newer | **1.26** to **1.29** | Additionally, the Kubernetes agent only supports **Linux AMD64** and **Linux ARM64** Kubernetes nodes. @@ -44,7 +48,7 @@ To simplify this, there is an installation wizard in Octopus to generate the req :::div{.warning} Helm will use your current kubectl config, so make sure your kubectl config is pointing to the correct cluster before executing the following helm commands. You can see the current kubectl config by executing: -``` +```bash kubectl config view ``` ::: @@ -61,7 +65,7 @@ kubectl config view 1. Enter a unique display name for the target. This name is used to generate the Kubernetes namespace, as well as the Helm release name 2. Select at least one [environment](/docs/infrastructure/environments) for the target. -3. Select at least one [target tag](/docs/infrastructure/deployment-targets/#target-roles) for the target. +3. Select at least one [target tag](/docs/infrastructure/deployment-targets/target-tags) for the target. 4. Optionally, add the name of an existing [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for the agent to use. The storage class must support the ReadWriteMany [access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). If no storage class name is added, the default Network File System (NFS) storage will be used. @@ -83,6 +87,13 @@ A requirement of using the NFS pod is the installation of the [NFS CSI Driver](h ![Kubernetes Agent Wizard NFS CSI Page](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-nfs.png) ::: +:::div{.warning} +If you receive an error with the text `failed to download` or `no cached repo found` when attempting to install the NFS CSI driver via helm, try executing the following command and then retrying the install command: +```bash +helm repo update +``` +::: + ### Installation helm command At the end of the wizard, Octopus generates a Helm command that you copy and paste into a terminal connected to the target cluster. 
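+
+The exact command is generated for your instance and Space, and it embeds authentication details such as the bearer token, so copy it from the wizard rather than writing it by hand. The following is a rough sketch only, using illustrative bracketed placeholders rather than real generated values; it has the same overall shape as the full command shown in the Tenants section below:
+
+```bash
+helm upgrade --install --atomic \
+--set agent.acceptEula="Y" \
+--set agent.targetName="[TARGET_NAME]" \
+--set agent.serverUrl="[YOUR_OCTOPUS_URL]" \
+--set agent.serverCommsAddress="[YOUR_POLLING_URL]" \
+--set agent.space="[SPACE_NAME]" \
+--set agent.targetEnvironments="{[ENVIRONMENT]}" \
+--set agent.targetRoles="{[TARGET_TAG]}" \
+--set agent.bearerToken="[GENERATED_TOKEN]" \
+--version "1.*.*" \
+--create-namespace --namespace [NAMESPACE] \
+[RELEASE_NAME] \
+oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
+```
+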
After it's executed, Helm installs all the required resources and starts the agent.
@@ -109,36 +120,137 @@ If left open, the installation dialog waits for the agent to establish a connect
 A successful health check indicates that deployments can successfully be executed.
 :::
 
-## Uninstalling the Kubernetes agent
+## Configuring the agent with Tenants
 
-To fully remove the Kubernetes agent, you need to delete the agent from the Kubernetes cluster as well as delete the deployment target from Octopus Deploy
+While the wizard doesn't support selecting Tenants or Tenant tags, the agent can be configured for tenanted deployments in two ways:
 
-The deployment target deletion confirmation dialog will provide you with the commands to delete the agent from the cluster.Once these have been successfully executed, you can then click **Delete** and delete the deployment target.
+1. Use the Deployment Target settings UI at **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Settings** to add a Tenant and set the Tenanted Deployment Participation as required. This is done after the agent has successfully installed and registered.
 
 :::figure
-![Kubernetes Agent delete dialog](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-delete-dialog.png)
+![Kubernetes Agent Settings page showing Tenants](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-settings-page-tenants.png)
 :::
 
-## Troubleshooting
+2. Set additional variables in the helm command to allow the agent to register itself with the associated Tenants or Tenant tags. You also need to provide a value for `TenantedDeploymentParticipation`. Possible values are `Untenanted` (default), `Tenanted`, and `TenantedOrUntenanted`.
 
-### Helm command fails with context deadline exceeded
+For example, to add these values:
+```bash
+--set agent.tenants="{,}" \
+--set agent.tenantTags="{,}" \
+--set agent.tenantedDeploymentParticipation="TenantedOrUntenanted" \
+```
 
-The generated helm commands use the [`--atomic`](https://helm.sh/docs/helm/helm_upgrade/#options) flag, which automatically rollbacks the changes if it fails to execute within a specified timeout (default 5 min).
+:::div{.hint}
+You don't need to provide both Tenants and Tenant tags, but you do need to provide the tenanted deployment participation value.
+:::
 
-If the helm command fails, then it may print an error message containing context deadline exceeded
-This indicates that the timeout was exceeded and the Kubernetes resources did not correctly start.
+In a full command:
+```bash
+helm upgrade --install --atomic \
+--set agent.acceptEula="Y" \
+--set agent.targetName="" \
+--set agent.serverUrl="" \
+--set agent.serverCommsAddress="" \
+--set agent.space="Default" \
+--set agent.targetEnvironments="{,}" \
+--set agent.targetRoles="{,}" \
+--set agent.tenants="{,}" \
+--set agent.tenantTags="{,}" \
+--set agent.tenantedDeploymentParticipation="TenantedOrUntenanted" \
+--set agent.bearerToken="" \
+--version "1.*.*" \
+--create-namespace --namespace \
+ \
+oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
+```
 
-To help diagnose these issues, the `kubectl` command [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/) can be used _while the helm command is executing_ to help debug any issues.
+## Trusting custom/internal Octopus Server certificates -#### NFS install command +:::div{.hint} +Server certificate support was added in Kubernetes agent 1.7.0 +::: + +It is common for organizations to have their Octopus Deploy server hosted in an environment where it has an SSL/TLS certificate that is not part of the global certificate trust chain. As a result, the Kubernetes agent will fail to register with the target server due to certificate errors. A typical error looks like this: ``` -kubectl describe pods -l app.kubernetes.io/name=csi-driver-nfs -n kube-system +2024-06-21 04:12:01.4189 | ERROR | The following certificate errors were encountered when establishing the HTTPS connection to the server: RemoteCertificateNameMismatch, RemoteCertificateChainErrors +Certificate subject name: CN=octopus.corp.domain +Certificate thumbprint: 42983C1D517D597B74CDF23F054BBC106F4BB32F ``` -#### Agent install command +To resolve this, you need to provide the Kubernetes agent with a base64-encoded string of the public key of the certificate in either `.pem` or `.crt` format. When viewed as text, this will look similar to this: ``` -kubectl describe pods -l app.kubernetes.io/name=octopus-agent -n [NAMESPACE] +-----BEGIN CERTIFICATE----- +MII... +-----END CERTIFICATE----- ``` -_Replace `[NAMESPACE]` with the namespace in the agent installation command_ + +Once encoded, this string can be provided as part of the agent installation helm command via the `agent.serverCertificate` helm value. + +To include this in the installation command, add the following to the generated installation command: + +```bash +--set agent.serverCertificate="" +``` + +## Agent tooling + +By default, the agent will look for a [container image](/docs/projects/steps/execution-containers-for-workers) for the workload it's executing against the cluster. If one isn't specified, Octopus will execute the Kubernetes workload using the `octopusdeploy/kubernetes-agent-tools-base` container. It will correctly select the version of the image that's specific to the cluster's version. + +This image contains the minimum required tooling to run Kubernetes workloads for Octopus Deploy, namely: + +- `kubectl` +- `helm` +- `powershell` + +## Upgrading the Kubernetes agent + +The Kubernetes agent can be upgraded automatically by Octopus Server, manually in the Octopus portal or via a `helm` command. + +### Automatic updates + +:::div{.hint} +Automatic updating was added in 2024.2.8584 +::: + +By default, the Kubernetes agent is automatically updated by Octopus Server when a new version is released. These version checks typically occur after a health check. When an update is required, Octopus will start a task to update the agent to the latest version. + +This behavior is controlled by the [Machine Policy](/docs/infrastructure/deployment-targets/machine-policies) associated with the agent. You can change this behavior to **Manually** in the [Machine policy settings](/docs/infrastructure/deployment-targets/machine-policies#configure-machine-updates). + +### Manual updating via Octopus portal + +To check if a Kubernetes agent can be manually upgraded, navigate to the **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Connectivity** page. If the agent can be upgraded, there will be an *Upgrade available* banner. Clicking **Upgrade to latest** button will trigger the upgrade via a new task. If the upgrade fails, the previous version of the agent is restored. 
+ +:::figure +![Kubernetes Agent updated interface](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-upgrade-portal.png) +::: + +### Helm upgrade command + +To upgrade a Kubernetes agent via `helm`, note the following fields from the **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Connectivity** page: +* Helm Release Name +* Namespace + +Then, from a terminal connected to the cluster containing the instance, execute the following command: + +```bash +helm upgrade --atomic --namespace NAMESPACE HELM_RELEASE_NAME oci://registry-1.docker.io/octopusdeploy/kubernetes-agent +``` +__Replace NAMESPACE and HELM_RELEASE_NAME with the values noted__ + +If after the upgrade command has executed, you find that there is issues with the agent, you can rollback to the previous helm release by executing: + +```bash +helm rollback --namespace NAMESPACE HELM_RELEASE_NAME +``` + + +## Uninstalling the Kubernetes agent + +To fully remove the Kubernetes agent, you need to delete the agent from the Kubernetes cluster as well as delete the deployment target from Octopus Deploy + +The deployment target deletion confirmation dialog will provide you with the commands to delete the agent from the cluster.Once these have been successfully executed, you can then click **Delete** and delete the deployment target. + +:::figure +![Kubernetes Agent delete dialog](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-delete-dialog.png) +::: diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/permissions/index.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/permissions.md similarity index 99% rename from src/pages/docs/kubernetes/targets/kubernetes-agent/permissions/index.md rename to src/pages/docs/kubernetes/targets/kubernetes-agent/permissions.md index 6b967fe2bd..5a31e5511a 100644 --- a/src/pages/docs/kubernetes/targets/kubernetes-agent/permissions/index.md +++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/permissions.md @@ -4,7 +4,7 @@ pubDate: 2024-04-29 modDate: 2024-04-29 title: Permissions description: Information about what permissions are required and how to adjust them -navOrder: 10 +navOrder: 20 --- The Kubernetes agent uses service accounts to manage access to cluster objects. @@ -45,7 +45,7 @@ The service account for script pods can be customized in a few ways:
**command:** -```Bash +```bash helm upgrade --install --atomic \ --set scriptPods.serviceAccount.targetNamespaces="{development,preproduction}" \ --set agent.acceptEula="Y" \ @@ -104,7 +104,7 @@ agent:
**command:** -```Bash +```bash helm upgrade --install --atomic \ --values values.yaml \ --version "1.*.*" \ diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/storage/index.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/storage.md similarity index 59% rename from src/pages/docs/kubernetes/targets/kubernetes-agent/storage/index.md rename to src/pages/docs/kubernetes/targets/kubernetes-agent/storage.md index 45d209eb55..a33fbaa97c 100644 --- a/src/pages/docs/kubernetes/targets/kubernetes-agent/storage/index.md +++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/storage.md @@ -4,14 +4,14 @@ pubDate: 2024-04-29 modDate: 2024-04-29 title: Storage description: How to configure storage for a Kubernetes agent -navOrder: 10 +navOrder: 30 --- During a deployment, Octopus Server first sends any required scripts and packages to [Tentacle](https://octopus.com/docs/infrastructure/deployment-targets/tentacle) which writes them to the file system. The actual script execution then takes place in a different process called [Calamari](https://github.com/OctopusDeploy/Calamari), which retrieves the scripts and packages directly from the file system. On a Kubernetes agent, scripts are executed in separate Kubernetes pods (script pod) as opposed to in a local shell (Bash/Powershell). This means the Tentacle pod and script pods don’t automatically share a common file system. -Since the Kubernetes agent is built on the Tentacle codebase, it is necessary to configure shared storage so that the Tentacle Pod can write the files in a place that the Script Pods can read from. +Since the Kubernetes agent is built on the Tentacle codebase, it is necessary to configure shared storage so that the Tentacle Pod can write the files in a place that the script pods can read from. We offer two options for configuring the shared storage - you can use either the default NFS storage or specify a Storage Class name during setup: @@ -31,16 +31,20 @@ This default implementation is made to let you try the Kubernetes agent without ### Privileges The NFS server requires `privileged` access when running as a container, which may not be permitted depending on the cluster configuration. Access to the NFS pod should be kept to a minimum since it enables access to the host. +:::div{.warning} +Red Hat OpenShift does not enable `privileged` access by default. When enabled, we have also encountered inconsistent file access issues using the NFS storage. We highly recommend the use of a [custom storage class](#custom-storage-class) when using Red Hat OpenShift. +::: + ### Reliability Since the NFS server runs inside your Kubernetes cluster, upgrades and other cluster operations can cause the NFS server to restart. Due to how NFS stores and allows access to shared data, script pods will not be able to recover cleanly from an NFS server restart. This causes running deployments to fail when the NFS server is restarted. If you have a use case that can’t tolerate occasional deployment failures, it’s recommended to provide your own `StorageClass` instead of using the default NFS implementation. -## Custom StorageClass +## Custom StorageClass \{#custom-storage-class} If you need a more reliable storage solution, then you can specify your own `StorageClass`. This `StorageClass` must be capable of `ReadWriteMany` (also known as `RWX`) access mode. -Many managed Kubernetes offerings will have offerings that require little effort to use. 
These will be a “provisioner” (named as such as they “provision” storage for a storage class), which you can then tie to a storage class. Some examples are listed below: +Many managed Kubernetes offerings will provide storage that require little effort to set up. These will be a “provisioner” (named as such as they “provision” storage for a `StorageClass`), which you can then tie to a `StorageClass`. Some examples are listed below: |**Offering** |**Provisioner** |**Default StorageClass name** | |----------------------------------|-----------------------------------|------------------------------------| @@ -52,3 +56,39 @@ If you manage your own cluster and don’t have offerings from cloud providers a - [Longhorn](https://longhorn.io/) - [Rook (CephFS)](https://rook.io/) - [GlusterFS](https://www.gluster.org/) + +## Migrating from NFS storage to a custom StorageClass + +If you installed the Kubernetes agent using the default NFS storage, and want to change to a custom `StorageClass` instead, simply rerun the installation Helm command with specified values for `persistence.storageClassName`. + +The following steps assume your Kubernetes agent is in the `octopus-agent-nfs-to-pv` namespace: + +### Step 1: Find your Helm release {#KubernetesAgentStorage-Step1-FindYourHelmRelease} + +Take note of the current Helm release name and Chart version for your Kubernetes agent by running the following command: +```bash +helm list --namespace octopus-agent-nfs-to-pv +``` + +The output should look like this: +:::figure +![Helm list command](/docs/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-helm-list.png) +::: + +In this example, the release name is `nfs-to-pv` while the chart version is `1.0.1`. + +### Step 2: Change Persistence {#KubernetesAgentStorage-Step2-ChangePersistence} + +Run the following command (substitute the placeholders with your own values): +```bash +helm upgrade --reuse-values --atomic --set persistence.storageClassName="" --namespace --version "" oci://registry-1.docker.io/octopusdeploy/kubernetes-agent` +``` + +Here is an example to convert the `nfs-to-pv` Helm release in the `octopus-agent-nfs-to-pv` namespace to use the `octopus-agent-nfs-migration` `StorageClass`: +```bash +helm upgrade --reuse-values --atomic --set persistence.storageClassName="octopus-agent-nfs-migration" --namespace octopus-agent-nfs-to-pv --version "1.0.1" nfs-to-pv oci://registry-1.docker.io/octopusdeploy/kubernetes-agent` +``` + +:::div{.warning} +If you are using an existing `PersistentVolume` via its `StorageClassName`, then you must set the `persistence.size` value in the Helm command to match the capacity of the `PersistentVolume` for the `PersistentVolume` to bind. +::: diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/troubleshooting.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/troubleshooting.md new file mode 100644 index 0000000000..bc8d20ee2a --- /dev/null +++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/troubleshooting.md @@ -0,0 +1,64 @@ +--- +layout: src/layouts/Default.astro +pubDate: 2024-05-08 +modDate: 2024-05-30 +title: Troubleshooting +description: How to troubleshoot common Kubernetes Agent issues +navOrder: 40 +--- + +This page will help you diagnose and solve issues with the Kubernetes agent. 
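+
+Before working through the specific scenarios below, it can help to capture the overall state of the agent's Helm release and pods. A minimal sketch (replace `[NAMESPACE]` and `[RELEASE_NAME]` with the values used by your install command):
+
+```bash
+# Show the installed chart version and release status
+helm list --namespace [NAMESPACE]
+helm status [RELEASE_NAME] --namespace [NAMESPACE]
+
+# Confirm the agent pods are running
+kubectl get pods --namespace [NAMESPACE]
+```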
+
+## Installation Issues
+
+### Helm command fails with `context deadline exceeded`
+
+The generated helm commands use the [`--atomic`](https://helm.sh/docs/helm/helm_upgrade/#options) flag, which automatically rolls back the changes if the command fails to complete within a specified timeout (default 5 min).
+
+If the helm command fails, it may print an error message containing `context deadline exceeded`. This indicates that the timeout was exceeded and the Kubernetes resources did not start correctly.
+
+To help diagnose these issues, the `kubectl` commands [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/) and [`logs`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/) can be used _while the helm command is executing_ to help debug any issues.
+
+#### NFS CSI driver install command
+
+```bash
+kubectl describe pods -l app.kubernetes.io/name=csi-driver-nfs -n kube-system
+```
+
+#### Agent install command
+
+```bash
+# To get pod information
+kubectl describe pods -l app.kubernetes.io/name=octopus-agent -n [NAMESPACE]
+# To get pod logs
+kubectl logs -l app.kubernetes.io/name=octopus-agent -n [NAMESPACE]
+```
+_Replace `[NAMESPACE]` with the namespace in the agent installation command_
+
+If the agent install command fails with a timeout error, it could be that:
+
+- There is an error in the connection information provided
+- The bearer token or API key has expired or has been revoked
+- The agent is unable to connect to Octopus Server due to a networking issue
+- (if using the NFS storage solution) The NFS CSI driver has not been installed
+- (if using a custom Storage Class) The Storage Class name doesn't match an existing Storage Class in the cluster
+
+## Script Execution Issues
+
+### `Unexpected Script Pod log line number, expected: expected-line-no, actual: actual-line-no`
+
+This error indicates that the logs from the script pods are incomplete or malformed.
+
+When scripts are executed, any outputs or logs are stored in the script pod's container logs. The Tentacle pod then reads from the container logs to feed back to Octopus Server.
+
+There's a limit to the size of logs kept before they are [rotated](https://kubernetes.io/docs/concepts/cluster-administration/logging/#log-rotation) out. If a particular log line is rotated before Octopus Server reads it, then log lines are missing, so we fail the deployment to prevent unexpected changes from being hidden.
+
+### `The Script Pod 'octopus-script-xyz' could not be found`
+
+This error indicates that the script pods were deleted unexpectedly, typically because they were evicted or terminated by Kubernetes.
+
+If you are using the default NFS storage, the script pod will also be deleted if the NFS server pod is restarted.
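+
+To confirm what happened to a missing script pod, recent events in the agent namespace are usually the quickest clue. A minimal sketch (replace `[NAMESPACE]` with the namespace from your install command):
+
+```bash
+# Recent events often show evictions, restarts, or node pressure affecting script pods
+kubectl get events --namespace [NAMESPACE] --sort-by=.lastTimestamp
+
+# Evicted script pods that haven't been cleaned up yet show up as Failed
+kubectl get pods --namespace [NAMESPACE] --field-selector=status.phase=Failed
+```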
Some possible causes are: + +- being evicted due to exceeding its storage quota +- being moved or restarted as part of routine cluster operation diff --git a/src/pages/docs/kubernetes/targets/kubernetes-api/index.md b/src/pages/docs/kubernetes/targets/kubernetes-api/index.md index aa0bcff16c..d8feef3c71 100644 --- a/src/pages/docs/kubernetes/targets/kubernetes-api/index.md +++ b/src/pages/docs/kubernetes/targets/kubernetes-api/index.md @@ -1,7 +1,7 @@ --- layout: src/layouts/Default.astro pubDate: 2023-01-01 -modDate: 2024-03-04 +modDate: 2024-06-27 title: Kubernetes API description: How to configure a Kubernetes cluster as a deployment target in Octopus navOrder: 20 @@ -28,8 +28,8 @@ From **Octopus 2022.3**, you can configure the well-known variables used to disc To discover targets use the following steps: - Add an Azure account variable named **Octopus.Azure.Account** or the appropriate AWS authentication variables ([more info here](/docs/infrastructure/deployment-targets/cloud-target-discovery/#aws)) to your project. -- [Add tags](/docs/infrastructure/deployment-targets/cloud-target-discovery/#tag-cloud-resources) to your cluster so that Octopus can match it to your deployment step and environment. -- Add any of the Kubernetes built-in steps to your deployment process. During deployment, the target role on the step will be used along with the environment being deployed to, to discover Kubernetes targets to deploy to. +- [Add cloud resource template tags](/docs/infrastructure/deployment-targets/cloud-target-discovery/#tag-cloud-resources) to your cluster so that Octopus can match it to your deployment step and environment. +- Add any of the Kubernetes built-in steps to your deployment process. During deployment, the target tag on the step will be used along with the environment being deployed to, to discover Kubernetes targets to deploy to. Kubernetes targets discovered will not have a namespace set, the namespace on the step will be used during deployment (or the default namespace in the cluster if no namespace is set on the step). @@ -77,7 +77,7 @@ users: 2. Select **KUBERNETES** and click **ADD** on the Kubernetes API card. 3. Enter a display name for the Kubernetes API target. 4. Select at least one [environment](/docs/infrastructure/environments) for the target. -5. Select at least one [target role](/docs/infrastructure/deployment-targets/#target-roles) for the target. +5. Select at least one [target tag](/docs/infrastructure/deployment-targets/target-tags) for the target. 6. Select the authentication method. Kubernetes targets support multiple [account types](https://oc.to/KubernetesAuthentication): - **Usernames/Password**: In the example YAML above, the user name is found in the `username` field, and the password is found in the `password` field. These values can be added as an Octopus [Username and Password](/docs/infrastructure/accounts/username-and-password) account. - **Tokens**: In the example YAML above, the token is defined in the `token` field. This value can be added as an Octopus [Token](/docs/infrastructure/accounts/tokens) account. @@ -178,7 +178,7 @@ To make use of the Kubernetes steps, the Octopus Server or workers that will run 11. Click **SAVE**. :::div{.warning} -Setting the Worker Pool directly on the Deployment Target will override the Worker Pool defined in a Deployment Process. +Setting the Worker Pool in a Deployment Process will override the Worker Pool defined directly on the Deployment Target. 
::: ## Create service accounts @@ -270,6 +270,15 @@ kubectl get secret $(kubectl get serviceaccount jenkins-deployer -o jsonpath="{. The token can then be saved as a Token Octopus account, and assigned to the Kubernetes target. +:::div{.warning} +Kubernetes versions 1.24+ no longer automatically create tokens for service accounts and they need to be manually created using the **create token** command: +```bash +kubectl create token jenkins-deployer +``` + +From Kubernetes version 1.29, a warning will be displayed when using automatically created Tokens. Make sure to rotate any Octopus Token Accounts to use manually created tokens via **create token** instead. +::: + ## Kubectl Kubernetes targets use the `kubectl` executable to communicate with the Kubernetes cluster. This executable must be available on the path on the target where the step is run. When using workers, this means the `kubectl` executable must be in the path on the worker that is executing the step. Otherwise the `kubectl` executable must be in the path on the Octopus Server itself. @@ -378,7 +387,7 @@ exit 0 ### API calls failing -If you are finding that certain API calls are failing, for example `https://your.octopus.app/api/users/Users-1/apikeys?take=2147483647`, it's possible that your WAF is blocking the traffic. To confirm this you should investigate your WAF logs to determine why the API call is being blocked and make the necessary adjustments to your WAF rules. +If you are finding that certain API calls are failing, for example `https://your-octopus-url/api/users/Users-1/apikeys?take=2147483647`, it's possible that your WAF is blocking the traffic. To confirm this you should investigate your WAF logs to determine why the API call is being blocked and make the necessary adjustments to your WAF rules. ## Learn more