diff --git a/src/content/getting-started/access-to-platform-api/_index.md b/src/content/getting-started/access-to-platform-api/_index.md index b59688caf7..3e16d03b2f 100644 --- a/src/content/getting-started/access-to-platform-api/_index.md +++ b/src/content/getting-started/access-to-platform-api/_index.md @@ -21,10 +21,10 @@ You can have multiple management clusters, for example, if different cloud provi Usually, to interact with the platform API, you have three options: 1. Use GitOps flavour using Flux -2. Use the `kubectl` command-line tool with our custom plugin -3. Use the [Giant Swarm Web UI](https://docs.giantswarm.io/ui-api/) +2. Use the `kubectl` command-line tool with our [custom plugin]({{< relref "/reference/kubectl-gs" >}}) +3. Use the [Giant Swarm Web UI]({{< relref "/vintage/platform-overview/web-interface/overview" >}}) -This guide focuses on the second option, using the `kubectl` command-line tool. However, you can find more information about the other options in the [tutorials](https://docs.giantswarm.io/tutorials/). +This guide focuses on the second option, using the `kubectl` command-line tool. However, you can find more information about the other options in the [tutorials]({{< relref "/tutorials" >}}). ## Requirements diff --git a/src/content/overview/architecture/authentication/_index.md b/src/content/overview/architecture/authentication/_index.md index 655d8a741a..3e01256331 100644 --- a/src/content/overview/architecture/authentication/_index.md +++ b/src/content/overview/architecture/authentication/_index.md @@ -31,7 +31,7 @@ We utilize Kubernetes-native RBAC to control access to resources in the platform ### Authentication: Workload Cluster -For the workload cluster - where you run your applications - we don't enforce any specific OpenID Connect (OIDC) tool to enable single sign-on (SSO). 
However, if you wish to implement SSO for accessing your workload cluster, we provide a detailed guide on how to configure Dex for this purpose, you can follow our comprehensive guide: [Configure OIDC using Dex to access your clusters](https://docs.giantswarm.io/vintage/advanced/access-management/configure-dex-in-your-cluster/).
+For the workload cluster - where you run your applications - we don't enforce any specific OpenID Connect (OIDC) tool to enable single sign-on (SSO). However, if you wish to implement SSO for accessing your workload cluster, you can follow our comprehensive guide on configuring Dex for this purpose: [Configure OIDC using Dex to access your clusters]({{< relref "/vintage/advanced/access-management/configure-dex-in-your-cluster/" >}}).

### Authorization: Workload Cluster

diff --git a/src/content/tutorials/connectivity/ingress/configuration/index.md b/src/content/tutorials/connectivity/ingress/configuration/index.md
index 77a9a34d85..825a7766c1 100644
--- a/src/content/tutorials/connectivity/ingress/configuration/index.md
+++ b/src/content/tutorials/connectivity/ingress/configuration/index.md
@@ -340,7 +340,7 @@ Ingress nginx controller allows you to define the timeout that waits to close a

Many other timeouts can be customized when configuring an ingress. Take a look at the [official docs](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-timeouts).

-__Warning__: When running in cloud provider environments, you may often rely on integrated services like AWS NLBs or Azure LBs. Those intermediate Load Balancers could have their own settings which can be in the request path conflicting with values defined in ingress Resources. [Read how to configure ingress nginx controller in cloud environments to avoid unexpected results]({{< relref "/vintage/advanced/connectivity/ingress/service-type-loadbalancer/index.md" >}}#other-aws-elb-configuration-options). 
+__Warning__: When running in cloud provider environments, you may often rely on integrated services like AWS NLBs or Azure LBs. Those intermediate Load Balancers sit in the request path and could have their own settings that conflict with values defined in ingress Resources. [Read how to configure ingress nginx controller in cloud environments to avoid unexpected results]({{< relref "/vintage/advanced/connectivity/ingress/service-type-loadbalancer#other-aws-elb-configuration-options" >}}).

### Session affinity

diff --git a/src/content/tutorials/continuous-deployment/apps/add_app_template.md b/src/content/tutorials/continuous-deployment/apps/add_app_template.md
index 2296b96e93..650f66556e 100644
--- a/src/content/tutorials/continuous-deployment/apps/add_app_template.md
+++ b/src/content/tutorials/continuous-deployment/apps/add_app_template.md
@@ -45,7 +45,7 @@ cd bases/apps/

mkdir ${APP_NAME}
```

-Now, navigate to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [`App` resource](https://docs.giantswarm.io/ui-api/kubectl-gs/template-app/):
+Now, navigate to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [`App` resource]({{< relref "/reference/kubectl-gs/template-app" >}}):

```sh
cd ${APP_NAME}/
diff --git a/src/content/tutorials/continuous-deployment/apps/add_appcr.md b/src/content/tutorials/continuous-deployment/apps/add_appcr.md
index 297af42ef2..5c35e45ad9 100644
--- a/src/content/tutorials/continuous-deployment/apps/add_appcr.md
+++ b/src/content/tutorials/continuous-deployment/apps/add_appcr.md
@@ -62,7 +62,7 @@ export APP_CATALOG=APP_CATALOG
export APP_NAMESPACE=APP_NAMESPACE
```

-Go to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [`App` resource](https://docs.giantswarm.io/ui-api/kubectl-gs/template-app/):
+Go to the newly created directory and 
use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [`App` resource]({{< relref "/reference/kubectl-gs/template-app/" >}}):

```sh
cd ${APP_NAME}/
@@ -84,7 +84,7 @@ Additionally you can provide a default configuration adding these flags to the p

__Note__: Including `${cluster_name}` in the app name avoids collision between clusters running same apps within the same organization.

-Reference [the app configuration](https://docs.giantswarm.io/app-platform/app-configuration/) for more details on how to create respective `ConfigMaps` or secrets.
+Reference [the app configuration]({{< relref "/tutorials/fleet-management/app-platform/app-configuration" >}}) for more details on how to create the respective `ConfigMaps` or `Secrets`.

As optional step, you can place the `ConfigMap` and `Secret` with values as the `configmap.yaml` and `secret.enc.yaml` files respectively:

diff --git a/src/content/tutorials/continuous-deployment/bases/index.md b/src/content/tutorials/continuous-deployment/bases/index.md
index 929a3b1b73..dc48324f1a 100644
--- a/src/content/tutorials/continuous-deployment/bases/index.md
+++ b/src/content/tutorials/continuous-deployment/bases/index.md
@@ -14,7 +14,7 @@ owner:
last_review_date: 2024-11-11
---

-In Giant Swarm the interface to define a workload cluster is built on top of `Helm` and [the app platform](https://docs.giantswarm.io/overview/fleet-management/app-management/). The application custom resource contains the specification and configuration of the cluster in this format:
+In Giant Swarm, the interface to define a workload cluster is built on top of `Helm` and [the app platform]({{< relref "/overview/fleet-management/app-management/" >}}). 
The application custom resource contains the specification and configuration of the cluster in this format:

```yaml
apiVersion: application.giantswarm.io/v1alpha1
diff --git a/src/content/tutorials/continuous-deployment/manage-workload-clusters/index.md b/src/content/tutorials/continuous-deployment/manage-workload-clusters/index.md
index 56ea32f595..456f48186c 100644
--- a/src/content/tutorials/continuous-deployment/manage-workload-clusters/index.md
+++ b/src/content/tutorials/continuous-deployment/manage-workload-clusters/index.md
@@ -134,7 +134,7 @@ flux-demo True Fetched revision: main/74f8d19cc2ac9bee6f45660236344a054

### Setting up secrets {#setting-up-secrets}

-The next step, you configure the keys used by `Flux` in the management cluster to decipher secrets kept in the repository. Our recommendation is to keep secrets encrypted in the repository together with your applications but if your company policy doesn't allow it you can use [`external secret operator`](https://docs.giantswarm.io/vintage/advanced/security/external-secrets-operator/) to use different sources.
+In the next step, you configure the keys used by `Flux` in the management cluster to decipher secrets kept in the repository. Our recommendation is to keep secrets encrypted in the repository together with your applications, but if your company policy doesn't allow it, you can use the [`external secret operator`]({{< relref "/vintage/advanced/security/external-secrets-operator/" >}}) to use different sources.

Giant Swarm uses `sops` with `pgp` for key management, creating master keys for all the `kustomizations` in the management cluster. In `kubectl-gs` you can generate a master and public key for the management cluster. 
diff --git a/src/content/tutorials/fleet-management/cluster-management/migration-to-cluster-api/_index.md b/src/content/tutorials/fleet-management/cluster-management/migration-to-cluster-api/_index.md
index daf93a932e..cd385275be 100644
--- a/src/content/tutorials/fleet-management/cluster-management/migration-to-cluster-api/_index.md
+++ b/src/content/tutorials/fleet-management/cluster-management/migration-to-cluster-api/_index.md
@@ -22,7 +22,7 @@ This guide outlines the migration path from our AWS vintage platform to the [Clu

Before you begin the migration:

-1. Your cluster should be at least on a AWS vintage version [`20.0.0`](https://docs.giantswarm.io/changes/workload-cluster-releases-aws/releases/aws-v20.0.0/).
+1. Your cluster should be at least on AWS vintage version [`20.0.0`](/changes/workload-cluster-releases-aws/releases/aws-v20.0.0/).
2. The AWS IAM role, with the specific name `giantswarm-{CAPI_MC_NAME}-capa-controller`, must be created for the workload cluster's (WC) AWS account before starting the migration. [For more information please refer to this guide]({{< relref "/vintage/getting-started/cloud-provider-accounts/cluster-api/aws/" >}}).
3. In case of using GitOps, Flux must be turned off during the migration since some of the cluster custom resources will be modified or removed by the migration scripts.

diff --git a/src/content/vintage/advanced/connectivity/ingress/configuration/index.md b/src/content/vintage/advanced/connectivity/ingress/configuration/index.md
index e041a48882..6ed4adf537 100644
--- a/src/content/vintage/advanced/connectivity/ingress/configuration/index.md
+++ b/src/content/vintage/advanced/connectivity/ingress/configuration/index.md
@@ -336,7 +336,7 @@ Ingress NGINX Controller allows to define the timeout that waits to close a conn

There are many other timeouts that can be customized when configuring an Ingress. 
Take a look at the [official docs](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-timeouts).

-**Warning**: When running in cloud provider environments you may often rely on integrated services like AWS NLBs or Azure LBs. Those intermediate Load Balancers could have their own settings which can be in the request path conflicting with values defined in Ingress Resources. [Read how to configure Ingress NGINX Controller in cloud environments to avoid unexpected results]({{< relref "/vintage/advanced/connectivity/ingress/service-type-loadbalancer/index.md" >}}#other-aws-elb-configuration-options).
+**Warning**: When running in cloud provider environments you may often rely on integrated services like AWS NLBs or Azure LBs. Those intermediate Load Balancers sit in the request path and could have their own settings that conflict with values defined in Ingress Resources. [Read how to configure Ingress NGINX Controller in cloud environments to avoid unexpected results]({{< relref "/vintage/advanced/connectivity/ingress/service-type-loadbalancer#other-aws-elb-configuration-options" >}}).

### Session Affinity

diff --git a/src/content/vintage/advanced/gitops/apps/add_app_template.md b/src/content/vintage/advanced/gitops/apps/add_app_template.md
index 6b4a5c8c4e..f570c518ca 100644
--- a/src/content/vintage/advanced/gitops/apps/add_app_template.md
+++ b/src/content/vintage/advanced/gitops/apps/add_app_template.md
@@ -49,7 +49,7 @@ export APP_USER_VALUES=CONFIGMAP_OR_SECRET_PATH

mkdir ${APP_NAME}
```

-2. Navigate to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [App CR](https://docs.giantswarm.io/ui-api/kubectl-gs/template-app/):
+2. 
Navigate to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [App CR]({{< relref "/reference/kubectl-gs/template-app/" >}}): ```nohighlight cd ${APP_NAME}/ @@ -71,7 +71,7 @@ export APP_USER_VALUES=CONFIGMAP_OR_SECRET_PATH __Note__: We're including `${cluster_name}` in the app name to avoid a problem when two or more clusters in the same organization want to deploy the same app with its default name. - Reference [the App Configuration](https://docs.giantswarm.io/app-platform/app-configuration/) for more details about how to properly create the respective ConfigMaps or Secrets. + Reference [the App Configuration]({{< relref "/tutorials/fleet-management/app-platform/app-configuration" >}}) for more details about how to properly create the respective ConfigMaps or Secrets. In case you used `kubectl gs` command you realized the output is an App Custom Resource plus the ConfigMap. In case you want to manage the values in plain YAML, you could rely on the ConfigMap generator feature of [Kustomize](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#generating-resources). diff --git a/src/content/vintage/advanced/gitops/apps/add_appcr.md b/src/content/vintage/advanced/gitops/apps/add_appcr.md index 7651de4a18..2ab4f2a175 100644 --- a/src/content/vintage/advanced/gitops/apps/add_appcr.md +++ b/src/content/vintage/advanced/gitops/apps/add_appcr.md @@ -71,7 +71,7 @@ export APP_NAME="${WC_NAME}-APP_NAME" export APP_USER_VALUES=CONFIGMAP_OR_SECRET_PATH ``` -2. Go to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [App CR](https://docs.giantswarm.io/ui-api/kubectl-gs/template-app/): +2. 
Go to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [App CR]({{< relref "/reference/kubectl-gs/template-app/" >}}): ```nohighlight cd ${APP_NAME}/ @@ -93,7 +93,7 @@ export APP_NAME="${WC_NAME}-APP_NAME" __Note__: We're including `${cluster_name}` in the app name to avoid a problem when two or more clusters in the same organization want to deploy the same app with its default name. - Reference [the App Configuration](https://docs.giantswarm.io/app-platform/app-configuration/) for more details on how to properly create respective ConfigMaps or Secrets. + Reference [the App Configuration]({{< relref "/tutorials/fleet-management/app-platform/app-configuration/" >}}) for more details on how to properly create respective ConfigMaps or Secrets. 3. (optional - if adding configuration) Place ConfigMap and Secrets with values as the `configmap.yaml` and `secret.enc.yaml` files respectively: @@ -166,7 +166,7 @@ export APP_NAME="${WC_NAME}-APP_NAME" Please note, that the block marked "configuration override block" is needed only if you override the default config and/or the secret config (from the Template). In case you don't override any, skip both parts in `kustomization.yaml` and also the next three configuration points below. -1. (optional - if you override either config or secret) Create a patch configuration file, that will enhance your App Template with a `userConfig` attribute (refer to [the App Configuration](https://docs.giantswarm.io/app-platform/app-configuration/) for more details about how `config` and `userConfig` properties of App CR are used). +1. 
(optional - if you override either config or secret) Create a patch configuration file that will enhance your App Template with a `userConfig` attribute (refer to [the App Configuration]({{< relref "/tutorials/fleet-management/app-platform/app-configuration" >}}) for more details about how the `config` and `userConfig` properties of the App CR are used).

```nohighlight
cat < config_patch.yaml
diff --git a/src/content/vintage/advanced/gitops/bases/index.md b/src/content/vintage/advanced/gitops/bases/index.md
index c33647d07f..e9e01366b3 100644
--- a/src/content/vintage/advanced/gitops/bases/index.md
+++ b/src/content/vintage/advanced/gitops/bases/index.md
@@ -16,11 +16,11 @@ owner:
last_review_date: 2023-12-11
---

-Our CAPx (CAPI provider-specific clusters) are delivered by Giant Swarm as a set of two applications. The first one is an [App Custom Resource](https://docs.giantswarm.io/platform-overview/app-platform/)(CR) with a Cluster instance definition, while the second one is an App CR containing all the default applications needed for a cluster to run correctly. As such, creating a CAPx cluster means that you need to deliver two configured App CRs to the Management Cluster.
+Our CAPx (CAPI provider-specific clusters) are delivered by Giant Swarm as a set of two applications. The first one is an [App Custom Resource]({{< relref "/overview/fleet-management/app-management/" >}}) (CR) with a Cluster instance definition, while the second one is an App CR containing all the default applications needed for a cluster to run correctly. As such, creating a CAPx cluster means that you need to deliver two configured App CRs to the Management Cluster.

Adding definitions can be done on two levels: shared cluster template and version-specific template, see [create shared template base](#create-shared-template-base) and [create versioned base](#create-versioned-base-optional). 
-**IMPORTANT**, CAPx configuration utilizes the [App Platform Configuration Levels](/getting-started/app-platform/app-configuration/#levels), in the following manner:
+**IMPORTANT**: CAPx configuration utilizes the [App Platform Configuration Levels]({{< relref "/vintage/getting-started/app-platform/app-configuration#levels" >}}) in the following manner:

- cluster templates provide default configuration via App' `config` field,
- cluster instances provide custom configuration via App' `extraConfig` field, which is overlaid on top of `config`. The file set with higher priority will prevail in case of colliding config values.

diff --git a/src/content/vintage/advanced/observability/logging/disable/index.md b/src/content/vintage/advanced/observability/logging/disable/index.md
index 765a8f2a54..2fa7246dc2 100644
--- a/src/content/vintage/advanced/observability/logging/disable/index.md
+++ b/src/content/vintage/advanced/observability/logging/disable/index.md
@@ -24,9 +24,9 @@ In this article you will learn how you can disable managed logging for your clus

The managed logging stack allows Giant Swarm to provide 24/7 support based on your workload cluster logs. Currently, this component is pre-installed on workload clusters for the following implementations:

-- {{% impl_title "vintage_aws" %}}, release [v19.2.0](https://docs.giantswarm.io/changes/workload-cluster-releases-aws/releases/aws-v19.2.0/) or newer
+- {{% impl_title "vintage_aws" %}}, release [v19.2.0](/changes/workload-cluster-releases-aws/releases/aws-v19.2.0/) or newer

-Logs of components deployed in the `kube-system` and `giantswarm` namespaces, as well as Kubernetes and node audit logs are collected by managed `promtail` pods and sent to a Loki instance running in your management cluster. You can access its logs by [accessing to the managed Grafana](https://docs.giantswarm.io/getting-started/observability/visualization/access). 
+Logs of components deployed in the `kube-system` and `giantswarm` namespaces, as well as Kubernetes and node audit logs, are collected by managed `promtail` pods and sent to a Loki instance running in your management cluster. You can access these logs by [accessing the managed Grafana]({{< relref "/getting-started/observe-your-clusters-and-apps" >}}).

## Why would I like to disable logging?

diff --git a/src/content/vintage/advanced/security/external-secrets-operator/index.md b/src/content/vintage/advanced/security/external-secrets-operator/index.md
index 1928d55480..ac4cb10850 100644
--- a/src/content/vintage/advanced/security/external-secrets-operator/index.md
+++ b/src/content/vintage/advanced/security/external-secrets-operator/index.md
@@ -108,7 +108,7 @@ themselves and can find the application chart at

For more information on configuring apps within the Giant Swarm App Platform,
please follow the documentation at
-[https://docs.giantswarm.io/getting-started/app-platform/app-configuration/](https://docs.giantswarm.io/getting-started/app-platform/app-configuration/)
+[App management]({{< relref "/overview/fleet-management/app-management" >}})

### Combining ESO and SOPs

diff --git a/src/content/vintage/getting-started/create-workload-cluster/index.md b/src/content/vintage/getting-started/create-workload-cluster/index.md
index f3cb1c0b2b..2bab954d89 100644
--- a/src/content/vintage/getting-started/create-workload-cluster/index.md
+++ b/src/content/vintage/getting-started/create-workload-cluster/index.md
@@ -72,7 +72,7 @@ You can template a cluster ([command reference]({{< relref "/reference/kubectl-g

{{< tabs >}}
{{< tab id="cluster-vintage-aws" for-impl="vintage_aws">}}

-[Choose a release version here](https://docs.giantswarm.io/changes/workload-cluster-releases-for-aws), or use `kubectl gs get releases`, and fill it into this example command:
+[Choose a release version 
here](/changes/workload-cluster-releases-for-aws), or use `kubectl gs get releases`, and fill it into this example command: ```sh kubectl gs template cluster \ @@ -88,7 +88,7 @@ For backward compatibility, vintage cluster templating does not require the `--n {{< /tab >}} {{< tab id="cluster-capa-ec2" for-impl="capa_ec2">}} -[Choose a release version here](https://docs.giantswarm.io/changes/workload-cluster-releases-for-capa/), or use `kubectl gs get releases`, and fill it into this example command: +[Choose a release version here](/changes/workload-cluster-releases-for-capa/), or use `kubectl gs get releases`, and fill it into this example command: ```sh kubectl gs template cluster \ @@ -144,7 +144,7 @@ If no `aws-cluster-role-identity-name` is passed, then we assume a `AWSClusterRo {{< /tab >}} {{< tab id="cluster-capz-azure-vms" for-impl="capz_vms">}} -[Choose a release version here](https://docs.giantswarm.io/changes/workload-cluster-releases-for-azure/), or use `kubectl gs get releases`, and fill it into this example command: +[Choose a release version here](/changes/workload-cluster-releases-for-azure/), or use `kubectl gs get releases`, and fill it into this example command: ```sh kubectl gs template cluster \ diff --git a/src/content/vintage/getting-started/observability/monitoring/disable/index.md b/src/content/vintage/getting-started/observability/monitoring/disable/index.md index 712e83c1d1..88dbc00d82 100644 --- a/src/content/vintage/getting-started/observability/monitoring/disable/index.md +++ b/src/content/vintage/getting-started/observability/monitoring/disable/index.md @@ -25,7 +25,7 @@ In this article you will learn how you can disable monitoring for your cluster. Each cluster created on the Giant Swarm platform benefits from our monitoring which allow us to provide you with 24/7 support to ensure best quality of service. Each cluster is monitored by a dedicated Prometheus instance. 
-This comes by default and storage is reserved for data retention, storage size can be adjusted via [Prometheus Volume Sizing](https://docs.giantswarm.io/getting-started/observability/prometheus/volume-size/) feature.
+This comes by default, and storage is reserved for data retention; the storage size can be adjusted via the [Prometheus Volume Sizing]({{< relref "/vintage/getting-started/observability/monitoring/prometheus/volume-size" >}}) feature.

## Why would I like to disable monitoring?

diff --git a/src/content/vintage/platform-overview/cluster-management/vintage/aws/index.md b/src/content/vintage/platform-overview/cluster-management/vintage/aws/index.md
index 53d12f4969..6d55c53543 100644
--- a/src/content/vintage/platform-overview/cluster-management/vintage/aws/index.md
+++ b/src/content/vintage/platform-overview/cluster-management/vintage/aws/index.md
@@ -97,7 +97,7 @@ CNI used until AWS release 18.

#### Cilium CNI

-CNI used until AWS release [19](https://docs.giantswarm.io/advanced/cluster-management/upgrades/aws-19-release/).
+CNI used since AWS release [19]({{< relref "/vintage/advanced/cluster-management/upgrades/aws-19-release" >}}).

[Cilium CNI](https://docs.cilium.io/en/stable/overview/intro/) offers advanced [eBPF](https://ebpf.io/) networking without overlay.