Replace plain links with relref shortcode (#2368)
Co-authored-by: Fernando Ripoll <[email protected]>
marians and pipo02mix authored Dec 2, 2024
1 parent 6838edf commit 532bf08
Showing 17 changed files with 27 additions and 27 deletions.
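Every hunk below applies the same substitution: an absolute `https://docs.giantswarm.io/...` URL becomes either a Hugo `relref` shortcode, which resolves the target page at build time and fails the build on a dangling path, or (for generated changelog pages) a root-relative path. A representative before/after sketch — the path shown is illustrative, not one of the actual changes:

```markdown
<!-- before: absolute URL; breaks silently when the target page moves -->
[app configuration](https://docs.giantswarm.io/app-platform/app-configuration/)

<!-- after: relref resolves the link at build time and errors out if the page is gone -->
[app configuration]({{< relref "/tutorials/fleet-management/app-platform/app-configuration" >}})
```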
6 changes: 3 additions & 3 deletions src/content/getting-started/access-to-platform-api/_index.md
@@ -21,10 +21,10 @@ You can have multiple management clusters, for example, if different cloud provi

 Usually, to interact with the platform API, you have three options:

 1. Use the GitOps flavour with Flux
-2. Use the `kubectl` command-line tool with our custom plugin
-3. Use the [Giant Swarm Web UI](https://docs.giantswarm.io/ui-api/)
+2. Use the `kubectl` command-line tool with our [custom plugin]({{< relref "/reference/kubectl-gs" >}})
+3. Use the [Giant Swarm Web UI]({{< relref "/vintage/platform-overview/web-interface/overview" >}})

-This guide focuses on the second option, using the `kubectl` command-line tool. However, you can find more information about the other options in the [tutorials](https://docs.giantswarm.io/tutorials/).
+This guide focuses on the second option, using the `kubectl` command-line tool. However, you can find more information about the other options in the [tutorials]({{< relref "/tutorials" >}}).

 ## Requirements
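For orientation, the `custom plugin` target above is kubectl-gs, which also handles the login flow against the platform API. A minimal sketch, assuming installation via krew and a hypothetical management cluster endpoint:

```sh
# Install the kubectl-gs plugin and authenticate (endpoint is a placeholder).
kubectl krew install gs
kubectl gs login https://api.example.gigantic.io   # opens a browser window for SSO
kubectl config current-context                     # a new context (typically gs-*) is selected
```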
2 changes: 1 addition & 1 deletion src/content/overview/architecture/authentication/_index.md
@@ -31,7 +31,7 @@ We utilize Kubernetes-native RBAC to control access to resources in the platform

 ### Authentication: Workload Cluster

-For the workload cluster - where you run your applications - we don't enforce any specific OpenID Connect (OIDC) tool to enable single sign-on (SSO). However, if you wish to implement SSO for accessing your workload cluster, we provide a detailed guide on how to configure Dex for this purpose; you can follow our comprehensive guide: [Configure OIDC using Dex to access your clusters](https://docs.giantswarm.io/vintage/advanced/access-management/configure-dex-in-your-cluster/).
+For the workload cluster - where you run your applications - we don't enforce any specific OpenID Connect (OIDC) tool to enable single sign-on (SSO). However, if you wish to implement SSO for accessing your workload cluster, we provide a detailed guide on how to configure Dex for this purpose; you can follow our comprehensive guide: [Configure OIDC using Dex to access your clusters]({{< relref "/vintage/advanced/access-management/configure-dex-in-your-cluster/" >}}).

 ### Authorization: Workload Cluster
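The linked guide amounts to running Dex in the workload cluster with one or more identity-provider connectors. A hedged sketch of the connector piece, using upstream Dex's GitHub connector schema — how dex-app nests this in its values may differ, and all credentials and org names are placeholders:

```yaml
# Upstream Dex connector block (see dexidp.io); dex-app wraps this in its values.
connectors:
  - type: github
    id: github
    name: GitHub
    config:
      clientID: placeholder-client-id         # OAuth app client ID
      clientSecret: placeholder-client-secret # OAuth app client secret
      redirectURI: https://dex.example.org/callback
      orgs:
        - name: example-org                   # restrict sign-in to members of this org
```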
@@ -340,7 +340,7 @@ Ingress nginx controller allows you to define the timeout that waits to close a

 Many other timeouts can be customized when configuring an ingress. Take a look at the [official docs](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-timeouts).

-__Warning__: When running in cloud provider environments, you may often rely on integrated services like AWS NLBs or Azure LBs. Those intermediate Load Balancers could have their own settings which can be in the request path conflicting with values defined in ingress Resources. [Read how to configure ingress nginx controller in cloud environments to avoid unexpected results]({{< relref "/vintage/advanced/connectivity/ingress/service-type-loadbalancer/index.md" >}}#other-aws-elb-configuration-options).
+__Warning__: When running in cloud provider environments, you may often rely on integrated services like AWS NLBs or Azure LBs. Those intermediate Load Balancers could have their own settings which can be in the request path conflicting with values defined in ingress Resources. [Read how to configure ingress nginx controller in cloud environments to avoid unexpected results]({{< relref "/vintage/advanced/connectivity/ingress/service-type-loadbalancer#other-aws-elb-configuration-options" >}}).

 ### Session affinity
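The custom-timeouts annotations referenced above attach per-`Ingress` object; a minimal sketch (values are seconds; the annotation names come from the ingress-nginx reference linked above, everything else is placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
spec:
  rules:
    - host: example.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example
                port:
                  number: 80
```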
@@ -45,7 +45,7 @@ cd bases/apps/
 mkdir ${APP_NAME}
 ```

-Now, navigate to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [`App` resource](https://docs.giantswarm.io/ui-api/kubectl-gs/template-app/):
+Now, navigate to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [`App` resource]({{< relref "/reference/kubectl-gs/template-app" >}}):

 ```sh
 cd ${APP_NAME}/
 ...
 ```
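The `template-app` page the new link points at generates the `App` resource from command-line flags; a hedged sketch with placeholder values — check `kubectl gs template app --help` for the exact flag set of your plugin version:

```sh
kubectl gs template app \
  --catalog giantswarm \
  --name ingress-nginx \
  --namespace kube-system \
  --cluster-name demo01 \
  --version 3.0.0 > appcr.yaml
```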
4 changes: 2 additions & 2 deletions src/content/tutorials/continuous-deployment/apps/add_appcr.md
@@ -62,7 +62,7 @@ export APP_CATALOG=APP_CATALOG
 export APP_NAMESPACE=APP_NAMESPACE
 ```

-Go to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [`App` resource](https://docs.giantswarm.io/ui-api/kubectl-gs/template-app/):
+Go to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [`App` resource]({{< relref "/reference/kubectl-gs/template-app/" >}}):

 ```sh
 cd ${APP_NAME}/
 ...
 ```

@@ -84,7 +84,7 @@ Additionally you can provide a default configuration adding these flags to the p

 __Note__: Including `${cluster_name}` in the app name avoids collision between clusters running same apps within the same organization.

-Reference [the app configuration](https://docs.giantswarm.io/app-platform/app-configuration/) for more details on how to create respective `ConfigMaps` or secrets.
+Reference [the app configuration]({{< relref "/tutorials/fleet-management/app-platform/app-configuration" >}}) for more details on how to create respective `ConfigMaps` or secrets.

 As an optional step, you can place the `ConfigMap` and `Secret` with values as the `configmap.yaml` and `secret.enc.yaml` files respectively:
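The default-configuration flags mentioned just above (cut off in this view) map onto the exported variables; a hedged sketch — `APP_VERSION` and the two `--user-*` flags are assumptions based on recent kubectl-gs releases:

```sh
kubectl gs template app \
  --catalog ${APP_CATALOG} \
  --name ${APP_NAME} \
  --namespace ${APP_NAMESPACE} \
  --cluster-name ${cluster_name} \
  --version ${APP_VERSION} \
  --user-configmap configmap.yaml \
  --user-secret secret.enc.yaml > appcr.yaml
```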
2 changes: 1 addition & 1 deletion src/content/tutorials/continuous-deployment/bases/index.md
@@ -14,7 +14,7 @@ owner:
 last_review_date: 2024-11-11
 ---

-In Giant Swarm the interface to define a workload cluster is built on top of `Helm` and [the app platform](https://docs.giantswarm.io/overview/fleet-management/app-management/). The application custom resource contains the specification and configuration of the cluster in this format:
+In Giant Swarm the interface to define a workload cluster is built on top of `Helm` and [the app platform]({{< relref "/overview/fleet-management/app-management/" >}}). The application custom resource contains the specification and configuration of the cluster in this format:

 ```yaml
 apiVersion: application.giantswarm.io/v1alpha1
 ...
 ```
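The YAML above is cut off by the diff view; a minimal App CR of this shape typically continues along these lines (all names and the version are placeholders):

```yaml
apiVersion: application.giantswarm.io/v1alpha1
kind: App
metadata:
  name: demo01                 # placeholder cluster name
  namespace: org-example       # placeholder organization namespace
spec:
  catalog: cluster
  name: cluster-aws            # cluster chart for the target provider
  namespace: org-example
  version: 1.0.0               # placeholder
  kubeConfig:
    inCluster: true
  config:
    configMap:
      name: demo01-userconfig  # cluster specification and configuration values
      namespace: org-example
```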
@@ -134,7 +134,7 @@ flux-demo True Fetched revision: main/74f8d19cc2ac9bee6f45660236344a054

 ### Setting up secrets {#setting-up-secrets}

-In the next step, you configure the keys used by `Flux` in the management cluster to decipher secrets kept in the repository. Our recommendation is to keep secrets encrypted in the repository together with your applications but if your company policy doesn't allow it you can use [`external secret operator`](https://docs.giantswarm.io/vintage/advanced/security/external-secrets-operator/) to use different sources.
+In the next step, you configure the keys used by `Flux` in the management cluster to decipher secrets kept in the repository. Our recommendation is to keep secrets encrypted in the repository together with your applications but if your company policy doesn't allow it you can use [`external secret operator`]({{< relref "/vintage/advanced/security/external-secrets-operator/" >}}) to use different sources.
 Giant Swarm uses `sops` with `pgp` for key management, creating master keys for all the `kustomizations` in the management cluster. In `kubectl-gs` you can generate a master and public key for the management cluster.
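The mechanics behind this follow Flux's standard sops integration: the PGP private key is exported into a Secret that kustomize-controller reads for decryption. A hedged sketch — the fingerprint, namespace, and secret name are placeholders that must match your Flux setup:

```sh
# Make the sops master key available to Flux's kustomize-controller.
gpg --export-secret-keys --armor "${KEY_FINGERPRINT}" |
  kubectl create secret generic sops-gpg \
    --namespace flux-system \
    --from-file=sops.asc=/dev/stdin
```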
@@ -22,7 +22,7 @@ This guide outlines the migration path from our AWS vintage platform to the [Clu

 Before you begin the migration:

-1. Your cluster should be at least on an AWS vintage version [`20.0.0`](https://docs.giantswarm.io/changes/workload-cluster-releases-aws/releases/aws-v20.0.0/).
+1. Your cluster should be at least on an AWS vintage version [`20.0.0`](/changes/workload-cluster-releases-aws/releases/aws-v20.0.0/).
 2. The AWS IAM role, with the specific name `giantswarm-{CAPI_MC_NAME}-capa-controller`, must be created for the workload cluster's (WC) AWS account before starting the migration. [For more information please refer to this guide]({{< relref "/vintage/getting-started/cloud-provider-accounts/cluster-api/aws/" >}}).
 3. In case of using GitOps, Flux must be turned off during the migration since some of the cluster custom resources will be modified or removed by the migration scripts.
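A hedged sketch of prerequisite 2 with the AWS CLI — the trust policy must allow the management cluster's CAPA controller to assume the role, so treat this only as the shape of the call and follow the linked guide for the actual policy document:

```sh
# Create the role CAPA expects in the workload cluster's AWS account.
# trust-policy.json (placeholder) must grant sts:AssumeRole to the MC's controller.
aws iam create-role \
  --role-name "giantswarm-${CAPI_MC_NAME}-capa-controller" \
  --assume-role-policy-document file://trust-policy.json
```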
@@ -336,7 +336,7 @@ Ingress NGINX Controller allows to define the timeout that waits to close a conn

 There are many other timeouts that can be customized when configuring an Ingress. Take a look at the [official docs](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-timeouts).

-**Warning**: When running in cloud provider environments you may often rely on integrated services like AWS NLBs or Azure LBs. Those intermediate Load Balancers could have their own settings which can be in the request path conflicting with values defined in Ingress Resources. [Read how to configure Ingress NGINX Controller in cloud environments to avoid unexpected results]({{< relref "/vintage/advanced/connectivity/ingress/service-type-loadbalancer/index.md" >}}#other-aws-elb-configuration-options).
+**Warning**: When running in cloud provider environments you may often rely on integrated services like AWS NLBs or Azure LBs. Those intermediate Load Balancers could have their own settings which can be in the request path conflicting with values defined in Ingress Resources. [Read how to configure Ingress NGINX Controller in cloud environments to avoid unexpected results]({{< relref "/vintage/advanced/connectivity/ingress/service-type-loadbalancer#other-aws-elb-configuration-options" >}}).

 ### Session Affinity
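Besides the per-Ingress annotations sketched earlier, the same timeouts can be applied controller-wide through the app's user values; a hedged sketch assuming the upstream chart's `controller.config` passthrough into the NGINX ConfigMap:

```yaml
# User values for the Ingress NGINX Controller app.
# Keys follow the upstream ConfigMap reference; the nesting is an assumption.
controller:
  config:
    proxy-connect-timeout: "30"
    proxy-read-timeout: "600"
    proxy-send-timeout: "600"
```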
4 changes: 2 additions & 2 deletions src/content/vintage/advanced/gitops/apps/add_app_template.md
@@ -49,7 +49,7 @@ export APP_USER_VALUES=CONFIGMAP_OR_SECRET_PATH
 mkdir ${APP_NAME}
 ```

-2. Navigate to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [App CR](https://docs.giantswarm.io/ui-api/kubectl-gs/template-app/):
+2. Navigate to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [App CR]({{< relref "/reference/kubectl-gs/template-app/" >}}):

 ```nohighlight
 cd ${APP_NAME}/
 ...
 ```

@@ -71,7 +71,7 @@ export APP_USER_VALUES=CONFIGMAP_OR_SECRET_PATH

 __Note__: We're including `${cluster_name}` in the app name to avoid a problem when two or more clusters in the same organization want to deploy the same app with its default name.

-Reference [the App Configuration](https://docs.giantswarm.io/app-platform/app-configuration/) for more details about how to properly create the respective ConfigMaps or Secrets.
+Reference [the App Configuration]({{< relref "/tutorials/fleet-management/app-platform/app-configuration" >}}) for more details about how to properly create the respective ConfigMaps or Secrets.

 In case you used the `kubectl gs` command, you will have noticed that the output is an App Custom Resource plus the ConfigMap. If you want to manage the values in plain YAML, you can rely on the ConfigMap generator feature of [Kustomize](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#generating-resources).
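The Kustomize ConfigMap generator mentioned in the closing paragraph looks like this; a minimal sketch with placeholder names:

```yaml
# kustomization.yaml - build the user-values ConfigMap from a plain YAML file
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - appcr.yaml
configMapGenerator:
  - name: demoapp-user-values    # placeholder; must match the App CR's config reference
    files:
      - values=app-values.yaml
generatorOptions:
  disableNameSuffixHash: true    # keep a stable name the App CR can point at
```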
6 changes: 3 additions & 3 deletions src/content/vintage/advanced/gitops/apps/add_appcr.md
@@ -71,7 +71,7 @@ export APP_NAME="${WC_NAME}-APP_NAME"
 export APP_USER_VALUES=CONFIGMAP_OR_SECRET_PATH
 ```

-2. Go to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [App CR](https://docs.giantswarm.io/ui-api/kubectl-gs/template-app/):
+2. Go to the newly created directory and use [the kubectl-gs plugin](https://github.com/giantswarm/kubectl-gs) to generate the [App CR]({{< relref "/reference/kubectl-gs/template-app/" >}}):

 ```nohighlight
 cd ${APP_NAME}/
 ...
 ```

@@ -93,7 +93,7 @@ export APP_NAME="${WC_NAME}-APP_NAME"

 __Note__: We're including `${cluster_name}` in the app name to avoid a problem when two or more clusters in the same organization want to deploy the same app with its default name.

-Reference [the App Configuration](https://docs.giantswarm.io/app-platform/app-configuration/) for more details on how to properly create respective ConfigMaps or Secrets.
+Reference [the App Configuration]({{< relref "/tutorials/fleet-management/app-platform/app-configuration/" >}}) for more details on how to properly create respective ConfigMaps or Secrets.

 3. (optional - if adding configuration) Place ConfigMap and Secrets with values as the `configmap.yaml` and `secret.enc.yaml` files respectively:

@@ -166,7 +166,7 @@ export APP_NAME="${WC_NAME}-APP_NAME"

 Please note that the block marked "configuration override block" is needed only if you override the default config and/or the secret config (from the Template). In case you don't override any, skip both parts in `kustomization.yaml` and also the next three configuration points below.

-1. (optional - if you override either config or secret) Create a patch configuration file, that will enhance your App Template with a `userConfig` attribute (refer to [the App Configuration](https://docs.giantswarm.io/app-platform/app-configuration/) for more details about how `config` and `userConfig` properties of App CR are used).
+1. (optional - if you override either config or secret) Create a patch configuration file, that will enhance your App Template with a `userConfig` attribute (refer to [the App Configuration]({{< relref "/tutorials/fleet-management/app-platform/app-configuration" >}}) for more details about how `config` and `userConfig` properties of App CR are used).

 ```nohighlight
 cat <<EOF > config_patch.yaml
 ...
 ```
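The `config_patch.yaml` here-doc is truncated by the diff view; the patch typically adds a `userConfig` reference of this shape (all names are placeholders):

```yaml
apiVersion: application.giantswarm.io/v1alpha1
kind: App
metadata:
  name: ${cluster_name}-APP_NAME          # must match the App CR being patched
spec:
  userConfig:
    configMap:
      name: ${cluster_name}-APP_NAME-user-values
      namespace: org-ORG_NAME             # placeholder organization namespace
```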
4 changes: 2 additions & 2 deletions src/content/vintage/advanced/gitops/bases/index.md
@@ -16,11 +16,11 @@ owner:
 last_review_date: 2023-12-11
 ---

-Our CAPx (CAPI provider-specific clusters) are delivered by Giant Swarm as a set of two applications. The first one is an [App Custom Resource](https://docs.giantswarm.io/platform-overview/app-platform/) (CR) with a Cluster instance definition, while the second one is an App CR containing all the default applications needed for a cluster to run correctly. As such, creating a CAPx cluster means that you need to deliver two configured App CRs to the Management Cluster.
+Our CAPx (CAPI provider-specific clusters) are delivered by Giant Swarm as a set of two applications. The first one is an [App Custom Resource]({{< relref "/overview/fleet-management/app-management/" >}}) (CR) with a Cluster instance definition, while the second one is an App CR containing all the default applications needed for a cluster to run correctly. As such, creating a CAPx cluster means that you need to deliver two configured App CRs to the Management Cluster.

 Adding definitions can be done on two levels: shared cluster template and version-specific template, see [create shared template base](#create-shared-template-base) and [create versioned base](#create-versioned-base-optional).

-**IMPORTANT**, CAPx configuration utilizes the [App Platform Configuration Levels](/getting-started/app-platform/app-configuration/#levels), in the following manner:
+**IMPORTANT**, CAPx configuration utilizes the [App Platform Configuration Levels]({{< relref "/vintage/getting-started/app-platform/app-configuration#levels" >}}), in the following manner:

 - cluster templates provide default configuration via App's `config` field,
 - cluster instances provide custom configuration via App's `extraConfig` field, which is overlaid on top of `config`. The file set with higher priority will prevail in case of colliding config values.
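Taken together, the two levels land in the cluster App CR roughly like this — a hedged sketch; note that the current App CRD spells the instance-level field `extraConfigs` (a list with priorities), while the prose above calls it `extraConfig`:

```yaml
spec:
  config:                  # level 1: defaults from the cluster template base
    configMap:
      name: demo01-base-values       # placeholder
      namespace: org-example
  extraConfigs:            # level 2: per-instance overrides, overlaid on `config`
    - kind: configMap
      name: demo01-instance-values   # placeholder
      namespace: org-example
      priority: 100                  # higher priority wins on colliding keys
```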
@@ -24,9 +24,9 @@ In this article you will learn how you can disable managed logging for your clus

 The managed logging stack allows Giant Swarm to provide 24/7 support based on your workload cluster logs. Currently, this component is pre-installed on workload clusters for the following implementations:

-- {{% impl_title "vintage_aws" %}}, release [v19.2.0](https://docs.giantswarm.io/changes/workload-cluster-releases-aws/releases/aws-v19.2.0/) or newer
+- {{% impl_title "vintage_aws" %}}, release [v19.2.0](/changes/workload-cluster-releases-aws/releases/aws-v19.2.0/) or newer

-Logs of components deployed in the `kube-system` and `giantswarm` namespaces, as well as Kubernetes and node audit logs are collected by managed `promtail` pods and sent to a Loki instance running in your management cluster. You can access these logs by [accessing the managed Grafana](https://docs.giantswarm.io/getting-started/observability/visualization/access).
+Logs of components deployed in the `kube-system` and `giantswarm` namespaces, as well as Kubernetes and node audit logs are collected by managed `promtail` pods and sent to a Loki instance running in your management cluster. You can access these logs by [accessing the managed Grafana]({{< relref "/getting-started/observe-your-clusters-and-apps" >}}).

 ## Why would I like to disable logging?
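Once in the managed Grafana, the collected logs can be queried from the Loki data source with LogQL. Two small sketches — the first filters `kube-system` logs containing "error", the second selects a single container's stream; the label names are an assumption based on the default promtail label set:

```nohighlight
{namespace="kube-system"} |= "error"
{namespace="giantswarm", container="app-operator"}
```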
@@ -108,7 +108,7 @@ themselves and can find the application chart at

 For more information on configuring apps within the Giant Swarm App Platform,
 please follow the documentation at
-[https://docs.giantswarm.io/getting-started/app-platform/app-configuration/](https://docs.giantswarm.io/getting-started/app-platform/app-configuration/)
+[https://docs.giantswarm.io/getting-started/app-platform/app-configuration/]({{< relref "/overview/fleet-management/app-management" >}})

 ### Combining ESO and SOPs
@@ -72,7 +72,7 @@ You can template a cluster ([command reference]({{< relref "/reference/kubectl-g

 {{< tabs >}}
 {{< tab id="cluster-vintage-aws" for-impl="vintage_aws">}}

-[Choose a release version here](https://docs.giantswarm.io/changes/workload-cluster-releases-for-aws), or use `kubectl gs get releases`, and fill it into this example command:
+[Choose a release version here](/changes/workload-cluster-releases-for-aws), or use `kubectl gs get releases`, and fill it into this example command:

 ```sh
 kubectl gs template cluster \
 ...
 ```

@@ -88,7 +88,7 @@ For backward compatibility, vintage cluster templating does not require the `--n

 {{< /tab >}}
 {{< tab id="cluster-capa-ec2" for-impl="capa_ec2">}}

-[Choose a release version here](https://docs.giantswarm.io/changes/workload-cluster-releases-for-capa/), or use `kubectl gs get releases`, and fill it into this example command:
+[Choose a release version here](/changes/workload-cluster-releases-for-capa/), or use `kubectl gs get releases`, and fill it into this example command:

 ```sh
 kubectl gs template cluster \
 ...
 ```

@@ -144,7 +144,7 @@ If no `aws-cluster-role-identity-name` is passed, then we assume a `AWSClusterRo

 {{< /tab >}}
 {{< tab id="cluster-capz-azure-vms" for-impl="capz_vms">}}

-[Choose a release version here](https://docs.giantswarm.io/changes/workload-cluster-releases-for-azure/), or use `kubectl gs get releases`, and fill it into this example command:
+[Choose a release version here](/changes/workload-cluster-releases-for-azure/), or use `kubectl gs get releases`, and fill it into this example command:

 ```sh
 kubectl gs template cluster \
 ...
 ```
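All three `kubectl gs template cluster` blocks are truncated by the diff view; a hedged completion for the CAPA tab, with placeholder values — the flag set differs per provider and plugin version, so verify with `kubectl gs template cluster --help`:

```sh
kubectl gs template cluster \
  --provider capa \
  --name demo01 \
  --organization example \
  --description "demo workload cluster" \
  --release 29.0.0 \
  > cluster.yaml   # pick the release from `kubectl gs get releases`
```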
@@ -25,7 +25,7 @@ In this article you will learn how you can disable monitoring for your cluster.

 Each cluster created on the Giant Swarm platform benefits from our monitoring which allows us to provide you with 24/7 support to ensure best quality of service.

 Each cluster is monitored by a dedicated Prometheus instance.
-This comes by default and storage is reserved for data retention; the storage size can be adjusted via the [Prometheus Volume Sizing](https://docs.giantswarm.io/getting-started/observability/prometheus/volume-size/) feature.
+This comes by default and storage is reserved for data retention; the storage size can be adjusted via the [Prometheus Volume Sizing]({{< relref "/vintage/getting-started/observability/monitoring/prometheus/volume-size" >}}) feature.

 ## Why would I like to disable monitoring?
@@ -97,7 +97,7 @@ CNI used until AWS release 18.

 #### Cilium CNI

-CNI used since AWS release [19](https://docs.giantswarm.io/advanced/cluster-management/upgrades/aws-19-release/).
+CNI used since AWS release [19]({{< relref "/vintage/advanced/cluster-management/upgrades/aws-19-release" >}}).

 [Cilium CNI](https://docs.cilium.io/en/stable/overview/intro/) offers advanced [eBPF](https://ebpf.io/) networking without overlay.
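To inspect the eBPF datapath Cilium runs on a node, the usual check is the agent's own status command; a sketch assuming the standard DaemonSet name and namespace of a typical Cilium install:

```sh
# Ask the Cilium agent on one node for datapath and health status.
kubectl -n kube-system exec ds/cilium -- cilium status --brief
```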
