Merge branch 'main' into slack-api
TheoBrigitte authored May 6, 2024
2 parents 42189ae + b70575a commit 33b50ad
Showing 22 changed files with 1,592 additions and 1,362 deletions.
@@ -14,7 +14,7 @@ title: cluster-aws release v0.66.0
---

### Added
- - Make Cilium ENI-based IP allocation configurable with new high-level `global.connectivity.cilium.ipamMode` value
+ - Make Cilium ENI-based IP allocation configurable with new high-level `global.connectivity.cilium.ipamMode` value (prototype)
- Add automatic support for deploying to AWS China.
### Changed
- Use cleanup hook job HelmRelease from cluster chart.
@@ -0,0 +1,16 @@
---
# Generated by scripts/aggregate-changelogs. WARNING: Manual edits to this file will be overwritten.
changes_categories:
- Cluster apps for Azure
changes_entry:
repository: giantswarm/default-apps-azure
url: https://github.com/giantswarm/default-apps-azure/blob/master/CHANGELOG.md#0131---2024-04-30
version: 0.13.1
version_tag: v0.13.1
date: '2024-04-30T12:13:05'
description: Changelog entry for giantswarm/default-apps-azure version 0.13.1, published
on 30 April 2024, 12:13.
title: default-apps-azure release v0.13.1
---


17 changes: 17 additions & 0 deletions src/content/changes/managed-apps/falco-app/v0.8.1.md
@@ -0,0 +1,17 @@
---
# Generated by scripts/aggregate-changelogs. WARNING: Manual edits to this file will be overwritten.
changes_categories:
- Managed Apps
changes_entry:
repository: giantswarm/falco-app
url: https://github.com/giantswarm/falco-app/blob/master/CHANGELOG.md#081---2024-04-30
version: 0.8.1
version_tag: v0.8.1
date: '2024-04-30T16:39:14'
description: Changelog entry for giantswarm/falco-app version 0.8.1, published on
30 April 2024, 16:39.
title: falco-app release v0.8.1
---

### Changed
- Update Falco CiliumNetworkPolicy to allow communication with Falco Sidekick.
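A hedged sketch of what such an egress rule might look like — all names, labels, and the port are assumptions for illustration, not the policy actually shipped with the app:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: falco            # hypothetical name
  namespace: falco
spec:
  endpointSelector:
    matchLabels:
      app.kubernetes.io/name: falco
  egress:
    # Allow Falco to reach the Falcosidekick service.
    - toEndpoints:
        - matchLabels:
            app.kubernetes.io/name: falcosidekick
      toPorts:
        - ports:
            - port: "2801"   # Falcosidekick's default listen port
              protocol: TCP
```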
20 changes: 20 additions & 0 deletions src/content/changes/managed-apps/kyverno-app/v0.17.10.md
@@ -0,0 +1,20 @@
---
# Generated by scripts/aggregate-changelogs. WARNING: Manual edits to this file will be overwritten.
changes_categories:
- Managed Apps
changes_entry:
repository: giantswarm/kyverno-app
url: https://github.com/giantswarm/kyverno-app/blob/master/CHANGELOG.md#01710---2024-04-30
version: 0.17.10
version_tag: v0.17.10
date: '2024-04-30T17:14:36'
description: Changelog entry for giantswarm/kyverno-app version 0.17.10, published
on 30 April 2024, 17:14.
title: kyverno-app release v0.17.10
---

### Added
- Add Helm labels and annotations for easy CRD adoption in the future.
### Changed
- Adapt Kyverno Policy Reporter CiliumNetworkPolicy to allow for DNS resolution of the `kyverno-ui` service.
- Disable AdmissionReports and ClusterAdmissionReports cleanup jobs.
@@ -0,0 +1,20 @@
---
# Generated by scripts/aggregate-changelogs. WARNING: Manual edits to this file will be overwritten.
changes_categories:
- Managed Apps
changes_entry:
repository: giantswarm/prometheus-operator-app
url: https://github.com/giantswarm/prometheus-operator-app/blob/master/CHANGELOG.md#1000---2024-04-30
version: 10.0.0
version_tag: v10.0.0
date: '2024-05-01T07:00:10'
description: Changelog entry for giantswarm/prometheus-operator-app version 10.0.0,
published on 01 May 2024, 07:00.
title: prometheus-operator-app release v10.0.0
---

- Upgraded chart dependency to [kube-prometheus-stack-58.3.0](https://github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-58.3.0)
  - kube-state-metrics upgraded from 2.10.0 to [2.12.0](https://github.com/kubernetes/kube-state-metrics/releases/tag/v2.12.0)
  - prometheus upgraded from 2.50.1 to 2.51.2
  - prometheus-node-exporter upgraded from 1.7.0 to [1.8.0](https://github.com/prometheus/node_exporter/releases/tag/v1.8.0)
  - prometheus-operator upgraded from 0.71.2 to 0.73.2, also adding scrape class support
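Scrape classes, added in prometheus-operator 0.73, let a `Prometheus` resource define reusable scrape settings (such as TLS configuration) that scrape objects can reference. A minimal, hypothetical sketch — the resource and Secret names are assumptions, not values from this chart:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example                # hypothetical name
spec:
  scrapeClasses:
    - name: internal-tls
      default: true            # applied to scrape objects that don't pick a class
      tlsConfig:
        ca:
          secret:
            name: scrape-ca    # hypothetical Secret holding the CA bundle
            key: ca.crt
```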
17 changes: 17 additions & 0 deletions src/content/changes/managed-apps/starboard-exporter/v0.7.9.md
@@ -0,0 +1,17 @@
---
# Generated by scripts/aggregate-changelogs. WARNING: Manual edits to this file will be overwritten.
changes_categories:
- Managed Apps
changes_entry:
repository: giantswarm/starboard-exporter
url: https://github.com/giantswarm/starboard-exporter/blob/master/CHANGELOG.md#079---2024-05-03
version: 0.7.9
version_tag: v0.7.9
date: '2024-05-03T11:51:59'
description: Changelog entry for giantswarm/starboard-exporter version 0.7.9, published
on 03 May 2024, 11:51.
title: starboard-exporter release v0.7.9
---

### Changed
- Switched the `HorizontalPodAutoscaler` API version from `autoscaling/v2beta1` to `autoscaling/v1`.
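For reference, an `autoscaling/v1` `HorizontalPodAutoscaler` only supports a CPU utilization target. A minimal sketch — names and values are illustrative, not taken from the chart:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: starboard-exporter     # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: starboard-exporter
  minReplicas: 1
  maxReplicas: 3
  # v1 supports only this CPU metric; other metrics require autoscaling/v2.
  targetCPUUtilizationPercentage: 80
```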
@@ -34,14 +34,15 @@ This release provides security updates for container linux and a fix for IMDSv2



- ### cluster-operator [5.11.0](https://github.com/giantswarm/cluster-operator/releases/tag/v5.11.0)
+ ### cluster-operator [5.11.1](https://github.com/giantswarm/cluster-operator/releases/tag/v5.11.1)

#### Changed
- Configure `gsoci.azurecr.io` as the default container image registry.
#### Added
- Add team label in resources.
- Add `global.podSecurityStandards.enforced` value for PSS migration.

#### Fixed
- Fix release version check for PSS enforcement.


### containerlinux [3815.2.2](https://www.flatcar-linux.org/releases/#release-3815.2.2)
12 changes: 12 additions & 0 deletions src/content/overview/fleet-management/cluster-management/_index.md
@@ -0,0 +1,12 @@
---
title: Cluster management
description: Supported cloud providers and management of clusters on the Giant Swarm platform.
weight: 40
menu:
principal:
parent: overview
identifier: overview-fleet-management-clusters
last_review_date: 2024-05-02
owner:
- https://github.com/orgs/giantswarm/teams/sig-product
---
@@ -4,9 +4,9 @@ description: How the Giant Swarm platform leverages the Cluster API standard for
weight: 10
menu:
principal:
- parent: reference-fleet-management-clusters
- identifier: reference-fleet-management-introduction-to-cluster-api
- last_review_date: 2024-04-22
+ parent: overview-fleet-management-clusters
+ identifier: overview-fleet-management-introduction-to-cluster-api
+ last_review_date: 2024-05-02
owner:
- https://github.com/orgs/giantswarm/teams/sig-docs
user_questions:
2 changes: 1 addition & 1 deletion src/content/reference/_index.md
@@ -1,7 +1,7 @@
---
title: Reference
description: Technical documentation about our apps, tools and interfaces. Users can find API schema, CLIs, Chart references and more.
- last_review_date: 2024-04-16
+ last_review_date: 2024-05-02
owner:
- https://github.com/orgs/giantswarm/teams/sig-docs
---
127 changes: 127 additions & 0 deletions src/content/tutorials/aws-cilium-eni-mode.md
@@ -0,0 +1,127 @@
---
title: Cilium ENI IPAM mode for AWS
description: Allocate pod IPs directly on the AWS network using a second VPC CIDR with separate security group and subnets.
weight: 10
last_review_date: 2024-05-02
owner:
- https://github.com/orgs/giantswarm/teams/team-phoenix
user_questions:
- How to assign AWS-allocated IPs to pods?
- How do I change the pod network CIDR?
---

<!--
A workload cluster can be configured to choose pod IPs from an AWS-allocated IP range (CIDR). In this mode, the Cilium CNI [allocates ENIs (Elastic Network Interfaces) and pod IPs](https://docs.cilium.io/en/latest/network/concepts/ipam/eni/).
## Advantages
- Pods get directly assigned IPs, belong to an AWS subnet, and are placed in a separate security group. This allows handling pod traffic separately, for example for firewalling or peering.
- Pod traffic is not translated by NAT, so pod IPs are visible in a peered VPC or behind a transit gateway.
## Disadvantages
- Hard limit on the number of pods per node: each AWS EC2 instance type supports a maximum number of ENIs (Elastic Network Interfaces), and each ENI a maximum number of assignable IPs (see [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI)). For example, instance type XXX can host up to YYY ENIs and therefore ZZZ pods (TODO). This can raise costs, since fewer pods can run on each node than would otherwise fit based on available CPU and memory.
- A large CIDR is recommended for the pod network, as each pod consumes one IP. This can be a problem if your chosen CIDR must not overlap with others in your network and you don't have enough free ranges left to choose from.
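To illustrate the pod limit, a back-of-the-envelope calculation — the ENI and IP figures below are hypothetical; the real per-instance-type limits are in the AWS documentation linked above:

```python
# Hypothetical limits for illustration only; look up the real values for your
# instance type in the AWS "IP addresses per network interface" table.
max_enis = 3        # ENIs the instance type can attach
ips_per_eni = 10    # IPv4 addresses per ENI, including the primary address

# In ENI IPAM mode the primary IP of each interface is not assignable to pods.
max_pods = max_enis * (ips_per_eni - 1)
print(max_pods)  # → 27
```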
## Creating an AWS workload cluster with Cilium ENI IPAM mode
- Cilium ENI IPAM mode must be enabled when the workload cluster is created.
- When templating the cluster using the `cluster-aws` chart:
  - Set value [`global.connectivity.cilium.ipamMode=eni`](https://github.com/giantswarm/cluster-aws/blob/main/helm/cluster-aws/README.md#connectivity).
  - Set value [`global.connectivity.network.pods.cidrBlocks`](https://github.com/giantswarm/cluster-aws/blob/main/helm/cluster-aws/README.md#connectivity) to the CIDR you want for the pods. It will be associated with the VPC as a secondary CIDR. We recommend the value `10.1.0.0/16`. If you need a different CIDR, also set [`global.connectivity.eniModePodSubnets`](https://github.com/giantswarm/cluster-aws/blob/main/helm/cluster-aws/README.md#connectivity), for example by copying the documented default list of subnets and changing the CIDR split (default: `10.1.0.0/16` split into the three subnet blocks `10.1.0.0/18`, `10.1.64.0/18`, and `10.1.128.0/18`).
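The default subnet split can be sanity-checked with Python's standard `ipaddress` module:

```python
import ipaddress

pod_cidr = ipaddress.ip_network("10.1.0.0/16")

# The documented default uses the first three /18 blocks of the /16.
for subnet in list(pod_cidr.subnets(new_prefix=18))[:3]:
    print(subnet, "-", subnet.num_addresses, "addresses")
# → 10.1.0.0/18 - 16384 addresses
# → 10.1.64.0/18 - 16384 addresses
# → 10.1.128.0/18 - 16384 addresses
```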
Template a regular cluster (TODO: link to the "Creating a workload cluster" getting-started guide):
```sh
kubectl gs template cluster \
--provider capa \
--name mycluster \
--organization testing \
> cluster.yaml
```
Open the YAML file in an editor. It should look roughly like this:
```yaml
---
apiVersion: v1
data:
values: |
global:
connectivity:
availabilityZoneUsageLimit: 3
network: {}
topology: {}
controlPlane: {}
metadata:
name: mycluster
organization: testing
# [...]
kind: ConfigMap
metadata:
# [...]
name: mycluster-userconfig
namespace: org-testing
---
apiVersion: application.giantswarm.io/v1alpha1
kind: App
metadata:
# [...]
name: mycluster
namespace: org-testing
spec:
catalog: cluster
# [...]
name: cluster-aws
namespace: org-testing
userConfig:
configMap:
name: mycluster-userconfig
namespace: org-testing
version: # [...]
```
Choose a CIDR for the pod network.
Edit the values in the YAML file:
TODO highlight the changed lines (Hugo supports it: https://gohugo.io/content-management/syntax-highlighting/#highlight-shortcode)
```yaml
---
apiVersion: v1
data:
values: |
global:
connectivity:
availabilityZoneUsageLimit: 3
cilium:
ipamMode: eni
# eniModePodSubnets: <list of subnets> # see above hint - you only need to fill this if the pod CIDR isn't `10.1.0.0/16`
network:
pods:
cidrBlocks:
- 10.1.0.0/16
topology: {}
controlPlane: {}
metadata:
name: mycluster
organization: testing
# [...]
kind: ConfigMap
# [...]
```
Create the workload cluster as usual (TODO: link to the "Creating a workload cluster" getting-started guide):
```sh
kubectl apply -f cluster.yaml
```
After a few minutes, the cluster should be up. In the AWS EC2 console, you will find the secondary VPC CIDR (the pod network) and EC2 instances with secondary network interfaces that list their currently allocated pod IPs:
TODO screenshot(s)
-->
@@ -4,9 +4,9 @@ description: Management of developer environments, applications, and configurati
weight: 20
menu:
principal:
- parent: reference
- identifier: reference-fleet-management
- last_review_date: 2024-04-16
+ parent: tutorials
+ identifier: tutorials-fleet-management
+ last_review_date: 2024-05-02
owner:
- https://github.com/orgs/giantswarm/teams/sig-docs
user_questions:
@@ -1,12 +1,12 @@
---
- title: Clusters
+ title: Cluster management
description: Management of workload clusters across different regions and cloud providers.
weight: 20
menu:
principal:
- parent: reference-fleet-management
- identifier: reference-fleet-management-clusters
- last_review_date: 2024-04-16
+ parent: tutorials-fleet-management
+ identifier: tutorials-fleet-management-clusters
+ last_review_date: 2024-05-02
owner:
- https://github.com/orgs/giantswarm/teams/sig-docs
user_questions:
@@ -4,17 +4,17 @@ description: How the migration from our old AWS vintage management clusters to C
weight: 20
menu:
principal:
- parent: reference-fleet-management-clusters
- identifier: reference-fleet-management-migration-to-cluster-api
- last_review_date: 2024-04-16
+ parent: tutorials-fleet-management-clusters
+ identifier: tutorials-fleet-management-migration-to-cluster-api
+ last_review_date: 2024-05-02
owner:
- https://github.com/orgs/giantswarm/teams/sig-docs
user_questions:
- What are the requirements for migrating a cluster to Cluster API?
- What are the recommendations for a smooth migration?
---

- From the outset, Giant Swarm has utilized Kubernetes to build platforms. In the early years, everybody was still figuring out how to manage Kubernetes lifecycle across a fleet of clusters. We built our own tooling, largely based on [operators](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/), which worked well for us and our customers. As the Kubernetes project and the community around it evolved, it became clear that many companies in the ecosystem were trying to solve the same fundamental challenges regarding cluster lifecycle management. With our extensive experience, we saw an opportunity to contribute to a broader solution. We pushed for a joint effort to build a standardized method for cluster lifecycle management. [Cluster API]({{< relref "/reference/fleet-management/" >}}) is backed by the Kubernetes community and covers different providers like AWS, Azure, GCP, and others.
+ From the outset, Giant Swarm has utilized Kubernetes to build platforms. In the early years, everybody was still figuring out how to manage Kubernetes lifecycle across a fleet of clusters. We built our own tooling, largely based on [operators](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/), which worked well for us and our customers. As the Kubernetes project and the community around it evolved, it became clear that many companies in the ecosystem were trying to solve the same fundamental challenges regarding cluster lifecycle management. With our extensive experience, we saw an opportunity to contribute to a broader solution. We pushed for a joint effort to build a standardized method for cluster lifecycle management. [Cluster API]({{< relref "/overview/fleet-management/cluster-management/introduction-cluster-api" >}}) is backed by the Kubernetes community and covers different providers like AWS, Azure, GCP, and others.

This guide outlines the migration path from our AWS vintage platform to the [Cluster API](https://cluster-api.sigs.k8s.io/) (CAPI) standard, ensuring a seamless transition for customer workload clusters from the previous system to the modern CAPI framework. Within this document, you'll find an overview of the migration procedure in AWS, including its prerequisites and strategic advice, all aimed at facilitating a smooth and successful transition.
