Allow to configure cgroupsv1 per nodepool #368

Merged (1 commit) on Oct 14, 2024

Conversation

fiunchinho
Member

@fiunchinho commented on Oct 10, 2024

What does this PR do?

Backport of #348

* Allow to configure cgroupsv1 per nodepool

* Apply suggestions from code review

Co-authored-by: Marco Ebert <[email protected]>

* Regenerate docs

* Move flatcar cgroupsv1 function to worker helper

* Adjust tests to new containerd cgroups configuration

* Apply suggestions from code review

Co-authored-by: Andreas Sommer <[email protected]>

* Still support old field

---------

Co-authored-by: Marco Ebert <[email protected]>
Co-authored-by: Andreas Sommer <[email protected]>
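
For illustration only, a per-node-pool opt-in could be expressed in the cluster values roughly as sketched below. The key names (nodePools, cgroupsv1) are assumptions made for this sketch, not the chart's confirmed schema; see the regenerated docs in this PR for the real field:

    # Hypothetical values sketch; key names are illustrative, not the actual chart schema.
    global:
      nodePools:
        def00: {}             # stays on the chart default (cgroups v2)
        def01:
          cgroupsv1: true     # hypothetical per-node-pool opt-in to cgroups v1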
@fiunchinho requested a review from a team on October 10, 2024 at 15:13
@fiunchinho self-assigned this on Oct 10, 2024
@fiunchinho requested a review from a team as a code owner on October 10, 2024 at 15:13
@fiunchinho changed the title from "Allow to configure cgroupsv1 per nodepool (#348)" to "Backport: allow to configure cgroupsv1 per nodepool (#348)" on Oct 10, 2024

There were differences in the rendered Helm template, please check! ⚠️
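
For reviewers, the cgroups-related change in the rendered output below is easy to miss: the worker KubeadmConfig of the node pool that opts in (awesome-def01 in this rendering) gains a flag file among its node files, which Flatcar uses to keep the node on the legacy cgroups v1 hierarchy. Copied verbatim from the diff:

    - path: /etc/flatcar-cgroupv1
      filesystem: root
      permissions: 0444

Pools that do not opt in are rendered without this file.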

Output
=== Differences when rendered with values file helm/cluster/ci/test-cgroupsv1-values.yaml ===

(file level)
  - six documents removed:
    ---
    # Source: cluster/templates/apps/apps.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: awesome-observability-policies-user-values
      namespace: org-giantswarm
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
    # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
    # need this label on the Cluster resource.
    app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 1.5.2
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-1.5.2
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: awesome
        giantswarm.io/organization: giantswarm
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: awesome
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
        another-cluster-label: label-2
        some-cluster-label: label-1
    data:
      values: |
        foo: bar
        
    # Source: cluster/templates/apps/apps.yaml
    apiVersion: application.giantswarm.io/v1alpha1
    kind: App
    metadata:
      annotations:
        cluster.giantswarm.io/description: "Awesome Giant Swarm cluster"
        important-cluster-value: 1000
        robots-need-this-in-the-cluster: eW91IGNhbm5vdCByZWFkIHRoaXMsIGJ1dCByb2JvdHMgY2FuCg==
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
    # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
    # need this label on the Cluster resource.
    app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 1.5.2
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-1.5.2
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: awesome
        giantswarm.io/organization: giantswarm
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: awesome
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
        another-cluster-label: label-2
        some-cluster-label: label-1
        giantswarm.io/managed-by: cluster
      name: awesome-observability-policies
      namespace: org-giantswarm
    spec:
      catalog: default
      name: observability-policies
      namespace: kube-system
      kubeConfig:
        context:
          name: awesome-admin@awesome
        inCluster: false
        secret:
          name: awesome-kubeconfig
          namespace: org-giantswarm
      version: N/A
      extraConfigs:
      - kind: configMap
        name: my-observability-policies-extra-config
        namespace: org-giantswarm
      userConfig:
        configMap:
          name: awesome-observability-policies-user-values
          namespace: org-giantswarm
    # Source: cluster/templates/clusterapi/workers/kubeadmconfig.yaml
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfig
    metadata:
      annotations:
        machine-pool.giantswarm.io/name: awesome-def00
        important-cluster-value: 1000
        robots-need-this-in-the-cluster: eW91IGNhbm5vdCByZWFkIHRoaXMsIGJ1dCByb2JvdHMgY2FuCg==
        for-robots-in-nodepool: cm9ib3RzIGFyZSBvcGVyYXRpbmcgb24gdGhpcyBub2RlIHBvb2wK
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
    # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
    # need this label on the Cluster resource.
    app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 1.5.2
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-1.5.2
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: awesome
        giantswarm.io/organization: giantswarm
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: awesome
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
        another-cluster-label: label-2
        some-cluster-label: label-1
        giantswarm.io/machine-pool: awesome-def00
        nodepool-workload-type: ai
      name: awesome-def00-631ad
      namespace: org-giantswarm
    spec:
      format: ignition
      ignition:
        containerLinuxConfig:
          additionalConfig: |
            systemd:
              units:      
              - name: os-hardening.service
                enabled: true
                contents: |
                  [Unit]
                  Description=Apply os hardening
                  [Service]
                  Type=oneshot
                  ExecStartPre=-/bin/bash -c "gpasswd -d core rkt; gpasswd -d core docker; gpasswd -d core wheel"
                  ExecStartPre=/bin/bash -c "until [ -f '/etc/sysctl.d/hardening.conf' ]; do echo Waiting for sysctl file; sleep 1s;done;"
                  ExecStart=/usr/sbin/sysctl -p /etc/sysctl.d/hardening.conf
                  [Install]
                  WantedBy=multi-user.target
              - name: update-engine.service
                enabled: false
                mask: true
              - name: locksmithd.service
                enabled: false
                mask: true
              - name: sshkeys.service
                enabled: false
                mask: true
              - name: teleport.service
                enabled: true
                contents: |
                  [Unit]
                  Description=Teleport Service
                  After=network.target
                  [Service]
                  Type=simple
                  Restart=on-failure
                  ExecStart=/opt/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
                  ExecReload=/bin/kill -HUP $MAINPID
                  PIDFile=/run/teleport.pid
                  LimitNOFILE=524288
                  [Install]
                  WantedBy=multi-user.target
              - name: kubeadm.service
                dropins:
                - name: 10-flatcar.conf
                  contents: |
                    [Unit]
                    # kubeadm must run after coreos-metadata populated /run/metadata directory.
                    Requires=coreos-metadata.service
                    After=coreos-metadata.service
                    # kubeadm must run after containerd - see https://github.com/kubernetes-sigs/image-builder/issues/939.
                    After=containerd.service
                    # kubeadm requires having an IP
                    After=network-online.target
                    Wants=network-online.target
                    [Service]
                    # Ensure kubeadm service has access to kubeadm binary in /opt/bin on Flatcar.
                    Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin
                    # To make metadata environment variables available for pre-kubeadm commands.
                    EnvironmentFile=/run/metadata/*
              - name: containerd.service
                enabled: true
                contents: |
                dropins:
                - name: 10-change-cgroup.conf
                  contents: |
                    [Service]
                    CPUAccounting=true
                    MemoryAccounting=true
                    Slice=kubereserved.slice
              - name: auditd.service
                enabled: false      
              - name: var-lib-kubelet.mount
                enabled: true
                mask: false
                contents: |
                  [Unit]
                  Description=kubelet volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/kubelet
                  Where=/var/lib/kubelet
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
              - name: var-lib-containerd.mount
                enabled: true
                mask: false
                contents: |
                  [Unit]
                  Description=containerd volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/containerd
                  Where=/var/lib/containerd
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
              - name: example2.service
                enabled: false
                mask: false
                dropins:
                - name: hello1.conf
                  contents: |
                    # Multi-line
                    # contents goes here
                - name: hello2.conf
                  contents: |
                    # Multi-line
                    # contents goes here
              - name: var-lib-workload.mount
                enabled: true
                mask: false
                contents: |
                  [Unit]
                  Description=workload volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/workload
                  Where=/var/lib/workload
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
              - name: example2-workers.service
                enabled: false
                mask: false
                dropins:
                - name: hello1-workers.conf
                  contents: |
                    # Multi-line
                    # contents goes here
                - name: hello2-workers.conf
                  contents: |
                    # Multi-line
                    # contents goes here
            storage:
              filesystems:      
              directories:      
              - path: /var/lib/kubelet
                mode: 0750      
              - path: /var/lib/kubelet/temporary/stuff
                overwrite: true
                filesystem: kubelet
                mode: 750
                user:
                  id: 12345
                  name: giantswarm
                group:
                  id: 23456
                  name: giantswarm
              - path: /var/lib/kubelet/temporary/stuff/workers
                overwrite: true
                filesystem: kubelet
                mode: 750
                user:
                  id: 12345
                  name: giantswarm
                group:
                  id: 23456
                  name: giantswarm
      joinConfiguration:
        nodeRegistration:
          name: ${COREOS_EC2_HOSTNAME}
          kubeletExtraArgs:
            cgroup-driver: cgroupfs
            cloud-provider: external
            healthz-bind-address: 0.0.0.0
            node-ip: ${COREOS_EC2_IPV4_LOCAL}
            node-labels: "ip=${COREOS_EC2_IPV4_LOCAL},role=worker,giantswarm.io/machine-pool=awesome-def00,workload-type=ai"
            v: 2
          taints:
          - key: supernodepool
            value: hello
            effect: NoSchedule
        patches:
          directory: /etc/kubernetes/patches
      preKubeadmCommands:
      - "envsubst < /etc/kubeadm.yml > /etc/kubeadm.yml.tmp"
      - "mv /etc/kubeadm.yml.tmp /etc/kubeadm.yml"
      - "systemctl restart containerd"
      - "systemctl restart sshd"
      - "export HTTP_PROXY=http://proxy.giantswarm.io"
      - "export HTTPS_PROXY=https://proxy.giantswarm.io"
      - "export NO_PROXY="127.0.0.1,localhost,svc,local,awesome.example.gigantic.io,172.31.0.0/16,100.64.0.0/12,elb.amazonaws.com,169.254.169.254,some.noproxy.awesome.example.gigantic.io,another.noproxy.address.giantswarm.io,proxy1.example.com,proxy2.example.com""
      - "export http_proxy=http://proxy.giantswarm.io"
      - "export https_proxy=https://proxy.giantswarm.io"
      - "export no_proxy="127.0.0.1,localhost,svc,local,awesome.example.gigantic.io,172.31.0.0/16,100.64.0.0/12,elb.amazonaws.com,169.254.169.254,some.noproxy.awesome.example.gigantic.io,another.noproxy.address.giantswarm.io,proxy1.example.com,proxy2.example.com""
      - "echo "aws nodes command before kubeadm""
      - "echo "custom nodes command before kubeadm""
      - "echo "aws workers command before kubeadm""
      - "echo "custom workers command before kubeadm""
      postKubeadmCommands:
      - "echo "aws nodes command after kubeadm""
      - "echo "custom nodes command after kubeadm""
      - "echo "aws workers command after kubeadm""
      - "echo "custom workers command after kubeadm""
      users:
      - name: giantswarm
        groups: sudo
        sudo: "ALL=(ALL) NOPASSWD:ALL"
      files:
      - path: /etc/sysctl.d/hardening.conf
        permissions: 0644
        encoding: base64
        content: ZnMuaW5vdGlmeS5tYXhfdXNlcl93YXRjaGVzID0gMTYzODQKZnMuaW5vdGlmeS5tYXhfdXNlcl9pbnN0YW5jZXMgPSA4MTkyCmtlcm5lbC5rcHRyX3Jlc3RyaWN0ID0gMgprZXJuZWwuc3lzcnEgPSAwCm5ldC5pcHY0LmNvbmYuYWxsLmxvZ19tYXJ0aWFucyA9IDEKbmV0LmlwdjQuY29uZi5hbGwuc2VuZF9yZWRpcmVjdHMgPSAwCm5ldC5pcHY0LmNvbmYuZGVmYXVsdC5hY2NlcHRfcmVkaXJlY3RzID0gMApuZXQuaXB2NC5jb25mLmRlZmF1bHQubG9nX21hcnRpYW5zID0gMQpuZXQuaXB2NC50Y3BfdGltZXN0YW1wcyA9IDAKbmV0LmlwdjYuY29uZi5hbGwuYWNjZXB0X3JlZGlyZWN0cyA9IDAKbmV0LmlwdjYuY29uZi5kZWZhdWx0LmFjY2VwdF9yZWRpcmVjdHMgPSAwCiMgSW5jcmVhc2VkIG1tYXBmcyBiZWNhdXNlIHNvbWUgYXBwbGljYXRpb25zLCBsaWtlIEVTLCBuZWVkIGhpZ2hlciBsaW1pdCB0byBzdG9yZSBkYXRhIHByb3Blcmx5CnZtLm1heF9tYXBfY291bnQgPSAyNjIxNDQKIyBSZXNlcnZlZCB0byBhdm9pZCBjb25mbGljdHMgd2l0aCBrdWJlLWFwaXNlcnZlciwgd2hpY2ggYWxsb2NhdGVzIHdpdGhpbiB0aGlzIHJhbmdlCm5ldC5pcHY0LmlwX2xvY2FsX3Jlc2VydmVkX3BvcnRzPTMwMDAwLTMyNzY3Cm5ldC5pcHY0LmNvbmYuYWxsLnJwX2ZpbHRlciA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2lnbm9yZSA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2Fubm91bmNlID0gMgoKIyBUaGVzZSBhcmUgcmVxdWlyZWQgZm9yIHRoZSBrdWJlbGV0ICctLXByb3RlY3Qta2VybmVsLWRlZmF1bHRzJyBmbGFnCiMgU2VlIGh0dHBzOi8vZ2l0aHViLmNvbS9naWFudHN3YXJtL2dpYW50c3dhcm0vaXNzdWVzLzEzNTg3CnZtLm92ZXJjb21taXRfbWVtb3J5PTEKa2VybmVsLnBhbmljPTEwCmtlcm5lbC5wYW5pY19vbl9vb3BzPTEK
      - path: /etc/selinux/config
        permissions: 0644
        encoding: base64
        content: IyBUaGlzIGZpbGUgY29udHJvbHMgdGhlIHN0YXRlIG9mIFNFTGludXggb24gdGhlIHN5c3RlbSBvbiBib290LgoKIyBTRUxJTlVYIGNhbiB0YWtlIG9uZSBvZiB0aGVzZSB0aHJlZSB2YWx1ZXM6CiMgICAgICAgZW5mb3JjaW5nIC0gU0VMaW51eCBzZWN1cml0eSBwb2xpY3kgaXMgZW5mb3JjZWQuCiMgICAgICAgcGVybWlzc2l2ZSAtIFNFTGludXggcHJpbnRzIHdhcm5pbmdzIGluc3RlYWQgb2YgZW5mb3JjaW5nLgojICAgICAgIGRpc2FibGVkIC0gTm8gU0VMaW51eCBwb2xpY3kgaXMgbG9hZGVkLgpTRUxJTlVYPXBlcm1pc3NpdmUKCiMgU0VMSU5VWFRZUEUgY2FuIHRha2Ugb25lIG9mIHRoZXNlIGZvdXIgdmFsdWVzOgojICAgICAgIHRhcmdldGVkIC0gT25seSB0YXJnZXRlZCBuZXR3b3JrIGRhZW1vbnMgYXJlIHByb3RlY3RlZC4KIyAgICAgICBzdHJpY3QgICAtIEZ1bGwgU0VMaW51eCBwcm90ZWN0aW9uLgojICAgICAgIG1scyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1MZXZlbCBTZWN1cml0eQojICAgICAgIG1jcyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1DYXRlZ29yeSBTZWN1cml0eQojICAgICAgICAgICAgICAgICAgKG1scywgYnV0IG9ubHkgb25lIHNlbnNpdGl2aXR5IGxldmVsKQpTRUxJTlVYVFlQRT1tY3MK
      - path: /etc/systemd/timesyncd.conf
        permissions: 0644
        encoding: base64
        content: W1RpbWVdCk5UUD0xNjkuMjU0LjE2OS4xMjMK
      - path: /etc/ssh/trusted-user-ca-keys.pem
        permissions: 0600
        encoding: base64
        content: c3NoLWVkMjU1MTkgQUFBQUMzTnphQzFsWkRJMU5URTVBQUFBSU00Y3ZaMDFmTG1POWNKYldVajdzZkYrTmhFQ2d5K0NsMGJhelNyWlg3c1UgdmF1bHQtY2FAdmF1bHQub3BlcmF0aW9ucy5naWFudHN3YXJtLmlvCg==
      - path: /etc/ssh/sshd_config
        permissions: 0600
        encoding: base64
        content: IyBVc2UgbW9zdCBkZWZhdWx0cyBmb3Igc3NoZCBjb25maWd1cmF0aW9uLgpTdWJzeXN0ZW0gc2Z0cCBpbnRlcm5hbC1zZnRwCkNsaWVudEFsaXZlSW50ZXJ2YWwgMTgwClVzZUROUyBubwpVc2VQQU0geWVzClByaW50TGFzdExvZyBubyAjIGhhbmRsZWQgYnkgUEFNClByaW50TW90ZCBubyAjIGhhbmRsZWQgYnkgUEFNCiMgTm9uIGRlZmF1bHRzICgjMTAwKQpDbGllbnRBbGl2ZUNvdW50TWF4IDIKUGFzc3dvcmRBdXRoZW50aWNhdGlvbiBubwpUcnVzdGVkVXNlckNBS2V5cyAvZXRjL3NzaC90cnVzdGVkLXVzZXItY2Eta2V5cy5wZW0KTWF4QXV0aFRyaWVzIDUKTG9naW5HcmFjZVRpbWUgNjAKQWxsb3dUY3BGb3J3YXJkaW5nIG5vCkFsbG93QWdlbnRGb3J3YXJkaW5nIG5vCkNBU2lnbmF0dXJlQWxnb3JpdGhtcyBlY2RzYS1zaGEyLW5pc3RwMjU2LGVjZHNhLXNoYTItbmlzdHAzODQsZWNkc2Etc2hhMi1uaXN0cDUyMSxzc2gtZWQyNTUxOSxyc2Etc2hhMi01MTIscnNhLXNoYTItMjU2LHNzaC1yc2EK
      - path: /etc/kubernetes/patches/kubeletconfiguration.yaml
        permissions: 0644
        encoding: base64
        content: YXBpVmVyc2lvbjoga3ViZWxldC5jb25maWcuazhzLmlvL3YxYmV0YTEKa2luZDogS3ViZWxldENvbmZpZ3VyYXRpb24Kc2h1dGRvd25HcmFjZVBlcmlvZDogMzAwcwpzaHV0ZG93bkdyYWNlUGVyaW9kQ3JpdGljYWxQb2RzOiA2MHMKa2VybmVsTWVtY2dOb3RpZmljYXRpb246IHRydWUKZXZpY3Rpb25Tb2Z0OgogIG1lbW9yeS5hdmFpbGFibGU6ICI1MDBNaSIKZXZpY3Rpb25IYXJkOgogIG1lbW9yeS5hdmFpbGFibGU6ICIyMDBNaSIKICBpbWFnZWZzLmF2YWlsYWJsZTogIjE1JSIKZXZpY3Rpb25Tb2Z0R3JhY2VQZXJpb2Q6CiAgbWVtb3J5LmF2YWlsYWJsZTogIjVzIgpldmljdGlvbk1heFBvZEdyYWNlUGVyaW9kOiA2MAprdWJlUmVzZXJ2ZWQ6CiAgY3B1OiAzNTBtCiAgbWVtb3J5OiAzODRNaQogIGVwaGVtZXJhbC1zdG9yYWdlOiAxMDI0TWkKa3ViZVJlc2VydmVkQ2dyb3VwOiAva3ViZXJlc2VydmVkLnNsaWNlCnByb3RlY3RLZXJuZWxEZWZhdWx0czogdHJ1ZQpzeXN0ZW1SZXNlcnZlZDoKICBjcHU6IDI1MG0KICBtZW1vcnk6IDEyODBNaQpzeXN0ZW1SZXNlcnZlZENncm91cDogL3N5c3RlbS5zbGljZQp0bHNDaXBoZXJTdWl0ZXM6Ci0gVExTX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19BRVNfMjU2X0dDTV9TSEEzODQKLSBUTFNfQ0hBQ0hBMjBfUE9MWTEzMDVfU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQ0hBQ0hBMjBfUE9MWTEzMDUKLSBUTFNfRUNESEVfRUNEU0FfV0lUSF9DSEFDSEEyMF9QT0xZMTMwNV9TSEEyNTYKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX1JTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19FQ0RIRV9SU0FfV0lUSF9BRVNfMjU2X0NCQ19TSEEKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1X1NIQTI1NgotIFRMU19SU0FfV0lUSF9BRVNfMTI4X0NCQ19TSEEKLSBUTFNfUlNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX1JTQV9XSVRIX0FFU18yNTZfQ0JDX1NIQQotIFRMU19SU0FfV0lUSF9BRVNfMjU2X0dDTV9TSEEzODQKc2VyaWFsaXplSW1hZ2VQdWxsczogZmFsc2UKc3RyZWFtaW5nQ29ubmVjdGlvbklkbGVUaW1lb3V0OiAxaAphbGxvd2VkVW5zYWZlU3lzY3RsczoKLSAibmV0LioiCmNvbnRhaW5lckxvZ01heFNpemU6IDMwTWkKY29udGFpbmVyTG9nTWF4RmlsZXM6IDIK
      - path: /etc/systemd/logind.conf.d/zzz-kubelet-graceful-shutdown.conf
        permissions: 0700
        encoding: base64
        content: W0xvZ2luXQojIGRlbGF5CkluaGliaXREZWxheU1heFNlYz0zMDAK
      - path: /etc/systemd/system/containerd.service.d/http-proxy.conf
        permissions: 0644
        encoding: base64
        content: W1NlcnZpY2VdCkVudmlyb25tZW50PSJIVFRQX1BST1hZPWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iSFRUUFNfUFJPWFk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iTk9fUFJPWFk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCkVudmlyb25tZW50PSJodHRwX3Byb3h5PWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iaHR0cHNfcHJveHk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0ibm9fcHJveHk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCg==
      - path: /etc/systemd/system/kubelet.service.d/http-proxy.conf
        permissions: 0644
        encoding: base64
        content: W1NlcnZpY2VdCkVudmlyb25tZW50PSJIVFRQX1BST1hZPWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iSFRUUFNfUFJPWFk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iTk9fUFJPWFk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCkVudmlyb25tZW50PSJodHRwX3Byb3h5PWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iaHR0cHNfcHJveHk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0ibm9fcHJveHk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCg==
      - path: /etc/systemd/system/teleport.service.d/http-proxy.conf
        permissions: 0644
        encoding: base64
        content: W1NlcnZpY2VdCkVudmlyb25tZW50PSJIVFRQX1BST1hZPWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iSFRUUFNfUFJPWFk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iTk9fUFJPWFk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCkVudmlyb25tZW50PSJodHRwX3Byb3h5PWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iaHR0cHNfcHJveHk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0ibm9fcHJveHk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCg==
      - path: /etc/teleport-join-token
        permissions: 0644
        contentFrom:
          secret:
            name: awesome-teleport-join-token
            key: joinToken
      - path: /opt/teleport-node-role.sh
        permissions: 0755
        encoding: base64
        content: IyEvYmluL2Jhc2gKCmlmIHN5c3RlbWN0bCBpcy1hY3RpdmUgLXEga3ViZWxldC5zZXJ2aWNlOyB0aGVuCiAgICBpZiBbIC1lICIvZXRjL2t1YmVybmV0ZXMvbWFuaWZlc3RzL2t1YmUtYXBpc2VydmVyLnlhbWwiIF07IHRoZW4KICAgICAgICBlY2hvICJjb250cm9sLXBsYW5lIgogICAgZWxzZQogICAgICAgIGVjaG8gIndvcmtlciIKICAgIGZpCmVsc2UKICAgIGVjaG8gIiIKZmkK
      - path: /etc/teleport.yaml
        permissions: 0644
        encoding: base64
        content: dmVyc2lvbjogdjMKdGVsZXBvcnQ6CiAgZGF0YV9kaXI6IC92YXIvbGliL3RlbGVwb3J0CiAgam9pbl9wYXJhbXM6CiAgICB0b2tlbl9uYW1lOiAvZXRjL3RlbGVwb3J0LWpvaW4tdG9rZW4KICAgIG1ldGhvZDogdG9rZW4KICBwcm94eV9zZXJ2ZXI6IHRlbGVwb3J0LmdpYW50c3dhcm0uaW86NDQzCiAgbG9nOgogICAgb3V0cHV0OiBzdGRlcnIKYXV0aF9zZXJ2aWNlOgogIGVuYWJsZWQ6ICJubyIKc3NoX3NlcnZpY2U6CiAgZW5hYmxlZDogInllcyIKICBjb21tYW5kczoKICAtIG5hbWU6IG5vZGUKICAgIGNvbW1hbmQ6IFtob3N0bmFtZV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogYXJjaAogICAgY29tbWFuZDogW3VuYW1lLCAtbV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogcm9sZQogICAgY29tbWFuZDogWy9vcHQvdGVsZXBvcnQtbm9kZS1yb2xlLnNoXQogICAgcGVyaW9kOiAxbTBzCiAgbGFiZWxzOgogICAgaW5zOiBnaWFudG1jCiAgICBtYzogZ2lhbnRtYwogICAgY2x1c3RlcjogYXdlc29tZQogICAgYmFzZURvbWFpbjogZXhhbXBsZS5naWdhbnRpYy5pbwpwcm94eV9zZXJ2aWNlOgogIGVuYWJsZWQ6ICJubyIK
      - contentFrom:
          secret:
            key: node-stuff
            name: cluster-super-secret
        path: /etc/aws/node/file.yaml
        permissions: 0644
      - contentFrom:
          secret:
            key: node-stuff
            name: cluster-super-secret
        path: /etc/custom/node/file.yaml
        permissions: 0644
      - path: /etc/containerd/config.toml
        permissions: 0644
        contentFrom:
          secret:
            name: awesome-def00-containerd-dbb9d89e
            key: config.toml
      - contentFrom:
          secret:
            key: node-stuff
            name: cluster-super-secret-worker
        path: /etc/aws/worker/node/file.yaml
        permissions: 0644
      - contentFrom:
          secret:
            key: node-stuff
            name: cluster-super-secret-worker
        path: /etc/custom/worker/node/file.yaml
        permissions: 0644
    # Source: cluster/templates/clusterapi/workers/kubeadmconfig.yaml
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfig
    metadata:
      annotations:
        machine-pool.giantswarm.io/name: awesome-def01
        important-cluster-value: 1000
        robots-need-this-in-the-cluster: eW91IGNhbm5vdCByZWFkIHRoaXMsIGJ1dCByb2JvdHMgY2FuCg==
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
    # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
    # need this label on the Cluster resource.
    app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 1.5.2
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-1.5.2
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: awesome
        giantswarm.io/organization: giantswarm
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: awesome
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
        another-cluster-label: label-2
        some-cluster-label: label-1
        giantswarm.io/machine-pool: awesome-def01
      name: awesome-def01-00f61
      namespace: org-giantswarm
    spec:
      format: ignition
      ignition:
        containerLinuxConfig:
          additionalConfig: |
            systemd:
              units:      
              - name: os-hardening.service
                enabled: true
                contents: |
                  [Unit]
                  Description=Apply os hardening
                  [Service]
                  Type=oneshot
                  ExecStartPre=-/bin/bash -c "gpasswd -d core rkt; gpasswd -d core docker; gpasswd -d core wheel"
                  ExecStartPre=/bin/bash -c "until [ -f '/etc/sysctl.d/hardening.conf' ]; do echo Waiting for sysctl file; sleep 1s;done;"
                  ExecStart=/usr/sbin/sysctl -p /etc/sysctl.d/hardening.conf
                  [Install]
                  WantedBy=multi-user.target
              - name: update-engine.service
                enabled: false
                mask: true
              - name: locksmithd.service
                enabled: false
                mask: true
              - name: sshkeys.service
                enabled: false
                mask: true
              - name: teleport.service
                enabled: true
                contents: |
                  [Unit]
                  Description=Teleport Service
                  After=network.target
                  [Service]
                  Type=simple
                  Restart=on-failure
                  ExecStart=/opt/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
                  ExecReload=/bin/kill -HUP $MAINPID
                  PIDFile=/run/teleport.pid
                  LimitNOFILE=524288
                  [Install]
                  WantedBy=multi-user.target
              - name: kubeadm.service
                dropins:
                - name: 10-flatcar.conf
                  contents: |
                    [Unit]
                    # kubeadm must run after coreos-metadata populated /run/metadata directory.
                    Requires=coreos-metadata.service
                    After=coreos-metadata.service
                    # kubeadm must run after containerd - see https://github.com/kubernetes-sigs/image-builder/issues/939.
                    After=containerd.service
                    # kubeadm requires having an IP
                    After=network-online.target
                    Wants=network-online.target
                    [Service]
                    # Ensure kubeadm service has access to kubeadm binary in /opt/bin on Flatcar.
                    Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin
                    # To make metadata environment variables available for pre-kubeadm commands.
                    EnvironmentFile=/run/metadata/*
              - name: containerd.service
                enabled: true
                contents: |
                dropins:
                - name: 10-change-cgroup.conf
                  contents: |
                    [Service]
                    CPUAccounting=true
                    MemoryAccounting=true
                    Slice=kubereserved.slice
              - name: auditd.service
                enabled: false      
              - name: var-lib-kubelet.mount
                enabled: true
                mask: false
                contents: |
                  [Unit]
                  Description=kubelet volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/kubelet
                  Where=/var/lib/kubelet
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
              - name: var-lib-containerd.mount
                enabled: true
                mask: false
                contents: |
                  [Unit]
                  Description=containerd volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/containerd
                  Where=/var/lib/containerd
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
              - name: example2.service
                enabled: false
                mask: false
                dropins:
                - name: hello1.conf
                  contents: |
                    # Multi-line
                    # contents goes here
                - name: hello2.conf
                  contents: |
                    # Multi-line
                    # contents goes here
              - name: var-lib-workload.mount
                enabled: true
                mask: false
                contents: |
                  [Unit]
                  Description=workload volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/workload
                  Where=/var/lib/workload
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
              - name: example2-workers.service
                enabled: false
                mask: false
                dropins:
                - name: hello1-workers.conf
                  contents: |
                    # Multi-line
                    # contents goes here
                - name: hello2-workers.conf
                  contents: |
                    # Multi-line
                    # contents goes here
            storage:
              filesystems:      
              directories:      
              - path: /var/lib/kubelet
                mode: 0750      
              - path: /var/lib/kubelet/temporary/stuff
                overwrite: true
                filesystem: kubelet
                mode: 750
                user:
                  id: 12345
                  name: giantswarm
                group:
                  id: 23456
                  name: giantswarm
              - path: /var/lib/kubelet/temporary/stuff/workers
                overwrite: true
                filesystem: kubelet
                mode: 750
                user:
                  id: 12345
                  name: giantswarm
                group:
                  id: 23456
                  name: giantswarm
      joinConfiguration:
        nodeRegistration:
          name: ${COREOS_EC2_HOSTNAME}
          kubeletExtraArgs:
            cgroup-driver: cgroupfs
            cloud-provider: external
            healthz-bind-address: 0.0.0.0
            node-ip: ${COREOS_EC2_IPV4_LOCAL}
            node-labels: "ip=${COREOS_EC2_IPV4_LOCAL},role=worker,giantswarm.io/machine-pool=awesome-def01"
            v: 2
        patches:
          directory: /etc/kubernetes/patches
      preKubeadmCommands:
      - "envsubst < /etc/kubeadm.yml > /etc/kubeadm.yml.tmp"
      - "mv /etc/kubeadm.yml.tmp /etc/kubeadm.yml"
      - "systemctl restart containerd"
      - "systemctl restart sshd"
      - "export HTTP_PROXY=http://proxy.giantswarm.io"
      - "export HTTPS_PROXY=https://proxy.giantswarm.io"
      - "export NO_PROXY="127.0.0.1,localhost,svc,local,awesome.example.gigantic.io,172.31.0.0/16,100.64.0.0/12,elb.amazonaws.com,169.254.169.254,some.noproxy.awesome.example.gigantic.io,another.noproxy.address.giantswarm.io,proxy1.example.com,proxy2.example.com""
      - "export http_proxy=http://proxy.giantswarm.io"
      - "export https_proxy=https://proxy.giantswarm.io"
      - "export no_proxy="127.0.0.1,localhost,svc,local,awesome.example.gigantic.io,172.31.0.0/16,100.64.0.0/12,elb.amazonaws.com,169.254.169.254,some.noproxy.awesome.example.gigantic.io,another.noproxy.address.giantswarm.io,proxy1.example.com,proxy2.example.com""
      - "echo "aws nodes command before kubeadm""
      - "echo "custom nodes command before kubeadm""
      - "echo "aws workers command before kubeadm""
      - "echo "custom workers command before kubeadm""
      postKubeadmCommands:
      - "echo "aws nodes command after kubeadm""
      - "echo "custom nodes command after kubeadm""
      - "echo "aws workers command after kubeadm""
      - "echo "custom workers command after kubeadm""
      users:
      - name: giantswarm
        groups: sudo
        sudo: "ALL=(ALL) NOPASSWD:ALL"
      files:
      - path: /etc/sysctl.d/hardening.conf
        permissions: 0644
        encoding: base64
        content: ZnMuaW5vdGlmeS5tYXhfdXNlcl93YXRjaGVzID0gMTYzODQKZnMuaW5vdGlmeS5tYXhfdXNlcl9pbnN0YW5jZXMgPSA4MTkyCmtlcm5lbC5rcHRyX3Jlc3RyaWN0ID0gMgprZXJuZWwuc3lzcnEgPSAwCm5ldC5pcHY0LmNvbmYuYWxsLmxvZ19tYXJ0aWFucyA9IDEKbmV0LmlwdjQuY29uZi5hbGwuc2VuZF9yZWRpcmVjdHMgPSAwCm5ldC5pcHY0LmNvbmYuZGVmYXVsdC5hY2NlcHRfcmVkaXJlY3RzID0gMApuZXQuaXB2NC5jb25mLmRlZmF1bHQubG9nX21hcnRpYW5zID0gMQpuZXQuaXB2NC50Y3BfdGltZXN0YW1wcyA9IDAKbmV0LmlwdjYuY29uZi5hbGwuYWNjZXB0X3JlZGlyZWN0cyA9IDAKbmV0LmlwdjYuY29uZi5kZWZhdWx0LmFjY2VwdF9yZWRpcmVjdHMgPSAwCiMgSW5jcmVhc2VkIG1tYXBmcyBiZWNhdXNlIHNvbWUgYXBwbGljYXRpb25zLCBsaWtlIEVTLCBuZWVkIGhpZ2hlciBsaW1pdCB0byBzdG9yZSBkYXRhIHByb3Blcmx5CnZtLm1heF9tYXBfY291bnQgPSAyNjIxNDQKIyBSZXNlcnZlZCB0byBhdm9pZCBjb25mbGljdHMgd2l0aCBrdWJlLWFwaXNlcnZlciwgd2hpY2ggYWxsb2NhdGVzIHdpdGhpbiB0aGlzIHJhbmdlCm5ldC5pcHY0LmlwX2xvY2FsX3Jlc2VydmVkX3BvcnRzPTMwMDAwLTMyNzY3Cm5ldC5pcHY0LmNvbmYuYWxsLnJwX2ZpbHRlciA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2lnbm9yZSA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2Fubm91bmNlID0gMgoKIyBUaGVzZSBhcmUgcmVxdWlyZWQgZm9yIHRoZSBrdWJlbGV0ICctLXByb3RlY3Qta2VybmVsLWRlZmF1bHRzJyBmbGFnCiMgU2VlIGh0dHBzOi8vZ2l0aHViLmNvbS9naWFudHN3YXJtL2dpYW50c3dhcm0vaXNzdWVzLzEzNTg3CnZtLm92ZXJjb21taXRfbWVtb3J5PTEKa2VybmVsLnBhbmljPTEwCmtlcm5lbC5wYW5pY19vbl9vb3BzPTEK
      - path: /etc/selinux/config
        permissions: 0644
        encoding: base64
        content: IyBUaGlzIGZpbGUgY29udHJvbHMgdGhlIHN0YXRlIG9mIFNFTGludXggb24gdGhlIHN5c3RlbSBvbiBib290LgoKIyBTRUxJTlVYIGNhbiB0YWtlIG9uZSBvZiB0aGVzZSB0aHJlZSB2YWx1ZXM6CiMgICAgICAgZW5mb3JjaW5nIC0gU0VMaW51eCBzZWN1cml0eSBwb2xpY3kgaXMgZW5mb3JjZWQuCiMgICAgICAgcGVybWlzc2l2ZSAtIFNFTGludXggcHJpbnRzIHdhcm5pbmdzIGluc3RlYWQgb2YgZW5mb3JjaW5nLgojICAgICAgIGRpc2FibGVkIC0gTm8gU0VMaW51eCBwb2xpY3kgaXMgbG9hZGVkLgpTRUxJTlVYPXBlcm1pc3NpdmUKCiMgU0VMSU5VWFRZUEUgY2FuIHRha2Ugb25lIG9mIHRoZXNlIGZvdXIgdmFsdWVzOgojICAgICAgIHRhcmdldGVkIC0gT25seSB0YXJnZXRlZCBuZXR3b3JrIGRhZW1vbnMgYXJlIHByb3RlY3RlZC4KIyAgICAgICBzdHJpY3QgICAtIEZ1bGwgU0VMaW51eCBwcm90ZWN0aW9uLgojICAgICAgIG1scyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1MZXZlbCBTZWN1cml0eQojICAgICAgIG1jcyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1DYXRlZ29yeSBTZWN1cml0eQojICAgICAgICAgICAgICAgICAgKG1scywgYnV0IG9ubHkgb25lIHNlbnNpdGl2aXR5IGxldmVsKQpTRUxJTlVYVFlQRT1tY3MK
      - path: /etc/systemd/timesyncd.conf
        permissions: 0644
        encoding: base64
        content: W1RpbWVdCk5UUD0xNjkuMjU0LjE2OS4xMjMK
      - path: /etc/ssh/trusted-user-ca-keys.pem
        permissions: 0600
        encoding: base64
        content: c3NoLWVkMjU1MTkgQUFBQUMzTnphQzFsWkRJMU5URTVBQUFBSU00Y3ZaMDFmTG1POWNKYldVajdzZkYrTmhFQ2d5K0NsMGJhelNyWlg3c1UgdmF1bHQtY2FAdmF1bHQub3BlcmF0aW9ucy5naWFudHN3YXJtLmlvCg==
      - path: /etc/ssh/sshd_config
        permissions: 0600
        encoding: base64
        content: IyBVc2UgbW9zdCBkZWZhdWx0cyBmb3Igc3NoZCBjb25maWd1cmF0aW9uLgpTdWJzeXN0ZW0gc2Z0cCBpbnRlcm5hbC1zZnRwCkNsaWVudEFsaXZlSW50ZXJ2YWwgMTgwClVzZUROUyBubwpVc2VQQU0geWVzClByaW50TGFzdExvZyBubyAjIGhhbmRsZWQgYnkgUEFNClByaW50TW90ZCBubyAjIGhhbmRsZWQgYnkgUEFNCiMgTm9uIGRlZmF1bHRzICgjMTAwKQpDbGllbnRBbGl2ZUNvdW50TWF4IDIKUGFzc3dvcmRBdXRoZW50aWNhdGlvbiBubwpUcnVzdGVkVXNlckNBS2V5cyAvZXRjL3NzaC90cnVzdGVkLXVzZXItY2Eta2V5cy5wZW0KTWF4QXV0aFRyaWVzIDUKTG9naW5HcmFjZVRpbWUgNjAKQWxsb3dUY3BGb3J3YXJkaW5nIG5vCkFsbG93QWdlbnRGb3J3YXJkaW5nIG5vCkNBU2lnbmF0dXJlQWxnb3JpdGhtcyBlY2RzYS1zaGEyLW5pc3RwMjU2LGVjZHNhLXNoYTItbmlzdHAzODQsZWNkc2Etc2hhMi1uaXN0cDUyMSxzc2gtZWQyNTUxOSxyc2Etc2hhMi01MTIscnNhLXNoYTItMjU2LHNzaC1yc2EK
      - path: /etc/kubernetes/patches/kubeletconfiguration.yaml
        permissions: 0644
        encoding: base64
        content: YXBpVmVyc2lvbjoga3ViZWxldC5jb25maWcuazhzLmlvL3YxYmV0YTEKa2luZDogS3ViZWxldENvbmZpZ3VyYXRpb24Kc2h1dGRvd25HcmFjZVBlcmlvZDogMzAwcwpzaHV0ZG93bkdyYWNlUGVyaW9kQ3JpdGljYWxQb2RzOiA2MHMKa2VybmVsTWVtY2dOb3RpZmljYXRpb246IHRydWUKZXZpY3Rpb25Tb2Z0OgogIG1lbW9yeS5hdmFpbGFibGU6ICI1MDBNaSIKZXZpY3Rpb25IYXJkOgogIG1lbW9yeS5hdmFpbGFibGU6ICIyMDBNaSIKICBpbWFnZWZzLmF2YWlsYWJsZTogIjE1JSIKZXZpY3Rpb25Tb2Z0R3JhY2VQZXJpb2Q6CiAgbWVtb3J5LmF2YWlsYWJsZTogIjVzIgpldmljdGlvbk1heFBvZEdyYWNlUGVyaW9kOiA2MAprdWJlUmVzZXJ2ZWQ6CiAgY3B1OiAzNTBtCiAgbWVtb3J5OiAzODRNaQogIGVwaGVtZXJhbC1zdG9yYWdlOiAxMDI0TWkKa3ViZVJlc2VydmVkQ2dyb3VwOiAva3ViZXJlc2VydmVkLnNsaWNlCnByb3RlY3RLZXJuZWxEZWZhdWx0czogdHJ1ZQpzeXN0ZW1SZXNlcnZlZDoKICBjcHU6IDI1MG0KICBtZW1vcnk6IDEyODBNaQpzeXN0ZW1SZXNlcnZlZENncm91cDogL3N5c3RlbS5zbGljZQp0bHNDaXBoZXJTdWl0ZXM6Ci0gVExTX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19BRVNfMjU2X0dDTV9TSEEzODQKLSBUTFNfQ0hBQ0hBMjBfUE9MWTEzMDVfU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQ0hBQ0hBMjBfUE9MWTEzMDUKLSBUTFNfRUNESEVfRUNEU0FfV0lUSF9DSEFDSEEyMF9QT0xZMTMwNV9TSEEyNTYKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX1JTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19FQ0RIRV9SU0FfV0lUSF9BRVNfMjU2X0NCQ19TSEEKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1X1NIQTI1NgotIFRMU19SU0FfV0lUSF9BRVNfMTI4X0NCQ19TSEEKLSBUTFNfUlNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX1JTQV9XSVRIX0FFU18yNTZfQ0JDX1NIQQotIFRMU19SU0FfV0lUSF9BRVNfMjU2X0dDTV9TSEEzODQKc2VyaWFsaXplSW1hZ2VQdWxsczogZmFsc2UKc3RyZWFtaW5nQ29ubmVjdGlvbklkbGVUaW1lb3V0OiAxaAphbGxvd2VkVW5zYWZlU3lzY3RsczoKLSAibmV0LioiCmNvbnRhaW5lckxvZ01heFNpemU6IDMwTWkKY29udGFpbmVyTG9nTWF4RmlsZXM6IDIK
      - path: /etc/systemd/logind.conf.d/zzz-kubelet-graceful-shutdown.conf
        permissions: 0700
        encoding: base64
        content: W0xvZ2luXQojIGRlbGF5CkluaGliaXREZWxheU1heFNlYz0zMDAK
      - path: /etc/systemd/system/containerd.service.d/http-proxy.conf
        permissions: 0644
        encoding: base64
        content: W1NlcnZpY2VdCkVudmlyb25tZW50PSJIVFRQX1BST1hZPWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iSFRUUFNfUFJPWFk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iTk9fUFJPWFk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCkVudmlyb25tZW50PSJodHRwX3Byb3h5PWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iaHR0cHNfcHJveHk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0ibm9fcHJveHk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCg==
      - path: /etc/systemd/system/kubelet.service.d/http-proxy.conf
        permissions: 0644
        encoding: base64
        content: W1NlcnZpY2VdCkVudmlyb25tZW50PSJIVFRQX1BST1hZPWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iSFRUUFNfUFJPWFk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iTk9fUFJPWFk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCkVudmlyb25tZW50PSJodHRwX3Byb3h5PWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iaHR0cHNfcHJveHk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0ibm9fcHJveHk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCg==
      - path: /etc/systemd/system/teleport.service.d/http-proxy.conf
        permissions: 0644
        encoding: base64
        content: W1NlcnZpY2VdCkVudmlyb25tZW50PSJIVFRQX1BST1hZPWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iSFRUUFNfUFJPWFk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iTk9fUFJPWFk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCkVudmlyb25tZW50PSJodHRwX3Byb3h5PWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iaHR0cHNfcHJveHk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0ibm9fcHJveHk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCg==
      - path: /etc/teleport-join-token
        permissions: 0644
        contentFrom:
          secret:
            name: awesome-teleport-join-token
            key: joinToken
      - path: /opt/teleport-node-role.sh
        permissions: 0755
        encoding: base64
        content: IyEvYmluL2Jhc2gKCmlmIHN5c3RlbWN0bCBpcy1hY3RpdmUgLXEga3ViZWxldC5zZXJ2aWNlOyB0aGVuCiAgICBpZiBbIC1lICIvZXRjL2t1YmVybmV0ZXMvbWFuaWZlc3RzL2t1YmUtYXBpc2VydmVyLnlhbWwiIF07IHRoZW4KICAgICAgICBlY2hvICJjb250cm9sLXBsYW5lIgogICAgZWxzZQogICAgICAgIGVjaG8gIndvcmtlciIKICAgIGZpCmVsc2UKICAgIGVjaG8gIiIKZmkK
      - path: /etc/teleport.yaml
        permissions: 0644
        encoding: base64
        content: dmVyc2lvbjogdjMKdGVsZXBvcnQ6CiAgZGF0YV9kaXI6IC92YXIvbGliL3RlbGVwb3J0CiAgam9pbl9wYXJhbXM6CiAgICB0b2tlbl9uYW1lOiAvZXRjL3RlbGVwb3J0LWpvaW4tdG9rZW4KICAgIG1ldGhvZDogdG9rZW4KICBwcm94eV9zZXJ2ZXI6IHRlbGVwb3J0LmdpYW50c3dhcm0uaW86NDQzCiAgbG9nOgogICAgb3V0cHV0OiBzdGRlcnIKYXV0aF9zZXJ2aWNlOgogIGVuYWJsZWQ6ICJubyIKc3NoX3NlcnZpY2U6CiAgZW5hYmxlZDogInllcyIKICBjb21tYW5kczoKICAtIG5hbWU6IG5vZGUKICAgIGNvbW1hbmQ6IFtob3N0bmFtZV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogYXJjaAogICAgY29tbWFuZDogW3VuYW1lLCAtbV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogcm9sZQogICAgY29tbWFuZDogWy9vcHQvdGVsZXBvcnQtbm9kZS1yb2xlLnNoXQogICAgcGVyaW9kOiAxbTBzCiAgbGFiZWxzOgogICAgaW5zOiBnaWFudG1jCiAgICBtYzogZ2lhbnRtYwogICAgY2x1c3RlcjogYXdlc29tZQogICAgYmFzZURvbWFpbjogZXhhbXBsZS5naWdhbnRpYy5pbwpwcm94eV9zZXJ2aWNlOgogIGVuYWJsZWQ6ICJubyIK
      - contentFrom:
          secret:
            key: node-stuff
            name: cluster-super-secret
        path: /etc/aws/node/file.yaml
        permissions: 0644
      - contentFrom:
          secret:
            key: node-stuff
            name: cluster-super-secret
        path: /etc/custom/node/file.yaml
        permissions: 0644
      - path: /etc/flatcar-cgroupv1
        filesystem: root
        permissions: 0444
      - path: /etc/containerd/config.toml
        permissions: 0644
        contentFrom:
          secret:
            name: awesome-def01-containerd-dbb9d89e
            key: config.toml
      - contentFrom:
          secret:
            key: node-stuff
            name: cluster-super-secret-worker
        path: /etc/aws/worker/node/file.yaml
        permissions: 0644
      - contentFrom:
          secret:
            key: node-stuff
            name: cluster-super-secret-worker
        path: /etc/custom/worker/node/file.yaml
        permissions: 0644
    # Source: cluster/templates/clusterapi/workers/kubeadmconfig.yaml
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfig
    metadata:
      annotations:
        machine-pool.giantswarm.io/name: awesome-def02
        important-cluster-value: 1000
        robots-need-this-in-the-cluster: eW91IGNhbm5vdCByZWFkIHRoaXMsIGJ1dCByb2JvdHMgY2FuCg==
      labels:
        # deprecated: "app: cluster-aws" label is deprecated and it will be removed after upgrading
    # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
    # need this label on the Cluster resource.
    app: cluster-aws
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 1.5.2
        app.kubernetes.io/part-of: cluster-aws
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-1.5.2
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: awesome
        giantswarm.io/organization: giantswarm
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: awesome
        cluster.x-k8s.io/watch-filter: capi
        release.giantswarm.io/version: 27.0.0-alpha.1
        another-cluster-label: label-2
        some-cluster-label: label-1
        giantswarm.io/machine-pool: awesome-def02
      name: awesome-def02-c67d0
      namespace: org-giantswarm
    spec:
      format: ignition
      ignition:
        containerLinuxConfig:
          additionalConfig: |
            systemd:
              units:      
              - name: os-hardening.service
                enabled: true
                contents: |
                  [Unit]
                  Description=Apply os hardening
                  [Service]
                  Type=oneshot
                  ExecStartPre=-/bin/bash -c "gpasswd -d core rkt; gpasswd -d core docker; gpasswd -d core wheel"
                  ExecStartPre=/bin/bash -c "until [ -f '/etc/sysctl.d/hardening.conf' ]; do echo Waiting for sysctl file; sleep 1s;done;"
                  ExecStart=/usr/sbin/sysctl -p /etc/sysctl.d/hardening.conf
                  [Install]
                  WantedBy=multi-user.target
              - name: update-engine.service
                enabled: false
                mask: true
              - name: locksmithd.service
                enabled: false
                mask: true
              - name: sshkeys.service
                enabled: false
                mask: true
              - name: teleport.service
                enabled: true
                contents: |
                  [Unit]
                  Description=Teleport Service
                  After=network.target
                  [Service]
                  Type=simple
                  Restart=on-failure
                  ExecStart=/opt/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
                  ExecReload=/bin/kill -HUP $MAINPID
                  PIDFile=/run/teleport.pid
                  LimitNOFILE=524288
                  [Install]
                  WantedBy=multi-user.target
              - name: kubeadm.service
                dropins:
                - name: 10-flatcar.conf
                  contents: |
                    [Unit]
                    # kubeadm must run after coreos-metadata populated /run/metadata directory.
                    Requires=coreos-metadata.service
                    After=coreos-metadata.service
                    # kubeadm must run after containerd - see https://github.com/kubernetes-sigs/image-builder/issues/939.
                    After=containerd.service
                    # kubeadm requires having an IP
                    After=network-online.target
                    Wants=network-online.target
                    [Service]
                    # Ensure kubeadm service has access to kubeadm binary in /opt/bin on Flatcar.
                    Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin
                    # To make metadata environment variables available for pre-kubeadm commands.
                    EnvironmentFile=/run/metadata/*
              - name: containerd.service
                enabled: true
                contents: |
                dropins:
                - name: 10-change-cgroup.conf
                  contents: |
                    [Service]
                    CPUAccounting=true
                    MemoryAccounting=true
                    Slice=kubereserved.slice
              - name: auditd.service
                enabled: false      
              - name: var-lib-kubelet.mount
                enabled: true
                mask: false
                contents: |
                  [Unit]
                  Description=kubelet volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/kubelet
                  Where=/var/lib/kubelet
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
              - name: var-lib-containerd.mount
                enabled: true
                mask: false
                contents: |
                  [Unit]
                  Description=containerd volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/containerd
                  Where=/var/lib/containerd
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
              - name: example2.service
                enabled: false
                mask: false
                dropins:
                - name: hello1.conf
                  contents: |
                    # Multi-line
                    # contents goes here
                - name: hello2.conf
                  contents: |
                    # Multi-line
                    # contents goes here
              - name: var-lib-workload.mount
                enabled: true
                mask: false
                contents: |
                  [Unit]
                  Description=workload volume
                  DefaultDependencies=no
                  [Mount]
                  What=/dev/disk/by-label/workload
                  Where=/var/lib/workload
                  Type=xfs
                  [Install]
                  WantedBy=local-fs-pre.target
              - name: example2-workers.service
                enabled: false
                mask: false
                dropins:
                - name: hello1-workers.conf
                  contents: |
                    # Multi-line
                    # contents goes here
                - name: hello2-workers.conf
                  contents: |
                    # Multi-line
                    # contents goes here
            storage:
              filesystems:      
              directories:      
              - path: /var/lib/kubelet
                mode: 0750      
              - path: /var/lib/kubelet/temporary/stuff
                overwrite: true
                filesystem: kubelet
                mode: 750
                user:
                  id: 12345
                  name: giantswarm
                group:
                  id: 23456
                  name: giantswarm
              - path: /var/lib/kubelet/temporary/stuff/workers
                overwrite: true
                filesystem: kubelet
                mode: 750
                user:
                  id: 12345
                  name: giantswarm
                group:
                  id: 23456
                  name: giantswarm
      joinConfiguration:
        nodeRegistration:
          name: ${COREOS_EC2_HOSTNAME}
          kubeletExtraArgs:
            cgroup-driver: cgroupfs
            cloud-provider: external
            healthz-bind-address: 0.0.0.0
            node-ip: ${COREOS_EC2_IPV4_LOCAL}
            node-labels: "ip=${COREOS_EC2_IPV4_LOCAL},role=worker,giantswarm.io/machine-pool=awesome-def02"
            v: 2
        patches:
          directory: /etc/kubernetes/patches
      preKubeadmCommands:
      - "envsubst < /etc/kubeadm.yml > /etc/kubeadm.yml.tmp"
      - "mv /etc/kubeadm.yml.tmp /etc/kubeadm.yml"
      - "systemctl restart containerd"
      - "systemctl restart sshd"
      - "export HTTP_PROXY=http://proxy.giantswarm.io"
      - "export HTTPS_PROXY=https://proxy.giantswarm.io"
      - "export NO_PROXY="127.0.0.1,localhost,svc,local,awesome.example.gigantic.io,172.31.0.0/16,100.64.0.0/12,elb.amazonaws.com,169.254.169.254,some.noproxy.awesome.example.gigantic.io,another.noproxy.address.giantswarm.io,proxy1.example.com,proxy2.example.com""
      - "export http_proxy=http://proxy.giantswarm.io"
      - "export https_proxy=https://proxy.giantswarm.io"
      - "export no_proxy="127.0.0.1,localhost,svc,local,awesome.example.gigantic.io,172.31.0.0/16,100.64.0.0/12,elb.amazonaws.com,169.254.169.254,some.noproxy.awesome.example.gigantic.io,another.noproxy.address.giantswarm.io,proxy1.example.com,proxy2.example.com""
      - "echo "aws nodes command before kubeadm""
      - "echo "custom nodes command before kubeadm""
      - "echo "aws workers command before kubeadm""
      - "echo "custom workers command before kubeadm""
      postKubeadmCommands:
      - "echo "aws nodes command after kubeadm""
      - "echo "custom nodes command after kubeadm""
      - "echo "aws workers command after kubeadm""
      - "echo "custom workers command after kubeadm""
      users:
      - name: giantswarm
        groups: sudo
        sudo: "ALL=(ALL) NOPASSWD:ALL"
      files:
      - path: /etc/sysctl.d/hardening.conf
        permissions: 0644
        encoding: base64
        content: ZnMuaW5vdGlmeS5tYXhfdXNlcl93YXRjaGVzID0gMTYzODQKZnMuaW5vdGlmeS5tYXhfdXNlcl9pbnN0YW5jZXMgPSA4MTkyCmtlcm5lbC5rcHRyX3Jlc3RyaWN0ID0gMgprZXJuZWwuc3lzcnEgPSAwCm5ldC5pcHY0LmNvbmYuYWxsLmxvZ19tYXJ0aWFucyA9IDEKbmV0LmlwdjQuY29uZi5hbGwuc2VuZF9yZWRpcmVjdHMgPSAwCm5ldC5pcHY0LmNvbmYuZGVmYXVsdC5hY2NlcHRfcmVkaXJlY3RzID0gMApuZXQuaXB2NC5jb25mLmRlZmF1bHQubG9nX21hcnRpYW5zID0gMQpuZXQuaXB2NC50Y3BfdGltZXN0YW1wcyA9IDAKbmV0LmlwdjYuY29uZi5hbGwuYWNjZXB0X3JlZGlyZWN0cyA9IDAKbmV0LmlwdjYuY29uZi5kZWZhdWx0LmFjY2VwdF9yZWRpcmVjdHMgPSAwCiMgSW5jcmVhc2VkIG1tYXBmcyBiZWNhdXNlIHNvbWUgYXBwbGljYXRpb25zLCBsaWtlIEVTLCBuZWVkIGhpZ2hlciBsaW1pdCB0byBzdG9yZSBkYXRhIHByb3Blcmx5CnZtLm1heF9tYXBfY291bnQgPSAyNjIxNDQKIyBSZXNlcnZlZCB0byBhdm9pZCBjb25mbGljdHMgd2l0aCBrdWJlLWFwaXNlcnZlciwgd2hpY2ggYWxsb2NhdGVzIHdpdGhpbiB0aGlzIHJhbmdlCm5ldC5pcHY0LmlwX2xvY2FsX3Jlc2VydmVkX3BvcnRzPTMwMDAwLTMyNzY3Cm5ldC5pcHY0LmNvbmYuYWxsLnJwX2ZpbHRlciA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2lnbm9yZSA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2Fubm91bmNlID0gMgoKIyBUaGVzZSBhcmUgcmVxdWlyZWQgZm9yIHRoZSBrdWJlbGV0ICctLXByb3RlY3Qta2VybmVsLWRlZmF1bHRzJyBmbGFnCiMgU2VlIGh0dHBzOi8vZ2l0aHViLmNvbS9naWFudHN3YXJtL2dpYW50c3dhcm0vaXNzdWVzLzEzNTg3CnZtLm92ZXJjb21taXRfbWVtb3J5PTEKa2VybmVsLnBhbmljPTEwCmtlcm5lbC5wYW5pY19vbl9vb3BzPTEK
      - path: /etc/selinux/config
        permissions: 0644
        encoding: base64
        content: IyBUaGlzIGZpbGUgY29udHJvbHMgdGhlIHN0YXRlIG9mIFNFTGludXggb24gdGhlIHN5c3RlbSBvbiBib290LgoKIyBTRUxJTlVYIGNhbiB0YWtlIG9uZSBvZiB0aGVzZSB0aHJlZSB2YWx1ZXM6CiMgICAgICAgZW5mb3JjaW5nIC0gU0VMaW51eCBzZWN1cml0eSBwb2xpY3kgaXMgZW5mb3JjZWQuCiMgICAgICAgcGVybWlzc2l2ZSAtIFNFTGludXggcHJpbnRzIHdhcm5pbmdzIGluc3RlYWQgb2YgZW5mb3JjaW5nLgojICAgICAgIGRpc2FibGVkIC0gTm8gU0VMaW51eCBwb2xpY3kgaXMgbG9hZGVkLgpTRUxJTlVYPXBlcm1pc3NpdmUKCiMgU0VMSU5VWFRZUEUgY2FuIHRha2Ugb25lIG9mIHRoZXNlIGZvdXIgdmFsdWVzOgojICAgICAgIHRhcmdldGVkIC0gT25seSB0YXJnZXRlZCBuZXR3b3JrIGRhZW1vbnMgYXJlIHByb3RlY3RlZC4KIyAgICAgICBzdHJpY3QgICAtIEZ1bGwgU0VMaW51eCBwcm90ZWN0aW9uLgojICAgICAgIG1scyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1MZXZlbCBTZWN1cml0eQojICAgICAgIG1jcyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1DYXRlZ29yeSBTZWN1cml0eQojICAgICAgICAgICAgICAgICAgKG1scywgYnV0IG9ubHkgb25lIHNlbnNpdGl2aXR5IGxldmVsKQpTRUxJTlVYVFlQRT1tY3MK
      - path: /etc/systemd/timesyncd.conf
        permissions: 0644
        encoding: base64
        content: W1RpbWVdCk5UUD0xNjkuMjU0LjE2OS4xMjMK
      - path: /etc/ssh/trusted-user-ca-keys.pem
        permissions: 0600
        encoding: base64
        content: c3NoLWVkMjU1MTkgQUFBQUMzTnphQzFsWkRJMU5URTVBQUFBSU00Y3ZaMDFmTG1POWNKYldVajdzZkYrTmhFQ2d5K0NsMGJhelNyWlg3c1UgdmF1bHQtY2FAdmF1bHQub3BlcmF0aW9ucy5naWFudHN3YXJtLmlvCg==
      - path: /etc/ssh/sshd_config
        permissions: 0600
        encoding: base64
        content: IyBVc2UgbW9zdCBkZWZhdWx0cyBmb3Igc3NoZCBjb25maWd1cmF0aW9uLgpTdWJzeXN0ZW0gc2Z0cCBpbnRlcm5hbC1zZnRwCkNsaWVudEFsaXZlSW50ZXJ2YWwgMTgwClVzZUROUyBubwpVc2VQQU0geWVzClByaW50TGFzdExvZyBubyAjIGhhbmRsZWQgYnkgUEFNClByaW50TW90ZCBubyAjIGhhbmRsZWQgYnkgUEFNCiMgTm9uIGRlZmF1bHRzICgjMTAwKQpDbGllbnRBbGl2ZUNvdW50TWF4IDIKUGFzc3dvcmRBdXRoZW50aWNhdGlvbiBubwpUcnVzdGVkVXNlckNBS2V5cyAvZXRjL3NzaC90cnVzdGVkLXVzZXItY2Eta2V5cy5wZW0KTWF4QXV0aFRyaWVzIDUKTG9naW5HcmFjZVRpbWUgNjAKQWxsb3dUY3BGb3J3YXJkaW5nIG5vCkFsbG93QWdlbnRGb3J3YXJkaW5nIG5vCkNBU2lnbmF0dXJlQWxnb3JpdGhtcyBlY2RzYS1zaGEyLW5pc3RwMjU2LGVjZHNhLXNoYTItbmlzdHAzODQsZWNkc2Etc2hhMi1uaXN0cDUyMSxzc2gtZWQyNTUxOSxyc2Etc2hhMi01MTIscnNhLXNoYTItMjU2LHNzaC1yc2EK
      - path: /etc/kubernetes/patches/kubeletconfiguration.yaml
        permissions: 0644
        encoding: base64
        content: YXBpVmVyc2lvbjoga3ViZWxldC5jb25maWcuazhzLmlvL3YxYmV0YTEKa2luZDogS3ViZWxldENvbmZpZ3VyYXRpb24Kc2h1dGRvd25HcmFjZVBlcmlvZDogMzAwcwpzaHV0ZG93bkdyYWNlUGVyaW9kQ3JpdGljYWxQb2RzOiA2MHMKa2VybmVsTWVtY2dOb3RpZmljYXRpb246IHRydWUKZXZpY3Rpb25Tb2Z0OgogIG1lbW9yeS5hdmFpbGFibGU6ICI1MDBNaSIKZXZpY3Rpb25IYXJkOgogIG1lbW9yeS5hdmFpbGFibGU6ICIyMDBNaSIKICBpbWFnZWZzLmF2YWlsYWJsZTogIjE1JSIKZXZpY3Rpb25Tb2Z0R3JhY2VQZXJpb2Q6CiAgbWVtb3J5LmF2YWlsYWJsZTogIjVzIgpldmljdGlvbk1heFBvZEdyYWNlUGVyaW9kOiA2MAprdWJlUmVzZXJ2ZWQ6CiAgY3B1OiAzNTBtCiAgbWVtb3J5OiAzODRNaQogIGVwaGVtZXJhbC1zdG9yYWdlOiAxMDI0TWkKa3ViZVJlc2VydmVkQ2dyb3VwOiAva3ViZXJlc2VydmVkLnNsaWNlCnByb3RlY3RLZXJuZWxEZWZhdWx0czogdHJ1ZQpzeXN0ZW1SZXNlcnZlZDoKICBjcHU6IDI1MG0KICBtZW1vcnk6IDEyODBNaQpzeXN0ZW1SZXNlcnZlZENncm91cDogL3N5c3RlbS5zbGljZQp0bHNDaXBoZXJTdWl0ZXM6Ci0gVExTX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19BRVNfMjU2X0dDTV9TSEEzODQKLSBUTFNfQ0hBQ0hBMjBfUE9MWTEzMDVfU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQ0hBQ0hBMjBfUE9MWTEzMDUKLSBUTFNfRUNESEVfRUNEU0FfV0lUSF9DSEFDSEEyMF9QT0xZMTMwNV9TSEEyNTYKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX1JTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19FQ0RIRV9SU0FfV0lUSF9BRVNfMjU2X0NCQ19TSEEKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1X1NIQTI1NgotIFRMU19SU0FfV0lUSF9BRVNfMTI4X0NCQ19TSEEKLSBUTFNfUlNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX1JTQV9XSVRIX0FFU18yNTZfQ0JDX1NIQQotIFRMU19SU0FfV0lUSF9BRVNfMjU2X0dDTV9TSEEzODQKc2VyaWFsaXplSW1hZ2VQdWxsczogZmFsc2UKc3RyZWFtaW5nQ29ubmVjdGlvbklkbGVUaW1lb3V0OiAxaAphbGxvd2VkVW5zYWZlU3lzY3RsczoKLSAibmV0LioiCmNvbnRhaW5lckxvZ01heFNpemU6IDMwTWkKY29udGFpbmVyTG9nTWF4RmlsZXM6IDIK
      - path: /etc/systemd/logind.conf.d/zzz-kubelet-graceful-shutdown.conf
        permissions: 0700
        encoding: base64
        content: W0xvZ2luXQojIGRlbGF5CkluaGliaXREZWxheU1heFNlYz0zMDAK
      - path: /etc/systemd/system/containerd.service.d/http-proxy.conf
        permissions: 0644
        encoding: base64
        content: W1NlcnZpY2VdCkVudmlyb25tZW50PSJIVFRQX1BST1hZPWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iSFRUUFNfUFJPWFk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iTk9fUFJPWFk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCkVudmlyb25tZW50PSJodHRwX3Byb3h5PWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iaHR0cHNfcHJveHk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0ibm9fcHJveHk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCg==
      - path: /etc/systemd/system/kubelet.service.d/http-proxy.conf
        permissions: 0644
        encoding: base64
        content: W1NlcnZpY2VdCkVudmlyb25tZW50PSJIVFRQX1BST1hZPWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iSFRUUFNfUFJPWFk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iTk9fUFJPWFk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCkVudmlyb25tZW50PSJodHRwX3Byb3h5PWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iaHR0cHNfcHJveHk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0ibm9fcHJveHk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCg==
      - path: /etc/systemd/system/teleport.service.d/http-proxy.conf
        permissions: 0644
        encoding: base64
        content: W1NlcnZpY2VdCkVudmlyb25tZW50PSJIVFRQX1BST1hZPWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iSFRUUFNfUFJPWFk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iTk9fUFJPWFk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCkVudmlyb25tZW50PSJodHRwX3Byb3h5PWh0dHA6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0iaHR0cHNfcHJveHk9aHR0cHM6Ly9wcm94eS5naWFudHN3YXJtLmlvIgpFbnZpcm9ubWVudD0ibm9fcHJveHk9MTI3LjAuMC4xLGxvY2FsaG9zdCxzdmMsbG9jYWwsYXdlc29tZS5leGFtcGxlLmdpZ2FudGljLmlvLDE3Mi4zMS4wLjAvMTYsMTAwLjY0LjAuMC8xMixlbGIuYW1hem9uYXdzLmNvbSwxNjkuMjU0LjE2OS4yNTQsc29tZS5ub3Byb3h5LmF3ZXNvbWUuZXhhbXBsZS5naWdhbnRpYy5pbyxhbm90aGVyLm5vcHJveHkuYWRkcmVzcy5naWFudHN3YXJtLmlvLHByb3h5MS5leGFtcGxlLmNvbSxwcm94eTIuZXhhbXBsZS5jb20iCg==
      - path: /etc/teleport-join-token
        permissions: 0644
        contentFrom:
          secret:
            name: awesome-teleport-join-token
            key: joinToken
      - path: /opt/teleport-node-role.sh
        permissions: 0755
        encoding: base64
        content: IyEvYmluL2Jhc2gKCmlmIHN5c3RlbWN0bCBpcy1hY3RpdmUgLXEga3ViZWxldC5zZXJ2aWNlOyB0aGVuCiAgICBpZiBbIC1lICIvZXRjL2t1YmVybmV0ZXMvbWFuaWZlc3RzL2t1YmUtYXBpc2VydmVyLnlhbWwiIF07IHRoZW4KICAgICAgICBlY2hvICJjb250cm9sLXBsYW5lIgogICAgZWxzZQogICAgICAgIGVjaG8gIndvcmtlciIKICAg...*[Comment body truncated]*
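In the rendered worker joinConfiguration above, the kubelet is started with cgroup-driver: cgroupfs, the driver that pairs with cgroups v1 nodes; node pools on cgroups v2 typically use the systemd driver instead. As a rough sketch of what a per-node-pool opt-in could look like in user values (the key names and nesting below are assumptions for illustration only; the authoritative names live in the chart's values schema):

    # hypothetical user values, key names are illustrative assumptions
    global:
      nodePools:
        def02:
          # run this node pool's Flatcar machines with cgroups v1, e.g. for legacy workloads
          cgroupsv1: true

Node pools that omit such a flag would keep the default cgroups v2 setup.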

@taylorbot
Contributor

Hey @fiunchinho, a test pull request has been created for you in the cluster-aws repo! Go to pull request giantswarm/cluster-aws#902 in order to test your cluster chart changes on AWS.

@Gacko Gacko changed the title from "Backport: allow to configure cgroupsv1 per nodepool (#348)" to "Backport: allow to configure cgroupsv1 per nodepool" on Oct 10, 2024
@Gacko Gacko changed the title from "Backport: allow to configure cgroupsv1 per nodepool" to "Allow to configure cgroupsv1 per nodepool" on Oct 10, 2024
@Gacko
Member

Gacko commented Oct 10, 2024

I changed the PR's title as there is, IMHO, no need to prefix it this way. Apart from that: LGTM! I guess you only cherry-picked, right?

I guess it's currently hard, if not impossible, to test stuff like this by just running cluster test suites, so we might need to come up with a test release that uses the test version of cluster-aws that includes this version of cluster here.

@fiunchinho
Member Author

> I changed the PR's title as there is, IMHO, no need to prefix it this way. Apart from that: LGTM! I guess you only cherry-picked, right?
>
> I guess it's currently hard, if not impossible, to test stuff like this by just running cluster test suites, so we might need to come up with a test release that uses the test version of cluster-aws that includes this version of cluster here.

Yeah, basically cherry-picked and fixed conflicts.
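The test-release idea above would usually be wired up through cluster-aws's Helm dependency on this chart: a test build of cluster-aws pins a test version of cluster, and a test release then pins that cluster-aws build. A minimal sketch of the dependency entry, with the version and repository URL as placeholders rather than confirmed values:

    # Chart.yaml of cluster-aws (sketch, version and repository are placeholders)
    dependencies:
      - name: cluster
        version: 1.0.x-test        # test build that contains this backport
        repository: https://giantswarm.github.io/cluster-catalog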

@fiunchinho fiunchinho merged commit d2ebe97 into release-v1.0.x Oct 14, 2024
11 of 12 checks passed
@fiunchinho fiunchinho deleted the cherrypick-348-to-1-0 branch October 14, 2024 12:18