This repository has been archived by the owner on Oct 3, 2024. It is now read-only.

feat(helm): update chart cilium to 1.16.2 #193

Open · wants to merge 1 commit into base: main

Conversation

renovate[bot] (Contributor) commented Jul 13, 2024

This PR contains the following updates:

Package Update Change
cilium (source) minor 1.15.6 -> 1.16.2

Release Notes

cilium/cilium (cilium)

v1.16.2

Compare Source

v1.16.1: 1.16.1

Compare Source

Security Advisories

This release addresses the following security vulnerabilities:

Summary of Changes

Minor Changes:

Bugfixes:

  • auth: Fix data race in Upsert (Backport PR #​34158, Upstream PR #​33905, @​chaunceyjiang)
  • BGPv1 + BGPv2: Fix incorrect service reconciliation in setups with multiple BGP instances (virtual routers) (Backport PR #​34297, Upstream PR #​34177, @​rastislavs)
  • bgpv1: Fix data race in bgppSelection (Backport PR #​34158, Upstream PR #​33904, @​chaunceyjiang)
  • bgpv2: Avoid duplicate route policy naming (Backport PR #​34158, Upstream PR #​34031, @​rastislavs)
  • BGPv2: Fix Service advertisement selector: do not require matching CiliumLoadBalancerIPPool (Backport PR #​34201, Upstream PR #​34182, @​rastislavs)
  • Fix a nil dereference crash during cilium-agent initialization affecting setups with FQDN policies. The crash is triggered when a restored endpoint performs a DNS request at just the right time during early cilium-agent restoration. The problem is not expected to be persistent, and the agent should get past the problematic part of the initialization on restart. (Backport PR #34158, Upstream PR #34059, @joamaki)
  • Fix appArmorProfile condition for CronJob helm template (Backport PR #​34297, Upstream PR #​34100, @​sathieu)
  • Fix bug causing etcd upsertion/deletion events to be potentially missed during the initial synchronization, when Cilium operates in KVStore mode, or Cluster Mesh is enabled. (Backport PR #​34181, Upstream PR #​34091, @​giorio94)
  • Fix issue in picking node IP addresses from the loopback device. This fixes a regression in v1.15 and v1.16 where VIPs assigned to the lo device were not considered by Cilium.
    Fix spurious node address updates to avoid unnecessary datapath reinitializations. (Backport PR #34085, Upstream PR #34012, @joamaki)
  • Fix possible connection disruption on agent restart with WireGuard + kvstore (Backport PR #​34158, Upstream PR #​34062, @​giorio94)
  • Fixes DNS proxy "connect: cannot assign requested address" errors in transparent mode, which were due to opening multiple TCP connections to the upstream DNS server. (Backport PR #​34201, Upstream PR #​33989, @​bimmlerd)
  • gateway-api: Add HTTP method condition in sortable routes (Backport PR #​34158, Upstream PR #​34109, @​sayboras)
  • gateway-api: Enqueue gateway for Reference Grant changes (Backport PR #​34158, Upstream PR #​34032, @​sayboras)
  • lbipam: fixed bug in sharing key logic (Backport PR #​34158, Upstream PR #​34106, @​dylandreimerink)
  • policy: Fix policy cache covers context lookup. (#​34322, @​nathanjsweet)
  • service: Relax protocol matching for L7 Service (Backport PR #​34195, Upstream PR #​34131, @​sayboras)

CI Changes:

Misc Changes:

Other Changes:

Docker Manifests
cilium

quay.io/cilium/cilium:v1.16.1@​sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
quay.io/cilium/cilium:stable@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39

clustermesh-apiserver

quay.io/cilium/clustermesh-apiserver:v1.16.1@​sha256:e9c77417cd474cc943b2303a76c5cf584ac7024dd513ebb8d608cb62fe28896f
quay.io/cilium/clustermesh-apiserver:stable@sha256:e9c77417cd474cc943b2303a76c5cf584ac7024dd513ebb8d608cb62fe28896f

docker-plugin

quay.io/cilium/docker-plugin:v1.16.1@​sha256:243fd7759818d990a7f9b33df3eb685a9f250a12020e22f660547f9516b76320
quay.io/cilium/docker-plugin:stable@sha256:243fd7759818d990a7f9b33df3eb685a9f250a12020e22f660547f9516b76320

hubble-relay

quay.io/cilium/hubble-relay:v1.16.1@​sha256:2e1b4c739a676ae187d4c2bfc45c3e865bda2567cc0320a90cb666657fcfcc35
quay.io/cilium/hubble-relay:stable@sha256:2e1b4c739a676ae187d4c2bfc45c3e865bda2567cc0320a90cb666657fcfcc35

operator-alibabacloud

quay.io/cilium/operator-alibabacloud:v1.16.1@​sha256:4381adf48d76ec482551183947e537d44bcac9b6c31a635a9ac63f696d978804
quay.io/cilium/operator-alibabacloud:stable@sha256:4381adf48d76ec482551183947e537d44bcac9b6c31a635a9ac63f696d978804

operator-aws

quay.io/cilium/operator-aws:v1.16.1@​sha256:e3876fcaf2d6ccc8d5b4aaaded7b1efa971f3f4175eaa2c8a499878d58c39df4
quay.io/cilium/operator-aws:stable@sha256:e3876fcaf2d6ccc8d5b4aaaded7b1efa971f3f4175eaa2c8a499878d58c39df4

operator-azure

quay.io/cilium/operator-azure:v1.16.1@​sha256:e55c222654a44ceb52db7ade3a7b9e8ef05681ff84c14ad1d46fea34869a7a22
quay.io/cilium/operator-azure:stable@sha256:e55c222654a44ceb52db7ade3a7b9e8ef05681ff84c14ad1d46fea34869a7a22

operator-generic

quay.io/cilium/operator-generic:v1.16.1@​sha256:3bc7e7a43bc4a4d8989cb7936c5d96675dd2d02c306adf925ce0a7c35aa27dc4
quay.io/cilium/operator-generic:stable@sha256:3bc7e7a43bc4a4d8989cb7936c5d96675dd2d02c306adf925ce0a7c35aa27dc4

operator

quay.io/cilium/operator:v1.16.1@​sha256:258b28fefc9f3fe1cbcb21a3b2c4c96dcc72f6ee258eed0afebe9b0ac47f462b
quay.io/cilium/operator:stable@sha256:258b28fefc9f3fe1cbcb21a3b2c4c96dcc72f6ee258eed0afebe9b0ac47f462b

v1.16.0: 1.16.0

Compare Source

We are excited to announce the Cilium 1.16.0 release. A total of 2969 new commits were contributed to this release by a growing community of over 750 developers, and the project now has over 19300 GitHub stars! 🤩

To keep up to date with all the latest Cilium releases, join #release on Slack.

Here's what's new in v1.16.0:
  • 🚠 Networking

    • 🚤 Cilium NetKit: container-network throughput and latency as fast as host-network.
    • 🌐 BGPv2: Fresh new API for Cilium's BGP feature.
    • 📢 BGP ClusterIP Advertisement: BGP advertisements of ExternalIP and Cluster IP Services.
    • 🔀 Service Traffic Distribution: Kubernetes 1.30 Service Traffic Distribution can be enabled directly in the Service spec instead of using annotations.
    • 🔄 Local Redirect Policy promoted to Stable: Redirecting the traffic bound for services to the local backend, such as node-local DNS.
    • 📡 Multicast Datapath: Define multicast groups in Cilium.
    • 🏷️ Per-Pod Fixed MAC Address: Specify the MAC address used on a pod.
  • 🕸️ Service Mesh & Ingress/Gateway API

    • 🧭 Gateway API GAMMA Support: East-west traffic management for the cluster via Gateway API.
    • ⛩️ Gateway API 1.1 Support: Cilium now supports Gateway API 1.1.
    • 🛂 ExternalTrafficPolicy support for Ingress/Gateway API: External traffic can now be routed to node-local or cluster-wide endpoints.
    • 🕸️ L7 Envoy Proxy as dedicated DaemonSet: With a dedicated DaemonSet, Envoy and Cilium have independent life-cycles. Now on by default for new installs.
    • 🗂️ NodeSelector support for CiliumEnvoyConfig: Instead of being applied on all nodes, it's now possible to select which nodes a particular CiliumEnvoyConfig applies to.
  • 💂‍♀️ Security

    • 📶 Port Range support in Network Policies: This long-awaited feature has been implemented into Cilium.
    • 📋 Network Policy Validation Status: kubectl describe cnp will be able to tell if the Cilium Network Policy is valid or invalid.
    • Control Cilium Network Policy Default Deny behavior: Policies usually enable default deny for the subject of the policies, but this can now be disabled on a per-policy basis.
    • 👥 CIDRGroups support for Egress and Deny rules: Add support for matching CiliumCIDRGroups in Egress policy rules.
    • 💾 Load "default" Network Policies from Filesystem: In addition to reading policies from Kubernetes, Cilium can be configured to read policies locally.
    • 🗂️ Support to Select Nodes as Target of Cilium Network Policies: With new ToNodes/FromNodes selectors, traffic can be allowed or denied based on the labels of the target Node in the cluster.
  • 🌅 Day 2 Operations and Scale

    • 🧝 New ELF Loader Logic: With this new loader logic, the median memory usage of Cilium was decreased by 24%.
    • 🚀 Improved DNS-based network policy performance: DNS-based network policies had up to 5x reduction in tail latency.
    • 🕸️ KVStoreMesh default option for ClusterMesh: Introduced in Cilium 1.14, and after a lot of adoption and feedback from the community, KVStoreMesh is now the default way to deploy ClusterMesh.
  • 🛰️ Hubble & Observability

    • 🗣️ CEL Filters Support: Hubble supports Common Expression Language (CEL), enabling more complex conditions that cannot be expressed using the existing flow filters.
    • 📊 Improved HTTP metrics: There are additional metrics to count the HTTP requests and their duration.
    • 📏 Improved BPF map pressure metrics: New metric to track the BPF map pressure metric for the Connection Tracking BPF map.
    • 👀 Improvements for Egress Traffic Path Observability: Several metrics were added in this release to help troubleshoot Cilium egress routing.
    • 🔬 K8S Event Generation on Packet Drop: Hubble can now generate a Kubernetes event for a packet dropped from a pod, which can be verified with kubectl get events.
    • 🗂️ Filtering Hubble flows by node labels: Filter Hubble flows observed on nodes matching the given label.
  • 🏘️ Community:
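
Among the security items above, the long-awaited port-range support can be illustrated with a CiliumNetworkPolicy fragment. This is a sketch only: the policy name and labels are hypothetical, and the `endPort` field name is assumed from the upstream policy docs, so verify it against the CRDs of your installed version.

```yaml
# Sketch: allow a contiguous TCP port range with Cilium 1.16 network policies.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-port-range        # hypothetical name
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: demo                 # hypothetical label
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: client         # hypothetical label
      toPorts:
        - ports:
            - port: "8080"
              endPort: 8090     # matches ports 8080-8090 inclusive (assumed field)
              protocol: TCP
```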

And finally, we would like to thank all contributors to Cilium who helped directly and indirectly with the project. The success of Cilium could not happen without all of you. ❤️

For a full summary of changes, see https://github.com/cilium/cilium/blob/v1.16.0/CHANGELOG.md.

Docker Manifests
cilium

quay.io/cilium/cilium:v1.16.0@​sha256:46ffa4ef3cf6d8885dcc4af5963b0683f7d59daa90d49ed9fb68d3b1627fe058
quay.io/cilium/cilium:stable@sha256:46ffa4ef3cf6d8885dcc4af5963b0683f7d59daa90d49ed9fb68d3b1627fe058

clustermesh-apiserver

quay.io/cilium/clustermesh-apiserver:v1.16.0@​sha256:a1597b7de97cfa03f1330e6b784df1721eb69494cd9efb0b3a6930680dfe7a8e
quay.io/cilium/clustermesh-apiserver:stable@sha256:a1597b7de97cfa03f1330e6b784df1721eb69494cd9efb0b3a6930680dfe7a8e

docker-plugin

quay.io/cilium/docker-plugin:v1.16.0@​sha256:024a17aa8ec70d42f0ac1a4407ad9f8fd1411aa85fd8019938af582e20522efe
quay.io/cilium/docker-plugin:stable@sha256:024a17aa8ec70d42f0ac1a4407ad9f8fd1411aa85fd8019938af582e20522efe

hubble-relay

quay.io/cilium/hubble-relay:v1.16.0@​sha256:33fca7776fc3d7b2abe08873319353806dc1c5e07e12011d7da4da05f836ce8d
quay.io/cilium/hubble-relay:stable@sha256:33fca7776fc3d7b2abe08873319353806dc1c5e07e12011d7da4da05f836ce8d

operator-alibabacloud

quay.io/cilium/operator-alibabacloud:v1.16.0@​sha256:d2d9f450f2fc650d74d4b3935f4c05736e61145b9c6927520ea52e1ebcf4f3ea
quay.io/cilium/operator-alibabacloud:stable@sha256:d2d9f450f2fc650d74d4b3935f4c05736e61145b9c6927520ea52e1ebcf4f3ea

operator-aws

quay.io/cilium/operator-aws:v1.16.0@​sha256:8dbe47a77ba8e1a5b111647a43db10c213d1c7dfc9f9aab5ef7279321ad21a2f
quay.io/cilium/operator-aws:stable@sha256:8dbe47a77ba8e1a5b111647a43db10c213d1c7dfc9f9aab5ef7279321ad21a2f

operator-azure

quay.io/cilium/operator-azure:v1.16.0@​sha256:dd7562e20bc72b55c65e2110eb98dca1dd2bbf6688b7d8cea2bc0453992c121d
quay.io/cilium/operator-azure:stable@sha256:dd7562e20bc72b55c65e2110eb98dca1dd2bbf6688b7d8cea2bc0453992c121d

operator-generic

quay.io/cilium/operator-generic:v1.16.0@​sha256:d6621c11c4e4943bf2998af7febe05be5ed6fdcf812b27ad4388f47022190316
quay.io/cilium/operator-generic:stable@sha256:d6621c11c4e4943bf2998af7febe05be5ed6fdcf812b27ad4388f47022190316

operator

quay.io/cilium/operator:v1.16.0@​sha256:6aaa05737f21993ff51abe0ffe7ea4be88d518aa05266c3482364dce65643488
quay.io/cilium/operator:stable@sha256:6aaa05737f21993ff51abe0ffe7ea4be88d518aa05266c3482364dce65643488

v1.15.9

Compare Source

v1.15.8: 1.15.8

Compare Source

Security Advisories

This release addresses the following security vulnerabilities:

Summary of Changes

Minor Changes:

Bugfixes:

  • add support for validation of stringToString values in ConfigMap (Backport PR #​33962, Upstream PR #​33779, @​alex-berger)
  • auth: Fix data race in Upsert (Backport PR #​34157, Upstream PR #​33905, @​chaunceyjiang)
  • auth: fix fatal error: concurrent map iteration and map write (Backport PR #​33809, Upstream PR #​33634, @​chaunceyjiang)
  • cert: Adding H2 Protocol Support when Get gRPC Config For Client (Backport PR #​33809, Upstream PR #​33616, @​mrproliu)
  • DNS Proxy: Allow SO_LINGER to be set to the socket to upstream (Backport PR #​33809, Upstream PR #​33592, @​gandro)
  • Fix an issue in updates to node addresses which may have caused missing NodePort frontend IP addresses. This may have affected NodePort/LoadBalancer services for users running with runtime device detection enabled when the node's IP addresses changed after Cilium had started.
    The node IP as defined in the Kubernetes Node is now preferred when selecting the NodePort frontend IPs. (Backport PR #33818, Upstream PR #33629, @joamaki)
  • Fix bug causing etcd upsertion/deletion events to be potentially missed during the initial synchronization, when Cilium operates in KVStore mode, or Cluster Mesh is enabled. (Backport PR #​34183, Upstream PR #​34091, @​giorio94)
  • Fix issue in picking node IP addresses from the loopback device. This fixes a regression in v1.15 and v1.16 where VIPs assigned to the lo device were not considered by Cilium.
    Fix spurious node address updates to avoid unnecessary datapath reinitializations. (Backport PR #34086, Upstream PR #34012, @joamaki)
  • Fix rare race condition afflicting clustermesh while stopping the retrieval of the remote cluster configuration, possibly causing a deadlock (Backport PR #​33809, Upstream PR #​33735, @​giorio94)
  • Fixes a race condition during agent startup that causes the k8s node label updates to not get propagated to the host endpoint. (Backport PR #​33663, Upstream PR #​33511, @​skmatti)
  • gateway-api: Add HTTP method condition in sortable routes (Backport PR #​34157, Upstream PR #​34109, @​sayboras)
  • gateway-api: Enqueue gateway for Reference Grant changes (Backport PR #​34157, Upstream PR #​34032, @​sayboras)
  • helm: remove duplicate metrics for Envoy pod (Backport PR #​34157, Upstream PR #​33803, @​mhofstetter)
  • lbipam: fixed bug in sharing key logic (Backport PR #​34157, Upstream PR #​34106, @​dylandreimerink)
  • pkg/metrics: fix data race warning on metrics init hook. (Backport PR #​33962, Upstream PR #​33823, @​tommyp1ckles)
  • Reduce conntrack lifetime for closing service connections. (Backport PR #​33962, Upstream PR #​33907, @​julianwiedmann)
  • Skip regenerating host endpoint on k8s node labels update if identity labels are unchanged (Backport PR #​33809, Upstream PR #​33306, @​skmatti)
  • The cilium agent will now recover from stale nodeID mappings which could occur in clusters with high node churn, possibly manifesting itself in dropped IPsec traffic. (Backport PR #​34157, Upstream PR #​33666, @​bimmlerd)

CI Changes:

Misc Changes:

Other Changes:

Docker Manifests

cilium

quay.io/cilium/cilium:v1.15.8@​sha256:3b5b0477f696502c449eaddff30019a7d399f077b7814bcafabc636829d194c7

clustermesh-apiserver

quay.io/cilium/clustermesh-apiserver:v1.15.8@​sha256:4c1f33aae2b76392b57e867820471b5472f0886f7358513d47ee80c09af15a0e

docker-plugin

quay.io/cilium/docker-plugin:v1.15.8@​sha256:15b1b6e83e1c0eea97df179660c1898661c1d0da5d431c68f98c702581e29310

hubble-relay

quay.io/cilium/hubble-relay:v1.15.8@​sha256:47e8a19f60d0d226ec3d2c675ec63908f1f2fb936a39897f2e3255b3bab01ad6

operator-alibabacloud

quay.io/cilium/operator-alibabacloud:v1.15.8@​sha256:388ef72febd719bc9d16d5ee47fe6f846f73f0d8a6f9586ada04cb39eb2962d1

operator-aws

quay.io/cilium/operator-aws:v1.15.8@​sha256:3807dd23c2b5f90489824ddd13dca6e84e714dc9eae44e5718acfe86c855b7a1

operator-azure

quay.io/cilium/operator-azure:v1.15.8@​sha256:c517db3d12fcf038a9a4a81b88027a19672078bf8c2fcd6b2563f3eff9514d21

operator-generic

quay.io/cilium/operator-generic:v1.15.8@​sha256:e77ae6fc8a978f98363cf74d3c883dfaa6454c6e23ec417a60952f29408e2f18

operator

quay.io/cilium/operator:v1.15.8@​sha256:e9cf35fe3dc86933ccf3fdfdb7620d218c50aaca5f14e4ba5f422460ea4cb23c

v1.15.7: 1.15.7

Compare Source

Summary of Changes

We are pleased to release Cilium v1.15.7, which makes the load balancer class of the Clustermesh API server configurable and includes stability and bug fixes. Thanks to all contributors, reviewers, testers, and users!
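
The configurable load balancer class mentioned above is exposed as a Helm value. A minimal values.yaml sketch follows; the key path is assumed from the upstream chart and the class name is a placeholder, so check both against the chart's values reference before use.

```yaml
# Sketch: pin the Clustermesh API server Service to a specific
# load balancer implementation (new in v1.15.7).
clustermesh:
  apiserver:
    service:
      type: LoadBalancer
      loadBalancerClass: example.com/my-lb   # placeholder class; key path assumed
```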

Minor Changes:

Bugfixes:

CI Changes:

Misc Changes:


Configuration

📅 Schedule: Branch creation - "on saturday" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


github-actions bot commented Jul 13, 2024

--- kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cilium HelmRelease: kube-system/cilium

+++ kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cilium HelmRelease: kube-system/cilium

@@ -13,13 +13,13 @@

     spec:
       chart: cilium
       sourceRef:
         kind: HelmRepository
         name: cilium
         namespace: flux-system
-      version: 1.15.6
+      version: 1.16.2
   install:
     remediation:
       retries: 3
   interval: 30m
   upgrade:
     cleanupOnFail: true


github-actions bot commented Jul 13, 2024

--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-dashboard

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-dashboard

@@ -4703,27 +4703,27 @@

           ],
           "spaceLength": 10,
           "stack": false,
           "steppedLine": false,
           "targets": [
             {
-              "expr": "sum(rate(cilium_policy_l7_denied_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m]))",
+              "expr": "sum(rate(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"denied\"}[1m]))",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "denied",
               "refId": "A"
             },
             {
-              "expr": "sum(rate(cilium_policy_l7_forwarded_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m]))",
+              "expr": "sum(rate(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"forwarded\"}[1m]))",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "forwarded",
               "refId": "B"
             },
             {
-              "expr": "sum(rate(cilium_policy_l7_received_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m]))",
+              "expr": "sum(rate(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"received\"}[1m]))",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "received",
               "refId": "C"
             }
           ],
@@ -4869,13 +4869,13 @@

           }
         },
         {
           "aliasColors": {
             "Max per node processingTime": "#e24d42",
             "Max per node upstreamTime": "#58140c",
-            "avg(cilium_policy_l7_parse_errors_total{pod=~\"cilium.*\"})": "#bf1b00",
+            "avg(cilium_policy_l7_total{pod=~\"cilium.*\", rule=\"parse_errors\"})": "#bf1b00",
             "parse errors": "#bf1b00"
           },
           "bars": true,
           "dashLength": 10,
           "dashes": false,
           "datasource": {
@@ -4928,13 +4928,13 @@

             },
             {
               "alias": "Max per node upstreamTime",
               "yaxis": 2
             },
             {
-              "alias": "avg(cilium_policy_l7_parse_errors_total{pod=~\"cilium.*\"})",
+              "alias": "avg(cilium_policy_l7_total{pod=~\"cilium.*\", rule=\"parse_errors\"})",
               "yaxis": 2
             },
             {
               "alias": "parse errors",
               "yaxis": 2
             }
@@ -4949,13 +4949,13 @@

               "interval": "",
               "intervalFactor": 1,
               "legendFormat": "{{scope}}",
               "refId": "A"
             },
             {
-              "expr": "avg(cilium_policy_l7_parse_errors_total{k8s_app=\"cilium\", pod=~\"$pod\"}) by (pod)",
+              "expr": "avg(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"parse_errors\"}) by (pod)",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "parse errors",
               "refId": "B"
             }
           ],
@@ -5307,13 +5307,13 @@

               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "Max {{scope}}",
               "refId": "B"
             },
             {
-              "expr": "max(rate(cilium_policy_l7_parse_errors_total{k8s_app=\"cilium\", pod=~\"$pod\"}[1m])) by (pod)",
+              "expr": "max(rate(cilium_policy_l7_total{k8s_app=\"cilium\", pod=~\"$pod\", rule=\"parse_errors\"}[1m])) by (pod)",
               "format": "time_series",
               "intervalFactor": 1,
               "legendFormat": "parse errors",
               "refId": "A"
             }
           ],
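
The dashboard edits above all follow one pattern: the per-outcome L7 policy metrics (cilium_policy_l7_denied_total, _forwarded_total, _received_total, _parse_errors_total) are consolidated into a single cilium_policy_l7_total metric selected by a rule label. A sketch of the rewrite, shown as a panel-target fragment in YAML form (the actual dashboard stores these as JSON):

```yaml
# Before/after forms of the same query, taken from the diff above.
targets:
  # before: one metric per outcome
  - expr: sum(rate(cilium_policy_l7_denied_total{k8s_app="cilium"}[1m]))
  # after: one metric, outcome selected via the "rule" label
  - expr: sum(rate(cilium_policy_l7_total{k8s_app="cilium", rule="denied"}[1m]))
```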
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config

@@ -7,20 +7,18 @@

 data:
   identity-allocation-mode: crd
   identity-heartbeat-timeout: 30m0s
   identity-gc-interval: 15m0s
   cilium-endpoint-gc-interval: 5m0s
   nodes-gc-interval: 5m0s
-  skip-cnp-status-startup-clean: 'false'
   debug: 'false'
   debug-verbose: ''
   enable-policy: default
   policy-cidr-match-mode: ''
   prometheus-serve-addr: :9962
   controller-group-metrics: write-cni-file sync-host-ips sync-lb-maps-with-k8s-services
-  proxy-prometheus-port: '9964'
   operator-prometheus-serve-addr: :9963
   enable-metrics: 'true'
   enable-ipv4: 'true'
   enable-ipv6: 'false'
   custom-cni-conf: 'false'
   enable-bpf-clock-probe: 'false'
@@ -28,57 +26,68 @@

   monitor-aggregation-interval: 5s
   monitor-aggregation-flags: all
   bpf-map-dynamic-size-ratio: '0.0025'
   bpf-policy-map-max: '16384'
   bpf-lb-map-max: '65536'
   bpf-lb-external-clusterip: 'false'
+  bpf-events-drop-enabled: 'true'
+  bpf-events-policy-verdict-enabled: 'true'
+  bpf-events-trace-enabled: 'true'
   preallocate-bpf-maps: 'false'
-  sidecar-istio-proxy-image: cilium/istio_proxy
   cluster-name: cluster
   cluster-id: '1'
   routing-mode: native
   service-no-backend-response: reject
   enable-l7-proxy: 'true'
   enable-ipv4-masquerade: 'true'
   enable-ipv4-big-tcp: 'false'
   enable-ipv6-big-tcp: 'false'
   enable-ipv6-masquerade: 'true'
+  enable-tcx: 'true'
+  datapath-mode: veth
   enable-bpf-masquerade: 'false'
   enable-masquerade-to-route-source: 'false'
   enable-xt-socket-fallback: 'true'
   install-no-conntrack-iptables-rules: 'false'
   auto-direct-node-routes: 'true'
+  direct-routing-skip-unreachable: 'false'
   enable-local-redirect-policy: 'true'
   ipv4-native-routing-cidr: 10.69.0.0/16
+  enable-runtime-device-detection: 'true'
   kube-proxy-replacement: 'true'
   kube-proxy-replacement-healthz-bind-address: 0.0.0.0:10256
   bpf-lb-sock: 'false'
+  bpf-lb-sock-terminate-pod-connections: 'false'
+  nodeport-addresses: ''
   enable-health-check-nodeport: 'true'
   enable-health-check-loadbalancer-ip: 'false'
   node-port-bind-protection: 'true'
   enable-auto-protect-node-port-range: 'true'
   bpf-lb-mode: snat
   bpf-lb-algorithm: maglev
   bpf-lb-acceleration: disabled
   enable-svc-source-range-check: 'true'
   enable-l2-neigh-discovery: 'true'
   arping-refresh-period: 30s
+  k8s-require-ipv4-pod-cidr: 'false'
+  k8s-require-ipv6-pod-cidr: 'false'
   enable-endpoint-routes: 'true'
   enable-k8s-networkpolicy: 'true'
   write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
   cni-exclusive: 'false'
   cni-log-file: /var/run/cilium/cilium-cni.log
   enable-endpoint-health-checking: 'true'
   enable-health-checking: 'true'
   enable-well-known-identities: 'false'
-  enable-remote-node-identity: 'true'
+  enable-node-selector-labels: 'false'
   synchronize-k8s-nodes: 'true'
   operator-api-serve-addr: 127.0.0.1:9234
   enable-hubble: 'true'
   hubble-socket-path: /var/run/cilium/hubble.sock
   hubble-metrics-server: :9965
+  hubble-metrics-server-enable-tls: 'false'
   hubble-metrics: dns:query drop tcp flow port-distribution icmp http
   enable-hubble-open-metrics: 'false'
   hubble-export-file-max-size-mb: '10'
   hubble-export-file-max-backups: '5'
   hubble-listen-address: :4244
   hubble-disable-tls: 'false'
@@ -105,12 +114,13 @@

   k8s-client-burst: '20'
   remove-cilium-node-taints: 'true'
   set-cilium-node-taints: 'true'
   set-cilium-is-up-condition: 'true'
   unmanaged-pod-watcher-interval: '15'
   dnsproxy-enable-transparent-mode: 'true'
+  dnsproxy-socket-linger-timeout: '10'
   tofqdns-dns-reject-response-code: refused
   tofqdns-enable-dns-compression: 'true'
   tofqdns-endpoint-max-ip-per-hostname: '50'
   tofqdns-idle-connection-grace-period: 0s
   tofqdns-max-deferred-connection-deletes: '10000'
   tofqdns-proxy-response-max-delay: 100ms
@@ -122,9 +132,15 @@

   proxy-xff-num-trusted-hops-ingress: '0'
   proxy-xff-num-trusted-hops-egress: '0'
   proxy-connect-timeout: '2'
   proxy-max-requests-per-connection: '0'
   proxy-max-connection-duration-seconds: '0'
   proxy-idle-timeout-seconds: '60'
-  external-envoy-proxy: 'false'
+  external-envoy-proxy: 'true'
+  envoy-base-id: '0'
+  envoy-keep-cap-netbindservice: 'false'
   max-connected-clusters: '255'
+  clustermesh-enable-endpoint-sync: 'false'
+  clustermesh-enable-mcs-api: 'false'
+  nat-map-stats-entries: '32'
+  nat-map-stats-interval: 30s
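
Several of the new keys above, notably external-envoy-proxy flipping to 'true', reflect the dedicated Envoy DaemonSet that 1.16 enables by default for new installs. A minimal values.yaml sketch to opt back into the embedded proxy; the envoy.enabled key is assumed from the upstream chart docs, so verify it for your chart version.

```yaml
# Sketch: keep Envoy embedded in the cilium-agent pod instead of
# running it as the dedicated DaemonSet (the 1.16 default for new installs).
envoy:
  enabled: false   # key assumed from upstream chart docs
```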
 
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-operator-dashboard

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-operator-dashboard

@@ -11,17 +11,30 @@

     grafana_dashboard: '1'
   annotations:
     grafana_folder: Cilium
 data:
   cilium-operator-dashboard.json: |
     {
+      "__inputs": [
+        {
+          "name": "DS_PROMETHEUS",
+          "label": "prometheus",
+          "description": "",
+          "type": "datasource",
+          "pluginId": "prometheus",
+          "pluginName": "Prometheus"
+        }
+      ],
       "annotations": {
         "list": [
           {
             "builtIn": 1,
-            "datasource": "-- Grafana --",
+            "datasource": {
+              "type": "datasource",
+              "uid": "grafana"
+            },
             "enable": true,
             "hide": true,
             "iconColor": "rgba(0, 211, 255, 1)",
             "name": "Annotations & Alerts",
             "type": "dashboard"
           }
@@ -37,13 +50,16 @@

           "aliasColors": {
             "avg": "#cffaff"
           },
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -163,13 +179,16 @@

           "aliasColors": {
             "MAX_resident_memory_bytes_max": "#e5ac0e"
           },
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -293,13 +312,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -390,13 +412,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -487,13 +512,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -584,13 +612,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -681,13 +712,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -778,13 +812,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
@@ -875,13 +912,16 @@

         },
         {
           "aliasColors": {},
           "bars": false,
           "dashLength": 10,
           "dashes": false,
-          "datasource": "prometheus",
+          "datasource": {
+            "type": "prometheus",
+            "uid": "${DS_PROMETHEUS}"
+          },
           "fieldConfig": {
             "defaults": {
               "custom": {}
             },
             "overrides": []
           },
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-relay-config

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-relay-config

@@ -6,9 +6,9 @@

   namespace: kube-system
 data:
   config.yaml: "cluster-name: cluster\npeer-service: \"hubble-peer.kube-system.svc.cluster.local:443\"\
     \nlisten-address: :4245\ngops: true\ngops-port: \"9893\"\ndial-timeout: \nretry-timeout:\
     \ \nsort-buffer-len-max: \nsort-buffer-drain-timeout: \ntls-hubble-client-cert-file:\
     \ /var/lib/hubble-relay/tls/client.crt\ntls-hubble-client-key-file: /var/lib/hubble-relay/tls/client.key\n\
-    tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt\ndisable-server-tls:\
-    \ true\n"
+    tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt\n\n\
+    disable-server-tls: true\n"
 
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-dashboard

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-dashboard

@@ -15,3240 +15,1059 @@

     nginx.ingress.kubernetes.io/auth-response-headers: Remote-User,Remote-Name,Remote-Groups,Remote-Email
     nginx.ingress.kubernetes.io/auth-signin: https://auth...PLACEHOLDER../?rm=$request_method
     nginx.ingress.kubernetes.io/auth-snippet: proxy_set_header X-Forwarded-Method
       $request_method;
     nginx.ingress.kubernetes.io/auth-url: http://authelia.auth.svc.cluster.local/api/verify
 data:
-  hubble-dashboard.json: |
-    {
-      "annotations": {
-        "list": [
-          {
-            "builtIn": 1,
-            "datasource": "-- Grafana --",
-            "enable": true,
-            "hide": true,
-            "iconColor": "rgba(0, 211, 255, 1)",
-            "name": "Annotations & Alerts",
-            "type": "dashboard"
-          }
-        ]
-      },
-      "editable": true,
-      "gnetId": null,
-      "graphTooltip": 0,
-      "id": 3,
-      "links": [],
-      "panels": [
-        {
-          "collapsed": false,
-          "gridPos": {
-            "h": 1,
-            "w": 24,
-            "x": 0,
-            "y": 0
-          },
-          "id": 14,
-          "panels": [],
-          "title": "General Processing",
-          "type": "row"
-        },
-        {
-          "aliasColors": {},
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
-          "datasource": "prometheus",
-          "fill": 1,
-          "gridPos": {
-            "h": 5,
-            "w": 12,
-            "x": 0,
-            "y": 1
-          },
-          "id": 12,
-          "legend": {
-            "avg": false,
-            "current": false,
-            "max": false,
-            "min": false,
-            "show": true,
-            "total": false,
-            "values": false
-          },
-          "lines": true,
-          "linewidth": 1,
-          "links": [],
-          "nullPointMode": "null",
-          "options": {},
-          "percentage": false,
-          "pointradius": 2,
-          "points": false,
-          "renderer": "flot",
-          "seriesOverrides": [
-            {
-              "alias": "max",
-              "fillBelowTo": "avg",
-              "lines": false
-            },
-            {
-              "alias": "avg",
-              "fill": 0,
-              "fillBelowTo": "min"
-            },
-            {
-              "alias": "min",
-              "lines": false
-            }
-          ],
-          "spaceLength": 10,
-          "stack": false,
-          "steppedLine": false,
-          "targets": [
-            {
-              "expr": "avg(sum(rate(hubble_flows_processed_total[1m])) by (pod))",
-              "format": "time_series",
-              "intervalFactor": 1,
-              "legendFormat": "avg",
-              "refId": "A"
-            },
-            {
-              "expr": "min(sum(rate(hubble_flows_processed_total[1m])) by (pod))",
-              "format": "time_series",
-              "intervalFactor": 1,
-              "legendFormat": "min",
-              "refId": "B"
-            },
-            {
-              "expr": "max(sum(rate(hubble_flows_processed_total[1m])) by (pod))",
-              "format": "time_series",
-              "intervalFactor": 1,
-              "legendFormat": "max",
-              "refId": "C"
-            }
-          ],
-          "thresholds": [],
-          "timeFrom": null,
-          "timeRegions": [],
-          "timeShift": null,
-          "title": "Flows processed Per Node",
-          "tooltip": {
-            "shared": true,
-            "sort": 1,
-            "value_type": "individual"
-          },
-          "type": "graph",
-          "xaxis": {
-            "buckets": null,
-            "mode": "time",
-            "name": null,
-            "show": true,
-            "values": []
-          },
-          "yaxes": [
-            {
-              "format": "ops",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            },
-            {
-              "format": "short",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            }
-          ],
-          "yaxis": {
-            "align": false,
-            "alignLevel": null
-          }
-        },
-        {
-          "aliasColors": {},
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
-          "datasource": "prometheus",
-          "fill": 1,
-          "gridPos": {
-            "h": 5,
-            "w": 12,
-            "x": 12,
-            "y": 1
-          },
-          "id": 32,
-          "legend": {
-            "avg": false,
-            "current": false,
-            "max": false,
-            "min": false,
-            "show": true,
-            "total": false,
-            "values": false
-          },
-          "lines": true,
-          "linewidth": 1,
-          "links": [],
-          "nullPointMode": "null",
-          "options": {},
-          "percentage": false,
-          "pointradius": 2,
-          "points": false,
-          "renderer": "flot",
-          "seriesOverrides": [],
-          "spaceLength": 10,
-          "stack": true,
-          "steppedLine": false,
-          "targets": [
-            {
-              "expr": "sum(rate(hubble_flows_processed_total[1m])) by (pod, type)",
-              "format": "time_series",
-              "intervalFactor": 1,
-              "legendFormat": "{{type}}",
-              "refId": "A"
-            }
-          ],
-          "thresholds": [],
-          "timeFrom": null,
-          "timeRegions": [],
-          "timeShift": null,
-          "title": "Flows Types",
-          "tooltip": {
-            "shared": true,
-            "sort": 2,
-            "value_type": "individual"
-          },
-          "type": "graph",
-          "xaxis": {
-            "buckets": null,
-            "mode": "time",
-            "name": null,
-            "show": true,
-            "values": []
-          },
-          "yaxes": [
-            {
-              "format": "ops",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            },
-            {
-              "format": "short",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            }
-          ],
-          "yaxis": {
-            "align": false,
-            "alignLevel": null
-          }
-        },
-        {
-          "aliasColors": {},
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
-          "datasource": "prometheus",
-          "fill": 1,
-          "gridPos": {
-            "h": 5,
-            "w": 12,
-            "x": 0,
-            "y": 6
-          },
-          "id": 59,
-          "legend": {
-            "avg": false,
-            "current": false,
-            "max": false,
-            "min": false,
-            "show": true,
-            "total": false,
-            "values": false
-          },
-          "lines": true,
-          "linewidth": 1,
-          "links": [],
-          "nullPointMode": "null",
-          "options": {},
-          "percentage": false,
-          "pointradius": 2,
-          "points": false,
-          "renderer": "flot",
-          "seriesOverrides": [],
-          "spaceLength": 10,
-          "stack": true,
-          "steppedLine": false,
-          "targets": [
-            {
-              "expr": "sum(rate(hubble_flows_processed_total{type=\"L7\"}[1m])) by (pod, subtype)",
-              "format": "time_series",
-              "intervalFactor": 1,
-              "legendFormat": "{{subtype}}",
-              "refId": "A"
-            }
-          ],
-          "thresholds": [],
-          "timeFrom": null,
-          "timeRegions": [],
-          "timeShift": null,
-          "title": "L7 Flow Distribution",
-          "tooltip": {
-            "shared": true,
-            "sort": 2,
-            "value_type": "individual"
-          },
-          "type": "graph",
-          "xaxis": {
-            "buckets": null,
-            "mode": "time",
-            "name": null,
-            "show": true,
-            "values": []
-          },
-          "yaxes": [
-            {
-              "format": "ops",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            },
-            {
-              "format": "short",
-              "label": null,
-              "logBase": 1,
-              "max": null,
-              "min": null,
-              "show": true
-            }
-          ],
-          "yaxis": {
-            "align": false,
-            "alignLevel": null
-          }
-        },
-        {
-          "aliasColors": {},
-          "bars": false,
-          "dashLength": 10,
-          "dashes": false,
-          "datasource": "prometheus",
-          "fill": 1,
-          "gridPos": {
-            "h": 5,
-            "w": 12,
-            "x": 12,
-            "y": 6
-          },
-          "id": 60,
-          "legend": {
-            "avg": false,
-            "current": false,
-            "max": false,
-            "min": false,
-            "show": true,
-            "total": false,
-            "values": false
-          },
-          "lines": true,
-          "linewidth": 1,
-          "links": [],
-          "nullPointMode": "null",
-          "options": {},
-          "percentage": false,
-          "pointradius": 2,
-          "points": false,
-          "renderer": "flot",
-          "seriesOverrides": [],
-          "spaceLength": 10,
-          "stack": true,
-          "steppedLine": false,
-          "targets": [
-            {
-              "expr": "sum(rate(hubble_flows_processed_total{type=\"Trace\"}[1m])) by (pod, subtype)",
-              "format": "time_series",
-              "intervalFactor": 1,
-              "legendFormat": "{{subtype}}",
[Diff truncated by flux-local]
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-dns-namespace

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-dns-namespace

@@ -199,15 +199,15 @@

     \ ],\n    \"refresh\": \"\",\n    \"revision\": 1,\n    \"schemaVersion\": 38,\n\
     \    \"style\": \"dark\",\n    \"tags\": [\n      \"kubecon-demo\"\n    ],\n \
     \   \"templating\": {\n      \"list\": [\n        {\n          \"current\": {\n\
     \            \"selected\": false,\n            \"text\": \"default\",\n      \
     \      \"value\": \"default\"\n          },\n          \"hide\": 0,\n        \
     \  \"includeAll\": false,\n          \"label\": \"Data Source\",\n          \"\
-    multi\": false,\n          \"name\": \"prometheus_datasource\",\n          \"\
-    options\": [],\n          \"query\": \"prometheus\",\n          \"queryValue\"\
-    : \"\",\n          \"refresh\": 1,\n          \"regex\": \"(?!grafanacloud-usage|grafanacloud-ml-metrics).+\"\
+    multi\": false,\n          \"name\": \"DS_PROMETHEUS\",\n          \"options\"\
+    : [],\n          \"query\": \"prometheus\",\n          \"queryValue\": \"\",\n\
+    \          \"refresh\": 1,\n          \"regex\": \"(?!grafanacloud-usage|grafanacloud-ml-metrics).+\"\
     ,\n          \"skipUrlSync\": false,\n          \"type\": \"datasource\"\n   \
     \     },\n        {\n          \"current\": {},\n          \"datasource\": {\n\
     \            \"type\": \"prometheus\",\n            \"uid\": \"${DS_PROMETHEUS}\"\
     \n          },\n          \"definition\": \"label_values(cilium_version, cluster)\"\
     ,\n          \"hide\": 0,\n          \"includeAll\": true,\n          \"multi\"\
     : true,\n          \"name\": \"cluster\",\n          \"options\": [],\n      \
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-l7-http-metrics-by-workload

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-l7-http-metrics-by-workload

@@ -17,13 +17,22 @@

     nginx.ingress.kubernetes.io/auth-snippet: proxy_set_header X-Forwarded-Method
       $request_method;
     nginx.ingress.kubernetes.io/auth-url: http://authelia.auth.svc.cluster.local/api/verify
 data:
   hubble-l7-http-metrics-by-workload.json: |
     {
-      "__inputs": [],
+      "__inputs": [
+        {
+          "name": "DS_PROMETHEUS",
+          "label": "prometheus",
+          "description": "",
+          "type": "datasource",
+          "pluginId": "prometheus",
+          "pluginName": "Prometheus"
+        }
+      ],
       "__elements": {},
       "__requires": [
         {
           "type": "grafana",
           "id": "grafana",
           "name": "Grafana",
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-network-overview-namespace

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/hubble-network-overview-namespace

@@ -355,15 +355,15 @@

     \    \"refresh\": \"\",\n    \"revision\": 1,\n    \"schemaVersion\": 38,\n  \
     \  \"style\": \"dark\",\n    \"tags\": [\n      \"kubecon-demo\"\n    ],\n   \
     \ \"templating\": {\n      \"list\": [\n        {\n          \"current\": {\n\
     \            \"selected\": false,\n            \"text\": \"default\",\n      \
     \      \"value\": \"default\"\n          },\n          \"hide\": 0,\n        \
     \  \"includeAll\": false,\n          \"label\": \"Data Source\",\n          \"\
-    multi\": false,\n          \"name\": \"prometheus_datasource\",\n          \"\
-    options\": [],\n          \"query\": \"prometheus\",\n          \"queryValue\"\
-    : \"\",\n          \"refresh\": 1,\n          \"regex\": \"(?!grafanacloud-usage|grafanacloud-ml-metrics).+\"\
+    multi\": false,\n          \"name\": \"DS_PROMETHEUS\",\n          \"options\"\
+    : [],\n          \"query\": \"prometheus\",\n          \"queryValue\": \"\",\n\
+    \          \"refresh\": 1,\n          \"regex\": \"(?!grafanacloud-usage|grafanacloud-ml-metrics).+\"\
     ,\n          \"skipUrlSync\": false,\n          \"type\": \"datasource\"\n   \
     \     },\n        {\n          \"current\": {},\n          \"datasource\": {\n\
     \            \"type\": \"prometheus\",\n            \"uid\": \"${DS_PROMETHEUS}\"\
     \n          },\n          \"definition\": \"label_values(cilium_version, cluster)\"\
     ,\n          \"hide\": 0,\n          \"includeAll\": true,\n          \"multi\"\
     : true,\n          \"name\": \"cluster\",\n          \"options\": [],\n      \
--- HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium

+++ HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium

@@ -106,14 +106,12 @@

   verbs:
   - get
   - update
 - apiGroups:
   - cilium.io
   resources:
-  - ciliumnetworkpolicies/status
-  - ciliumclusterwidenetworkpolicies/status
   - ciliumendpoints/status
   - ciliumendpoints
   - ciliuml2announcementpolicies/status
   - ciliumbgpnodeconfigs/status
   verbs:
   - patch
--- HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator

+++ HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator

@@ -170,12 +170,13 @@

   - ciliumpodippools.cilium.io
 - apiGroups:
   - cilium.io
   resources:
   - ciliumloadbalancerippools
   - ciliumpodippools
+  - ciliumbgppeeringpolicies
   - ciliumbgpclusterconfigs
   - ciliumbgpnodeconfigoverrides
   verbs:
   - get
   - list
   - watch
--- HelmRelease: kube-system/cilium Service: kube-system/cilium-agent

+++ HelmRelease: kube-system/cilium Service: kube-system/cilium-agent

@@ -15,11 +15,7 @@

     k8s-app: cilium
   ports:
   - name: metrics
     port: 9962
     protocol: TCP
     targetPort: prometheus
-  - name: envoy-metrics
-    port: 9964
-    protocol: TCP
-    targetPort: envoy-metrics
 
--- HelmRelease: kube-system/cilium Service: kube-system/hubble-relay

+++ HelmRelease: kube-system/cilium Service: kube-system/hubble-relay

@@ -12,8 +12,8 @@

   type: ClusterIP
   selector:
     k8s-app: hubble-relay
   ports:
   - protocol: TCP
     port: 80
-    targetPort: 4245
+    targetPort: grpc
 
--- HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium

+++ HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium

@@ -16,24 +16,24 @@

     rollingUpdate:
       maxUnavailable: 2
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/cilium-configmap-checksum: c819e71870cfcc12dcc618fd57c003b65798d784adbd9cf4e3be79f2a31a5d0e
+        cilium.io/cilium-configmap-checksum: 677b4112b0d27f03bdf8e856c9468c7c4bcbf46c4e7b5b9479a0f64eda70227d
       labels:
         k8s-app: cilium
         app.kubernetes.io/name: cilium-agent
         app.kubernetes.io/part-of: cilium
     spec:
       securityContext:
         appArmorProfile:
           type: Unconfined
       containers:
       - name: cilium-agent
-        image: quay.io/cilium/cilium:v1.15.6@sha256:6aa840986a3a9722cd967ef63248d675a87add7e1704740902d5d3162f0c0def
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         command:
         - cilium-agent
         args:
         - --config-dir=/tmp/cilium/config-map
         startupProbe:
@@ -133,16 +133,12 @@

           hostPort: 4244
           protocol: TCP
         - name: prometheus
           containerPort: 9962
           hostPort: 9962
           protocol: TCP
-        - name: envoy-metrics
-          containerPort: 9964
-          hostPort: 9964
-          protocol: TCP
         - name: hubble-metrics
           containerPort: 9965
           hostPort: 9965
           protocol: TCP
         securityContext:
           seLinuxOptions:
@@ -162,12 +158,15 @@

             - SETGID
             - SETUID
             drop:
             - ALL
         terminationMessagePolicy: FallbackToLogsOnError
         volumeMounts:
+        - name: envoy-sockets
+          mountPath: /var/run/cilium/envoy/sockets
+          readOnly: false
         - mountPath: /host/proc/sys/net
           name: host-proc-sys-net
         - mountPath: /host/proc/sys/kernel
           name: host-proc-sys-kernel
         - name: bpf-maps
           mountPath: /sys/fs/bpf
@@ -190,13 +189,13 @@

           mountPath: /var/lib/cilium/tls/hubble
           readOnly: true
         - name: tmp
           mountPath: /tmp
       initContainers:
       - name: config
-        image: quay.io/cilium/cilium:v1.15.6@sha256:6aa840986a3a9722cd967ef63248d675a87add7e1704740902d5d3162f0c0def
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         command:
         - cilium-dbg
         - build-config
         env:
         - name: K8S_NODE_NAME
@@ -215,13 +214,13 @@

           value: '6444'
         volumeMounts:
         - name: tmp
           mountPath: /tmp
         terminationMessagePolicy: FallbackToLogsOnError
       - name: mount-cgroup
-        image: quay.io/cilium/cilium:v1.15.6@sha256:6aa840986a3a9722cd967ef63248d675a87add7e1704740902d5d3162f0c0def
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         env:
         - name: CGROUP_ROOT
           value: /sys/fs/cgroup
         - name: BIN_PATH
           value: /var/lib/rancher/k3s/data/current/bin
@@ -247,13 +246,13 @@

             - SYS_ADMIN
             - SYS_CHROOT
             - SYS_PTRACE
             drop:
             - ALL
       - name: apply-sysctl-overwrites
-        image: quay.io/cilium/cilium:v1.15.6@sha256:6aa840986a3a9722cd967ef63248d675a87add7e1704740902d5d3162f0c0def
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         env:
         - name: BIN_PATH
           value: /var/lib/rancher/k3s/data/current/bin
         command:
         - sh
@@ -277,13 +276,13 @@

             - SYS_ADMIN
             - SYS_CHROOT
             - SYS_PTRACE
             drop:
             - ALL
       - name: mount-bpf-fs
-        image: quay.io/cilium/cilium:v1.15.6@sha256:6aa840986a3a9722cd967ef63248d675a87add7e1704740902d5d3162f0c0def
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         args:
         - mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
         command:
         - /bin/bash
         - -c
@@ -293,13 +292,13 @@

           privileged: true
         volumeMounts:
         - name: bpf-maps
           mountPath: /sys/fs/bpf
           mountPropagation: Bidirectional
       - name: clean-cilium-state
-        image: quay.io/cilium/cilium:v1.15.6@sha256:6aa840986a3a9722cd967ef63248d675a87add7e1704740902d5d3162f0c0def
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         command:
         - /init-container.sh
         env:
         - name: CILIUM_ALL_STATE
           valueFrom:
@@ -341,13 +340,13 @@

         - name: cilium-cgroup
           mountPath: /sys/fs/cgroup
           mountPropagation: HostToContainer
         - name: cilium-run
           mountPath: /var/run/cilium
       - name: install-cni-binaries
-        image: quay.io/cilium/cilium:v1.15.6@sha256:6aa840986a3a9722cd967ef63248d675a87add7e1704740902d5d3162f0c0def
+        image: quay.io/cilium/cilium:v1.16.1@sha256:0b4a3ab41a4760d86b7fc945b8783747ba27f29dac30dd434d94f2c9e3679f39
         imagePullPolicy: IfNotPresent
         command:
         - /install-plugin.sh
         resources:
           requests:
             cpu: 100m
@@ -362,13 +361,12 @@

         terminationMessagePolicy: FallbackToLogsOnError
         volumeMounts:
         - name: cni-path
           mountPath: /host/opt/cni/bin
       restartPolicy: Always
       priorityClassName: system-node-critical
-      serviceAccount: cilium
       serviceAccountName: cilium
       automountServiceAccountToken: true
       terminationGracePeriodSeconds: 1
       hostNetwork: true
       affinity:
         podAntiAffinity:
@@ -412,12 +410,16 @@

         hostPath:
           path: /lib/modules
       - name: xtables-lock
         hostPath:
           path: /run/xtables.lock
           type: FileOrCreate
+      - name: envoy-sockets
+        hostPath:
+          path: /var/run/cilium/envoy/sockets
+          type: DirectoryOrCreate
       - name: clustermesh-secrets
         projected:
           defaultMode: 256
           sources:
           - secret:
               name: cilium-clustermesh
@@ -429,12 +431,22 @@

               - key: tls.key
                 path: common-etcd-client.key
               - key: tls.crt
                 path: common-etcd-client.crt
               - key: ca.crt
                 path: common-etcd-client-ca.crt
+          - secret:
+              name: clustermesh-apiserver-local-cert
+              optional: true
+              items:
+              - key: tls.key
+                path: local-etcd-client.key
+              - key: tls.crt
+                path: local-etcd-client.crt
+              - key: ca.crt
+                path: local-etcd-client-ca.crt
       - name: host-proc-sys-net
         hostPath:
           path: /proc/sys/net
           type: Directory
       - name: host-proc-sys-kernel
         hostPath:
--- HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator

+++ HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator

@@ -20,22 +20,22 @@

       maxSurge: 25%
       maxUnavailable: 100%
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/cilium-configmap-checksum: c819e71870cfcc12dcc618fd57c003b65798d784adbd9cf4e3be79f2a31a5d0e
+        cilium.io/cilium-configmap-checksum: 677b4112b0d27f03bdf8e856c9468c7c4bcbf46c4e7b5b9479a0f64eda70227d
       labels:
         io.cilium/app: operator
         name: cilium-operator
         app.kubernetes.io/part-of: cilium
         app.kubernetes.io/name: cilium-operator
     spec:
       containers:
       - name: cilium-operator
-        image: quay.io/cilium/operator-generic:v1.15.6@sha256:5789f0935eef96ad571e4f5565a8800d3a8fbb05265cf6909300cd82fd513c3d
+        image: quay.io/cilium/operator-generic:v1.16.1@sha256:3bc7e7a43bc4a4d8989cb7936c5d96675dd2d02c306adf925ce0a7c35aa27dc4
         imagePullPolicy: IfNotPresent
         command:
         - cilium-operator-generic
         args:
         - --config-dir=/tmp/cilium/config-map
         - --debug=$(CILIUM_DEBUG)
@@ -89,13 +89,12 @@

           mountPath: /tmp/cilium/config-map
           readOnly: true
         terminationMessagePolicy: FallbackToLogsOnError
       hostNetwork: true
       restartPolicy: Always
       priorityClassName: system-cluster-critical
-      serviceAccount: cilium-operator
       serviceAccountName: cilium-operator
       automountServiceAccountToken: true
       affinity:
         podAntiAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
           - labelSelector:
--- HelmRelease: kube-system/cilium Deployment: kube-system/hubble-relay

+++ HelmRelease: kube-system/cilium Deployment: kube-system/hubble-relay

@@ -17,13 +17,13 @@

     rollingUpdate:
       maxUnavailable: 1
     type: RollingUpdate
   template:
     metadata:
       annotations:
-        cilium.io/hubble-relay-configmap-checksum: de56c8021f9fc309e82998fae29b8413e10bc3bab55385a58656186103a10ca0
+        cilium.io/hubble-relay-configmap-checksum: 9240bed4cb239b932143ea5c6d1d7f382be9d56537acf8f5b95ebda7a8c5a882
       labels:
         k8s-app: hubble-relay
         app.kubernetes.io/name: hubble-relay
         app.kubernetes.io/part-of: cilium
     spec:
       securityContext:
@@ -34,13 +34,13 @@

           capabilities:
             drop:
             - ALL
           runAsGroup: 65532
           runAsNonRoot: true
           runAsUser: 65532
-        image: quay.io/cilium/hubble-relay:v1.15.6@sha256:a0863dd70d081b273b87b9b7ce7e2d3f99171c2f5e202cd57bc6691e51283e0c
+        image: quay.io/cilium/hubble-relay:v1.16.1@sha256:2e1b4c739a676ae187d4c2bfc45c3e865bda2567cc0320a90cb666657fcfcc35
         imagePullPolicy: IfNotPresent
         command:
         - hubble-relay
         args:
         - serve
         ports:
@@ -50,30 +50,32 @@

           grpc:
             port: 4222
           timeoutSeconds: 3
         livenessProbe:
           grpc:
             port: 4222
-          timeoutSeconds: 3
+          timeoutSeconds: 10
+          initialDelaySeconds: 10
+          periodSeconds: 10
+          failureThreshold: 12
         startupProbe:
           grpc:
             port: 4222
-          timeoutSeconds: 3
+          initialDelaySeconds: 10
           failureThreshold: 20
           periodSeconds: 3
         volumeMounts:
         - name: config
           mountPath: /etc/hubble-relay
           readOnly: true
         - name: tls
           mountPath: /var/lib/hubble-relay/tls
           readOnly: true
         terminationMessagePolicy: FallbackToLogsOnError
       restartPolicy: Always
       priorityClassName: null
-      serviceAccount: hubble-relay
       serviceAccountName: hubble-relay
       automountServiceAccountToken: false
       terminationGracePeriodSeconds: 1
       affinity:
         podAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
--- HelmRelease: kube-system/cilium Deployment: kube-system/hubble-ui

+++ HelmRelease: kube-system/cilium Deployment: kube-system/hubble-ui

@@ -28,18 +28,17 @@

     spec:
       securityContext:
         fsGroup: 1001
         runAsGroup: 1001
         runAsUser: 1001
       priorityClassName: null
-      serviceAccount: hubble-ui
       serviceAccountName: hubble-ui
       automountServiceAccountToken: true
       containers:
       - name: frontend
-        image: quay.io/cilium/hubble-ui:v0.13.0@sha256:7d663dc16538dd6e29061abd1047013a645e6e69c115e008bee9ea9fef9a6666
+        image: quay.io/cilium/hubble-ui:v0.13.1@sha256:e2e9313eb7caf64b0061d9da0efbdad59c6c461f6ca1752768942bfeda0796c6
         imagePullPolicy: IfNotPresent
         ports:
         - name: http
           containerPort: 8081
         livenessProbe:
           httpGet:
@@ -54,13 +53,13 @@

           mountPath: /etc/nginx/conf.d/default.conf
           subPath: nginx.conf
         - name: tmp-dir
           mountPath: /tmp
         terminationMessagePolicy: FallbackToLogsOnError
       - name: backend
-        image: quay.io/cilium/hubble-ui-backend:v0.13.0@sha256:1e7657d997c5a48253bb8dc91ecee75b63018d16ff5e5797e5af367336bc8803
+        image: quay.io/cilium/hubble-ui-backend:v0.13.1@sha256:0e0eed917653441fded4e7cdb096b7be6a3bddded5a2dd10812a27b1fc6ed95b
         imagePullPolicy: IfNotPresent
         env:
         - name: EVENTS_SERVER_PORT
           value: '8090'
         - name: FLOWS_API_ADDR
           value: hubble-relay:80
--- HelmRelease: kube-system/cilium ServiceMonitor: kube-system/hubble

+++ HelmRelease: kube-system/cilium ServiceMonitor: kube-system/hubble

@@ -15,12 +15,13 @@

     - kube-system
   endpoints:
   - port: hubble-metrics
     interval: 10s
     honorLabels: true
     path: /metrics
+    scheme: http
     relabelings:
     - replacement: ${1}
       sourceLabels:
       - __meta_kubernetes_pod_node_name
       targetLabel: node
 
--- HelmRelease: kube-system/cilium ServiceAccount: kube-system/cilium-envoy

+++ HelmRelease: kube-system/cilium ServiceAccount: kube-system/cilium-envoy

@@ -0,0 +1,7 @@

+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: cilium-envoy
+  namespace: kube-system
+
--- HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-envoy-config

+++ HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-envoy-config

@@ -0,0 +1,326 @@

+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cilium-envoy-config
+  namespace: kube-system
+data:
+  bootstrap-config.json: |
+    {
+      "node": {
+        "id": "host~127.0.0.1~no-id~localdomain",
+        "cluster": "ingress-cluster"
+      },
+      "staticResources": {
+        "listeners": [
+          {
+            "name": "envoy-prometheus-metrics-listener",
+            "address": {
+              "socket_address": {
+                "address": "0.0.0.0",
+                "port_value": 9964
+              }
+            },
+            "filter_chains": [
+              {
+                "filters": [
+                  {
+                    "name": "envoy.filters.network.http_connection_manager",
+                    "typed_config": {
+                      "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
+                      "stat_prefix": "envoy-prometheus-metrics-listener",
+                      "route_config": {
+                        "virtual_hosts": [
+                          {
+                            "name": "prometheus_metrics_route",
+                            "domains": [
+                              "*"
+                            ],
+                            "routes": [
+                              {
+                                "name": "prometheus_metrics_route",
+                                "match": {
+                                  "prefix": "/metrics"
+                                },
+                                "route": {
+                                  "cluster": "/envoy-admin",
+                                  "prefix_rewrite": "/stats/prometheus"
+                                }
+                              }
+                            ]
+                          }
+                        ]
+                      },
+                      "http_filters": [
+                        {
+                          "name": "envoy.filters.http.router",
+                          "typed_config": {
+                            "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
+                          }
+                        }
+                      ],
+                      "stream_idle_timeout": "0s"
+                    }
+                  }
+                ]
+              }
+            ]
+          },
+          {
+            "name": "envoy-health-listener",
+            "address": {
+              "socket_address": {
+                "address": "127.0.0.1",
+                "port_value": 9878
+              }
+            },
+            "filter_chains": [
+              {
+                "filters": [
+                  {
+                    "name": "envoy.filters.network.http_connection_manager",
+                    "typed_config": {
+                      "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
+                      "stat_prefix": "envoy-health-listener",
+                      "route_config": {
+                        "virtual_hosts": [
+                          {
+                            "name": "health",
+                            "domains": [
+                              "*"
+                            ],
+                            "routes": [
+                              {
+                                "name": "health",
+                                "match": {
+                                  "prefix": "/healthz"
+                                },
+                                "route": {
+                                  "cluster": "/envoy-admin",
+                                  "prefix_rewrite": "/ready"
+                                }
+                              }
+                            ]
+                          }
+                        ]
+                      },
+                      "http_filters": [
+                        {
+                          "name": "envoy.filters.http.router",
+                          "typed_config": {
+                            "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
+                          }
+                        }
+                      ],
+                      "stream_idle_timeout": "0s"
+                    }
+                  }
+                ]
+              }
+            ]
+          }
+        ],
+        "clusters": [
+          {
+            "name": "ingress-cluster",
+            "type": "ORIGINAL_DST",
+            "connectTimeout": "2s",
+            "lbPolicy": "CLUSTER_PROVIDED",
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "commonHttpProtocolOptions": {
+                  "idleTimeout": "60s",
+                  "maxConnectionDuration": "0s",
+                  "maxRequestsPerConnection": 0
+                },
+                "useDownstreamProtocolConfig": {}
+              }
+            },
+            "cleanupInterval": "2.500s"
+          },
+          {
+            "name": "egress-cluster-tls",
+            "type": "ORIGINAL_DST",
+            "connectTimeout": "2s",
+            "lbPolicy": "CLUSTER_PROVIDED",
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "commonHttpProtocolOptions": {
+                  "idleTimeout": "60s",
+                  "maxConnectionDuration": "0s",
+                  "maxRequestsPerConnection": 0
+                },
+                "upstreamHttpProtocolOptions": {},
+                "useDownstreamProtocolConfig": {}
+              }
+            },
+            "cleanupInterval": "2.500s",
+            "transportSocket": {
+              "name": "cilium.tls_wrapper",
+              "typedConfig": {
+                "@type": "type.googleapis.com/cilium.UpstreamTlsWrapperContext"
+              }
+            }
+          },
+          {
+            "name": "egress-cluster",
+            "type": "ORIGINAL_DST",
+            "connectTimeout": "2s",
+            "lbPolicy": "CLUSTER_PROVIDED",
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "commonHttpProtocolOptions": {
+                  "idleTimeout": "60s",
+                  "maxConnectionDuration": "0s",
+                  "maxRequestsPerConnection": 0
+                },
+                "useDownstreamProtocolConfig": {}
+              }
+            },
+            "cleanupInterval": "2.500s"
+          },
+          {
+            "name": "ingress-cluster-tls",
+            "type": "ORIGINAL_DST",
+            "connectTimeout": "2s",
+            "lbPolicy": "CLUSTER_PROVIDED",
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "commonHttpProtocolOptions": {
+                  "idleTimeout": "60s",
+                  "maxConnectionDuration": "0s",
+                  "maxRequestsPerConnection": 0
+                },
+                "upstreamHttpProtocolOptions": {},
+                "useDownstreamProtocolConfig": {}
+              }
+            },
+            "cleanupInterval": "2.500s",
+            "transportSocket": {
+              "name": "cilium.tls_wrapper",
+              "typedConfig": {
+                "@type": "type.googleapis.com/cilium.UpstreamTlsWrapperContext"
+              }
+            }
+          },
+          {
+            "name": "xds-grpc-cilium",
+            "type": "STATIC",
+            "connectTimeout": "2s",
+            "loadAssignment": {
+              "clusterName": "xds-grpc-cilium",
+              "endpoints": [
+                {
+                  "lbEndpoints": [
+                    {
+                      "endpoint": {
+                        "address": {
+                          "pipe": {
+                            "path": "/var/run/cilium/envoy/sockets/xds.sock"
+                          }
+                        }
+                      }
+                    }
+                  ]
+                }
+              ]
+            },
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "explicitHttpConfig": {
+                  "http2ProtocolOptions": {}
+                }
+              }
+            }
+          },
+          {
+            "name": "/envoy-admin",
+            "type": "STATIC",
+            "connectTimeout": "2s",
+            "loadAssignment": {
+              "clusterName": "/envoy-admin",
+              "endpoints": [
+                {
+                  "lbEndpoints": [
+                    {
+                      "endpoint": {
+                        "address": {
+                          "pipe": {
+                            "path": "/var/run/cilium/envoy/sockets/admin.sock"
+                          }
+                        }
+                      }
+                    }
+                  ]
+                }
+              ]
+            }
+          }
+        ]
+      },
+      "dynamicResources": {
+        "ldsConfig": {
+          "apiConfigSource": {
+            "apiType": "GRPC",
+            "transportApiVersion": "V3",
+            "grpcServices": [
+              {
+                "envoyGrpc": {
[Diff truncated by flux-local]
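The bootstrap config above pairs a Prometheus metrics listener on `0.0.0.0:9964` (rewriting `/metrics` to the admin socket's `/stats/prometheus`) with a localhost health listener on `9878` (rewriting `/healthz` to `/ready`). A minimal sketch of the kind of invariant check one could run over the rendered JSON in CI — the reduced config embedded here is illustrative, not the full ConfigMap:

```python
import json

# Reduced copy of the bootstrap config above (illustrative, not the full ConfigMap).
bootstrap = json.loads("""
{
  "staticResources": {
    "listeners": [
      {"name": "envoy-prometheus-metrics-listener",
       "address": {"socket_address": {"address": "0.0.0.0", "port_value": 9964}}},
      {"name": "envoy-health-listener",
       "address": {"socket_address": {"address": "127.0.0.1", "port_value": 9878}}}
    ]
  }
}
""")

def listener_port(config, name):
    """Return the bind port for the named static listener, or None if absent."""
    for lst in config["staticResources"]["listeners"]:
        if lst["name"] == name:
            return lst["address"]["socket_address"]["port_value"]
    return None

# The metrics port must match the DaemonSet's hostPort and prometheus.io/port
# annotation (9964); the health listener must match the probes' port (9878).
assert listener_port(bootstrap, "envoy-prometheus-metrics-listener") == 9964
assert listener_port(bootstrap, "envoy-health-listener") == 9878
```

The same `listener_port` helper (a hypothetical name, not part of the chart) could be pointed at the full `bootstrap-config.json` extracted from the ConfigMap to catch port drift between the config and the DaemonSet.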
--- HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium-envoy

+++ HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium-envoy

@@ -0,0 +1,171 @@

+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: cilium-envoy
+  namespace: kube-system
+  labels:
+    k8s-app: cilium-envoy
+    app.kubernetes.io/part-of: cilium
+    app.kubernetes.io/name: cilium-envoy
+    name: cilium-envoy
+spec:
+  selector:
+    matchLabels:
+      k8s-app: cilium-envoy
+  updateStrategy:
+    rollingUpdate:
+      maxUnavailable: 2
+    type: RollingUpdate
+  template:
+    metadata:
+      annotations:
+        prometheus.io/port: '9964'
+        prometheus.io/scrape: 'true'
+      labels:
+        k8s-app: cilium-envoy
+        name: cilium-envoy
+        app.kubernetes.io/name: cilium-envoy
+        app.kubernetes.io/part-of: cilium
+    spec:
+      securityContext:
+        appArmorProfile:
+          type: Unconfined
+      containers:
+      - name: cilium-envoy
+        image: quay.io/cilium/cilium-envoy:v1.29.7-39a2a56bbd5b3a591f69dbca51d3e30ef97e0e51@sha256:bd5ff8c66716080028f414ec1cb4f7dc66f40d2fb5a009fff187f4a9b90b566b
+        imagePullPolicy: IfNotPresent
+        command:
+        - /usr/bin/cilium-envoy-starter
+        args:
+        - --
+        - -c /var/run/cilium/envoy/bootstrap-config.json
+        - --base-id 0
+        - --log-level info
+        - --log-format [%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v
+        startupProbe:
+          httpGet:
+            host: 127.0.0.1
+            path: /healthz
+            port: 9878
+            scheme: HTTP
+          failureThreshold: 105
+          periodSeconds: 2
+          successThreshold: 1
+          initialDelaySeconds: 5
+        livenessProbe:
+          httpGet:
+            host: 127.0.0.1
+            path: /healthz
+            port: 9878
+            scheme: HTTP
+          periodSeconds: 30
+          successThreshold: 1
+          failureThreshold: 10
+          timeoutSeconds: 5
+        readinessProbe:
+          httpGet:
+            host: 127.0.0.1
+            path: /healthz
+            port: 9878
+            scheme: HTTP
+          periodSeconds: 30
+          successThreshold: 1
+          failureThreshold: 3
+          timeoutSeconds: 5
+        env:
+        - name: K8S_NODE_NAME
+          valueFrom:
+            fieldRef:
+              apiVersion: v1
+              fieldPath: spec.nodeName
+        - name: CILIUM_K8S_NAMESPACE
+          valueFrom:
+            fieldRef:
+              apiVersion: v1
+              fieldPath: metadata.namespace
+        - name: KUBERNETES_SERVICE_HOST
+          value: 127.0.0.1
+        - name: KUBERNETES_SERVICE_PORT
+          value: '6444'
+        ports:
+        - name: envoy-metrics
+          containerPort: 9964
+          hostPort: 9964
+          protocol: TCP
+        securityContext:
+          seLinuxOptions:
+            level: s0
+            type: spc_t
+          capabilities:
+            add:
+            - NET_ADMIN
+            - SYS_ADMIN
+            drop:
+            - ALL
+        terminationMessagePolicy: FallbackToLogsOnError
+        volumeMounts:
+        - name: envoy-sockets
+          mountPath: /var/run/cilium/envoy/sockets
+          readOnly: false
+        - name: envoy-artifacts
+          mountPath: /var/run/cilium/envoy/artifacts
+          readOnly: true
+        - name: envoy-config
+          mountPath: /var/run/cilium/envoy/
+          readOnly: true
+        - name: bpf-maps
+          mountPath: /sys/fs/bpf
+          mountPropagation: HostToContainer
+      restartPolicy: Always
+      priorityClassName: system-node-critical
+      serviceAccountName: cilium-envoy
+      automountServiceAccountToken: true
+      terminationGracePeriodSeconds: 1
+      hostNetwork: true
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+            - matchExpressions:
+              - key: cilium.io/no-schedule
+                operator: NotIn
+                values:
+                - 'true'
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchLabels:
+                k8s-app: cilium
+            topologyKey: kubernetes.io/hostname
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchLabels:
+                k8s-app: cilium-envoy
+            topologyKey: kubernetes.io/hostname
+      nodeSelector:
+        kubernetes.io/os: linux
+      tolerations:
+      - operator: Exists
+      volumes:
+      - name: envoy-sockets
+        hostPath:
+          path: /var/run/cilium/envoy/sockets
+          type: DirectoryOrCreate
+      - name: envoy-artifacts
+        hostPath:
+          path: /var/run/cilium/envoy/artifacts
+          type: DirectoryOrCreate
+      - name: envoy-config
+        configMap:
+          name: cilium-envoy-config
+          defaultMode: 256
+          items:
+          - key: bootstrap-config.json
+            path: bootstrap-config.json
+      - name: bpf-maps
+        hostPath:
+          path: /sys/fs/bpf
+          type: DirectoryOrCreate
+

renovate bot added the type/minor label (Jul 24, 2024)
renovate bot force-pushed the renovate/cilium-1.x branch from f2ece2c to 72a691a (Jul 24, 2024 16:47)
renovate bot removed the type/patch label (Jul 24, 2024)
renovate bot changed the title from fix(helm): update chart cilium to 1.15.7 to feat(helm): update chart cilium to 1.16.0 (Jul 24, 2024)
renovate bot changed the title from feat(helm): update chart cilium to 1.16.0 to feat(helm): update chart cilium to 1.16.1 (Aug 14, 2024)
renovate bot force-pushed the renovate/cilium-1.x branch from 72a691a to d058322 (Aug 14, 2024 14:31)
renovate bot force-pushed the renovate/cilium-1.x branch from d058322 to 7078b98 (Sep 26, 2024 12:46)
renovate bot changed the title from feat(helm): update chart cilium to 1.16.1 to feat(helm): update chart cilium to 1.16.2 (Sep 26, 2024)