Describe the Bug
Running telepresence status or telepresence version from the local client causes the traffic-manager in the cluster to crash.
To Reproduce
We deployed the telepresence v2.20.1 OSS chart into a Kubernetes cluster, and are running v2.20.1 of the client locally.
We are only using telepresence for the proxy/forwarding feature, and restricting traffic-manager to specific namespaces. So the only changes to the default Helm values are:
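The values block itself did not survive in the report, but the thread pins down its substance: the maintainer confirms below that agentInjector.enabled=false was set, and the namespace restriction is described above. A reconstruction might look like the following sketch; the namespaces key and the namespace list are assumptions (the exact key has changed between chart versions), so treat this as illustrative rather than the reporter's actual file:

```yaml
# Sketch of the non-default Helm values (reconstructed, not the reporter's
# actual overrides). agentInjector.enabled=false is confirmed later in the
# thread; the namespaces key and its entries are assumptions.
agentInjector:
  enabled: false
namespaces:
  - staging
```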
We connect our local client to the cluster with: telepresence connect --namespace staging --manager-namespace traffic-manager, and then run telepresence status.
The traffic-manager pod crashes and restarts after these logs:
Versions (please complete the following information):
Output of telepresence version:
OSS Client : v2.20.1
OSS Root Daemon : v2.20.1
OSS User Daemon : v2.20.1
OSS Traffic Manager: v2.20.1
Traffic Agent : not currently available
Operating system of workstation running telepresence commands
macOS Sequoia 15.0.1
Kubernetes environment and Version
Kubernetes 1.26.9, running on kops on AWS.
On Oct 11, 2024, bbensky changed the title from "panic: no ImageRetriever has been configured error from traffic manager" to ""panic: no ImageRetriever has been configured error" from traffic-manager".
I'm able to reproduce this. It's a regression introduced when the status and version commands request the traffic-agent version from the traffic-manager, and with agentInjector.enabled=false, there's no such thing as a traffic-agent.
I'll create a patch release with a fix for this a.s.a.p.
The 2.20.2-rc.0 release candidate is available for download now, in case you want to give it a try.
Please also note that your expectation is incorrect. The "Traffic Agent : not currently available" is to be expected when you're using agentInjector.enabled=false.
Apologies for the terminology mistake. I tried the release candidate, and the traffic-manager pod no longer errors and crashes when I run the telepresence status and telepresence version commands. Thanks for getting the fix up so quickly.
Aside from this issue, we are able to use the port-forward functionality (i.e. we can curl endpoints of services in the staging namespace).

Expected behavior
Based on the instructions here (https://www.getambassador.io/docs/telepresence/latest/howtos/outbound#proxying-outbound-traffic), the output of telepresence status should include the Traffic Agent image at the bottom of the output. The output when we run the command stops at the Version.