Installation
Identifier: mindaro.mindaro
Version: 2.0.120240111
Last Updated: 2024-04-27, 13:17:44
Size: 166.8 MB
Cache: 205.5 MB
Describe the bug
The B2k restore job is unable to complete after the B2k terminal window is closed when more than one terminal window is open.

Mention the platform you are using
WSL (Linux on Windows 11), using VS Code connected to WSL via the WSL code server. See the attached screenshots for the specific Windows version, VS Code version, Linux distro running in WSL, and B2k version.

To Reproduce
Steps to reproduce the behavior:
1. Create an AKS cluster and deploy any application in a pod, with a service in front of it. The YAML for my setup is below. (Note: I am hitting this problem on every service I use B2k with.)
2. To set my environment variables when using B2k, I use a KubernetesLocalProcessConfig.yaml file. (Note: I have this problem even if I do not use a KubernetesLocalProcessConfig.yaml file; I am including it for good measure. A minimal sketch of such a file appears after the terminal output below.) My debugging pre-launch task and launch script are attached as well.
3. Launch a debugging session using a similar setup. You will notice that B2k does what it should and starts up the session:
   - The debugger is running in the VS Code debugger window (I can see my logs).
   - VS Code has opened a second integrated terminal to run whatever it needs locally (I presume). Here is a copy of what is in that integrated terminal:
Redirecting Kubernetes service org-service-ss to your machine...
Target cluster: core-cluster
Current cluster: core-cluster
Target namespace: subscripify-super-principal
Current namespace: subscripify-super-principal
Target service name: org-service-ss
Target service ports: 50051
Using kubernetes service environment variables: true
Retrieving the current context and credentials...
Validating the credentials to access the cluster...
Validating the requirements to replicate resources locally...
Redirecting traffic from the cluster to your machine...
Loaded Bridge To Kubernetes environment file 'KubernetesLocalProcessConfig.yaml'.
Waiting for 'org-service-ss-7ff84fdbc5-qkfv2' in namespace 'subscripify-super-principal' to reach running state...
Deployment 'subscripify-super-principal/org-service-ss' patched to run agent.
Remote agent deployed in container 'org-service-ss' in pod 'org-service-ss-7ff84fdbc5-qkfv2'.
Preparing to run Bridge To Kubernetes configured as pod subscripify-super-principal/org-service-ss-7ff84fdbc5-qkfv2 ...
Connection established.
Service 'org-service-ss' is available on 127.0.0.1:55049.
Container port 50051 is available at localhost:50051.
##################### Environment started. #############################################################
Run /tmp/tmp-378708f88mhN9rWVm.env.cmd in your existing console to also get connected.
* Terminal will be reused by tasks, press any key to close it.
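For reference, this is a minimal sketch of the kind of KubernetesLocalProcessConfig.yaml I mean; the variable name and value here are placeholders, not my real configuration:

```yaml
# Minimal KubernetesLocalProcessConfig.yaml sketch.
# EXAMPLE_SETTING and its value are placeholders, not my real config.
version: 0.1
env:
  - name: EXAMPLE_SETTING
    value: "example-value"
```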
4. The deployment on my AKS cluster has changed: it is now running the Bridge agent image and has had some environment variables added. Here is a copy of the modified deployment.
5. Looking at the logs for the pod that is now running on the cluster for my service, they look like this:
2024-12-19T00:31:11.6231246Z | RemoteAgent | TRACE | ReversePortForwardConnector created for port 50051\nOperation context: <json>{"clientRequestId":"d55f1790-028e-4ddf-b923-4c8bf262a712","correlationId":"8d2291d4-8713-4a43-9d7e-51f3285987351734566244115:b318e6c8ce02:be345b679a39","requestId":null,"userSubscriptionId":null,"startTime":"2024-12-19T00:31:08.6361064+00:00","userAgent":"RemoteAgent/1.0.0.0","requestHttpMethod":null,"requestUri":null,"version":"1.0.0.0","requestHeaders":{},"loggingProperties":{"ApplicationName":"RemoteAgent","DeviceOperatingSystem":"Linux 5.15.0-1075-azure #84-Ubuntu SMP Mon Oct 21 15:42:52 UTC 2024","Framework":".NET 7.0.19","ProcessId":1,"TargetEnvironment":"Production"}}</json>
2024-12-19T00:31:11.6406317Z | RemoteAgent | TRACE | ReversePortForwardConnector start listening on port 50051
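A quick way to confirm the patch from the command line is to look at the images the deployment is now running (the namespace and deployment name here match my setup above; adjust for yours):

```sh
# Print the container images of the patched deployment
kubectl -n subscripify-super-principal get deployment org-service-ss \
  -o jsonpath='{.spec.template.spec.containers[*].image}'
```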
Everything up to this point is as expected, and I can debug. The problem comes when I need to terminate the session:

1. I first hit the stop button in VS Code.
2. I then kill the terminal that B2k started by pressing any key while that terminal has focus.
3. Doing so starts a restore job on my cluster, in the same namespace as the replaced service. Here is the YAML for that job.

Bug! The job NEVER stops. Looking at the job's logs, it just keeps repeating the same messages, and it keeps doing so until I close ALL open terminal windows on my machine. I usually have several open at any given time, since I have a few repos open in VS Code; sometimes I have four or five VS Code instances open across several desktops. The attached video demonstrates the problem with just two VS Code windows open on two different projects on the same desktop: https://github.com/user-attachments/assets/ddbb0e51-4e4e-4556-9f95-3dd61a677189
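To watch the stuck job directly, kubectl works as well (the exact job name varies per session, so look it up first):

```sh
# Find the restoration job in the namespace, then follow its logs
kubectl -n subscripify-super-principal get jobs
kubectl -n subscripify-super-principal logs -f job/<restoration-job-name>   # substitute the real name
```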
Expected behavior
I expect the restoration job to complete immediately when I close the B2k terminal window in VS Code. I should not have to close VS Code entirely, and I most certainly should not have to close all VS Code windows.

I would also expect all resources that B2k deploys on the cluster to be cleaned up. Even after I figured out how to get the job to stop, there is still a role and role binding left on the cluster (here are their YAMLs):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2024-12-19T00:31:07Z"
  labels:
    mindaro.io/component: lpkrestorationjob
    mindaro.io/version: v2
  name: lpkrestorationjob-role-v2
  namespace: subscripify-super-principal
  resourceVersion: "4928916"
  uid: 21ba1725-52a4-449c-b048-68fca0d48853
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - update
  - patch
  - delete
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  - statefulsets
  verbs:
  - get
  - list
  - update
  - patch
- apiGroups:
  - extensions
  - apps
  resources:
  - replicasets
  verbs:
  - get
  - list
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - delete
  - list
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - delete
  - list
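As a manual workaround until this is fixed, the leftovers can be deleted by hand. This is my own sketch, and I am assuming the role binding follows the same lpkrestorationjob naming pattern as the role above:

```sh
# Delete the RBAC objects the restoration job leaves behind
kubectl -n subscripify-super-principal delete role lpkrestorationjob-role-v2
kubectl -n subscripify-super-principal delete rolebinding <lpkrestorationjob-rolebinding>   # name assumed; check 'kubectl get rolebindings'
```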