
HTTPScaledObject reconciler not following best practices #611

Open
Tracked by #911
t0rr3sp3dr0 opened this issue Feb 23, 2023 · 4 comments
Labels: bug (Something isn't working), help wanted (Extra attention is needed), stale-bot-ignore (All issues that should not be automatically closed by our stale bot)

Comments

@t0rr3sp3dr0
Contributor

Report

The current implementation of the HTTPScaledObject reconciler does not follow best practices for a K8s Operator.

Expected Behavior

  1. It should only update resources when needed.
  2. It should not delete resources if there was an error in the reconciliation loop.
  3. It should have access only to the minimal set of K8s resources required to do its job.
  4. It should not reconcile resources outside the reconciliation loop.
  5. It should try to self-heal whenever possible.

Actual Behavior

  1. It performs a patch operation on every single reconciliation loop; no check is performed to verify that a patch is actually required (see the sketch after this list).
  2. If the creation or update of the internal ScaledObject fails for any reason, even when the operation was not required in the first place, the controller deletes the resource, which may cause an outage.
  3. Right now, the controller's service account has full access to all Pods, Deployments, and Ingresses, but these permissions are not required for the HTTP Add-on to work and may pose a security risk.
  4. The existence of the routing table ConfigMap is only checked in the main function, outside the reconciliation loop, once, when the controller starts.
  5. The controller doesn't try to create the routing table ConfigMap if it doesn't exist; instead, the whole controller fails when that happens.
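For illustration only, here is a minimal sketch (not the add-on's actual code; the function names and the KEDA API import path are assumptions) of how a reconciler could patch the internal ScaledObject only when its spec has drifted, avoid deleting anything on error, and recreate the routing table ConfigMap inside the reconciliation loop:

```go
package controllers

import (
	"context"

	kedav1alpha1 "github.com/kedacore/keda/v2/apis/keda/v1alpha1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/equality"
	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ensureScaledObject creates the internal ScaledObject if it is missing and
// patches it only when the desired spec actually differs from what is in the
// cluster. It never deletes the existing object on error; it just returns the
// error so the reconciliation can be retried.
func ensureScaledObject(ctx context.Context, c client.Client, desired *kedav1alpha1.ScaledObject) error {
	existing := &kedav1alpha1.ScaledObject{}
	err := c.Get(ctx, types.NamespacedName{Namespace: desired.Namespace, Name: desired.Name}, existing)
	if errors.IsNotFound(err) {
		return c.Create(ctx, desired)
	}
	if err != nil {
		return err
	}
	// Skip the API call entirely when nothing has changed.
	if equality.Semantic.DeepEqual(existing.Spec, desired.Spec) {
		return nil
	}
	patch := client.MergeFrom(existing.DeepCopy())
	existing.Spec = desired.Spec
	return c.Patch(ctx, existing, patch)
}

// ensureRoutingTableConfigMap self-heals the routing table: if the ConfigMap
// is missing, it is recreated inside the reconciliation loop instead of
// failing the whole controller at startup.
func ensureRoutingTableConfigMap(ctx context.Context, c client.Client, namespace, name string) error {
	cm := &corev1.ConfigMap{}
	err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, cm)
	if errors.IsNotFound(err) {
		cm.Namespace = namespace
		cm.Name = name
		return c.Create(ctx, cm)
	}
	return err
}
```

With this shape, a transient API error simply requeues the HTTPScaledObject instead of tearing down a working ScaledObject.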

Steps to Reproduce the Problem

N/A

Logs from KEDA HTTP operator

N/A

What version of the KEDA HTTP Add-on are you running?

0.4.0

Kubernetes Version

1.24

Platform

Microsoft Azure

Anything else?

No response

@stale

stale bot commented Apr 25, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

The stale bot added the stale label on Apr 25, 2023.
@t0rr3sp3dr0
Contributor Author

Don’t close

The stale bot removed the stale label on Apr 26, 2023.
@similark
Contributor

I think it should also set OwnerReference...
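For context, setting such an owner reference with controller-runtime's controllerutil could look roughly like the sketch below (the HTTPScaledObject import path is assumed from the repo layout and may differ between add-on versions):

```go
package controllers

import (
	httpv1alpha1 "github.com/kedacore/http-add-on/operator/apis/http/v1alpha1" // assumed import path
	kedav1alpha1 "github.com/kedacore/keda/v2/apis/keda/v1alpha1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// setOwner marks the HTTPScaledObject as the controller owner of the internal
// ScaledObject, so the child is garbage-collected when the parent is deleted
// and changes to it can requeue the parent.
func setOwner(httpSO *httpv1alpha1.HTTPScaledObject, so *kedav1alpha1.ScaledObject, scheme *runtime.Scheme) error {
	return controllerutil.SetControllerReference(httpSO, so, scheme)
}
```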

JorTurFer added the stale-bot-ignore label on Jul 7, 2023.
JorTurFer mentioned this issue on Apr 4, 2024.
JorTurFer added the help wanted label on Apr 4, 2024.
@kahirokunn
Contributor

It would be good if it were possible to narrow down the namespaces, labels, and annotations of the watch targets so that multiple controllers can run in a single cluster.
That would make it possible to place an Envoy gateway or similar in front of the keda-http-add-on.
If two keda-http-add-on installations can be managed by putting an Envoy gateway in front of them, the keda-http-add-on itself can stay thin while still enabling advanced traffic control such as traffic splitting and mirroring.
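This is not something the add-on exposes today; as a sketch of what such scoping could build on, controller-runtime (v0.16+ cache options assumed) lets a manager restrict its watches to a namespace and label selector. The namespace and label key below are hypothetical:

```go
package main

import (
	"k8s.io/apimachinery/pkg/labels"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cache"
)

func main() {
	// Restrict the manager's cache (and therefore the controller's watches)
	// to one namespace and to objects carrying an instance label, so several
	// add-on installations could coexist in the same cluster.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Cache: cache.Options{
			DefaultNamespaces: map[string]cache.Config{
				"team-a": {}, // hypothetical namespace
			},
			DefaultLabelSelector: labels.SelectorFromSet(labels.Set{
				"http.keda.sh/instance": "team-a", // hypothetical label key
			}),
		},
	})
	if err != nil {
		panic(err)
	}
	_ = mgr // controllers would be registered here before calling mgr.Start(...)
}
```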
