
Allow to mount on rails #237

Open
hannan228 opened this issue Jun 7, 2022 · 5 comments

Comments


hannan228 commented Jun 7, 2022

I'm trying to use this gem with "prometheus_exporter", "~> 2.0", and I want to mount prometheus_exporter in the Rails routes. But according to my research and a lot of attempts, I found that mounting is not supported. On the other hand, yabeda-prometheus-exporter allows mounting; check it here
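For context, this is the kind of setup yabeda-prometheus documents: its exporter is a Rack app that can be mounted in `config/routes.rb`. A sketch of what the issue is asking for (the `/metrics` path is illustrative; prometheus_exporter does not currently ship an equivalent Rack app):

```ruby
# config/routes.rb — how yabeda-prometheus documents mounting its exporter.
# A hypothetical prometheus_exporter equivalent would need a mountable Rack
# app that exposes the collector, which the gem does not provide today.
Rails.application.routes.draw do
  mount Yabeda::Prometheus::Exporter, at: "/metrics"
end
```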

Contributor

wolfgangrittner commented Jun 11, 2022

Mounting it in Rails routes is an interesting idea. Then again, it comes with a lot of caveats and possible security implications, because you'd need to make sure not to expose the metrics collection and metrics scraping endpoints to the public. And it only works for web processes; you'd still need a solution for other long-running processes like Sidekiq and friends.

What we did for our containerized services was write a little extension to prometheus_exporter that starts the prometheus_exporter server in a thread inside your Puma, Sidekiq, or any other long-running process.
That way you get a dedicated web server listening on a dedicated port, just as you would when running the prometheus_exporter server as its own process, but without the need to start up and maintain a process just for the prometheus_exporter server.
You don't need to secure the metrics endpoints, because they are not part of your application's public surface; they live on a different port altogether. And it works seamlessly for components like Sidekiq that don't come with a web server of their own.
Moreover, you don't need to jump through any hoops to get it working in multi-process mode; it works just like it would when running the prometheus_exporter server as a dedicated process.
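The shape of that pattern can be sketched in plain Ruby. This is a toy, stdlib-only stand-in (class name and HTTP handling are illustrative, not the actual extension); the real thing would start `PrometheusExporter::Server::WebServer` on its own port from inside the host process instead of this hand-rolled server:

```ruby
require "socket"

# Toy sketch of the "exporter in a background thread" idea: the host
# process (Puma, Sidekiq, ...) keeps doing its own work while a thread
# serves metrics on a separate, non-public port.
class InProcessMetricsServer
  def initialize(port:)
    @port = port
  end

  # Start serving in a background thread and return immediately,
  # so the caller's own workload is unaffected.
  def start
    @server = TCPServer.new("127.0.0.1", @port)
    @thread = Thread.new do
      loop do
        client = @server.accept
        client.gets # consume the request line; a real server parses the path
        body = "# HELP app_up 1 if the process is alive\napp_up 1\n"
        client.write("HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n" \
                     "Content-Length: #{body.bytesize}\r\n\r\n#{body}")
        client.close
      end
    end
  end

  def stop
    @thread&.kill
    @server&.close
  end
end
```

A scraper on the same host can then hit `127.0.0.1:<port>/metrics` while the application's public port stays untouched, which is the security property described above.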

This fits great into containerized environments like Kubernetes, which is what we are using, where you usually want to just start a single process per container. That's why we called that approach "prometheus_exporter-native".
@SamSaffron, would you be interested in getting something like that integrated into prometheus_exporter?


adamk9k commented Dec 20, 2022

@wolfgangrittner any chance of open sourcing your extension? Even as a separate repo/gist

@wolfgangrittner
Contributor

> @wolfgangrittner any chance of open sourcing your extension? Even as a separate repo/gist

Yes, I was thinking about doing that, but I just haven't found the time yet 😞

FYI, we recently ran into issues with our approach: as soon as you use any kind of pod autoscaling (like HPA), running prometheus_exporter inside your containers (which are torn down when pods are scaled down) is not so great anymore.


adamk9k commented Dec 21, 2022

@wolfgangrittner so are you running it in its own dedicated pod? That's actually the approach I was considering; my concern is whether some data gets lost in the process (like host name, etc.)

@wolfgangrittner
Contributor

@adamk9k, actually we haven't solved that issue yet; we don't autoscale that much, and currently we just live with possibly losing some metrics when scaling down.
Running a dedicated pod is certainly one solution, but I was glad we got rid of running a dedicated pod when we moved off of Pushgateway. We may well end up bumping terminationGracePeriodSeconds up to fit our Prometheus scrape interval and making sure the process stays around long enough to be scraped one last time before terminating. That should do the trick too.
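That terminationGracePeriodSeconds idea could look roughly like this in the pod spec (values are illustrative and assume a 30s scrape interval; note it's the preStop sleep that actually delays SIGTERM so the process remains scrapable, while the grace period only caps the time before SIGKILL):

```yaml
# Illustrative pod spec fragment, assuming a 30s Prometheus scrape interval.
spec:
  terminationGracePeriodSeconds: 60   # upper bound between SIGTERM and SIGKILL
  containers:
    - name: app
      lifecycle:
        preStop:
          exec:
            # Delay SIGTERM so the exporter stays up for one more scrape.
            command: ["sleep", "35"]
```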
