
[Feature]: Helm value for icinga2-master Endpoint address #20

Open
MTSym opened this issue Jul 24, 2023 · 4 comments
Labels: feature (New feature or request)

@MTSym commented Jul 24, 2023

Affected Chart

icinga-stack

Please describe your feature request

By default the Endpoint address for the icinga2-master is, afaik, the fullname of the pod.
The problem with this is that the generated "Agent" configurations try to connect to the cluster-internal address even when the agents are installed outside the cluster. Making the host value configurable would avoid this, and the agent setup scripts would be copy-and-paste again.
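
For illustration, a sketch of the Endpoint definition an out-of-cluster agent ends up with today (object name, service name, and namespace are placeholders, not taken from the chart):

```
// Sketch: the generated agent config points the master Endpoint at a
// cluster-internal name, which is unreachable from outside the cluster.
// The port defaults to 5665.
object Endpoint "icinga2-master" {
  host = "icinga2-master.monitoring.svc.cluster.local"  // placeholder cluster-internal address
}
```

With a configurable host value, this could instead be the externally reachable address.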

@MTSym MTSym added feature New feature or request triage Needs to be triaged labels Jul 24, 2023
@mocdaniel mocdaniel removed the triage Needs to be triaged label Jul 24, 2023
@mocdaniel (Collaborator)

We will have to check whether we can go ahead and change this - I remember something about the Director kickstart breaking when the Endpoint configuration was meddled with, but I'll have another look.

@CanisLupusLupus (Contributor)

I had the same problem in my local deployment and resolved it by setting icinga2.config.node_name to the external name, i.e. the address used in the Ingress.
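
For reference, a minimal values override along those lines might look like this (the key path icinga2.config.node_name is taken from the comment above; the hostname is a placeholder):

```yaml
# values.override.yaml -- sketch; replace the hostname with your external address
icinga2:
  config:
    node_name: icinga.example.com  # the address used in the Ingress
```

applied with something like helm upgrade --install <release> <chart> -f values.override.yaml.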

@mocdaniel (Collaborator) commented Sep 4, 2023

I finally found the time to look at this issue and it raises some problems:

  • The Director will use the Endpoint configuration(s) of the parent zone's satellite(s)/leader(s).
  • Therefore, it's not enough to e.g. change the FQDN given in the Director kickstart configuration.
  • We would need to actually change the NodeName of Icinga2, as CanisLupusLupus noted (see the sketch below).
  • This means that cluster-internal components would get routed out of the cluster and back in, instead of being able to use cluster-local routing.
    This is because we rely on said NodeName for cluster-internal connections targeting the Icinga2 API, e.g. from Icingaweb2 or the Director.
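
For context, changing the NodeName boils down to something like this in the rendered Icinga2 configuration (a sketch; the external FQDN is a placeholder, and the file the chart actually templates may differ):

```
/* constants.conf (sketch) */
const NodeName = "icinga.example.com"  // external FQDN instead of the pod's cluster-internal fullname
const ZoneName = NodeName
```

Everything that dials the API via NodeName (Icingaweb2, the Director) would then resolve the external address, hence the out-and-back-in routing described above.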

I am not entirely sure whether this is a sane change to make, or whether it's more bearable for administrators to adapt the script that the Director generates upon host creation.

Feel free to share your thoughts on the situation, @MTSym @CanisLupusLupus @martialblog

@CanisLupusLupus (Contributor)

I went with the external FQDN as the Icinga2 NodeName in my local deployment for simplicity, to keep configuration customization to a minimum and to have a hassle-free user experience, at the cost (as pointed out by mocdaniel) of routing traffic outside the cluster.
Initially I tried to simply keep the default NodeName and update the Director-generated host scripts, but this does not work out of the box - TLS fails because of the hostname mismatch. I am sure this can be solved by:

  • using a custom cert with subject alt names on the Icinga2 node (sketched below)
  • using a reverse proxy
  • using a dedicated satellite node for out-of-cluster agents

But all three options (I am sure there are more possibilities, but those three are the simplest ones I can think of offhand after some light skimming of the Icinga2 docs) increase the complexity and fragility of the deployment.
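
For the first option, a rough sketch of requesting a master certificate that carries both names as SANs (hostnames are placeholders; -addext needs OpenSSL >= 1.1.1, and I have not verified that the SANs survive signing by the Icinga2 CA):

```sh
# Sketch: CSR with both the cluster-internal and the external name as SANs
openssl req -new -newkey rsa:4096 -nodes \
  -keyout icinga2-master.key -out icinga2-master.csr \
  -subj "/CN=icinga2-master" \
  -addext "subjectAltName=DNS:icinga2-master.monitoring.svc.cluster.local,DNS:icinga.example.com"

# Sign with the Icinga2 CA on the master (verify the SANs are preserved)
icinga2 pki sign-csr --csr icinga2-master.csr --cert icinga2-master.crt
```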
