Allow automatic DNS resolvers prepended by localhost #36
The idea is to offer a file that is prepared and managed with up-to-date addresses of the resolver servers.
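(For illustration, a sketch of what such a managed file could look like with localhost prepended; the upstream addresses below are documentation placeholders, not the actual vpsFree resolvers:)

```
# /etc/resolv.conf (sketch) - maintained by the host/vpsAdmin
nameserver 127.0.0.1      # local cache inside the VPS, tried first
nameserver 192.0.2.53     # placeholder for the provider's primary resolver
nameserver 198.51.100.53  # placeholder for the secondary resolver
```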
I'd rather let users configure their own DNS resolvers in vpsAdmin, which could then be assigned to a VPS. So, for example, you'd create your own resolver with comma-separated IPs.
Of course it would be nice to have the possibility to obtain the suggested resolvers from the host somehow. For example, our internal machines configured via cloud-init allow obtaining similar information via the cloud-init query command. Okay, custom DNS would be a nice option too, but then there should be a boolean indication hinting that localhost should still be included as the first value anyway. Maybe a checkbox? Whoever generates a lot of queries should have a DNS cache running locally, but there is no automatic configuration possible that would make using a local DNS cache service simple enough. Even if I wanted to use dns.google or dns.quad9.net or whatever else, it should be simple enough to start a cache, for example on Fedora or CentOS today. Anyway, even if I wanted to use unbound or bind9, it would be ideal to have the suggested DNS servers already on the machine somewhere. Obtaining the IP addresses to use is not simple from vpsAdmin: it lists only names, but does not print their addresses. I work with DNS-related stuff and can get them, but I think it is unnecessarily complicated for less advanced people.
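(For completeness, one way to get a local cache running on Fedora or CentOS - a sketch using dnsmasq; unbound or systemd-resolved would work just as well, and nothing here is vpsFree-specific:)

```
# install and start a small caching resolver, e.g. dnsmasq
dnf install dnsmasq
systemctl enable --now dnsmasq
# then put "nameserver 127.0.0.1" first in /etc/resolv.conf
```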
Btw, resolv.conf might also include an options line; at least all distributions based on glibc should support it.
It depends; I have a relatively common use case as an example. I am not sure what automatically filled nameservers in resolv.conf would be good for if the first server were not localhost. Remote hosts behave poorly when they do not respond: if the first server is localhost and it does not respond, dig for example will skip it fast; if it is a remote host, the behaviour is different and the timeout is quite visible. What I would find more useful is the ability to specify custom options: edns0, trust-ad, maybe even tuning of the timeouts.
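(The options mentioned here are standard glibc resolv.conf(5) options; a sketch of how such a line could look - the exact values are only an example:)

```
# /etc/resolv.conf
options edns0 trust-ad timeout:2 attempts:2
nameserver 127.0.0.1
```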
What I meant by custom DNS resolvers is that you'd create your own resolver as an alternative option to select in "Manage DNS resolver by vpsAdmin", instead of one of the predefined ones. Although from your comments I understand that this would perhaps be unnecessarily complicated. Hm, I'm still unsure how to solve it! I'll think about it. Maybe your suggestion (manage the DNS resolver by vpsAdmin, with localhost prepended) would cover it. When you say that you'd like a way to get our resolvers' IPs from the host somehow... how would you use this data? To set up your own resolver manually? I could show those IPs somewhere in vpsAdmin for sure. I also thought it'd be nice to have certain settings shown in the VPS somewhere... like the VPS ID, network interfaces and their IP addresses, and indeed the resolvers. I think that's why at one time we've prepared...
Because you have your desired options in the same file as the resolver addresses provided by the host, it is impossible to maintain the options part of it and at the same time let it use the nameservers the host provides. I think I have seen some virtualization technology mount /run/host into the VPS; a read-only mount inside with common information would be useful. My work VPS has some information at the path /run/cloud-init/instance-data.json. I have some ideas about possible automation, but nothing ready yet. I have been thinking about how that information should be prepared for the Fedora distribution, but there is no decent implementation-independent standard for it now, except for preparing a static /etc/resolv.conf. I think the only commonly supported auto-configuration is via DHCP, and then maybe JSON data served over a special link-local address, which cloud-init can use.
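(For reference, this is roughly how the cloud-init data mentioned above can be read on machines that have it - shown only as an illustration of the kind of interface meant here, not something the vpsFree platform provides today:)

```
cloud-init query --list-keys             # list available instance-data keys
jq . /run/cloud-init/instance-data.json  # or read the raw JSON directly
```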
"Impossible" is a hard overstatement - it's normal to take care of your static configuration everywhere. As I said, there is the vpsAdmin-managed resolv.conf. I would close this until more requests come in; it's not worth putting in the work. We have higher priority stuff...
There's also the argument that no amount of automation will ever enable us to skip informing members about resolver IP changes sufficiently ahead of such a change. We'll always have to keep in mind people with manual configuration (learned the hard way :D). Just briefly looking at the options mentioned above - it looks like I misunderstood them at first. As for cloud-init, etc. - I'm not sure anyone in upstream projects can be bothered with vpsFree-specific automation - we're too small to impose that maintenance burden on any upstream IMO - and as for DHCP, it doesn't fit all the currently supported cases (multiple IP addresses especially). I mean, we could literally obsess about every possible file that vpsAdmin touches, but as of now it would make sense to me to focus on other areas (such as enabling swap for cgroup v2 nodes, providing the ability to start a VPS with its own kernel when it needs too special a config, payments/accounting automation - there's a really long backlog we have...)
Available DNS resolvers with their IP addresses are now visible in vpsAdmin; you can find them either through the VPS details or the DNS menu. In the future, I'd like to have them visible inside the VPS as well. Manually-managed resolv.conf... I've also added...
If there is anything related to DNS, I might be able to help. I was not suggesting enabling DNSSEC on every VPS; while it should work, there are possible regressions in that case. But the edns0 extension is 20 years old and your DNSSEC-validating resolvers definitely know how to use it. It should lower your DNS servers' load a bit. One thing is informing all people; another is that every single notified person then needs to manually adjust something hardcoded in their configuration. The way it is now, there is nothing they can do to have that automated, AFAIK. I doubt anything vpsFree-specific would be hardcoded in distributions, unless the concept is generic enough that it also makes sense on other similar providers. But okay, that would make sense to discuss on distribution channels.
Only 41 VPS use a manually-managed resolv.conf.
For example, my current VPS uses a resolv.conf managed by vpsAdmin, but is running bind9 inside anyway. Without manual modification, though, it is not used for queries generated on the machine itself. Maybe there are more such instances that might take advantage of my proposal. Is it possible to query running instances to see whether something in them listens on port 53 on one of the localhost addresses?
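(A rough sketch of the kind of check meant here, run inside a VPS with the standard ss tool; whether it could be run across all instances from the host side is the open question:)

```
# does anything listen on port 53, TCP or UDP?
ss -lntu 'sport = :53'
```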
A zillion other things might be advantageous to others, yet so far have been proposed by one person. Aither has trouble saying no where it's appropriate :)
I also wanted to add that even if we choose not to implement a specific idea, we're listening, trying to keep such requests in mind, and where it makes sense, we do accommodate them in the future. This particular discussion is probably going to lead to wholesale changes to IP/resolv management, leaving it all up to DHCP to serve the common case in an even more easy-to-use fashion. There have already been such questions and even requests.
If I am running whatever DNS cache in my VPS, there does not seem to be any decent way to make the VPS system prefer the local cache. I think it would work well if the admin panel also offered "Automatic resolvers prepended by localhost address".
Dnsmasq, for example, will ignore its own addresses in /etc/resolv.conf, so it would be enough to configure it. Even if other servers like bind9 or unbound accepting queries from localhost were used, this would make the local VPS system use its own cache when it works, and fall back to the remote servers when it is down or failing. Queries to localhost fail relatively fast, so even if I needed to have the DNS cache down for a bit, my connection would still work.
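(A minimal caching-only dnsmasq sketch of what is meant, assuming stock dnsmasq defaults; nothing here is provided by vpsAdmin:)

```
# /etc/dnsmasq.conf (sketch)
listen-address=127.0.0.1   # answer only local queries
bind-interfaces
cache-size=1000
# upstream servers are taken from /etc/resolv.conf by default;
# as noted above, dnsmasq skips its own 127.0.0.1 entry there,
# so "nameserver 127.0.0.1" followed by the remote resolvers works
```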
Are there any instructions prepared in the VPS image for obtaining the recommended nameservers? I think a prepared, fixed /etc/resolv.conf cannot work with any local cache under normal circumstances.