I enabled DNS over HTTPS recently and was very happy with it, until I came to test a staging version of a website and discovered that updating `/etc/hosts` to change the IP of the given name no longer worked.
It took me an embarrassingly long time to realise I was still visiting the production site.
Mozilla is correct to do this, and it would be a mistake for them to offer to parse the system hosts file in the name of compatibility, as it goes against the very purpose of application-level DNS.
There's really nothing special about /etc/hosts and I think people treat it as something far more mythical and fundamental than it actually is. On just about every *NIX system it's just the file that is parsed by one of the default modules that distros install for the hosts service.
If you're not libc you shouldn't be reading /etc/hosts yourself; you should only be accessing it via gethostbyname(), getaddrinfo(), and the like. But the whole point of application-level DNS is to not do this.
If I were to remove the files module from my NSS config, I would be very surprised to find Firefox still resolving names from it.
You also can't parse /etc/nsswitch.conf to see whether the files module is used, because there's nothing special about that name -- a module called blorp could read /etc/hosts, and my files module could be pulling names from Redis.
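For illustration, the relevant nsswitch.conf line on many distros looks something like the first line below (blorp is obviously made up):

    # /etc/nsswitch.conf
    hosts: files dns
    # ...but the names carry no guarantees; this would be just as valid:
    # hosts: blorp dns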
I strongly disagree with this assessment. hosts (or lmhosts) has been with us for so long it's ubiquitous, and the resolvers (regardless of platform) have always checked this file first. If applications such as FFox are going to successfully roll out their own DoH, then reading the hosts file is actually a requirement.
As a coder, I completely understand Mozilla's and your sentiment, but still, I am sure it's not the right direction to take.
This is my main issue with DoH. The DNS resolver configuration is a system level setting and it is on applications to honor those settings, not do their own special snowflakey thing for whatever reason.
Then I simply point my DNS at my router's IP. All DNS lookups are then done over DNSCrypt and over the VPN regardless of software, platform, or application. If I want to block a site I simply add it to filter.conf; see https://wiki.alpinelinux.org/wiki/Linux_Router_with_VPN_on_a...
local-zone: "example.com" redirect
local-data: "example.com A 0.0.0.1"
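With that in place, any client pointed at the router should get the sinkhole answer back; something like this (the router IP here is just an example):

    dig +short example.com @192.168.1.1
    0.0.0.1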
I'm not clear on how this is "better". I also run my own DNS server at home; sometimes I want to override resolution for a handful of addresses on a single client, and now (soon) I can't do that anymore without some likely-convoluted workaround.
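To be concrete, the kind of override I mean is nothing exotic: a single line in /etc/hosts on the one machine that should see the other environment (the IP here is a placeholder):

    203.0.113.10  www.example.com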
It would be nice if Mozilla would stop breaking my shit and stop dishonoring my settings. This dread GNOME disease of knowing better than the user needs to stop.
Because DoH is meant to work at the application level, not the OS level. You are trying to do something based on libc's resolver. If you used e.g. Go's resolver you would see similar behavior.
This is "better" because it's following the spirit of domain separation. In fact, if Mozilla's application based resolver started mucking with /etc/hosts it would, in fact, be breaking your shit.
More than one platform is affected by this; Windows uses its own hosts file too (C:\Windows\System32\drivers\etc\hosts), so it's not just a libc thing. Every sysadmin and homebrew hacker I'm aware of knows about the hosts file, regardless of platform.
FFox has no need to be 'mucking' with hosts (which I understand to mean writing to it): it needs to read it and parse it.
Failure to do so will always generate an endless stream of "hosts file isn't being honoured" style bug reports. I think you would agree that the FFox maintainers and devs have better things to do than to answer WONTFIX on the same fault again and again.
> Because DoH is meant to work at the application level, not the OS level.
And thus, we arrive at the crux of the issue, as stated in my original post. It is not the job of everyday applications to be making decisions about name resolution.
> In fact, if Mozilla's application-based resolver started mucking with /etc/hosts
Mozilla's application doesn't even need to know about /etc/hosts. It needs to ask the system name resolution interface to resolve a name for it, and then run with what it is given, rather than Mozilla deciding that their baby is too important to use that interface and proceeding to implement one of their own.
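For the record, "ask the system name resolution interface" is not exotic. A minimal C sketch of what that looks like (the hostname is illustrative only): the application calls getaddrinfo() and runs with whatever comes back, while NSS and the hosts file are handled inside libc where they belong.

    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void) {
        struct addrinfo hints, *res, *p;
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6, whatever the system says */
        hints.ai_socktype = SOCK_STREAM;

        /* The lookup policy (files, dns, ...) lives in libc/NSS, not here;
         * /etc/hosts is honoured without the application knowing it exists. */
        int err = getaddrinfo("www.example.com", "https", &hints, &res);
        if (err) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return 1;
        }
        for (p = res; p; p = p->ai_next) {
            char buf[INET6_ADDRSTRLEN];
            const void *addr = (p->ai_family == AF_INET)
                ? (const void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
                : (const void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
            printf("%s\n", inet_ntop(p->ai_family, addr, buf, sizeof buf));
        }
        freeaddrinfo(res);
        return 0;
    }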
Don't turn on DoH in FF if you don't like it (or turn it off).
You're acting like Mozilla is deciding this for you without giving you a say. They're not. They're offering you something you don't even have to accept, because they care about your privacy online and they're not happy about little governments dabbling in the censorship game either.
I'm pretty sure the Tor Browser is using its own name resolution too, as a privacy feature. This isn't very different.
> It needs to ask the system name resolution interface to resolve a name for it, and then run with what it is given
Despite me agreeing with your need for hosts to work, your suggestion here won't work. As the others have mentioned, the hosts integration is way down deep in the code (libc and, as far as I know, kernel32.dll) and is basically an automated part of getting the address of a name; it does a proper DNS lookup automatically (if the name isn't found in hosts), meaning DoH won't get a chance.
This means that FFox will need to independently look up and parse the hosts file as part of its DoH lookup, basically mimicking what libc is doing on 'nix boxes. It's what the other GPs are mentioning as a no-go/non-starter, whereas I suggest it's not hard to parse a text file; see the sketch below.
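To put a number on "not hard": a rough, hypothetical sketch of that lookup in C is only a few dozen lines (hosts_lookup is a made-up name; the Windows file path and various edge cases are deliberately ignored):

    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    /* Hypothetical sketch only: return 1 and fill `ip` if `name` appears
     * in /etc/hosts. A real implementation needs more care, but the core
     * really is just tokenising lines. */
    static int hosts_lookup(const char *name, char *ip, size_t iplen) {
        FILE *f = fopen("/etc/hosts", "r");
        if (!f) return 0;

        char line[512];
        int found = 0;
        while (!found && fgets(line, sizeof line, f)) {
            char *hash = strchr(line, '#');                 /* strip comments */
            if (hash) *hash = '\0';

            char *save = NULL;
            char *addr = strtok_r(line, " \t\r\n", &save);  /* first field: address */
            if (!addr) continue;                            /* blank/comment-only line */

            char *tok;                                      /* remaining fields: names */
            while ((tok = strtok_r(NULL, " \t\r\n", &save)) != NULL) {
                if (strcasecmp(tok, name) == 0) {           /* names are case-insensitive */
                    snprintf(ip, iplen, "%s", addr);
                    found = 1;
                    break;
                }
            }
        }
        fclose(f);
        return found;
    }

    int main(void) {
        char ip[64];
        if (hosts_lookup("localhost", ip, sizeof ip))
            printf("localhost -> %s\n", ip);
        return 0;
    }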
Using the hosts file to visit a staging environment is a gross practice anyway. Just make the domain configurable in your code and either put the IP in directly or make some subdomain (or better yet, a separate domain) redirect to it.
I'm not convinced that's the same situation or if what you're describing is necessary in the majority of cases. Using the "correct" domain shouldn't be the norm - it should be an exceptional circumstance. As demonstrated in the above comment, it's easy to accidentally hit the production environment at which point you may be convinced that everything is alright and ship a broken change to production.
OTOH, in the real world, almost all websites that I get to migrate are somewhat hardcoded for one domain (I've got this side project where I do website hosting: good recurring money, little work).
Going for /etc/hosts is the only pragmatic choice here.
Yeah, you're right, I'm all for pragmatism when it's just a small site not in active development. I just didn't want people getting the idea that this is a good practice. It's a hack, and a dangerous one at that, but hacks can be okay depending on the circumstance.
It's a quick way to get around iframe requests when the site in question has something like frame-ancestors set and you're doing local development against it. Ignoring the hosts file makes development more difficult. I like Firefox and I develop for it first, before Chrome and Safari, but I don't agree with your assessment or theirs.
On the one hand, yes, what you say makes a lot of sense. In an ideal world you'd have dev, staging, and production all using different configs. Of course, then different pushes to production might fail.
But one of the things that I'm testing is that SSL, HSTS, pinning, etc. are working. So I could fuck around with adding "--header" arguments to curl, etc. But really I want to test in a browser, and if the name doesn't match I'll have issues.
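(For what it's worth, curl's --resolve flag can pin a name to an address without touching hosts at all, which keeps SNI and certificate checks honest; the name and IP here are placeholders:

    curl --resolve www.example.com:443:203.0.113.10 https://www.example.com/

But that still doesn't cover "I want to test in a browser".)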
It's actually really nice to be able to specify a hostname without having to rely on a zillion other services to be available and working. Heck, maybe there's something wrong with the load-balancing DNS.