Or, you can use https://github.com/dimkr/nss-tls - everything that uses gethostbyname(), getaddrinfo(), etc., including Firefox with network.trr.mode set to use regular DNS, will use DoH
I use this and it works great. Especially in combination with https://github.com/jedisct1/bitbar-dnscrypt-proxy-switcher (and its dependency BitBar), which gives you a little icon in the taskbar to monitor and manage the dnscrypt settings. (macOS)
That seems to be Linux-only. There should ideally be GUI applications for every platform that lay people can download, install and run with sane defaults. Cloudflare did that for mobile with the 1.1.1.1 app.
Because applications can't ship OS level support, but want to experiment with it and add it now. That doesn't mean OS-level support can't be a thing, but it's a different level (waiting until the OS vendor gets around to it, or users explicitly installing and setting up tools for it)
This is my beef with the implementation (my understanding may be incomplete, in which case I apologise).
Internal DNS and split-brain DNS aren’t catered for without disabling support? I don’t want my internal names leaking to the internet, nor are they necessarily the same as what external resolvers return. Now yes, the latter is a hack, but it’s one still widely used today.
The idea is laudable. But it feels hostile. I can disable support, but for how long?
I guess it would be possible to run my own local DNS server that connects to these DoH servers. Does any DNS server support DoH? This could also allow the user to override domains using their /etc/hosts file in case DoH on Firefox doesn't support it.
I'm running my own DNS-over-HTTPS instance at home. I have Apache, with HTTP/2 support, running, some self-signed certificates, and a CGI script that accepts the DoH request and makes a DNS call to my local BIND. I found RFC 8484 to be quite easy to follow, and I've set network.trr.mode to 4 (use DNS, but also send DoH queries for testing) and network.trr.allow-rfc1918 to true (so local addresses can be resolved locally).
I do occasional tests with network.trr.mode set to 3 (only use DoH) but I seem to have issues resolving GitHub. I haven't looked that far into it.
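For a sense of scale, the whole thing boils down to something like this sketch (shown here in Go rather than my actual Apache CGI setup; it just relays wire-format DNS messages, i.e. RFC 8484 POST bodies, to a local resolver, and assumes TLS/HTTP/2 are terminated by the web server in front of it):

  package main

  import (
    "io"
    "log"
    "net"
    "net/http"
    "time"
  )

  func main() {
    http.HandleFunc("/dns-query", func(w http.ResponseWriter, r *http.Request) {
      query, err := io.ReadAll(r.Body) // raw wire-format DNS message (RFC 8484 POST body)
      if err != nil || len(query) == 0 {
        http.Error(w, "bad request", http.StatusBadRequest)
        return
      }
      conn, err := net.Dial("udp", "127.0.0.1:53") // the local resolver, e.g. BIND
      if err != nil {
        http.Error(w, "resolver unavailable", http.StatusBadGateway)
        return
      }
      defer conn.Close()
      conn.SetDeadline(time.Now().Add(5 * time.Second))
      if _, err := conn.Write(query); err != nil {
        http.Error(w, "resolver write failed", http.StatusBadGateway)
        return
      }
      buf := make([]byte, 65535)
      n, err := conn.Read(buf)
      if err != nil {
        http.Error(w, "resolver read failed", http.StatusBadGateway)
        return
      }
      w.Header().Set("Content-Type", "application/dns-message")
      w.Write(buf[:n])
    })
    // TLS and HTTP/2 are handled by the fronting web server, not here.
    log.Fatal(http.ListenAndServe("127.0.0.1:8053", nil))
  }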
Thanks, I’ll have to look it up and give it a read. I’ll be honest, I’ve not read the actual RFC in this instance; I pieced together what I know from articles, reported behaviour, etc.
I know it’s lazy and I should’ve done more work. But, burnout.
In that case you are better off running local DNS and using a different subdomain (internal.companyname.com or whatever) for internal DNS entries; the DNS-over-HTTPS query will go out, fail, and then Firefox will fall back to traditional UDP DNS on port 53, hit the local resolver on the LAN, and away you go. It will presumably cause a short delay the first time a host is queried, but after that I assume Firefox is smart enough to cache the result, so unless you have absurdly short TTLs the performance impact should be pretty low.
The positives certainly outweigh the negatives of inconveniencing some IT admins who, as you correctly point out, are implementing a dirty hack anyway.
You completely missed the point of the parent, which is to NOT let internal hostnames out of the network.
> The positives certainly outweigh the negatives of inconveniencing some IT admins who, as you correctly point out, are implementing a dirty hack anyway.
This is a perfect example of the irritating attitude I see from people pushing hostile features like this. Everyone wants their network to operate the way they want, and yet you think you know better than the actual owners of those networks.
You seem to forget that Domain Name Resolution became a problem after the more generic Name Resolution (i.e. Novell/LanMan/NetBIOS). The generic name resolution system used lmhosts, which became hosts to more easily associate IPs and names. [0]
> Originally these names were stored in and provided by a hosts file but today most such names are part of the hierarchical Domain Name System (DNS).
The lack of trust I mentioned was about ISP provided DNS servers. You don't own your WAN network and the majority of people use the DNS provided by their ISP.
On your own network, if you feel like doing a DNS lookup to what amounts to a public address book is unethical then don't allow arbitrary clients on the network.
If you want to do blocking based on a DNS list, configure your firewall to do that.
Knowing better than the owners is a matter of tradeoff.
There are whole ISPs and even countries (including the UK shortly) which mess with DNS requests. Helping the millions of users who are in that situation, and don't even know what DNS is, seems like a net good. As you say, experts can choose to disable it.
As long as they can. The problem with these ideas is that it can get increasingly difficult to work around them. How many hoops do you have to jump through to pcap your own software on your own machines now that certificate pinning is becoming popular? What happens when someone has the bright idea of implementing certificate pinning for DoH inside browsers, "because security"?
(I could live with the choice between having to somehow acquire Chrome Enterprise Edition vs. switching to Firefox, to have a browser I can control. I'm worried now that Firefox might be turning into Chrome, though.)
If you're implying the porn filter, no, the porn filter has been shelved 'indefinitely' because a) it's against EU law, b) it was May's personal project (she pushed heavily for it when she was Home Secretary, and it became a thing under her PM-ship).
Once Firefox starts to ignore DNS resolvers configured at the OS level, other apps are sure to look at it and think it must be a good idea because Firefox is doing it. Soon there will be a multitude of applications needing this disabled, each in their own unique way.
If the Mozilla Foundation see this as an issue, they should instead be developing a separate solution to provide this system-wide. If you must bundle it with Firefox, offer to install it at browser installation or upgrade time. Don't install it by default, and certainly don't enable it without user permission.
Here is an example of how one can use DoH "like ping or nslookup". This example uses HTTP POST and cloudflare-dns. Maybe check out "stubby" for "OS-level" DoH; currently I think it only does DoT, but future plans are for DoH.
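Something along these lines (a rough sketch in Go; it hand-builds a wire-format A query, POSTs it per RFC 8484, and crudely scans the reply for A records rather than doing a proper message parse):

  package main

  import (
    "bytes"
    "fmt"
    "io"
    "log"
    "net/http"
    "strings"
  )

  // buildQuery encodes a minimal wire-format DNS query for an A record.
  func buildQuery(name string) []byte {
    var b bytes.Buffer
    b.Write([]byte{0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0}) // ID=0, RD=1, QDCOUNT=1
    for _, label := range strings.Split(name, ".") {
      b.WriteByte(byte(len(label)))
      b.WriteString(label)
    }
    b.Write([]byte{0, 0, 1, 0, 1}) // root label, QTYPE=A, QCLASS=IN
    return b.Bytes()
  }

  func main() {
    resp, err := http.Post("https://cloudflare-dns.com/dns-query",
      "application/dns-message", bytes.NewReader(buildQuery("example.com")))
    if err != nil {
      log.Fatal(err)
    }
    defer resp.Body.Close()
    msg, err := io.ReadAll(resp.Body)
    if err != nil {
      log.Fatal(err)
    }
    // Heuristic scan for A records: TYPE=1, CLASS=1, RDLENGTH=4, then 4 RDATA bytes.
    for i := 0; i+14 <= len(msg); i++ {
      if msg[i] == 0 && msg[i+1] == 1 && // TYPE A
        msg[i+2] == 0 && msg[i+3] == 1 && // CLASS IN
        msg[i+8] == 0 && msg[i+9] == 4 { // RDLENGTH 4
        fmt.Printf("%d.%d.%d.%d\n", msg[i+10], msg[i+11], msg[i+12], msg[i+13])
      }
    }
  }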
Because the OS (getaddrinfo(), gethostbyname(), etc.) doesn't implement DoH; it implements a /etc/hosts parser and a DNS (over UDP) client.
I wrote a glibc plugin that implements a caching DoH client for glibc, which can replace the DNS client or fall back to it - https://github.com/dimkr/nss-tls.
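Enabling it is a one-line change to the hosts line in /etc/nsswitch.conf, something like the following (module name inferred from the project name; see the README for the exact recommended order):

  hosts: files tls dns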
The criticism (which you seemed to miss) is that everyone is rushing to implement this at the application level, instead of contributing to get it implemented once, at the OS level, with a fix in place for everyone.
> Not to mention that DNS over HTTP is one of the class of features where you might want to override sysadmin policy as a user.
I don’t buy that argument at all.
Why should we special case policies of one internet-protocol over all the others?
Also: implementing/marketing DoH as a way to bypass enterprise control and policies is a surefire way to find it permanently blocked at the firewall level in said enterprises.
I.e. your attempt at subverting control won’t gain you anything but deserved distrust.
Hi Dima - I'm assuming you're aware of dnscrypt-proxy and wrote nss-tls because you wanted a lighter weight implementation of a subset of dnscrypt-proxy's features on a specific platform (linux/glibc, for example this won't work on linux/musl afaik)? I use dnscrypt-proxy happily but was interested in nss-tls, yet couldn't find a rationale/comparison in the readme.
This is doable on linux/unix through an NSS plugin (and has been linked to in this discussion), but the vast majority of Firefox users are on Windows (and a minority on Android) where this cannot be done as easily.
For the users, 99% of whom live in the self-updating browser these days, this is much better than waiting for an OS patch that they may or may not know how to install.
> At least on Linux, isn't DNS all at the application level anyway? There is no system level DNS lookup
Nearly all applications use the standard library, i.e. getaddrinfo(3) or the old gethostbyname(3) or something that wraps them. Which itself uses the services configured in /etc/nsswitch.conf, one of which is DNS which will in turn query the DNS server(s) configured in /etc/resolv.conf.
You can also have other services configured in nsswitch.conf like "mdns" (multicast DNS for names of devices on the LAN) and "files" for /etc/hosts, or any other name resolution system. The general result is that you can change the settings for the whole system and even add completely new name resolution services (like, for example, DoH) and have substantially everything automatically use them.
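For example, a hosts line that combines several such services (module names as shipped by glibc and nss-mdns) might look like:

  hosts: files mdns4_minimal [NOTFOUND=return] dns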
What the parent poster means is that each application does its own DNS lookup separately and independently. The family of functions you linked to, plus the newer getaddrinfo family of functions, is implemented in the C library within each process, not as a system call or as a separate daemon. These functions read the /etc/nsswitch.conf file, load the C library plugins listed there, and call each one in sequence - still within the same process. The most common setting is a variation of "hosts: files dns", which first reads /etc/hosts, then reads /etc/resolv.conf and connects directly to the DNS servers listed there, without using any system level "DNS lookup" daemon (unless you have nscd enabled).
At least on linux, go's native resolver follows a sane subset of glibc conventions like parsing /etc/nsswitch.conf, /etc/resolv.conf, /etc/hosts [1]. As long as your dns configuration is defined there, you won't notice much of a difference between go programs using go's resolver and programs making glibc library calls for dns stuff.
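You can also force the pure-Go path explicitly; a quick sketch:

  package main

  import (
    "context"
    "fmt"
    "net"
  )

  func main() {
    // PreferGo forces Go's own resolver, which reads /etc/nsswitch.conf,
    // /etc/resolv.conf and /etc/hosts itself instead of calling into libc.
    r := &net.Resolver{PreferGo: true}
    addrs, err := r.LookupHost(context.Background(), "example.com")
    if err != nil {
      fmt.Println("lookup failed:", err)
      return
    }
    fmt.Println(addrs)
  }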
I recall reading on a Mozilla blog that 0 is 'default'; right now TRR is disabled but eventually it will be enabled by default. If you want to disable it you should explicitly set it to 5, so that a future update does not enable it.
I enabled DNS over HTTPS in the recent past, and was very happy with it. Until I came to test a staging version of a website and discovered that updating `/etc/hosts` to change the IP of the given name no longer worked.
It took me an embarrassingly long time to realise I was still visiting the production site.
Mozilla is correct to do this and it would be a mistake for them to offer to parse the system hosts file in the name of compatibility as it goes against the very purpose of application-level DNS.
There's really nothing special about /etc/hosts and I think people treat it as something far more mythical and fundamental than it actually is. On just about every *NIX system it's just the file that is parsed by one of the default modules that distros install for the hosts service.
If you're not libc you shouldn't be reading /etc/hosts yourself, only accessing it via gethostbyname() and the like. But the whole point of application-level DNS is to not do this.
If I were to remove the files module from my NSS config I would be very surprised that Firefox was resolving names from it.
You also can't parse /etc/nsswitch.conf to see if the files module is used because there's nothing special about that name -- my module blorp could read /etc/hosts and my files module could be pulling names from Redis.
I strongly disagree with this assessment. hosts (or lmhosts) has been with us for so long it's ubiquitous, and resolvers (regardless of platform) have always checked this file first. If applications such as FFox are going to successfully roll out their own DoH, then reading the hosts file is actually a requirement.
As a coder, I completely understand Mozilla's and your sentiment, but still, I am sure it's not the right direction to take.
This is my main issue with DoH. The DNS resolver configuration is a system level setting and it is on applications to honor those settings, not do their own special snowflakey thing for whatever reason.
then I simply point my DNS at my router IP. All DNS lookups are then done over DNSCrypt and over the VPN regardless of software, platform, or application. If I want to block a site I simply add it to filter.conf, e.g. (https://wiki.alpinelinux.org/wiki/Linux_Router_with_VPN_on_a...):
  local-zone: "example.com" redirect
  local-data: "example.com A 0.0.0.1"
I’m not clear on how this is “better”. I also run my own DNS server at home- sometimes I want to override resolution for a handful of addresses on a single client, and now (soon) I can’t do that anymore without some likely-convoluted workaround.
It would be nice if Mozilla would stop breaking my shit and stop dishonoring my settings. This dread GNOME disease of knowing better than the user needs to stop.
Because DoH is meant to work at the application level, not the OS level. You are trying to do something based on libc's resolver. If you used e.g. Go's resolver you would have similar behavior.
This is "better" because it's following the spirit of domain separation. In fact, if Mozilla's application based resolver started mucking with /etc/hosts it would, in fact, be breaking your shit.
There is more than one platform affected by this; Windows uses its own hosts file too, so it's not just a libc thing. Every sysadmin and homebrew hacker I'm aware of knows of the hosts file, regardless of the platform.
FFox has no need to be 'mucking' with (which I understand to mean 'writing to') hosts: it needs to read it and parse it.
Failure to do so will always generate an endless stream of "hosts file isn't being honoured" style bug reports. I think you would agree with me that the FFox maintainers and devs have better things to do than answer WONTFIX on the same fault again and again.
>Because DoH is meant to work on the application level, not OS level.
And thus, we arrive at the crux of the issue, as stated in my original post. It is not the job of everyday applications to be making decisions about name resolution.
>In fact, if Mozilla's application based resolver started mucking with /etc/hosts
Mozilla's application doesn't even need to know about /etc/hosts. It needs to ask the system name resolution interface to resolve a name for it, and then run with what it is given, rather than Mozilla deciding that their baby is too important to use that interface and then proceed to implement one on their own.
Don't turn DoH in FF on if you don't like it (or turn it off).
You're acting like mozilla is deciding this for you without giving you a say. They're not. They're offering you something you don't even have to accept - because they care about your privacy online and they're not happy about little governments dabbling in the censorship game either.
I'm pretty sure the TOR browser is using its own name resolution too - as a privacy feature. This isn't very different.
> It needs to ask the system name resolution interface to resolve a name for it, and then run with what it is given
Despite me agreeing with your need for hosts to work, your suggestion here won't. As the others have mentioned, the hosts integration is way down deep in the code (libc and kernel32.dll as far as I know) and is basically an automated part of getting the address of a name, which does a proper DNS lookup automatically (if not found in hosts), meaning DoH won't get a chance.
This means that FFox will need to independently look up and parse the hosts file as part of its DoH lookup, basically mimicking what libc is doing on 'nix boxes. It's what the other GPs are mentioning as a no-go/non-starter, whereas I suggest it's not hard to parse a text file.
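To show how small that job actually is, a toy sketch in Go of the hosts-file-first step (ignoring corner cases like alias ordering and IPv4/IPv6 preference):

  package main

  import (
    "bufio"
    "fmt"
    "os"
    "strings"
  )

  // hostsLookup returns the first address in /etc/hosts that maps to name.
  func hostsLookup(name string) (string, bool) {
    f, err := os.Open("/etc/hosts")
    if err != nil {
      return "", false
    }
    defer f.Close()
    sc := bufio.NewScanner(f)
    for sc.Scan() {
      line := sc.Text()
      if i := strings.IndexByte(line, '#'); i >= 0 {
        line = line[:i] // strip trailing comments
      }
      fields := strings.Fields(line)
      if len(fields) < 2 {
        continue // blank line or no hostnames
      }
      for _, h := range fields[1:] { // fields[0] is the address
        if strings.EqualFold(h, name) {
          return fields[0], true
        }
      }
    }
    return "", false
  }

  func main() {
    if addr, ok := hostsLookup("staging.example.com"); ok {
      fmt.Println("from /etc/hosts:", addr)
      return
    }
    fmt.Println("not in /etc/hosts; fall through to the DoH lookup here")
  }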
Using the hosts file to visit a staging environment is a gross practice anyway. Just make the domain configurable in your code and either put the IP in directly or make some subdomain (or better yet, separate domain) point to it.
I'm not convinced that's the same situation or if what you're describing is necessary in the majority of cases. Using the "correct" domain shouldn't be the norm - it should be an exceptional circumstance. As demonstrated in the above comment, it's easy to accidentally hit the production environment at which point you may be convinced that everything is alright and ship a broken change to production.
OTOH, in the real world, almost all websites that I've got to migrate are somewhat hardcoded for one domain (I've got this side project where I do website hosting; good recurring money, little work).
Going for /etc/hosts is the only pragmatic choice here.
Yeah, you're right, I'm all for pragmatism when it's just a small site not in active development. I just didn't want people getting the idea that this is a good practice. It's a hack, and a dangerous one at that, but hacks can be okay depending on the circumstance.
It's a quick way to get around iframe requests when the site in question has something like frame-ancestors set and you're doing local development against it. Ignoring the hosts file makes development more difficult. I like Firefox and I develop for it first, before Chrome and Safari, but I don't agree with your assessment or theirs.
On the one hand, yes, what you say makes a lot of sense. In an ideal world you'd have dev, staging, production, all using different configs. Of course, even then, pushes to production might fail.
But one of the things that I'm testing is that SSL, HSTS/pinning, etc., is working. So I could fuck around with adding "--header" arguments to curl, etc. But really I want to test in a browser, and if the name doesn't match I'll have issues.
It's actually really nice to be able to specify a hostname without having to rely on a zillion other services to be available and working. Heck, maybe there's something wrong with the load-balancing DNS.
That is, until software on any of these devices start running their DNS queries through DoH directly, circumventing any DNS filtering at the perimeter.
This is what browsers, like Firefox, are likely to do as it stands today.
It's nice that browsers are likely offering an opt-out. However it seems likely to me that DoH will soon be used by non-browser apps as well, which are in no way obligated to provide an opt-out. What will you do about them?
The target is not application-level DoH. Firefox implements DoH now because it will take a while until OSes ship it, and the OS vendors want to know it's worth it first.
Once OS vendors include support, your pihole can run a DoH server locally and all apps in your network use that DoH server.
I don't believe OS-level support would be relevant.
Once there is a decent set of public/commercial DoH servers available, devs can simply follow the browsers' example: directly embed a DoH client into the application and supply a hardwired list of URLs and certificates. To my knowledge, you cannot block that with pihole.
At least, if I were an app developer with financial interest in users not blocking my ads and trackers, this would seem like an obvious thing to do.
Well, it has happened like this with TLS certificate validation. In theory, apps can use the system cert stores and you as a user can install custom root CAs if you want to find out what an app is actually sending. In practice, many apps have pinned certificates embedded to prevent that.
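For a sense of how little code pinning takes, here's a sketch in Go of a client that only accepts one embedded key (the endpoint and hash are placeholders, not any real app's values):

  package main

  import (
    "crypto/sha256"
    "crypto/tls"
    "crypto/x509"
    "encoding/base64"
    "errors"
    "fmt"
    "net/http"
  )

  // Hypothetical pin: base64 of the SHA-256 of the server's SubjectPublicKeyInfo.
  const pinnedSPKI = "REPLACE_WITH_KNOWN_SPKI_SHA256_BASE64"

  func main() {
    client := &http.Client{Transport: &http.Transport{
      TLSClientConfig: &tls.Config{
        // Runs in addition to normal chain verification; rejects any cert
        // whose public key doesn't match the embedded pin, so even a
        // user-installed root CA can't MITM this connection.
        VerifyPeerCertificate: func(raw [][]byte, _ [][]*x509.Certificate) error {
          cert, err := x509.ParseCertificate(raw[0]) // leaf certificate
          if err != nil {
            return err
          }
          sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
          if base64.StdEncoding.EncodeToString(sum[:]) != pinnedSPKI {
            return errors.New("server key does not match embedded pin")
          }
          return nil
        },
      },
    }}
    resp, err := client.Get("https://doh.example/dns-query") // placeholder endpoint
    if err != nil {
      fmt.Println(err)
      return
    }
    resp.Body.Close()
  }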
> it's just too complicated to be worth it
That's the point. It's complicated and costly today if you have to design your own protocol, run your own DNS proxy and be the target of outrage if someone finds out.
It won't be if DoH normalizes application-specific DNS servers and provides an ecosystem with infrastructure and tooling for it.
I don't think DoH will normalize it. As mentioned, app-level DoH is merely here because the OS level doesn't support it yet. I don't see how that means DoH will replace OS-level DNS.
Until they take that setting away from you, "because security".
Not to mention that where DNS was centrally managed before, now you have to change settings in each and every application that uses DoH to resolve names on the Internet.
(And then Google decides to do cert pinning on DoH, and suddenly you can only ever use 8.8.8.8 and 1.1.1.1, and if you want to change it, you need to buy Enterprise version of Chrome.)
I do think you're right. And I also think that DoH will legitimize MITM, because if application developers can break a long-standing contract like OS-level DNS, well... "why can't we?"
Before CF was CF, they were a group of blackhats running honeypots. I was part of that community. It turned out building a distributed CDN had the potential to make money. And here we are. They built a great CDN; I would use them over Akamai any day. They have very talented hackers. That said, consolidating all DNS lookups to one central company guarantees they will have non-stop pressure (if not already) to give warrantless access to the data. This is very valuable data.
I never take privacy statements on a website seriously. A company can say one thing and do another. HN crowd knows this better than anyone.
For what, the honeypots? I have no idea if such an article exists. I was one of their members and ran several honeypots and made use of several of my domains. It started on freenode's IRC network, or at least that is how I learned about it and joined in. There are probably IRC logs on archive servers out there somewhere.
My ISP knows my real-world identity: name, address, phone number and credit card information. Cloudflare knows none of these things about me, so there's much less they can do with my browsing information.
Cloudflare has access to many of the sites you browse. They have the SSL keys, they host the services. They (potentially) know a lot more about you than your ISP. If you ever bought something from a web shop hosted on Cloudflare they have access to your billing address, credit card info and what you are buying. They may not collect it but they do have access.
You can operate a full resolver without Cloudflare or any other resolver DNS server. Every time you do a complete recursive resolution on the DNS names yourself locally and then only the DNS servers of the service you are using are going to know that you visited their service (and the adversary eavesdropping on the network between you and the service).
I was thinking for a while about doing more network security at home, from basic stuff like a guest wifi, to white-listing content and ports.
How much time did it take you to set up Pi-hole to your liking?
I set it up a couple of months ago as well and as I recall I went from zero to "up and running" in an hour or two — I usually wake up a little earlier than my wife and I'd finished setting it up by the time we were ready to make breakfast. The ArchWiki page (https://wiki.archlinux.org/index.php/Pi-hole) was a useful reference for the few snags I encountered.
Opera comes with a free VPN (which does collect as much data as it can), and I think Firefox is primed to move in that direction as well. Given how they already have a partnership for DoH, they might extend it to Warp, which might be great if they do it in a privacy-oriented way and do right by their users. Especially as more and more governments censor the Internet and ISPs turn into trackers, the need for Firefox to be the thorn in the side of the powers-that-be is ever more important.
If Firefox included a built-in VPN, they could increase their market share substantially. As long as they keep privacy a priority, they can give Chrome solid competition.
Why? They have access to the same data, and a similar incentive to monetize. It will just be an _additional_ profit stream, not the only one.
(I trust a vpn to not be my isp or someone at the next table at a coffee place with WiFi. I trust paid vpn providers to provide decent performance. I’m not as worried about data mining)
Because I trust a private company 100x more than I trust the British government which forces all ISPs to keep my browsing history for a year and then gives dozens of agencies warrantless access to this data, including agencies like the Food Standards Agency. At least a private company has a financial incentive to keep my data safe.
Is the Food Standards Agency more interested in browsing data from food-based corporations, to ensure certain standards, than in your plebeian google searches? It's possible, and sounds like a valid reason for sharing this data. Except it's broken, because a loophole is to simply use an offshore VPN (guess that's harder for a corporation to get away with).
Much easier if anything. I work for a French software developer in the UK and all of our traffic goes through a Paris-based gateway as per company policy.
This whole DNS over HTTPS disaster makes it pretty clear to me as an end user that it's not, though. I wouldn't trust a free VPN further than I can throw it, no matter who offered it.
> This whole DNS over HTTPS disaster makes it pretty clear to me as an end user that it's not, though.
True. We've got DoT, which is a very viable alternative, supported at the OS level by Android. And there's DNSCrypt, with clients on all major platforms.
> I wouldn't trust a free VPN further than I can throw it, no matter who offered.
Valid point.
Though, the present situation is that one pays the ISPs and yet they traffic-shape, surveil, and censor their users. I fear, after a point, VPNs might be the only way to access a censorship-free Internet across the globe.
While I personally trust my ISP more than some random US-based entity, that's of course true. The thing with VPN alternatives for me is that the VPN landscape today is already weirdly organized, opaque, and has no real incentive for a free tier.
Short of hosting their own, which consumers will not do, I don't see a scenario where VPN providers end up in a position that's more trustworthy than your average (Western world) ISP is today, tbh.
I don't think any ISP can really be trusted, in most countries, since they are subject to arbitrary government demands. In Australia, for example, that includes data retention, censorship, and assistance with any kind of spying that may be demanded.
Sure, valid as well. If VPNs get relevant enough to matter to nation state actors none of that really exclusively applies to ISPs anymore though, I feel like we're in a cozy transitioning stage there. They'd either get blocked (see feeble attempts in China) or get the same treatment. Either through similar legislation or technology level interference as we've seen before with the Tor network. At that point it's a game of choosing the nation state control you're most comfortable with in terms of oversight, governmental interference, consumer protection, and business incentive. That'll likely still be my European based ISP over a US VPN to be quite honest, given (theoretically) better consumer protection legislation and generally less equipped/capable surveillance apparatus.
Don't get me wrong, VPNs e.g. for access control or untrusted networks are great use cases in my book. I just don't like the snake-oil vibe surrounding VPNs that make it out to be a great way to secure everyday networking for consumers.
How is it a disaster? From what I've seen, a number of gripes are due to people not actually understanding how DoH works or expecting it to do something it isn't designed to do.
Disaster was maybe too strong a word there; I personally don't like some aspects, to put it more mildly. To me, the Firefox move centralises critical infrastructure behind players like Cloudflare, including their non-contractual (at least not with me as the end user) promises and potential US influence. Guess we'll have to hope for upcoming transparency reports, and hope we don't find this part of the infrastructure as a sidenote in some NSA leak a decade down the road. In my view, once the stack is widely adopted, users will by default either use whatever Firefox gives them, or talk to a DoH instance their ISP pushes (I assume there's still a mechanism for that?), not really achieving that much in terms of preventing privacy breaches if somebody on the other side decides to act maliciously.
Please correct me if I'm wrong here, but it looks like a weird approach to fixing a protocol that lives at a lower OSI level. Instead of fixing DNS & DNSSEC privacy, a few key players bypass and replace it with their own solution, with Firefox pushing it onto users. My major gripes are the added complexity and an aversion to wrapping our whole networking stack in HTTPS instead of addressing the underlying problems. I realize these are more philosophical than technical grievances though, sorry for the wording.
DNSSEC explicitly doesn't provide privacy --- in fact, it does the opposite --- so if you were waiting for DNSSEC to hide your queries from your ISP's adtech analytics, you'd have been waiting a very long time indeed. Firefox made the right call. DoH, by the way, is also an open standard.
Honestly not really familiar with that but from overview graphics it looks like if you wanted to adapt that to live connections you'd end up with something similar to Tor, right? That is nice of course but comes with its own drawbacks (a few that spring to mind are performance, potentially malicious exit nodes, inconvenience).
I'm suggesting that the way Tarsnap organises its billing might be a good way for customers who buy VPNs to be sure that their browsing info is not actually the product the company offering the VPN is selling.
I've looked at their website, thanks for the pointer, I think I'll have to give them a go for my personal backups now :)
As for the application to VPN, I can't see much of a difference to other trustworthy businesses to be honest, I think I missed something there - are you referring to the prepaid aspect?
One would wish, but alas, a common mode of business is that people who consider themselves users sign up for some service presented to them as free, when all the while the company's actual business model is monetising those users' info and habits in every conceivable way, and the 'service' is just the bait to get people on the hook.
methinks you're talking out of your butthole right now. I run a VPN service. I can get away with about 100 users per dedicated host. That gives me an average cost of about $0.80 per user. If I went with tier-1 datacenters, my cost would triple. Public cloud DCs bring my costs up ~25%.
These numbers shrink with scale (e.g. buying dedicated bandwidth, bringing your own bandwidth, colo and increased server density, etc.). Also, Mozilla has $69m in cash as of 2016, without counting assets and investments. They are okay to host a few thousand more servers: https://assets.mozilla.net/annualreport/2016/2016_Mozilla_Au...
But privacy is the main concern, and I would only use it if there were assurances that the VPN would not log nor sell user data.
Mozilla does get a lot of money to keep Google as the default search engine on Firefox so using a built-in VPN to draw more users may get Google to pay them more.
FF is planning to add a paid VPN as a partnership. It will be a non-free, paid feature, possibly in a separate paid version of the browser (not sure if they really plan to make a whole separate FF).
Keep in mind when enabling features ahead of widespread release in software, that obvious and/or non-obvious things are more likely to break when you do so than if you wait until it’s enabled for you.
This goes double for users on the Release channel of software rather than the Beta/Nightly/Canary/Whatever channel, since it takes weeks or months to fix problems.
I’m not saying “don’t”, but I am saying “be prepared to encounter self-inflicted issues”. The tendency is to blame the issues and the frustration of tracking down their cause on the software developer. Keep notes about what you enable, so you can try disabling it and see if that fixes it. Report bugs you find, and don’t panic if they’re known and/or unsolved.
Yes. But. That’s very specific and assumes they’ll only ever query one or the other, which might not necessarily be true for *.local and localhost (I haven’t researched or tested).
I don't know how I've improved the situation going from Chrome to Firefox and then to Firefox Nightly:
I wish Mozilla put efforts toward preserving settings and not reinstalling search providers one has purposefully removed. I understand that by using Nightly I cannot expect what a general user expects, but this problem exists in all browsers. I consider it user-hostile behavior that more emphasis isn't taken to preserve settings. Oh a new update? Clearly you want us to sync everything instead of just the few things you selected. Let's revert it all to defaults.
I also understand how settings are stored (the backend format) might change between minor or major versions. Sometimes factory defaults need to be reinstated - but it should be very fucking clear (with a notification) that the user should go review settings that have changed/reverted. And this cannot be a banner that shows every time an update applies. Give the user some transparency.
On Chrome, when I ask it to preserve my previous session, it preserves just that session's browsing history. This history is forgotten if I make a point to close all tabs and end the session. On Firefox I must save all history to be able to 'restore the current session'. Wish we had more control over this.
You can't stop Firefox from checking for updates (I wish this could be left to package managers on some systems). I understand, but I don't want to be nagged. You can make Firefox ask you, but it will check nonetheless.
Why the fuck would I want "Recommend features as I browse?" or "Recommend extensions as I browse?" I hate being advertised to.
"Warn you about unwanted and uncommon software" - who is making this determination? Who is Firefox talking to about what I download?
I wish I could sync settings, open tabs, addresses, history, etc., to a simple archive on close or periodically. No online service to sync against with another account I have to worry about.
Sucks that in hotels Firefox determines if there's a captive portal in effect by querying a Mozilla-hosted site (detectportal.firefox.com).
"Shill, the connection manager for Chromium OS, attempts to detect services that are within a captive portal whenever a service transitions to the ready state. This determination of being in a captive portal or being online is done by attempting to retrieve the webpage http://clients3.google.com/generate_204. This well known URL is known to return an empty page with an HTTP status 204. If for any reason the web page is not returned, or an HTTP response other than 204 is received, then shill marks the service as being in the portal state."
> On Firefox I must save all history be to 'restore the current session'. Wish we had more control over this.
What do you mean by that? I have mine set to never remember history, and I often force-kill FF as a way to save a session. When I start it up again it prompts to restore the previous session.
> You can't disable Firefox from checking for updates (I wish this could be left to package managers on some systems).
Firefox's updater is disabled on Arch Linux and iirc also Ubuntu/Fedora/etc. The 'About' window says that it has been disabled by the system administrator. Presumably this is either an about:config setting (good) or a compile-time flag (sad).
> Why the fuck would I want "Recommmend features as I browse?" or "Recommend extensions as I browse?" I hate being advertised to.
These aren't ads; you can't pay to have your extension shown to users. Recommending features is done by the browser on your computer; iirc the addons recommendations are also computed locally based on anonymized data (don't quote me on that though).
I at least find the feature suggestions to have been helpful. Many normal people won't know about reader mode, and FF on Android also tells you that bookmarking a page will save it for offline use, which I would not have otherwise known.
> "Warn you about unwanted and uncommon software" - who is making this determination? Who is Firefox talking to about what I download?
Google is used; if a file seems suspicious only then are details sent to google to give the final word. I wish they were more upfront about that.
But FF is targeted at normal people, and this does help them not get malware. Advanced users like us can easily disable it. That doesn't excuse the lack of transparency though.
> Sucks that in hotels Firefox determines if there's a captive portal in effect by querying a Mozilla-hosted site (detectportal.firefox.com).
How else do you propose they do this? And better to have Mozilla host it than someone else. On Android you are forced to use Google's NTP servers, and I have no doubt their captive portal detection is also google-hosted. I'd bet the detection URL is exposed in an about:config flag on FF.
> I wish I could sync settings, open tabs, addresses, history, etc - to an simple archive on close or periodically. No online service to sync against with another account I have to worry about.
For what its worth, Firefox Sync is e2e encrypted with your password. If you log out of sync on all your devices and forget your password, the synced data is gone forever. To log back in you will have to reset the password, and since none of your devices are logged in (they act as backups for the data), it is gone forever.
Addon recommendations do not anonymize their input data, which I think is just what browser features and maybe sites you've used (I'm not sure). But it doesn't matter, since it is never sent anywhere. The data is local, the decision is local. Your browser already knows your browsing history.
I also use Firefox Nightly, but was rudely surprised to notice that settings were getting reset after nightly upgrades. This is a really poor decision by Firefox. If they expect early adopters to use it and give feedback, then they should be friendly to them.
You're probably seeing a recent change. Different channels (e.g. release vs nightly) recently started using separate profiles, because it was too easy to get data loss by switching between browser versions with the same profile. The storage files are backwards-compatible, but not forward-compatible.
I don't know if firefox clones your profile or creates a blank one when you switch channels, though.
I wonder how long it'll be before Firefox comes with it enabled by default. It seems that they're going to do it regardless of the loss of control implications to end users.
> It seems that they're going to do it regardless of the loss of control implications to end users.
That seems to be a common trend these days --- ignore what they want, claim that it's "for their safety/security/privacy/whatever", and gradually remove options for configurability.
In particular, this sort of "overstepping the boundaries" is unfortunately getting more popular, and IMHO it's rather disturbing that browsers have gone in this direction; software should follow the system defaults/configuration, whatever they are. Yes, the platform could be compromised or otherwise not to your liking. That's not your problem, Mozilla!
(I run everything on my network through a filtering proxy. These attempts to subvert it are definitely not welcome.)
I disagree. The claim is about a loss of control for "end users" – but it's really a loss of control for network administrators. If I run Firefox on someone else's network and DoH lets me circumvent a filtering proxy, that's handing control away from the network administrator towards me, the end user. Same goes if the network administrator is "just" logging DNS queries, at the cost of my privacy.
It so happens that many of the people who post on this site actively administrate at least a home network for which they are also end users. But even in that rarified group, how many people don't connect their computers to untrusted Wi-Fi networks, regularly or at least occasionally? I'd guess it's a pretty low fraction. And among Firefox users in general, the fraction that has meaningful input over the configuration of any network they connect to is surely negligible. (Owning a router doesn't count if you don't know how to configure it.)
If you're really an end user, then you'll always have the ability to change Firefox's settings to turn off DoH, or point it at whatever server you want. It's only if you're trying to monitor someone else's connections that you're out of luck.
> If you're really an end user, then you'll always have the ability to change Firefox's settings to turn off DoH, or point it at whatever server you want.
But will you, even in a year or five from now? And in the broader discussion of DoH-in-browsers, will the same be true about Chrome?
(I believe the answers are "maybe" and "not likely", respectively.)
That's neat/interesting; any chance you could point me in the right direction to hear more? I'm curious how they'll make it respect that setting from the "local" network and not from e.g. an ISP.
I saw somewhere that this can be enabled in Chrome from chrome://flags/, but I can't seem to find it in mine, v75 on Mac. Was it removed from recent versions?
A Chromium fork called Bromite exposes this flag[0], but I don't think it's ever been available in desktop versions of Chrome (probably due to the likelihood of schools, enterprises, etc. getting mad if a user uses it to circumvent DNS blocks).
As per a comment by Eric [unknown surname] at Microsoft here[1], you can enable it on desktop chrome by adding the following to your chrome launch options:
This can easily be done persistently via Windows, but I'm not sure what it would take on Mac. The official Chromium guide for starting with launch options[2] only recommends opening a terminal every time, which would mean it can't easily be run on each launch with the shortcut/dock icon.
While true, the OS will contain less and less functionality. Which is funny, because on one hand we want to have fewer dependencies, and on the other hand we have microservices for everything.
Developers going the sysop direction of services
Sysops going the developer way of statically linking
After that ISP award thing came out, it finally convinced me to look into DoH and give it a go. So I ended up setting up a Pi-hole this weekend running a local DoH-to-DNS proxy, and then changed the DNS settings on my router to point at the Pi-hole. This also means my hosts file continues to work if I need it, and all* the programs running on my PC are transparently going through DoH without being any the wiser.
The setup was a little bit fiddly to get going, but I'm now super happy with it. As a sidenote, it was interesting to see how effective uBlock Origin already was, because I thought the Pi-hole's blacklists weren't working at first!
*I imagine I'm not catching every single one of the DNS lookups on my network, but I bet it's now a large percentage of them.
But Foundation for Applied Privacy sounds nice and I want to force DNS over HTTPS. The site specifically tells me to use the Firefox settings page (https://appliedprivacy.net/services/dns/), but that sets network.trr.custom_uri, not network.trr.uri, so what's the difference? And it also says I have to set network.trr.bootstrapAddress, but doesn't tell me to what, in case I missed something.
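For reference, my current understanding is that the values that end up mattering look something like this (corrections welcome):

  network.trr.mode 2 or 3
  network.trr.uri <the DoH endpoint URL given on the appliedprivacy.net page>
  network.trr.bootstrapAddress <IPv4 address of that endpoint's hostname>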
I wonder if it is ever possible to move to a protocol like MinimaLT (https://cr.yp.to/tcpip/minimalt-20131031.pdf) and solve the privacy aspects in a fundamental low-level protocol usable for all types of packet transfers.
Using stable Firefox 67.0.4 64-bit, and this is right there on the options page in General/Network Settings. Truth be told, it will set network.trr.mode to 2, which falls back to normal DNS if anything is wrong, but nonetheless it's there.
I understand how DoH can help prevent DNS spoofing, but I really don't understand the privacy claims. Are not outbound connections, HTTP or HTTPS, known by the ISP? Or is the assumption that the world is all behind a proxy like Cloudflare?
Could something like Pi-hole intercept all DNS and send it over a VPN or something, while providing a local DNS cache? Seems unnecessary to wait for all software to support it natively.
Edit: Apparently they already thought of this and it's a feature!
I think it's kind of strange that they are planning to enable DoH by default: your ISP can see all connections/IP addresses you connect to regardless of whether you use your ISP's DNS servers or not. So, in the end, by using DoH in Firefox (= Cloudflare's DNS by default) you're just sharing your internet history with yet another third party.
This may be beneficial for some people whose ISPs mess with DNS resolving, but for many other people it's actually a regression in privacy (especially if you live in a country that has higher privacy standards/laws than the US).
An IP address is not always as telling as the DNS name of what you're connecting to. E.g. I may be connecting to a CDN like CloudFlare for content over HTTPS and my ISP will have no idea what I'm doing. But if I used the DNS name that refers to that content it would likely be more obvious in many cases.
ISPs are a crapshoot the world over apart from very few countries. Almost all block or mess with torrent sites.
Off the back of a trip overseas: "free wifi" is also a mess, with DNS hijacking done for no other reason than to feed you a cookie or limit access, for essentially no good reason. Breaking that shitshow when Chrome eventually follows suit will be a nice change for users.
With HTTP/2, if you open a connection to one HTTPS server then you can pipe additional content down over the same connection without having to reestablish. So DoH can be faster for overall page load times than DoT.
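As a rough illustration in Go (whose default transport negotiates HTTP/2 for HTTPS and pools the connection), reusing one client keeps successive DoH requests on a single connection:

  package main

  import (
    "fmt"
    "net/http"
  )

  func main() {
    // One shared client means one pooled connection. The "dns" parameter
    // below is a placeholder, not a valid base64url DNS message, so expect
    // a 400 response; the point is that resp.Proto reports HTTP/2.0 for
    // both requests, which share the same multiplexed connection.
    client := &http.Client{}
    for _, q := range []string{"query-one", "query-two"} {
      resp, err := client.Get("https://cloudflare-dns.com/dns-query?dns=" + q)
      if err != nil {
        fmt.Println(err)
        continue
      }
      resp.Body.Close()
      fmt.Println(q, "->", resp.Proto)
    }
  }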
No, they can still spot and block it. The only way it could be hard for an ISP to block is if DoH used domain-fronting-style collateral freedom, but that didn't work out in practice: governments found even the largest corporations to be acceptable collateral and blocked their IP ranges to pressure them into disallowing it, and of course they all pretty quickly disabled domain fronting.
At this point there is no way big tech corporations will get involved in censorship circumvention, but they can be, and are, involved in censorship, both to satisfy governments and for their own benefit.