Cloudflare recommends you configure 1.1.1.1 and 1.0.0.1 as DNS servers.
Unfortunately, the configuration mistake that caused this outage disabled Cloudflare's BGP advertisements of both 1.1.1.0/24 and 1.0.0.0/24 prefixes to its peers.
Just wondering, how do y'all manage wifi portals when you've manually set DNS servers? I used to use Cloudflare's and Google's, but it was so annoying to disable and re-enable them every time I used a public wifi network.
Many wifi networks redirect non-encrypted http traffic to their captive portal. For the redirect to work, your DNS needs to be the default one provided by the router so that http://neverssl.com resolves to the wifi's "Please accept our ToS to get online" page.
If you aren't using their DNS, then your network requests just get dropped (as you're not approved yet). You need their DNS to learn how to reach their captive portal host so they can whitelist your MAC address.
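A rough sketch of how a client can notice this situation, using the neverssl.com probe mentioned above (portal behavior varies a lot, so treat this as illustrative only, not as how any particular OS does its detection):

```python
from urllib.parse import urlparse

import requests

PROBE = "http://neverssl.com/"  # plain HTTP on purpose, so a captive portal can intercept it


def behind_captive_portal(timeout: float = 5.0) -> bool:
    """Return True if the HTTP probe appears to be intercepted by a captive portal."""
    try:
        resp = requests.get(PROBE, timeout=timeout, allow_redirects=True)
    except requests.RequestException:
        # No connectivity at all (or the portal drops traffic instead of redirecting).
        return True
    # If we were redirected away from neverssl.com, something is sitting in the middle.
    final_host = urlparse(resp.url).hostname or ""
    return not final_host.endswith("neverssl.com")


if __name__ == "__main__":
    print("captive portal suspected:", behind_captive_portal())
```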
My clients use DHCP for everything and are always connected to my home VPN. If I'm away from home and need to connect to a captive network, I'll turn off the VPN, connect, then re-enable the VPN. I run unbound at home for DNS.
While I run a home VPN, I think using it exclusively runs into issues:
- frequently, captive portals only permit access for 1-2 hours. Your internet gets cut off, then you have to realize it's not a temporary issue but a portal issue, then you close the VPN, try to find the captive portal, and re-auth.
- latency to my home VPN is too high when I travel in Asia
On Android, in Settings, Network & internet, Private DNS, you can only provide one in "Private DNS provider hostname" (AFAIK).
Btw, I really don't understand why it does not accept an IP (1.1.1.1), so you have to give a hostname (one.one.one.one). It would be more sensible to configure a DNS server from an IP rather than from a hostname that itself has to be resolved by a DNS server :/
> So if you want to use DNS over HTTPS on Android, it is not possible to provide a fallback.
Not true. If the (DoH) host has multiple A/AAAA records (multiple IPs), any decent DoH client would retry its requests over multiple or all of those IPs.
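As an illustration (not how any particular client is implemented): look up all of the DoH hostname's addresses once, then retry the same endpoint over each IP, keeping the TLS server name as cloudflare-dns.com so certificate validation still passes. The HTTP handling below is deliberately minimal and assumes a Content-Length response:

```python
import base64
import socket
import ssl

import dns.message  # dnspython, used only to build/parse the DNS wire format

DOH_HOST = "cloudflare-dns.com"


def doh_get_via_ip(ip: str, qname: str, timeout: float = 3.0) -> bytes:
    """Send one RFC 8484 GET request to a specific IP, with SNI/Host set to the DoH hostname."""
    wire = dns.message.make_query(qname, "A").to_wire()
    dns_param = base64.urlsafe_b64encode(wire).rstrip(b"=").decode()
    ctx = ssl.create_default_context()
    with socket.create_connection((ip, 443), timeout=timeout) as raw:
        # Certificate is still validated against cloudflare-dns.com, not the raw IP.
        with ctx.wrap_socket(raw, server_hostname=DOH_HOST) as tls:
            tls.sendall(
                (
                    f"GET /dns-query?dns={dns_param} HTTP/1.1\r\n"
                    f"Host: {DOH_HOST}\r\n"
                    "Accept: application/dns-message\r\n"
                    "Connection: close\r\n\r\n"
                ).encode()
            )
            reply = b""
            while chunk := tls.recv(4096):
                reply += chunk
    head, _, body = reply.partition(b"\r\n\r\n")
    if b" 200 " not in head.split(b"\r\n", 1)[0]:
        raise OSError("non-200 DoH response")
    # Simplification: assume a Content-Length response; a real client would use a full HTTP library.
    lengths = [int(h.split(b":")[1]) for h in head.split(b"\r\n") if h.lower().startswith(b"content-length")]
    return body[: lengths[0]] if lengths else body


def resolve_with_ip_fallback(qname: str):
    # All of the DoH hostname's addresses, tried in turn until one of them answers.
    ips = {ai[4][0] for ai in socket.getaddrinfo(DOH_HOST, 443, proto=socket.IPPROTO_TCP)}
    last = None
    for ip in ips:
        try:
            return dns.message.from_wire(doh_get_via_ip(ip, qname))
        except Exception as exc:
            last = exc  # this address failed (network error or malformed reply); try the next one
    raise RuntimeError(f"all addresses for {DOH_HOST} failed: {last}")


if __name__ == "__main__":
    print(resolve_with_ip_fallback("example.com"))
```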
Does Cloudflare offer any hostname that also resolves to a different organization’s resolver (which must also have a TLS certificate for the Cloudflare hostname or DoH clients won’t be able to connect)?
DoH hosts can resolve to multiple IPs (and even different IPs for different clients)?
Also see TFA:
> It's worth noting that DoH (DNS-over-HTTPS) traffic remained relatively stable as most DoH users use the domain cloudflare-dns.com, configured manually or through their browser, to access the public DNS resolver, rather than by IP address. DoH remained available and traffic was mostly unaffected as cloudflare-dns.com uses a different set of IP addresses.
> A cross-organizational fallback is not possible with DoH in many clients, but it is with plain old DNS.
That's client implementation lacking, not some issue inherent to DoH?
From RFC 8484:
> The DoH client is configured with a URI Template, which describes how to construct the URL to use for resolution. Configuration, discovery, and updating of the URI Template is done out of band from this protocol.
> Note that configuration might be manual (such as a user typing URI Templates in a user interface for "options") or automatic (such as URI Templates being supplied in responses from DHCP or similar protocols). DoH servers MAY support more than one URI Template. This allows the different endpoints to have different properties, such as different authentication requirements or service-level guarantees.
Yes, but this restriction of only a single DoH URL seems to be the norm for many popular implementations. The protocol theoretically allowing better behavior doesn't really help people using these.
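Nothing in the protocol prevents a client from holding several templates from different operators and walking down the list. A minimal sketch, assuming the providers' published DoH endpoints below and using dnspython/requests purely for brevity:

```python
import dns.exception
import dns.message  # dnspython: build/parse the DNS wire format
import requests

# Multiple independent providers, tried in order (RFC 8484 POST, application/dns-message).
DOH_URLS = [
    "https://cloudflare-dns.com/dns-query",
    "https://dns.google/dns-query",
    "https://dns.quad9.net/dns-query",
]


def resolve(qname: str, rdtype: str = "A") -> dns.message.Message:
    wire = dns.message.make_query(qname, rdtype).to_wire()
    headers = {"Content-Type": "application/dns-message",
               "Accept": "application/dns-message"}
    last_error = None
    for url in DOH_URLS:
        try:
            resp = requests.post(url, data=wire, headers=headers, timeout=3)
            resp.raise_for_status()
            return dns.message.from_wire(resp.content)
        except (requests.RequestException, dns.exception.DNSException) as exc:
            last_error = exc  # this provider failed; fall through to the next one
    raise RuntimeError(f"all DoH endpoints failed: {last_error}")


if __name__ == "__main__":
    print(resolve("example.com"))
```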
TBH, at this point the failure modes in which 1.1.1.1 would go down but 1.0.0.1 would not are not that many. At Cloudflare's scale, it's hardly believable that just one of these DNS servers would go down; it's much more likely to be a large-scale system failure.
But I understand why Cloudflare can’t just say “use 8.8.8.8 as your backup”.
It would depend on how Cloudflare set up their systems. From this and other outages, I think it's pretty clear that they've set up their systems as a single failure domain. But it would have been possible to set up 1.1.1.1 and 1.0.0.1 as separate failure domains --- separate infrastructure, with at least some sites running one but not the other.
I became a bit disillusioned with quad9 when they started refusing to resolve my website. It's like wetransfer but supporting wget and without the AI scanning or interstitials. A user had uploaded malware and presumably sent the link to a malware scanner. Instead of reporting the malicious upload or blocking the specific URL¹, the whole domain is now blocked on a DNS level. The competing wetransfer.com resolves just fine at 9.9.9.9
I haven't been able to find any recourse. The malware was online for a few hours, but it has been weeks and there seems to be no way to clear my name. Someone on github (the website is open source) suggested that it's probably because they didn't know of the website: everyone has heard of wetransfer and github, so those don't get their whole domain blocked for malicious user content. I can't find any other difference, but also no responsible party to ask. The false-positive reporting tool on quad9's website just reloads the page and doesn't do anything.
¹ I'm aware DNS can't do this, but with a direct way of contacting a very responsive admin (no captchas or annoying forms, just email), I wouldn't expect scanners to resort to blocking the domain outright to begin with, at least not after they've heard back the first time and the problematic content had been cleared swiftly.
Oh hey, didn't expect this to actually be seen by many people, let alone you guys!
There was no ticket number yet because I was mainly trying to resolve it upstream (whoever made it get into uBlock's default block list, Quad9, and probably other places) and then today when I checked your site specifically, the link in "False Positive? <Please contact us>" (when you do a lookup for a blocked domain) just links back to itself so I couldn't open a case there either. Now that I look at the page again, with the advice in mind from a sibling comment to just email you, I now see that maybe this is supposed to go to the generic contact form and I needn't go through this domain status page. Opening the contact page now, I see that removal from blocklist is a selectable option so I'll use that :)
The ticket number I just submitted is 41905. Not that I'd want you to now apply preferential treatment, I didn't expect my post above to be seen by many people though I very much appreciate that you've reached out here. Makes me think you're actually interested in resolving this type of issue for small website operators, where the complete block without so much as a heads up felt a bit, well, like that might not get me anywhere. If the process just works as it normally should, that's good enough for me! Thanks for encouraging me to actually open a ticket!
Glad to hear you were able to submit a ticket! The website form wasn't working a brief time ago. But YES, we want to help! You can DM me in the fedi if you need anything: https://mastodon.social/@quad9dns
I've been the victim of similar abuse before, for my mail servers and one of my community forums that I used to run. It's frustrating when you try to do everything right but you're at the mercy of a cold and uncompromising rules engine.
In the ticket I just opened (see sibling thread), I asked which blocklist my domain was on. Maybe let's see what comes out of it, perhaps they can improve the process (e.g. drop that blocklist, or notify the abuse record of domains which they're blocking so that domain owners are at least aware of where they can go to fix things)
I don't see contact info on your profile or website/blog, but I can post here what the outcome is
You can use it, you just need to set the DNS over HTTPS templates correctly, since there's an issue with the defaults it tries to use when mixing providers.
DNS over HTTPS adds a requirement for an additional field - a URL template - and Windows doesn't handle defaulting that correctly in all cases. If you set them manually it works fine.
It's using DNS over HTTPS, and it doesn't default the URL templates correctly when mixing (some) providers. You can set them manually though, and it works.
This "URL template" thing seems odd – is Windows doing something like creating a URL out of the DNS IP and a pattern, e.g. 1.1.1.1 + "https://<ip>/foo" would yield https://1.1.1.1/foo?
If so, why not just allow providing an actual URL for each server?
It does allow you to provide a URL for each server. The issue is just that its default behavior doesn't work for all providers. I have another comment in this thread telling the original commenter how to configure it.
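To illustrate the general idea only (this is a toy mapping, not Windows' actual table or behavior): the resolver IP acts as an index into a list of known DoH URL templates, so an IP the OS doesn't recognize needs its template supplied manually.

```python
# Toy illustration of per-server DoH URL templates (NOT Windows' real table or logic).
KNOWN_TEMPLATES = {
    "1.1.1.1": "https://cloudflare-dns.com/dns-query",
    "8.8.8.8": "https://dns.google/dns-query",
}


def template_for(server_ip: str, manual_template: str | None = None) -> str:
    if manual_template:                    # an explicitly configured template always wins
        return manual_template
    try:
        return KNOWN_TEMPLATES[server_ip]  # otherwise fall back to a built-in default, if any
    except KeyError:
        raise ValueError(f"no default DoH template for {server_ip}; set one manually")


print(template_for("1.1.1.1"))                                     # known default
print(template_for("9.9.9.9", "https://dns.quad9.net/dns-query"))  # manual template needed
```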
Could you provide a citation? Your statement directly contradicts Quad9's official information as published on quad9.net, and what's more, it doesn't align at all with Bill Woodcock's known advocacy for privacy.
It doesn't say they sell traffic logs outright, but they do send telemetry on blocked domains to the blocklist provider, and provide "a sparse statistical sampling of timestamped DNS responses" to "a very few carefully vetted security researchers". That's not exactly "selling traffic logs", but it's fairly close. Moreover, colloquially speaking, it's not uncommon to claim "google sells your data", even though they don't provide dumps and only disclose aggregated data.
Disagree that it's fairly close to the statement "they resell traffic logs" and the implication that they leak all queried hostnames ("secret hosts, like for your work, will be leaked"). Unless Quad9 is deceiving users, both statements are, in fact, completely false.
>and the implication that they leak all queried hostnames ("secret hosts, like for your work, will be leaked").
The part about sharing data with "a very few carefully vetted security researchers" doesn't preclude them from leaking domains. For instance, if a security researcher exports the result of a "SELECT COUNT(*) GROUP BY hostname" query, that would arguably count as "summary form", and it would include any secret hostnames.
If you're trying to imply that they can't possibly be leaking hostnames because they don't collect hostnames, that's directly contradicted by the subsequent sections, which specifically mention that they share metrics grouped on a per-hostname basis. Obviously they need to collect hostnames to provide such information.
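To make that concrete, here's a toy illustration (made-up data, sqlite3): an aggregate grouped per hostname contains no client IPs and no individual queries, yet every distinct name still appears in the output.

```python
import sqlite3

# Toy query log: no client IPs at all, just one row per lookup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE queries (hostname TEXT)")
conn.executemany(
    "INSERT INTO queries VALUES (?)",
    [("example.com",), ("example.com",), ("git.internal.acme-corp.example",)],  # made-up names
)

# "Summary form": aggregated counts, yet every distinct hostname is still disclosed.
for hostname, count in conn.execute(
    "SELECT hostname, COUNT(*) FROM queries GROUP BY hostname"
):
    print(hostname, count)
```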
I'm implying that I'm convinced they are not storing statistics on (thus leaking) every queried hostname. By your very own admission, they clearly state that they perform statistics on a set of malicious domains provided by a third party, as part of their blocking program. Additionally they publish a "top 500 domains" list regularly. You're really having a go with the shoehorn if you want "secret domains, like for your work" (read: every distinct domain queried) to fit here.
>I'm implying that I'm convinced they are not storing statistics on (thus leaking) every queried hostname. By your very own admission, they clearly state that they perform statistics on a set of malicious domains provided by a third party, as part of their blocking program.
Right, but the privacy policy also says there's a separate program for "a very few carefully vetted security researchers" where they can get data in "summary form", which can leak domain name in the manner I described in my previous comment. Maybe they have a great IRB (or similar) that would prevent this from happening, but that's not mentioned in the privacy policy. Therefore it's totally in the realm of possibility that secret domain names could be leaked, no "really having a go with the shoehorn" required.
We are fully committed to end-user privacy. As a result, Quad9 is intentionally designed to be incapable of capturing end-users' PII. Our privacy policy is clear that queries are never associated with individual persons or IP addresses, and this policy is embedded in the technical (in)capabilities of our systems.
It is about the hostnames themselves, like git.nationalpolice.se, but I understand that there is not much choice if you want to keep the service free to use, so this is fair.
Is that really a concern for most people? Trying to keep hostnames secret is a losing battle anyways these days.
You should probably be using a trusted TLS certificate for your git hosting. And that means the host name will end up in certificate transparency logs which are even easier to scrape than DNS queries.
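For comparison, names on publicly trusted, non-wildcard certificates are trivially enumerable. A rough sketch against crt.sh's JSON output (assuming the endpoint still behaves as commonly documented):

```python
import requests


def ct_hostnames(domain: str) -> set[str]:
    """Enumerate hostnames that appear in CT logs for a domain, via crt.sh."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    names = set()
    for entry in resp.json():
        # name_value can contain several newline-separated names per certificate.
        names.update(entry.get("name_value", "").splitlines())
    return names


if __name__ == "__main__":
    for name in sorted(ct_hostnames("example.com")):
        print(name)
```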
> When your devices use Quad9 normally, no data containing your IP address is ever logged in any Quad9 system.
Of course they have some kinds of logs. Aggregating resolved domains without logging client IPs is not what the implication of "Quad9 is reselling the traffic logs" seems to be.
That's clearer; I get your point now. Again, though, that's not how most people would read the original comment. I've never even contemplated that I might generate hostnames whose existence might be considered sensitive. It seems like a terrible idea to begin with, as I'm sure there are other avenues for those "secret" domains to be leaked. Perhaps name your secret VMs vm1, vm2, ..., instead of <your root password>. But yeah, this is not my area of expertise, nor a concern for the vast majority of internet users who want more privacy than their ISP will provide.
I am curious though, do you have any suggestions for alternative DNS that is better?
I use Google DNS because I feel it suits my personal theory of privacy threats. Among the various public DNS resolver services, I feel that they have the best technical defenses against insider snooping and outside hackers infiltrating their systems, and I am unperturbed about their permanent logs. I also don't care about Quad9's logs, except to the extent that it seems inconsistent with the privacy story they are selling. I use Quad9 as my resolver of last resort in my config. I doubt any queries actually go there in practice.
It could be some subdomain that’s hard to guess. You can’t (generally) enumerate all subdomains through DNS, and if you use a wildcard TLS certificate (or self-signed / no cert at all), it won’t be leaked to CT logs either. Secret hostname.
Examples:
- github.internal.companyname.com
- jira.corp.org
- jenkins-ci.internal-finance.acme-corp.com
- grafana.monitoring.initech.io
- confluence.prod.internal.companyx.com
etc.
If you don't know the hostname, you won't be able to hit the backend service. But if you do know it, you can start exploiting it, either through a lack of auth or by trying to exploit the software itself.
Yeah, pretty much. In a perfect world you would pair it with another service, I guess, but usually you use the official backup IP because it's not supposed to break at the same time.
Yes, I would also highly recommend using the DNS server closest to you (for those whose ISPs don't mess with their DNS (blocking etc.), you usually get much better response times) and multiple servers from different providers.
If your device doesn't support proper failover, use a local DNS forwarder on your router, or an external one (see the sketch below).
In Switzerland I would use Init7 (an ISP that doesn't filter) -> Quad9 (unfiltered version) -> dns0.eu (unfiltered version)
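A minimal sketch of the failover behavior such a forwarder gives you (dnspython; 192.168.1.1 is just a placeholder for your router or ISP resolver, and the public IPs are the ones discussed in this thread):

```python
import dns.exception
import dns.message
import dns.query  # dnspython

# Upstreams tried in order: local/ISP resolver first, then independent public resolvers.
UPSTREAMS = ["192.168.1.1", "9.9.9.9", "1.1.1.1", "8.8.8.8"]  # 192.168.1.1 is a placeholder


def resolve(qname: str, rdtype: str = "A"):
    query = dns.message.make_query(qname, rdtype)
    for server in UPSTREAMS:
        try:
            # Short timeout so a dead upstream only costs ~1s before we move on.
            return dns.query.udp(query, server, timeout=1.0)
        except (OSError, dns.exception.Timeout):
            continue
    raise RuntimeError("all upstream resolvers failed")


if __name__ == "__main__":
    print(resolve("example.com"))
```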
How busy in life are you that we're concerning ourselves with the nearest DNS? Are you browsing the internet like a high-frequency stock trader? Seriously, in anyone's day to day, other than when these incidents happen, does anyone notice a delay from resolving a domain name?
I get that in theory blah blah, but we now have choices in who gets to see all of our requests, and the ISP will always lose out to the other losers in the list.
Even loading something simple like www.google.com pulls from 5 different DNS names; I have seen pages as high as 50. Faster resolution is surprisingly noticeable, especially on older browsers that would only keep 2 connections open at a time. It adds up faster than you would intuitively think. I used to run local resolvers that would mess with the TTL, but that was more trouble than it was worth, even though it also gave a decent speedup. Was it 'worth' doing? Well, it was kinda fun to mess with, I guess.
You know, I recently went through a period of thinking my MacBook was just broken. It had the janks; everything in the browser was just slower than you're used to. After a week or two of pulling my hair out, I figured it out: the newly-configured computer was using the DHCP-assigned DNS instead of Google DNS. Switched it, and it made a massive difference.
But that's the opposite of the suggestion to move from Google DNS to a local one because of latency. So your ISP's DNS sucked, which is a broad statement, and is part of why services like 1.1.1.1 or 8.8.8.8 exist. You didn't change DNS because you were picking one based on nearest location.
There is more to latency than distance. Server response time is also important. In my case, the problem was that the DNS forwarder in the local wifi access point/router was very slow, even though the ICMP latency from my laptop to that device is obviously low.
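If you'd rather measure than guess, here's a quick sketch that times the same lookup against a few resolvers (192.168.1.1 is a placeholder for your router; caching makes this an imperfect comparison, but it shows the gap):

```python
import time

import dns.message
import dns.query  # dnspython

SERVERS = {
    "router (placeholder)": "192.168.1.1",  # replace with your actual router/ISP resolver
    "Cloudflare": "1.1.1.1",
    "Google": "8.8.8.8",
    "Quad9": "9.9.9.9",
}

query = dns.message.make_query("news.ycombinator.com", "A")
for label, server in SERVERS.items():
    start = time.monotonic()
    try:
        # Note: results depend on what each resolver already has cached.
        dns.query.udp(query, server, timeout=2.0)
        print(f"{label:22s} {1000 * (time.monotonic() - start):6.1f} ms")
    except Exception as exc:
        print(f"{label:22s} failed: {exc}")
```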
Which is all well and fine, but my original comment was that moving to a closer DNS isn't worth it just for being closer, especially when it is usually your ISP's server. So now you're confirming that just moving closer isn't the fix, which reassures me that not using the closest DNS is just fine.
If you think you can pontificate on DNS then I think you should be running your own service.
Note how root "." just works and has done for decades - that's proper engineering and actually way more complicated than running 1.1.1.1. What 1.1.1.1 suffers from is anycast and not DNS.
Cloudflare (and Google and co) insist on using one or more "vanity" IP addresses - that is very unfair of me, but it is what it is - and to make that work, they have to use anycast.
Listing two is better than nothing, but it's not great. If one goes down, there's nothing that tracks which one is working, so you usually see long hangs and intermittent issues.
Unless you do something fancy with a local caching dns proxy with more than one upstream.
1.1.1.1 is also what they call the resolver service as a whole; the impact section seems to be saying both 1.0.0.0/24 and 1.1.1.0/24 were affected (among other ranges).
It is highly recommended to configure two or more DNS servers in case one is down.
I would count not configuring at least two as 'user error'. Many systems require you to enter a primary and alternate server in order to save a configuration.
The default setting on most computers seems to be: use the (wifi) router. I suppose telcos like that because it keeps the number of DNS requests down. So I wouldn't necessarily see it as user error.
The funny part with that is that sites like cloudflare say "Oh, yeah, just use 1.0.0.1 as your alternate", when, in reality, it should be an entirely different service.
Don't you normally have 2 DNS servers listed on any device? So was the second also down? If not, why didn't it fall back to that?