I understand the privacy and security aspects. But I am wondering - how can DNS over HTTPS be more performant in the case of curl commands? A browser could probably persist the connection to the resolver and issue several requests together, but with a single curl command surely there's the overhead of initiating the first DNS resolve, the HTTPS connection, the second DNS resolve. There must be some delay compared to regular DNS.
I don't personally use the word since there are many alternatives, but its use is now definitely widespread and consistently understood. There's really no argument against the fact that it has entered the English lexicon.
I don't use it either, and I agree with the poster on that stackexchange link about it sounding like manager/marketing-speak, but when I see it, it makes me pause for a second to think about what it really means (perhaps that's the point) --- I had this exchange with a coworker not long ago:
CW: ...and this way it'll be more performant too.
Me: Performant? As in faster?
CW: Yes.
Me (to self): Then why didn't you just say it'll be faster?
IMHO it obfuscates meaning and should be avoided; the surrounding discussion/vagueness about what it means (I don't think it means efficient, but only faster --- as in, a high-performance car) shows that too.
Performant to me implies performing better against the relevant metrics. So faster, maybe, but perhaps smaller and more energy efficient too. If the context of the metrics is already understood then it seems quite a cromulent word.
I had just explained the etymology of cromulent to my wife a few days ago - awesome to have an example of its usage in the wild. My hat is off to you, sir.
It's from the Simpsons and that is pretty much exactly the context that it first appeared in! Here's an excerpt from the "Made-up words" article on the simpsons.wikia.com:
When schoolteacher Edna Krabappel hears the Springfield town motto "A noble spirit embiggens the smallest man," she comments she'd never heard of the word embiggens before moving to Springfield. Miss Hoover replies, "I don't know why; it's a perfectly cromulent word".
I wonder if you can use "cromulent" to guess the age of a person. Older people would not have watched the Simpsons in the early 90s. Younger people very likely missed this obscure episode. I would guess the age of such a user to be 30-35.
The episode was first aired in Feb ‘96, at which point I was 20, and in college. At this point the show was still quite popular with my friend group. I am currently 42. You might need to increase your upper bound on the age range.
You're not far out, but I must have missed that episode originally and only watched it in the last few years. I possibly also learnt it from Reddit first.
Others have filled in the detail of origin and meaning. Yes, my intent was to use another neologism to add an ironic tone. Language is always changing.
Can anyone give examples of cases where “performant” would be better than “efficient”?
To my mind, the difference is that “performant” delivers results quickly, whereas “efficient” uses little energy. A performant solution might be efficient, but not necessarily, and vice versa. Does anyone else share this understanding or am I living in a linguistic bubble?
Not sure about Swedish (Daniel is Swedish), but in Finnish, we have the word “suorituskykyinen”, meaning “efficient” or “able to perform”. I can see how Finnish writers might want to use “performant” to replace this commonly used word. Perhaps there's a similar history for Swedish speakers? I sure have seen “performant” a lot in academic texts written by Finnish and Swedish speakers in the last 20 years.
Efficient has connotations of economy, of optimal use of resources but with a hint of parsimony - another way to get efficiency is to reduce the denominator.
Yep. A top fuel dragster goes from 0-60 mph (~0-100 km/h) in about 0.2 seconds, but it will use about 20 gallons (~75 liters) of fuel going 1,000 ft (~300 m). Very high performance. Very low efficiency.
The denominator is "resources," which here means "resource cost." Increasing resource cost reduces efficiency. You want to either increase the numerator ("performance") or decrease the denominator ("resources").
"My mining algorithm, which uses a lot more CPU and memory, is more performant." The key being that performant is often associated w/ speed (i.e. higher performing) whereas efficient ambiguously can refer to many things you have improved on.
Yep, that's how I understand the word “performant” as well. A car that accelerates from 0-100 kph in under 3 seconds would be ”performant”. A car is “efficient” if it burns less than 4 liters of gasoline per 100 kilometers.
The algorithm doesn't perform more efficiently or accurately, though - it's less efficient (it uses more resources) and it should have the same accuracy (or we're comparing apples with oranges).
haha yah i know, it would perform terrible. imagine running a big dns server at ISP level, and having to perform 9 million tls handshakes a second. have fun with that lol. this was just an example of how to avoid the 'performant' word.
In my understanding, performance is based on the value of a measure improving in one way or another.
Efficiency is relative to the amount of resources required to achieve a similar result. To use the car example above, a car that can do 100 km/h at 4 L/100 km is more efficient than another car that can do 100 km/h at 8 L/100 km.
The point of absolute efficiency is referred to as the optimum. Essentially, the best performance possible with the minimum resources possible.
Yes, throughput vs latency is a very important distinction. An airplane is always faster than a train, but if what you are measuring is throughput, a train can be more performant. It depends on the context, but that's fine. There is no such thing as a made-up word if the word has a meaning, and is regularly used by a group of people.
"Performant" is a word in German. It translates to exact what "perfomant" would mean in english if it were an english word. The funny thing is, google translate, translates the german word "performant" to the english word "performant". https://translate.google.com/?hl=de#de/en/per%C2%ADfor%C2%AD...
> performant is not considered a real word in English, although I commonly see it used...
This is close to being a contradiction in terms. The purpose of words is to communicate, and if a word is being successfully used to communicate -- which clearly it is -- what exactly does it mean to say that it isn't "considered a real word"?
Also, "Considered" by whom? The dictionary? Dictionaries are descriptivist -- they record usage, they don't prescribe it [0]. If it isn't in the dictionary yet, that's only because the usage hasn't been used widely and/or for long enough that it meets the inclusion criteria [1].
(As it happens, "performant" is at the stage where it is starting to pass those thresholds, and is now in the OED [2])
> (As it happens, "performant" is at the stage where it is starting to pass those thresholds, and is now in the OED [2])
Strictly speaking, I don't believe your [2] is the OED. It's "Oxford Living Dictionaries", which I assume tries to be more current/dynamic, but might be regarded as less authoritative.
Interestingly, the word "performant" is in the OED itself, with the earliest citation being from 1809 -- but it is listed only as a noun, meaning "A person who performs a duty, ceremony, etc., a performer", not the adjectival usage under discussion here.
If your ISP is intercepting DNS requests and sending them to a slow or broken server, DNS over HTTPS ensures this won't happen. (Yes, they could block the traffic entirely, or slow it down for fun, but they can no longer intercept and redirect it.)
Ideally the browser and curl would only use a local resolver, on your machine or router, which in turn can keep one or more persistent connections alive to the upstream resolver.
If you had to go the TLS route and wanted better perf, then it'd make more sense to build a new, dead simple, framing protocol that actually specified working pipelining and eliminated HOL blocking by allowing out-of-order responses.
They're just using HTTP because HTTP(S) defaults to port 443 and will get through more firewalls, and I guess their answer to the performance question is to (just add more complexity and) move to HTTP/2.
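For illustration, here's a rough, purely hypothetical sketch of how little framing that would actually take - a length, a request ID for matching replies out of order, and the payload (this isn't any existing protocol, just the shape of the idea):

```python
# Toy framing: 2-byte payload length + 4-byte request ID, then the payload.
# A request ID lets responses come back in any order, so one slow answer
# never head-of-line blocks the rest.
import struct

HEADER = struct.Struct("!HI")

def encode_frame(request_id: int, payload: bytes) -> bytes:
    return HEADER.pack(len(payload), request_id) + payload

def decode_frames(buffer: bytes):
    """Return (frames, leftover): frames is a list of (request_id, payload)
    for every complete frame found; leftover is any trailing partial frame."""
    frames, offset = [], 0
    while len(buffer) - offset >= HEADER.size:
        length, request_id = HEADER.unpack_from(buffer, offset)
        start = offset + HEADER.size
        if len(buffer) - start < length:
            break  # incomplete frame, wait for more bytes
        frames.append((request_id, buffer[start:start + length]))
        offset = start + length
    return frames, buffer[offset:]

# A client keeps {request_id: pending query} and completes each entry as
# its frame arrives, regardless of the order the server answers in.
```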
If your question is how to reuse the connection pool for, say, 1000 DNS queries from one machine: curl already uses a connection pool. Next, to use fewer TCP connections and get faster responses, my first proposal is HTTP/2, which supports multiplexing, so multiple requests can be issued over a single TCP connection. Each response is returned asynchronously, so the requests are non-blocking.
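Something like this, roughly (a minimal sketch, assuming the httpx and dnspython packages and Cloudflare's public RFC 8484 endpoint - any DoH server would do):

```python
# Many DoH lookups over one TLS + HTTP/2 connection: each query becomes a
# separate multiplexed stream, so the answers come back concurrently.
import asyncio
import dns.message
import httpx

DOH_URL = "https://cloudflare-dns.com/dns-query"   # assumed public resolver
HEADERS = {"content-type": "application/dns-message",
           "accept": "application/dns-message"}

async def resolve(client: httpx.AsyncClient, name: str) -> str:
    wire = dns.message.make_query(name, "A").to_wire()
    resp = await client.post(DOH_URL, content=wire)
    answer = dns.message.from_wire(resp.content)
    return f"{name}: {[rr.to_text() for rr in answer.answer]}"

async def main() -> None:
    # http2=True needs the 'h2' extra installed (pip install httpx[http2]).
    async with httpx.AsyncClient(http2=True, headers=HEADERS) as client:
        names = ["example.com", "example.org", "example.net"]
        for line in await asyncio.gather(*(resolve(client, n) for n in names)):
            print(line)

asyncio.run(main())
```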
cURL also supports HTTP/2.
Speaking of words: it seems the author of the curl project prefers "curl" over "cURL"? Worth asking.
> Each response is returned asynchronously, so the requests are non-blocking.
The same goes for UDP, which DNS normally runs over.
Like the original commenter, I fail to see how stacking DNS over HTTP over TLS over TCP over IP is going to be faster than running DNS over UDP over IP.
Also, you still need to resolve the address of your DNS-over-HTTP server, which you'd probably still do over traditional DNS-over-UDP.
This feels like doing nothing but adding complexity for zero gain.
From what I understand the "cURL" format of the word is popular mostly with users of the PHP bindings. "curl" is what the project uses, but I'm not sure they really care what other people refer to it as :)
In the USA and UK at least, unauthorised access or use of a computer is criminalised. In some situations you can argue for assumed consent, but the law doesn't operate on "if I can do it then it's authorised". Unless you can show you have permission, it's not authorised, ergo not legal.
But the premise was to circumvent crap such as captive portals. Doing that on your own computer (mostly on a public WLAN), I don't see any reason against it.
Who is the "someone else" in your case? Where does the someone else's hardware come from? OP mentioned this to get rid off e.g captive portals.
Iodine requires a client and a server. Both belong to you, what is the problem here? That I use a network to transmit packets? We are not talking about installing iodine on someone else's computer!
Not sure if you're trolling, but the network is being accessed by bypassing the captive portal. The network is being accessed in a way that isn't permitted.
So the network is not password protected, DNS (or ICMP) works normally, but somehow using it in a particular way is not permitted? Then why does it work at all?
Maybe I'm not understanding this correctly, but if a coffee shop has wifi and you need to enter info into a captive portal before you can use their network, by circumventing it, the "Someone else" is the coffee shop owner, and the hardware is their router.
"Please get off my router if you don't agree to my conditions". "Nah I'm just using DNS, it's fine" probably is not an admissible excuse.
Do you dispute that a coffee shop providing WiFi with a captive portal only intends to provide web access through that portal? Just as they only intend customers to take sugar packets for use in their coffee, etc..
If the shop doesn't intend the use, it's not authorised. Your ethical framework may not put any value on that lack of authorisation, but you can see the action is unauthorised, surely?
It's unfit for general use and abuses the DNS protocol to stealthily convey data, hence the shadiness. You're right, it needn't be used in a negative way. DNS tunneling is slow, needs polling for incoming traffic (due to the way DNS works over UDP) and is usually used to circumvent firewalls, for both good reasons (getting around censorship) and bad ones (exfiltrating data).
It's clearly to evade a security control..
As if you approached a door, to find it locked, but then discovered the front window was unlocked and let yourself in.. Clearly the occupant didn't want you to enter, and just failed to secure the entire building. Pretty sure nobody would ever suggest it was okay for you to enter in such a way.
We mostly built it because it sounded cool, but one use case that I thought was important was being able to boot these images in different data centers to collect telemetry on DNS differences geographically (e.g. for hijacking or anycast).
We at DNSFilter are able to collect that same telemetry just by having resolvers at each of the anycast locations -- between the different locations, and passing along eDNS Client Subnet information, many responses will be (legitimately) different. A little hard to see a hijack in that noise.
I can see how DNS over HTTPS addresses security, but I do not see how it helps with privacy. After resolving the IP address over a secure connection, HTTPS still sends the host name unencrypted, so one can just eavesdrop on that. And if encrypted DNS becomes widespread, I suspect that various state-imposed firewalls, like the one in Russia, will just look at the HTTPS connection headers to block a particular site.
We at DNSFilter are working on client to resolver agents which use DNS over TLS as well as DNSCrypt.
We're also looking to create a standard for recursive resolvers like us to communicate securely with authoritative providers (no RFC yet).
SNI is needed for web servers because of multiple domains on a single IP address. I didn't read the spec, but couldn't the hostname be encrypted in the case of DNS resolving? (e.g. the hostname could be sent in the HTTP body).
Well, unencrypted data is by nature vulnerable to tampering - say, changing the DNS answer to point the client to another IP address, in this case.
Of course DNS over HTTPS is not a silver bullet that will ensure your privacy; instead, it's one step towards better security and thus (hopefully) privacy.
You don't even need SNI; the initial ServerHello, which contains the server's certificate (which contains the DNS name the certificate is issued for), is sent in the clear.
In TLS 1.3, the ServerHello starts out by finishing key agreement, and then (in the same flight) the server assumes its peer now knows the session key and encrypts the rest of the handshake, including Certificate.
Interesting; do you have a reference? In particular, how does it complete key agreement without the certificate? The first thing that comes to my mind is that in order to build the encrypted connection, you need to know who you're building it with, i.e., you need the certificate. You could build an encrypted connection with just whoever is on the other end, and then exchange the cert, but I would wonder then how you would prevent a MitM from just building two connections and forwarding the content across. (Though that would defeat a passive listener, at least.)
(Essentially, the problem is that prior to requesting the certificate for X.com, you need to know that you're talking to X.com; a sort of authentication chicken and egg.)
The TLS 1.3 drafts (and given they're in Last Call, presumably the RFC itself once published) use different types of brackets to indicate whether and how things are encrypted in the ASCII art diagrams. The curly braces {} are used for the Certificate structure, meaning it is encrypted using the session keys but it is NOT authenticated application data yet. We'll see why in a moment.
So, firstly I'm going to say, go read the drafts for yourself https://tools.ietf.org/html/draft-ietf-tls-tls13-23 because I am pretty sure that's the most snarky response and also hey, it's right there, if I'm wrong that's a great place to start in proving so.
But then I'll answer your substantive question, because the answer is pretty interesting (and quite unlike earlier versions of TLS).
So, you are correct that we can (and TLS does) do DH without knowing who we're talking to, and that our problem is that although our session is encrypted and can't be eavesdropped, this seems useless because we don't know who the heck we're talking to, surely we can be subject to a Man in the Middle attack.
However, TLS 1.3 has a trick here: after sending Certificate, the server sends CertificateVerify. CertificateVerify signs the entire key agreement (which we just did) with the identity from the Certificate. So a client receiving CertificateVerify either gets a matching signature (the party we did our DH key agreement with _was_ the owner of the Certificate - good) or they don't (we're being MitM'd; the phone call is coming from inside the house - get out!).
Once the client has checked CertificateVerify (and presuming they're happy with whatever was inside Certificate) they're truly secure and Application Data can begin to be processed.
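To make that concrete, here's roughly what that client-side check looks like (a conceptual sketch only, assuming the Python cryptography package, an rsa_pss_rsae_sha256 signature, and that the transcript hash and signature were captured from a real handshake - the context-string construction comes from RFC 8446 section 4.4.3):

```python
# Verify a TLS 1.3 server CertificateVerify: the signature covers the
# transcript hash (ClientHello..Certificate), binding the DH key agreement
# to the certificate's key.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def check_certificate_verify(cert_der: bytes,
                             transcript_hash: bytes,
                             signature: bytes) -> bool:
    cert = x509.load_der_x509_certificate(cert_der)
    signed_content = (b"\x20" * 64                                # 64 spaces
                      + b"TLS 1.3, server CertificateVerify"      # context string
                      + b"\x00"
                      + transcript_hash)
    try:
        cert.public_key().verify(
            signature,
            signed_content,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=hashes.SHA256().digest_size),
            hashes.SHA256(),
        )
        return True    # our DH peer really holds the certificate's private key
    except InvalidSignature:
        return False   # mismatch: likely a man in the middle
```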
> I suspect that various state-imposed firewalls, like the one in Russia, will just look at the HTTPS connection headers to block a particular site.
What if that site does a lot more things? Even they can only upset their population so much by blocking entire domains. With regards to Russia, Google's DNS over HTTPS is a perfect example. "Domain fronting" is a thing for a reason.
In Russia the government does ban domains unless the domains are popular. At some point they even tried to ban based on IP addresses from DNS records, but people quickly learned how to use that to "ban" government-based media by changing IP addresses of banned domains to point to those sites.
As I understand it, this would fix leaking during DNS resolution, fixing any leaks from TLS is a separate issue.
And yes, when that's fixed, you'll still be leaking destination IP.
But if not one step at time, how is it going to get better? :)
I would love this to catch on widely! DNS through TLS means it's all end-to-end encrypted, DNS reflection attacks are harder, etc. The "why even have protocols" argument doesn't make sense. Protocols can be layered just fine (HTTP itself is a good example). DNS as it exists now is another random special snowflake that vendors need corresponding snowflake implementations for.
>DNS through TLS means it's all end-to-end encrypted
I didn't say I'm against DNS being encrypted, even with TLS. I just hate that instead of doing the right thing (e.g. political battle with the government, opening a port on a firewall) people choose the laziest way: just tunnel it over HTTP.
>Protocols can be layered just fine (HTTP itself is a good example)
They can, but why do it? Just figure out a way to make your server be yet-another-"REST"-service and pat yourself on the back for cleverness.
>DNS as it exists now is another random special snowflake that vendors need corresponding snowflake implementations for.
As is: SMTP, POP, IMAP, SSH, AMQP, FIX, SWIFT, LDAP, ODBC and a host of other protocols. Why do you use negative language like "random special snowflake that vendors need corresponding snowflake implementations for"? These are separate protocols, which serve separate specific needs. This is how IP was designed to work.
Yes, as a protocol designer, I couldn't care less about the type of data that runs through the protocol. The semantics are much more important. Sequenced delivery with hard requirements on delivery order and reliability? Probably should be TCP based. Something with different semantics would be RTP (specialized for sending media packets which aren't useful after a certain time has elapsed), or SCTP, or UDP.
I don't disagree at all. But I don't find this sufficient justification to engage in poor engineering practices.
DNS has a different set of use cases than HTTP so, while it can be made to work with enough effort (anything can), HTTP can never be as good at DNS as an actual protocol designed to do DNS can.
Why do we have ports at all? Except 80 and 443 many ports are blocked anyways.
I'd say there is an arms race between admins and users. Step by step, features get forbidden and blocked, usually in the name of security. For example, NAT took away your globally visible IP address.
The next step is probably blocking domains. In a dystopian vision, Google and Facebook become the proxies for all traffic as all other domains get blocked for most users.
Does anyone have good pointers on DNS over DTLS (RFC 8094) vs DNS over TLS (RFC 7858) vs DNS over HTTPS (this draft)? And if you could throw DNSSEC into the mix, that would be helpful.
Some factors which start to come in to play with the various options are:
Compatibility with firewalls and proxies (DNS over HTTPS wins; the others, on port 853, are at a disadvantage - even if you put them on port 443, they may get stopped by deep packet inspection).
And closely related, performance. It's tough to beat a lossy protocol with minimal overhead (UDP), but there have been a lot of improvements with keepalive, EDNS0, pipelining, etc.
We're currently doing a lot of testing around this, as we are working to offer user agents and LAN proxies implementing DNS over TLS, compared with DNSCrypt.
Finally: DNSSEC. It lives in a different part of the 'security' spectrum here... The purpose of DNSSEC is to be able to authenticate the validity of an answer when it travels through untrusted parties (say you're using a third-party resolver, or you don't trust your ISP's DNS not to mangle responses). It does not encrypt requests/responses. It does not provide privacy. It answers the question: "Can I trust this response I received for example.org to be the one that example.org intended me to receive?"
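As an aside, the way a stub usually sees DNSSEC is just the AD (Authenticated Data) bit set by a validating resolver - which is only meaningful if you trust the path to that resolver, which is exactly where the encrypted-transport drafts come in. A minimal sketch with the dnspython package (assuming a validating resolver such as 8.8.8.8 is reachable):

```python
import dns.flags
import dns.message
import dns.query

# Ask for DNSSEC records/validation by setting the DO flag on the query.
query = dns.message.make_query("example.org", "A", want_dnssec=True)

# 8.8.8.8 validates DNSSEC; swap in any validating resolver you trust.
response = dns.query.udp(query, "8.8.8.8", timeout=5)

# AD means the resolver validated the signature chain for this answer.
# Over an untrusted network the bit itself could be tampered with, which
# is why you'd want this inside TLS/HTTPS (or validate locally).
print("DNSSEC-validated:", bool(response.flags & dns.flags.AD))
```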
I've been pushing all my DNS traffic over a VPN, transparently, for the whole house, for years now (by setting the VPN remote IP as the upstream resolver on my router).
It seems the only advantage of DNS-over-HTTPS is that it does DNS over TLS on port 443, which is harder for militant netadmins to block.
It's definitely a solution to a niche problem, but if we really want to encrypt DNS at scale then we could do it easily enough by introducing DNSCurve to the SOHO market.
When you run DNS requests over a VPN, you are sharing your "browsing history" with (at least) two other parties: whoever runs the VPN server, and whoever runs the (public) DNS server.
Your ISP can see the IPs you connect to regardless of whether you use your ISPs DNS server or not (unless of course you tunnel ALL traffic of all clients through the vpn as well.)
DNS through a VPN is only a partial solution though, right? Even if there's a recursive/caching resolver within the VPN (so _my_ direct DNS requests are private) any DNS requests outbound from the VPN go through global DNS.
But, I guess same problem for DNS-over-HTTPS currently.
"Do DNS resolves over HTTPS for privacy, performance and security. Also makes it easier to use a name server of your choice instead of the one configured for your system."
"A server that is acting both as a normal web server and a DNS API server is in a position to choose which DNS names it forces a client to resolve (through its web service) and also be the one to answer those queries (through its DNS API service)."
How might this affect users who block ads via DNS?
What about users who want the system DNS settings they have configured (e.g. /etc/resolv.conf) to be honoured?
I can clear browser DNS cache with a couple of keystrokes, but I am not using Chrome, Firefox, IE/Edge, Safari, Brave, Opera, etc.
How easy is it for a user to clear the DNS cache in the popular browsers?
The best "privacy, performance and security" for the user is achieved by using /etc/hosts. HOSTS (i.e., no DNS) will beat DNS on all three, every time. "Flexibility" was not mentioned as a criteria.
There is also the issue of "transparency" to consider. It is easy for the user to check their HOSTS file to see the current name-to-address mappings. How easy is it for the user to check what mappings are in the browser's DNS cache?
Fact: Many/most of the names looked up by a user repeatedly every day do not change by the hour, by the day, the week, month or even year.
As a user, I use HOSTS heavily, with appropriate ed scripts for fast automated modification. I use HOSTS for the transparency, control and speed. Any privacy or security benefit is a bonus.
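Not my actual ed scripts, but a rough Python equivalent of the idea, just to show how small the refresh step is (the pinned names and output path below are only placeholders):

```python
# Re-resolve a pinned list of frequently used names and write them out as a
# hosts-file fragment that can be spliced into /etc/hosts.
import socket

PINNED = ["news.ycombinator.com", "example.com"]   # names you actually use

lines = []
for name in PINNED:
    try:
        ip = socket.gethostbyname(name)   # one ordinary lookup per refresh
        lines.append(f"{ip}\t{name}")
    except OSError:
        pass                              # lookup failed; keep the old entry

with open("hosts.fragment", "w") as out:
    out.write("\n".join(lines) + "\n")
print("\n".join(lines))
```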
Yes, greater for security but still depends upon reliability of certificate authorities and ISPs. ISPs have the ability to issue the client a bogus certificate while they hold the real one in order to decrypt traffic. Why won't browsers allow the certificate's public key to be readable by javascript so that the remote server can verify the client has the correct certificate? This wouldn't be foolproof but it would significantly beef up security.
Similar to how tor hidden services get resolved, distributed hash tables seem like the way to go. Take the ISPs and Cert Authorities out of the equation.
Making the public key of the remote available to JS wouldn't help, because if you can't trust the identity of the remote server or the integrity of the connection then you can't trust the javascript.
Tor hidden services are secure because of the lack of human meaningful names.
Yes, since you can't trust the JS, doing this would be an attempt at security through obfuscation. I mostly just think it's strange that these keys aren't accessible. If there were ever a mismatch detected, the server could at least know with certainty that there's a bad actor on the connection.
Thanks for the Zooko's Triangle wiki. Hadn't seen that one before.
> Why won't browsers allow the certificate's public key to be readable by javascript
Because sites would randomly block reverse proxies, web proxies, enterprise users, people using certain WiFi APs, and it would give advertisers another thing to track users on.
Frankly after looking at what websites did with the User Agent, I'm scared to see what they'd do when a certificate mismatch occurs.
Public key pinning helps but it operates on the assumption that the initial key is the correct one. I find it curious that these public keys are not readable by the client.
People invented other security mechanisms inside the DNS protocol to avoid the added bandwidth. I don't think it's reasonable to say that everyone can handle the increased load nowadays.
However, I've been thinking about this: for an enterprise, you could build a DNS server which serves over TLS connections, and then put a local DNS proxy on clients which listens on 127.0.0.1 and sends queries out over TLS to the custom DNS server.
That way any local-network DNS traffic would be encrypted against passive listeners on the network.
Recursive queries would still go out onto the internet in plaintext to 'normal' DNS servers.
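The client side of that proxy is small; here's a rough sketch (assuming Python 3, an upstream DNS-over-TLS resolver at 1.1.1.1:853 standing in for the custom server, and local port 5353 so it runs without root - one query at a time, no caching or retries):

```python
# Listen for plain DNS on loopback and forward each query to an upstream
# resolver over TLS, using the RFC 7858 / DNS-over-TCP 2-byte length framing.
import socket
import ssl
import struct

LISTEN = ("127.0.0.1", 5353)
UPSTREAM = ("1.1.1.1", 853)                 # placeholder DoT resolver
UPSTREAM_NAME = "cloudflare-dns.com"        # name on the upstream's certificate

ctx = ssl.create_default_context()

def recv_exact(sock, n: int) -> bytes:
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("upstream closed the connection")
        data += chunk
    return data

def forward_over_tls(query: bytes) -> bytes:
    with socket.create_connection(UPSTREAM, timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=UPSTREAM_NAME) as tls:
            tls.sendall(struct.pack("!H", len(query)) + query)
            (length,) = struct.unpack("!H", recv_exact(tls, 2))
            return recv_exact(tls, length)

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(LISTEN)
print(f"stub listening on {LISTEN}, forwarding to {UPSTREAM} over TLS")
while True:
    query, client = srv.recvfrom(4096)
    try:
        srv.sendto(forward_over_tls(query), client)
    except OSError as err:
        print("upstream error:", err)
```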
You can already do this with existing tech. When a DHCP lease is acquired, you also get DNS server information. If you want the DNS packets to be encrypted on your local network, you can do that already as well by intercepting them with a local firewall and sending them across a tunnel.
IMO, the purpose of DOH is to address semi-hostile environments like the free WiFi at the airport. It is kind of like a VPN for just your DNS traffic.
I am just wondering why you would want DNS over HTTPS instead of something like DNSCrypt, which would be DNS over SSL (I think OpenDNS does this already)?
DNSCrypt doesn't use SSL or TLS. You may have assumed that because DNSCrypt defaults to 443/TCP but that is only for availability, as this port is usually not blocked. The packet format is very similar to DNSCurve, from which it descends.
While we at DNSFilter are working on support for DNSCrypt, just to add some flavor to your note... it should be mentioned that the original author just abandoned the project, removed the repos from his GitHub user, shut down the website, and it's been set up by a few industry players under a new GitHub org: https://github.com/DNSCrypt
Tough to say what direction the project will take.
I for one welcome other advancements such as Google's efforts.
FWIW, my understanding is that if you use Chrome it actually communicates over QUIC.
There is no good reason DNSSEC shouldn't protect query privacy. It doesn't because the most important Internet users --- developers and operations engineers --- are letting them standardize it that way. If it annoys you that we have to resort to elaborate hacks to keep ISPs from monitoring our queries while at the same time undertaking huge projects to migrate to a new DNS protocol, you're not crazy, and you should make some noise. DNSSEC sees barely any use in the real world, and it is not too late to change it.
Easier to prevent censoring. DNSSEC doesn't hide the fact that you're making a DNS call, correct? With HTTPS, censors/MITMs can only see the domain you go to. The censor/MITM won't know whether I went to google.com to search or to perform a DNS query. Plenty of countries have blocked DNS providers they don't like; now that's harder without also blocking the site as a whole.
Easier to prevent censoring from the ISP, but you're now just reliant on another company's systems.
If you just want a solution for having a remote server execute DNS queries for you, you can just use any of the existing VPN protocols purely for DNS resolution, or even make a better performing version.
When you're in an area where your DNS provider is limited by the state, you won't consider it wasted. But if you're not and you do consider it wasted, don't use it. But saying it doesn't seem to provide any benefit is wrong and just telling people to use a VPN for DNS resolution is also wrong (at least until the ergonomics improve).
It's more a question of why a company like Google would pay several six-figure engineers to build this when, for the same or less, they could build much more powerful solutions.
It all depends on your threat model. Certainly DNS over HTTPS is not a panacea - but it certainly will help when your ISP is doing crazy stuff with its DNS firewall (anyone here from the UK?).
The other (darker) side of the coin is that corporate tools to identify e.g. rogue IoT devices by their DNS signatures will have difficulties...
I suspect it wouldn't be too hard for a nation state or similar to determine the signature of DNS over HTTPS requests to see if someone is making a DNS request, unless you add a bunch of noise to the transaction.
Doubt it. How can a DNS HTTP GET look that much different than a favicon.ico GET over TLS? MTU/size alone is all they can use to shape, but there are many small web requests.
"Therefore the client does not know if a site will require SNI or not."
The client that I use assumes no SNI required. (It intentionally does not support SNI.) If it fails because the website is on a shared host and requires SNI, then it retries via a local SNI-enabled proxy bound to localhost.
"Conclusion: browsers will continue to send SNI."
Some clients/browsers will continue to send the domainname in the clear for every https url, even when it is not required.
Some users might consider that as sacrificing their privacy even when it is not necessary.
But not the client I use. It assumes no SNI is required, by default. It never sends the domainname in the clear for https when it is not necessary.
That is awesome (and I personally also always err on the side of safety/privacy there), but I don't think this will help much.
Anyone that can set up your setup can also just OpenVPN to a remote server and redirect all DNS queries over that connection (which is actually quite easily doable).
Everyone that can't do this would still use SNI, so DNS-over-HTTPS wouldn't provide any security win for them.
No. All actors currently involved with the web are ideologically opposed to backwards incompatible changes, and, as a result, they'd never make a change that prevents a browser from accessing a legacy site.
A hypothetical encrypted SNI would be "optional" in the sense that your Firefox 57 wouldn't use it, but a site could implement it, and Firefox 73 could too and then the actual user of that actual site is protected by upgrading to FF 73.
If you were right everything would still be HTML 4 over HTTP 1.1
> "The DOH server is given with a host name that itself needs to be resolved. This initial resolve needs to be done by the native resolver before DOH kicks in."
You can remember a small set of resolved hosts for this purpose, not unlike remembering DNS servers or CAs to trust. A quick search online also says it's possible to issue a cert to a public IP address, so you can also do HTTPS to a numbered IP instead of a host name.
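For example (a sketch assuming the httpx and dnspython packages, and that the resolver's certificate covers the bare IP - Cloudflare's 1.1.1.1 does this - so there is nothing to resolve before the DoH request):

```python
import dns.message
import httpx

# Talk to the resolver by IP, so no bootstrap DNS lookup is needed;
# the TLS certificate is validated against the IP address itself.
resp = httpx.post(
    "https://1.1.1.1/dns-query",
    content=dns.message.make_query("example.com", "A").to_wire(),
    headers={"content-type": "application/dns-message"},
)
print(dns.message.from_wire(resp.content).answer)
```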
Where does the privacy advantage come from? Once you resolve a hostname privately, don't you still need to use its IP address publicly for your traffic to be routed there?
One case where it might help is shared hosting: you don't know which of the domains hosted on that IP you're accessing. Narrows it down to just a few though, so IMHO not a big improvement.
> One case where it might help is shared hosting: you don't know which of the domains hosted on that IP you're accessing.
That depends on your level of access/monitoring: if it's just a firewall log that shows source/destination IP, then yes - but as soon as you have any kind of packet monitoring, then the domain can be easily sniffed from the SNI header.
You'll need to mask page size if you're trying to hide your access; even on a shared host with 100 domains, I doubt you'll have pages whose byte-size isn't unique.
I think with HTTPS the results were that something like 90% of page accesses could be guessed using metadata (see e.g. https://web.archive.org/web/20090308103611/http://sysd.org/s...). That's going to drop when you don't know the site, but you're going to need more countermeasures to hide your access effectively.
Yes, but reverse DNS is not totally trivial (think AWS). Also current DNS can easily be MitM'd allowing attackers to insert JavaScript into web pages etc.
Side note - I just learned that performant is not a recognized word (https://english.stackexchange.com/questions/38945/what-is-wr...)