DNS over HTTPS (github.com/curl)
284 points by sohkamyung on Jan 17, 2018 | 181 comments



> for privacy, performance and security.

I understand the privacy and security aspects. But I am wondering - how can DNS over HTTPS be more performant in the case of curl commands? A browser could probably persist the connection to the resolver and issue several requests together, but with a single curl command surely there's the overhead of initiating the first DNS resolve, the HTTPS connection, the second DNS resolve. There must be some delay compared to regular DNS.

Side note - I just learned that performant is not a recognized word (https://english.stackexchange.com/questions/38945/what-is-wr...)


> Side note - I just learned that performant is not a recognized word (https://english.stackexchange.com/questions/38945/what-is-wr...)

I don't personally use the word since there are many alternatives, but its use is now definitely widespread and consistently understood. There's really no argument against the fact that it has entered the English lexicon.


I don't use it either, and I agree with the poster on that stackexchange link about it sounding like manager/marketing-speak, but when I see it, it makes me pause for a second to think about what it really means (perhaps that's the point) --- I had this exchange with a coworker not long ago:

CW: ...and this way it'll be more performant too.

Me: Performant? As in faster?

CW: Yes.

Me (to self): Then why didn't you just say it'll be faster?

IMHO it obfuscates meaning and should be avoided; the surrounding discussion/vagueness about what it means (I don't think it means efficient, but only faster --- as in, a high-performance car) shows that too.


Performant to me implies performing better against the relevant metrics. So faster, maybe, but perhaps smaller and more energy efficient too. If the context of the metrics is already understood then it seems quite a cromulent word.


I had just explained the etymology of cromulent to my wife a few days ago - awesome to have an example of its usage in the wild. My hat is off to you, sir.


What's cromulent mean? I looked it up at dictionary.com and couldn't find anything. Or are you making up more words to point out the irony? :)


It's from the Simpsons and that is pretty much exactly the context that it first appeared in! Here's an excerpt from the "Made-up words" article on the simpsons.wikia.com:

When schoolteacher Edna Krabappel hears the Springfield town motto "A noble spirit embiggens the smallest man," she comments she'd never heard of the word embiggens before moving to Springfield. Miss Hoover replies, "I don't know why; it's a perfectly cromulent word".




I wonder if you can use "cromulent" to guess a person's age. Older people would not have watched the Simpsons in the early 90s. Younger people very likely missed this obscure episode. I would guess the user's age to be 30-35.


I may be an outlier but as a seventeen year old, I'd already heard of 'cromulent' (from the Simpsons, but through Reddit).


The episode was first aired in Feb ‘96, at which point I was 20, and in college. At this point the show was still quite popular with my friend group. I am currently 42. You might need to increase your upper bound on the age range.


You're not far out, but I must have missed that episode originally and only watched it in the last few years. I possibly also learnt it from Reddit first.


My editor, in his mid 60s, uses cromulent quite regularly.


Others have filled in the detail of origin and meaning. Yes, my intent was to use another neologism to add an ironic tone. Language is always changing.


To be fair, "faster" in the context of software development can also frequently refer to the time it takes to implement.


Can anyone give examples of cases where “performant” would be better than “efficient”?

To my mind, the difference is that “performant” delivers results quickly, whereas “efficient” uses little energy. A performant solution might be efficient, but not necessarily, and vice versa. Does anyone else share this understanding or am I living in a linguistic bubble?

Not sure about Swedish (Daniel is Swedish), but in Finnish, we have the word “suorituskykyinen”, meaning “efficient” or “able to perform”. I can see how Finnish writers might want to use “performant” to replace this commonly used word. Perhaps there's a similar history for Swedish speakers? I sure have seen “performant” a lot in academic texts written by Finnish and Swedish speakers in the last 20 years.


    efficiency = performance / resources
Efficient has connotations of economy, of optimal use of resources but with a hint of parsimony - another way to get efficiency is to reduce the denominator.

Performant emphasises the numerator much more.


Yep. A top fuel dragster goes from 0-60 mph (~0-100 km/h) in about 0.2 seconds, but it will use about 20 gallons (~75 liters) of fuel going 1,000 ft (~300 m). Very high performance. Very low efficiency.


*increase the denominator.

nitpick aside, agreed


The denominator is "resources," which here means "resource cost." Increasing resource cost reduces efficiency. You want to either increase the numerator ("performance") or decrease the denominator ("resources").


Reducing the denominator is the other way to increase efficiency. The denominator is the number under the line, the divisor.
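A quick numeric check of that ratio (all numbers are invented for the example): raising the numerator and shrinking the denominator both increase efficiency.

```python
# Toy illustration of efficiency = performance / resources.
# All figures are made up.

def efficiency(performance, resources):
    return performance / resources

# Baseline design: 100 requests/s on 4 cores.
base = efficiency(100, 4)    # 25.0

# "Performant" route: raise the numerator (more throughput, same cost).
faster = efficiency(200, 4)  # 50.0

# The other route: shrink the denominator (same throughput, fewer cores).
leaner = efficiency(100, 2)  # 50.0

assert faster == leaner > base
```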


"My mining algorithm, which uses a lot more CPU and memory, is more performant." The key being that performant is often associated w/ speed (i.e. higher performing) whereas efficient ambiguously can refer to many things you have improved on.


Yep, that's how I understand the word “performant” as well. A car that accelerates from 0-100 kph in under 3 seconds would be ”performant”. A car is “efficient” if it burns less than 4 liters of gasoline per 100 kilometers.


'performs more [efficiently|accurately]' would be proper English (not that I speak it well.... :D).

Webster says:

"performant"

The word you've entered isn't in the dictionary. Click on a spelling suggestion below or try again using the search bar above.

    preformant performance performing performances performable preformants perforate conformant perform formant performed informant performer perforata performers performs preformat


The algorithm doesn't perform more efficiently or accurately, though - it's less efficient (it uses more resources) and it should be the same accuracy (or we're comparing apples with oranges).


haha yah i know, it would perform terrible. imagine running a big dns server at ISP level, and having to perform 9 million tls handshakes a second. have fun with that lol. this was just an example of how to avoid the 'performant' word.


If we’re getting pedantic about linguistics, wouldn’t that be, “it would perform terribly” instead of “it would perform terrible”?


In my understanding, performance is based on the value of a measure improving in one way or another.

Efficiency is relative to the amount of resources required to achieve a similar result. To use the car example above, a car that can do 100km/h at 4L/100km is more efficient than another car that does 100km/h at 8L/100km.

The point of absolute efficiency is referred to as the optimum. Essentially, the top performance possible with the minimum resources possible.


I think the word that was being grasped for is 'faster'.

It suffers in that it doesn't sound very technical.


GPU can perform more operations than CPU, but GPU is slower than CPU, not faster.


Yes, throughput vs latency is a very important distinction. An airplane is always faster than a train, but if what you are measuring is throughput, a train can be more performant. It depends on the context, but that's fine. There is no such thing as a made-up word if the word has a meaning, and is regularly used by a group of people.
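The plane/train distinction in numbers (the route length, speeds, and seat counts here are all invented for illustration):

```python
# Latency vs. throughput on an invented 800 km route.
distance_km = 800

plane_speed_kmh, plane_seats = 800, 200
train_speed_kmh, train_seats = 100, 2000

plane_latency_h = distance_km / plane_speed_kmh   # 1.0 h per trip
train_latency_h = distance_km / train_speed_kmh   # 8.0 h per trip

plane_throughput = plane_seats / plane_latency_h  # 200 passengers/h
train_throughput = train_seats / train_latency_h  # 250 passengers/h

# The plane is always "faster" (lower latency)...
assert plane_latency_h < train_latency_h
# ...but the train moves more people per hour (higher throughput).
assert train_throughput > plane_throughput
```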


So is the GPU more or less 'performant?'

That's right - the CPU is faster at completing the job, though the GPU executes instructions more quickly.


Great discussion. I've used the word without thinking twice. Now I'll choose between efficient, faster, or performant depending on context.


It is a recognized word, at least by the Cambridge [0], Oxford [1], and Wiktionary [2] dictionaries.

[0] https://dictionary.cambridge.org/dictionary/english/performa...

[1] https://en.oxforddictionaries.com/definition/performant

[2] https://en.wiktionary.org/wiki/performant


If you recognize it, it's a recognized word.

Dictionaries are descriptive, not prescriptive.


"Performant" is a word in German. It translates to exact what "perfomant" would mean in english if it were an english word. The funny thing is, google translate, translates the german word "performant" to the english word "performant". https://translate.google.com/?hl=de#de/en/per%C2%ADfor%C2%AD...


We have this word "performant" in Romanian, too, it means something that performs well; it's not limited to IT parlance.

I just checked and the French have it too, I can see how it has slowly made its way into English, through immigration and especially the Internet.

A live language is always changing, resistance is futile. :)


It is not actually translating anything since performant is not considered a real word in English, although I commonly see it used in the tech world.


> performant is not considered a real word in English, although I commonly see it used...

This is close to being a contradiction in terms. The purpose of words is to communicate, and if a word is being successfully used to communicate -- which clearly it is -- what exactly does it mean to say it isn't "considered a real word"?

Also, "Considered" by whom? The dictionary? Dictionaries are descriptivist -- they record usage, they don't prescribe it [0]. If it isn't in the dictionary yet, that's only because the usage hasn't been used widely and/or for long enough that it meets the inclusion criteria [1].

(As it happens, "performant" is at the stage where it is starting to pass those thresholds, and is now in the OED [2])

[0] http://englishplus.com/news/news1100.htm

[1] http://public.oed.com/about/frequently-asked-questions/#qual...

[2] https://en.oxforddictionaries.com/definition/performant


> (As it happens, "performant" is at the stage where it is starting to pass those thresholds, and is now in the OED [2])

Strictly speaking, I don't believe your [2] is the OED. It's "Oxford Living Dictionaries", which I assume tries to be more current/dynamic, but might be regarded as less authoritative.

Interestingly, the word "performant" is in the OED itself, with the earliest citation being from 1809 -- but it is listed only as a noun, meaning "A person who performs a duty, ceremony, etc., a performer", not the adjectival usage under discussion here.

http://www.oed.com/view/Entry/262085?redirectedFrom=performa... (requires login)


Would you prefer the dictionary from Cambridge University Press as a source?

https://dictionary.cambridge.org/dictionary/english/performa...


If your ISP is intercepting DNS requests and sending them to a slow or broken server, DNS over HTTPS ensures this won't happen. (Yes, they could block the traffic entirely, or slow it down for fun, but they can no longer intercept and redirect it.)


Ideally the browser and curl only use a local resolver, on your machine or router, which in turn can keep one or more persistent connections to the resolver alive.


It has zero performance advantages

If you had to go the TLS route and wanted better perf, then it'd make more sense to build a new, dead simple, framing protocol that actually specified working pipelining and eliminated HOL blocking by allowing out-of-order responses.

They're just using HTTP because HTTP(S) defaults to port 443 and it'll go through more firewalls, and I guess their thinking here is to just add more complexity and move to HTTP/2.


cURL supports it. See https://ec.haxx.se/usingcurl-persist.html

If your question is how to reuse the connection pool for, say, 1000 DNS queries from one machine: curl already uses a connection pool. Next, to use fewer TCP connections and get faster responses, my first proposal is HTTP/2, which supports multiplexing so that multiple requests can be issued over a single TCP connection. Every response is returned asynchronously, so we can achieve non-blocking behavior.

cURL also supports HTTP/2.

Speaking of words: it seems the author of the curl project prefers "curl" over "cURL"? Worth asking.


> Every response is returned asynchronously, so we can achieve non-blocking behavior.

Same goes for UDP, which DNS normally runs over.

Like the original commenter, I fail to see how stacking DNS over HTTP over TLS over TCP over IP is going to be faster than running DNS over UDP over IP.

Also, you still need to resolve the address of your DNS-over-HTTP server, which you'd probably still do over traditional DNS-over-UDP.

This feels like doing nothing but adding complexity for zero gain.
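For a sense of what the layering looks like on the wire, here is a sketch of the GET form the spec converged on: a binary wire-format DNS query, base64url-encoded (padding stripped) into a `dns` query parameter. The resolver hostname below is a placeholder.

```python
import base64
import struct

def build_dns_query(name, qtype=1, qclass=1):
    """Build a minimal DNS query in wire format (RFC 1035)."""
    header = struct.pack(">HHHHHH",
                         0x0000,      # ID; often 0 in DoH for cache friendliness
                         0x0100,      # flags: standard query, recursion desired
                         1, 0, 0, 0)  # QDCOUNT=1, AN/NS/ARCOUNT=0
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"                       # root label terminates the name
    return header + question + struct.pack(">HH", qtype, qclass)

query = build_dns_query("example.com")
# base64url without padding, as used by the GET method
dns_param = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
url = "https://dns.example.net/dns-query?dns=" + dns_param  # hypothetical resolver
print(url)
```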


Zero gain except going from easy eavesdropping, blocking, censoring, etc. to effectively unblockable, private and secure? Ok.

Also you would probably need to put in the IP of the DNS server manually, exactly like you have to with DNS now.


I would love to know how widely used UDP over TLS is in the wild.


Not quite the same, but QUIC is close to this description, so it might be higher than initially expected?

From 2016: https://www.bizety.com/2016/01/26/half-of-chrome-to-google-t...


From what I understand the "cURL" format of the word is popular mostly with users of the PHP bindings. "curl" is what the project uses, but I'm not sure they really care what other people refer to it as :)


I think it depends on whether you are referring to the project or the executable [1]: "cURL is the name of the project."

[1] https://curl.haxx.se/docs/faq.html#What_is_cURL


You can also do HTTP(S) over DNS: http://code.kryo.se/iodine/ . Nice way to avoid paying captive portals, although probably not legal.
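A rough sketch of the upstream half of that trick (real iodine uses its own framing and several encodings; the tunnel domain here is invented): data is chunked into DNS labels under a domain whose authoritative server you control.

```python
import base64

MAX_LABEL = 63  # DNS label length limit (RFC 1035)

def encode_upstream(data, tunnel_domain="t.example.com"):
    """Smuggle bytes upstream inside a DNS query name (toy version)."""
    b32 = base64.b32encode(data).decode("ascii").rstrip("=").lower()
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    return ".".join(labels + [tunnel_domain])

def decode_upstream(qname, tunnel_domain="t.example.com"):
    """What the tunnel's authoritative server would do with the name."""
    payload = qname[: -len("." + tunnel_domain)]
    b32 = payload.replace(".", "").upper()
    b32 += "=" * (-len(b32) % 8)  # restore base32 padding
    return base64.b32decode(b32)

name = encode_upstream(b"GET / HTTP/1.1")
assert decode_upstream(name) == b"GET / HTTP/1.1"
```

The downstream direction is harder, which is why tunnels like this poll with queries and carry replies in TXT/NULL records.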


Why not?


In the USA and UK, at least, unauthorised access to or use of a computer is criminalised. In some situations you can argue for assumed consent, but the law doesn't operate on "if I can do it then it's authorised". Unless you can show you have permission, it's not authorised, ergo not legal.

AIUI; not legal advice.


But the premise was to circumvent crap such as captive portals. Doing that on your own computer (mostly in a public wlan), I don't see any reason against it.


You're still circumventing security measures to use somebody else's hardware in a way they clearly don't want you to. That's illegal in most cases.


Who is the "someone else" in your case? Where does the someone else's hardware come from? OP mentioned this to get rid of, e.g., captive portals.

Iodine requires a client and a server. Both belong to you, what is the problem here? That I use a network to transmit packets? We are not talking about installing iodine on someone else's computer!


Not sure if you're trolling, but the network is being accessed by bypassing the captive portal. The network is being accessed in a way that isn't permitted.


So the network is not password protected, DNS (or ICMP) works normally, but somehow using it in a particular way is not permitted? Then why does it work at all?


And how should I know that there is a captive portal? I connect (connection is established after receiving IP address!) and use iodine (or similar).


I'm sure the jury will be delighted to hear your explanation.


"I used iodine all the time, so I can't even notice the presence of a captive portal, if any."

"My son handles the computer stuff for me, I don't even know about that logging page you're talking about."

"I thought restricted networks had a WPA-2 password? That's what they use at my workplace."


"The door to the house was open, move of the stuff was tied down, how could i be expected to know I wasn't authorised to use it??"

Judges just aren't that stupid.


Here's the thing: I never give my real name to captive portals. I don't even give a real email address.

Who can tell me with a straight face that this is criminal behaviour?


Only that the comparison is stupid.


Why should this go to court? I'm not sure if you have understood what this thread was about.


> That I use a network to transmit packets?

Maybe I'm not understanding this correctly, but if a coffee shop has wifi and you need to enter info into a captive portal before you can use their network, by circumventing it, the "Someone else" is the coffee shop owner, and the hardware is their router.

"Please get off my router if you don't agree to my conditions". "Nah I'm just using DNS, it's fine" probably is not an admissible excuse.


The assumption from the coffee shop is that wifi == internet == browser, which is not true. Why should I open the browser if I don't need it?


Do you dispute that a coffee shop providing WiFi with a captive portal only intends to provide web access through that portal? Just as they only intend customers to take sugar packets for use in their coffee, etc..

If the shop doesn't intend the use, it's not authorised. Your ethical framework may not put any value on that lack of authorisation, but you see the action is unauthorised, surely?


In this case, you're potentially using the public wlan's router in an unauthorized manner.


What is authorized and what is not? It is (usually) not presented.


Especially when the network is open, because, for instance, Android devices automatically connect to networks like that.


Can I do HTTP over DNS over HTTP over DNS over HTTP over DNS over HTTP over DNS over HTTP over DNS?


You do realize DNS is not a fully featured protocol, right?


What does this shady technique (dns tunneling) have to do with dns-over-ssl?


why shady? (assuming that shady is meant in a negative way)


It's unfit for general use and abuses the DNS protocol to stealthily convey data, hence the shadiness. You're right, it needn't be meant in a negative way. DNS tunneling is slow, needs polling for incoming traffic (due to the way DNS works over UDP) and is usually used to circumvent firewalls for both good reasons (evading censorship) and bad ones (exfiltrating data).


It's clearly to evade a security control.. As if you approached a door, to find it locked, but then discovered the front window was unlocked and let yourself in.. Clearly the occupant didn't want you to enter, and just failed to secure the entire building. Pretty sure nobody would ever suggest it was okay for you to enter in such a way.


I doubt that. If DNS traffic is not filtered or blocked, it is clearly intended to transmit data that way.

On a public wlan I first try to use VPN with port 53 (not DNS). If it works, I'll use it that way.


We prototyped this as an intern project at OpenDNS, too: https://github.com/opendns/OpenResolve

We mostly built it because it sounded cool, but one use case that I thought was important was being able to boot these images in different data centers to collect telemetry on DNS differences geographically (e.g. for hijacking or anycast).


Nice project!

We at DNSFilter are able to collect that same telemetry just by having resolvers at each of the anycast locations -- between the different locations, and passing along eDNS Client Subnet information, many responses will be (legitimately) different. A little hard to see a hijack in that noise.


I can see how DNS over HTTPS addresses security, but I do not see how it helps with privacy. After resolving the IP address over a secure connection, HTTPS still sends the host name unencrypted, so one can just eavesdrop on that. And if encrypted DNS becomes widespread, I suspect that various state-imposed firewalls, like the one in Russia, will just look at the hostname in the HTTPS handshake to block a particular site.


The industry stance seems to be: SNI not yet being fixed should not be an excuse to preclude work on securing the privacy of DNS requests. SNI Encryption is being discussed: https://www.ietf.org/proceedings/94/slides/slides-94-tls-8.p...

We at DNSFilter are working on client to resolver agents which use DNS over TLS as well as DNSCrypt. We're also looking to create a standard for recursive resolvers like us to communicate securely with authoritative providers (no RFC yet).


>HTTPS still sends the host name unencrypted.

This is needed for web servers because of multiple domains (SNI) on a single IP address. I didn't read the spec, but couldn't the hostname be encrypted in the case of DNS resolving? (e.g. the hostname can be sent in the HTTP body).


Well, unencrypted data is by nature vulnerable to tampering. Say, changing the DNS answer to point the client to another IP address, in this case.

Of course DNS over HTTPS is not a silver bullet that will ensure your privacy; instead, it's one step towards better security and thus (hopefully) privacy.


> After resolving the IP address over secure connection HTTPS still sends the host name unencrypted

Correct me if I'm wrong, but I'm pretty sure the host name is generally only sent in an HTTP header, which should be encrypted over HTTPS.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Ho...


No, in order to multi-host SSL sites, the host name has to be sent first so that the correct certificate can be presented.

https://en.wikipedia.org/wiki/Server_Name_Indication


You don't even need SNI; the initial ServerHello, which contains the server's certificate (which contains the DNS name the certificate is issued for), is sent in the clear.


No longer true in TLS 1.3, as I understand it.

The ServerHello starts out by finishing key agreement and then (in the same flight) it assumes its peer now knows the session key and encrypts the rest of the flight, including Certificate.


Interesting; do you have a reference? In particular, how does it complete key agreement without the certificate? The first thing that comes to my mind is that in order to build the encrypted connection, you need to know who you're building it with, i.e., you need the certificate. You could build an encrypted connection with just whoever is on the other end, and then exchange the cert, but I would wonder then how you would prevent a MitM from just building two connections and forwarding the content across. (Though that would defeat a passive listener, at least.)

(Essentially, the problem is that prior to requesting the certificate for X.com, you need to know that you're talking to X.com; a sort of authentication chicken and egg.)


The TLS 1.3 drafts (and given they're in Last Call, presumably the RFC itself once published) use different types of brackets to indicate whether and how things are encrypted in the ASCII art diagrams. The curly braces {} are used for the Certificate structure, meaning it is encrypted using the session keys but it is NOT authenticated application data yet. We'll see why in a moment.

So, firstly I'm going to say, go read the drafts for yourself https://tools.ietf.org/html/draft-ietf-tls-tls13-23 because I am pretty sure that's the most snarky response and also hey, it's right there, if I'm wrong that's a great place to start in proving so.

But then I'll answer your substantive question, because the answer is pretty interesting (and quite unlike earlier versions of TLS).

So, you are correct that we can (and TLS does) do DH without knowing who we're talking to, and that our problem is that although our session is encrypted and can't be eavesdropped, this seems useless because we don't know who the heck we're talking to, surely we can be subject to a Man in the Middle attack.

However, TLS 1.3 has a trick here, after sending Certificate the server sends CertificateVerify. CertificateVerify signs the entire key agreement (which we just did) with the identity from the Certificate. So, a client receiving CertificateVerify either gets a matching signature (the party we did our DH key agreement with _was_ the owner of the Certificate - good) or they don't (we're being MitM'd the phone call is coming from inside the house - get out!)

Once the client has checked CertificateVerify (and presuming they're happy with whatever was inside Certificate) they're truly secure and Application Data can begin to be processed.
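The transcript-binding argument above can be sketched with a toy model. To be clear about the assumptions: real TLS 1.3 uses an asymmetric signature with the certificate's key over a precisely defined transcript hash; here an HMAC with a stand-in key plays the role of the signature, and the message names are simplified.

```python
# Toy model of why CertificateVerify defeats a MitM: the server signs
# a hash of the ENTIRE handshake transcript, which includes the
# key-agreement messages both sides saw.
import hashlib
import hmac

server_signing_key = b"stand-in for the server's private key"

def certificate_verify(transcript: bytes) -> bytes:
    transcript_hash = hashlib.sha256(transcript).digest()
    return hmac.new(server_signing_key, transcript_hash, hashlib.sha256).digest()

# Honest handshake: client and server saw identical messages.
transcript = b"ClientHello|ServerHello|key-shares..."
sig = certificate_verify(transcript)
assert hmac.compare_digest(sig, certificate_verify(transcript))

# MitM: the attacker ran SEPARATE key agreements with each side, so the
# client's view of the transcript differs from what the server signed.
client_view = b"ClientHello|ServerHello|attacker-key-shares..."
assert not hmac.compare_digest(sig, certificate_verify(client_view))
```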


In TLS, the server will send its certificate — and thus the domain name — in the clear.


> I suspect that various state-imposed firewalls, like the one in Russia, will just look at the hostname in the HTTPS handshake to block a particular site.

What if that site does a lot more things? Even they can only upset their population so much by blocking entire domains. With regards to Russia, Google's DNS over HTTPS is a perfect example. "Domain fronting" is a thing for a reason.


In Russia the government does ban domains unless the domains are popular. At some point they even tried to ban based on IP addresses from DNS records, but people quickly learned how to use that to "ban" government-based media by changing IP addresses of banned domains to point to those sites.


Are you talking about leaking names through TLS?

As I understand it, this would fix leaking during DNS resolution, fixing any leaks from TLS is a separate issue. And yes, when that's fixed, you'll still be leaking destination IP.

But if not one step at time, how is it going to get better? :)


One step, but in what direction?


“...for privacy...” and “Google runs one...”

Strikes me as funny. Interesting concept apart from this detail though. Are there any servers that aren’t owned by advertisers and the like?



You could stand up your own server. It isn't that hard or expensive.


I really hope this doesn't catch on widely. It should be a tool for use where censorship issues can't be solved politically.

Otherwise, why even have protocols at all? Just say the only protocol is HTTP. IP, etc. are just an HTTP implementation detail.


I would love this to catch on wildly! DNS through TLS means it's all end-to-end encrypted, DNS reflection attacks are harder, etc. The "why even have protocols" doesn't make sense. Protocol can be layered just fine (HTTP itself is a good example). DNS as it exists now is another random special snowflake that vendors need corresponding snowflake implementations for.


>DNS through TLS means it's all end-to-end encrypted

I didn't say I'm against DNS being encrypted, even with TLS. I just hate that instead of doing the right thing (e.g. political battle with the government, opening a port on a firewall) people choose the laziest way: just tunnel it over HTTP.

>Protocol can be layered just fine (HTTP itself is a good example)

They can, but why do it? Just figure out a way to make your server be yet-another-"REST"-service and pat yourself on the back for cleverness.

>DNS as it exists now is another random special snowflake that vendors need corresponding snowflake implementations for.

As is: SMTP, POP, IMAP, SSH, AMQP, FIX, SWIFT, LDAP, ODBC and a host of other protocols. Why do you use negative language like "random special snowflake that vendors need corresponding snowflake implementations for"? These are separate protocols, which serve separate specific needs. This is how IP was designed to work.


Yes, as a protocol designer, I couldn't care less about the type of data that runs through the protocol. The semantics are much more important. Sequenced delivery with hard requirements on delivery order and reliability? Probably should be TCP based. Something with different semantics would be RTP (specialized for sending media packets which aren't useful after a certain time has elapsed), or SCTP or UDP.


Political battles are hard to do, and take time, sometimes very very long times they take.


I don't disagree at all. But I don't find this sufficient justification to engage in poor engineering practices.

DNS has a different set of use cases than HTTP so, while it can be made to work with enough effort (anything can), HTTP can never be as good at DNS as an actual protocol designed to do DNS can.


I guess someone just needs to do a VPN over HTTP. (Probably done already too)


Why do we have ports at all? Except for 80 and 443, many ports are blocked anyway.

I'd say there is an arms race between admins and users. Step by step features get forbidden and blocked usually in the name of security. For example, NAT took away your globally visible IP address.

The next step is probably blocking domains. In a dystopian vision, Google and Facebook become the proxies for all traffic as all other domains get blocked for most users.


Does anyone have good pointers on DNS over DTLS (RFC 8094) vs DNS over TLS (RFC 7858) vs DNS over HTTPS (this draft)? And if you could throw DNSSEC into the mix, that would be helpful.


With pleasure - (CTO of DNSFilter, this is my world)

DNS over DTLS (RFC 8094) speaks only to UDP. This could be compared with DNSCrypt's implementation.

DNS over TLS (RFC 7858) is TCP focused

DNS over HTTPS is focused on... you guessed it, HTTPS.

So all the same thing, all trying to add security and privacy, just over different transport mechanisms.

While we're at it -- you did not mention DNS over QUIC: https://tools.ietf.org/id/draft-huitema-quic-dnsoquic-00.htm...

Some factors which start to come into play with the various options: compatibility with firewalls and proxies (DNS over HTTPS wins; the others, running over port 853, are at a disadvantage, and even if you put them on port 443 they may get stopped by deep packet inspection).

TLS Improvement knobs: 0-RTT, TCP Fast Open, etc. (You can find a summary of these here: https://dnsprivacy.org/wiki/display/DP/DNS+Privacy+Implement... )

And closely related, performance. It's tough to beat a lossy protocol with minimal overhead (UDP), but there have been a lot of improvements with keep-alive, eDNS0, pipelining, etc.

We're currently doing a lot of testing around this, as we are working to offer useragents and LAN proxies implementing DNS over TLS, compared with DNSCrypt.

Finally: DNSSEC. It lives in a different part of the 'security' spectrum here... The purpose of DNSSEC is to let you authenticate the validity of an answer that traveled through untrusted parties (say you're using a third-party resolver, or you don't trust your ISP's DNS not to tamper with responses). It does not encrypt requests/responses. It does not provide privacy. It answers the question: "Can I trust that this response I received for example.org is the one that example.org intended me to receive?"


Thanks, that's great!

I didn't even know about dns-over-QUIC, but considering QUIC is still a draft itself… (I've been excited about it for quite a while though).

Regarding middleboxes, I forgot about those, and it's very sad that they keep hindering progress this way.


I've been pushing all my DNS traffic over a VPN, transparently, for the whole house, for years now (by setting the VPN remote IP as the upstream resolver on my router).

It seems the only advantage of DNS-over-HTTPS is that it does DNS over TLS on port 443, which is harder for militant netadmins to block.

It's definitely a solution to a niche problem, but if we really want to encrypt DNS at scale then we could do it easily enough by introducing DNSCurve to the SOHO market.


When you run DNS requests over VPN, you are sharing your "browsing history" with (at least) 1 other third party: whoever runs the VPN server, and whoever runs the (public) DNS server.

Your ISP can see the IPs you connect to regardless of whether you use your ISPs DNS server or not (unless of course you tunnel ALL traffic of all clients through the vpn as well.)


It's low hanging fruit. Logging DNS queries is a lot cheaper for an ISP than filtering out and logging SNI entries in TLS handshakes.


DNS through a VPN is only a partial solution though, right? Even if there's a recursive/caching resolver within the VPN (so _my_ direct DNS requests are private) any DNS requests outbound from the VPN go through global DNS.

But, I guess same problem for DNS-over-HTTPS currently.


Correct -- there's not currently any standard for recursive resolvers to communicate securely to authoritative providers. We'd like to change that.


dingo - a Google DNS-over-HTTPS caching proxy in Go https://news.ycombinator.com/item?id=12514170


"Do DNS resolves over HTTPS for privacy, performance and security. Also makes it easier to use a name server of your choice instead of the one configured for your system."

"A server that is acting both as a normal web server and a DNS API server is in a position to choose which DNS names it forces a client to resolve (through its web service) and also be the one to answer those queries (through its DNS API service)."

https://tools.ietf.org/html/draft-ietf-doh-dns-over-https-02
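For a sense of what the draft's GET form actually looks like on the wire: the wire-format DNS query goes into a `dns` query parameter, base64url-encoded with the padding stripped. A rough Python sketch (the resolver URL and query name here are made up, and the header flags are a minimal recursion-desired query):

```python
import base64
import struct

def build_dns_query(name, qtype=1):
    """Build a minimal wire-format DNS query (QTYPE 1 = A, QCLASS 1 = IN)."""
    # ID 0, flags 0x0100 (RD), 1 question, 0 answer/authority/additional RRs
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

def doh_get_url(resolver_url, name):
    """Encode the query as the draft's base64url `dns` parameter, padding stripped."""
    query = build_dns_query(name)
    dns_param = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"{resolver_url}?dns={dns_param}"

print(doh_get_url("https://doh.example/dns-query", "example.org"))
```

Fetching that URL with `Accept: application/dns-message` (per later revisions of the draft) should return the wire-format answer.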

How might this affect users who block ads via DNS?

What about users who want the system DNS settings they have configured (e.g. /etc/resolv.conf) to be honoured?

I can clear browser DNS cache with a couple of keystrokes, but I am not using Chrome, Firefox, IE/Edge, Safari, Brave, Opera, etc.

How easy is it for a user to clear the DNS cache in the popular browsers?

The best "privacy, performance and security" for the user is achieved by using /etc/hosts. HOSTS (i.e., no DNS) will beat DNS on all three, every time. "Flexibility" was not mentioned as a criterion.

There is also the issue of "transparency" to consider. It is easy for the user to check their HOSTS file to see what the current name to address mappings are. How easy is it for the user to check what mappings are in the browser's DNS cache?

Fact: Many/most of the names looked up by a user repeatedly every day do not change by the hour, by the day, the week, month or even year.

As a user, I use HOSTS heavily, with appropriate ed scripts for fast automated modification. I use HOSTS for the transparency, control and speed. Any privacy or security benefit is a bonus.

Contrast: https://en.wikipedia.org/wiki/Fast_flux (need DNS or perhaps "DOH" for this to work)
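The sort of automated modification I mean can be sketched in Python rather than ed (the file contents and mappings below are invented examples); it rewrites the IPs of known names in hosts-format text and appends any missing entries:

```python
def update_hosts(text, mapping):
    """Rewrite hosts-format text: update IPs for known names, append new ones.

    mapping is {hostname: ip}. Comments and unrelated lines pass through.
    """
    remaining = dict(mapping)
    out = []
    for line in text.splitlines():
        parts = line.split()
        if parts and not line.lstrip().startswith("#"):
            names = parts[1:]
            hit = next((n for n in names if n in remaining), None)
            if hit is not None:
                line = f"{remaining.pop(hit)}\t{' '.join(names)}"
        out.append(line)
    for name, ip in remaining.items():
        out.append(f"{ip}\t{name}")
    return "\n".join(out) + "\n"
```

Run over /etc/hosts (with appropriate privileges), this keeps the mappings fresh without ever touching DNS at lookup time.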


> How might this affect users who block ads via DNS?

They should make sure to use a trusted DNS over HTTPS resolver.


Yes, better for security, but it still depends on the reliability of certificate authorities and ISPs. ISPs have the ability to issue the client a bogus certificate while they hold the real one in order to decrypt traffic. Why won't browsers allow the certificate's public key to be readable by JavaScript, so that the remote server can verify the client has the correct certificate? This wouldn't be foolproof but it would significantly beef up security.

Similar to how tor hidden services get resolved, distributed hash tables seem like the way to go. Take the ISPs and Cert Authorities out of the equation.


Making the public key of the remote available to JS wouldn't help, because if you can't trust the identity of the remote server or the integrity of the connection then you can't trust the JavaScript.

Tor hidden services are secure because of the lack of human meaningful names.

See: https://en.wikipedia.org/wiki/Zooko%27s_triangle


Yes, since you can't trust the JS, doing this would be an attempt at security through obfuscation. I mostly just think it's strange that these keys aren't accessible. If there were ever a mismatch detected, the server could at least know with certainty that there's a bad actor on the connection.

Thanks for the Zooko's Triangle wiki. Hadn't seen that one before.


> Why won't browsers allow the certificate's public key to be readable by javascript

Because sites would randomly block reverse proxies, web proxies, enterprise users, people using certain WiFi APs, and it would give advertisers another thing to track users on.

Frankly after looking at what websites did with the User Agent, I'm scared to see what they'd do when a certificate mismatch occurs.


I thought modern browsers ship with pinned certificates for Google and other large companies built in to the browser download.


Public key pinning helps but it operates on the assumption that the initial key is the correct one. I find it curious that these public keys are not readable by the client.


People invented other security mechanisms inside the DNS protocol to avoid the added bandwidth. I don't think it's reasonable to say that everyone can handle the increased load nowadays.

However, I've been thinking of this: for enterprise, you could build a DNS server which serves over TLS connections, and then put a local DNS proxy on clients which listens on 127.0.0.1 and sends out TLS to the custom DNS server. That way any local network connections would be encrypted against passive listeners on the network.

Recursive queries would go out onto the internet in plaintext to 'normal' DNS servers.
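A rough sketch of that local shim in Python (the upstream address is hypothetical; DNS over TLS runs over TCP on port 853 with a 2-byte length prefix, per RFC 7858):

```python
import socket
import ssl
import struct

UPSTREAM = ("dns.internal.example", 853)  # hypothetical in-house DNS-over-TLS server

def frame(query):
    """DNS-over-TCP framing: a 2-byte big-endian length prefix (RFC 1035 / RFC 7858)."""
    return struct.pack(">H", len(query)) + query

def recv_exact(sock, n):
    """Read exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("upstream closed the connection")
        buf += chunk
    return buf

def forward_over_tls(query):
    """Relay one wire-format DNS query upstream over TLS and return the answer."""
    ctx = ssl.create_default_context()
    with socket.create_connection(UPSTREAM, timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=UPSTREAM[0]) as tls:
            tls.sendall(frame(query))
            (length,) = struct.unpack(">H", recv_exact(tls, 2))
            return recv_exact(tls, length)

def serve(host="127.0.0.1", port=53):
    """Accept plain UDP DNS locally and relay each query over TLS."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        query, client = sock.recvfrom(4096)
        sock.sendto(forward_over_tls(query), client)

if __name__ == "__main__":
    serve()
```

A real deployment would want connection reuse and caching, but the point is that clients keep speaking plain DNS to 127.0.0.1 while everything crossing the LAN is inside TLS.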


You can already do this with existing tech. When a DHCP lease is acquired, you also get DNS server information. If you are concerned about the DNS packets being encrypted on your local network, then you can do that already as well by intercepting them with a local firewall and sending them across a tunnel.

IMO, the purpose of DOH is to address semi hostile environments like the free Wifi at the Airport. It is kind of like a VPN for just your DNS traffic.


Maybe some people will "rediscover" DNSCurve.


Which is completely unrelated


It isn't completely unrelated.

DNSCurve secures the link between resolvers and authoritative servers. DNSCrypt (based on DNSCurve) secures the link between clients and resolvers.


I am just wondering why you would want DNS over HTTPS instead of something like DNSCrypt, which would be DNS over SSL (I think OpenDNS does this already)?


DNSCrypt doesn't use SSL or TLS. You may have assumed that because DNSCrypt defaults to 443/TCP but that is only for availability, as this port is usually not blocked. The packet format is very similar to DNSCurve, from which it descends.


My guess is meddling corporate firewalls that spy on SSL/TLS traffic (using custom certs installed on the machines) and block unrecognized traffic.


Actual DNS servers and clients have supported TLS since the early years of this decade. What's gained by adding HTTP transport overhead?


HTTP is not overhead; HTTP is the only transport that can get through middleboxes.


If the traffic is encrypted, how can you tell whether it's HTTP or plain DNS?


Because the purpose of middle boxes is to log your traffic no matter what, usually by installing a root certificate on your devices.


What DNS servers and clients support TLS?


Unbound does it natively, and BIND and the rest use DNSCrypt's shim


While we at DNSFilter are working on support of DNSCrypt, just to add some flavor to your note... it should be mentioned that the original author just abandoned the project, removed the repos from his GitHub account, shut down the website, and it's been set up by a few industry players under a new GitHub org: https://github.com/DNSCrypt

Tough to say what direction the project will take.

I for one welcome other advancements such as Google's efforts.

FWIW, my understanding is if you use Chrome it actually communicates over QUIC


DoH's great.

I'm planning to write an Nginx module to support it soon.


OpenDNS used to offer a secure DNS-tool. I can't find it anymore. Maybe Cisco removed it? Why can't we use something like that?


You're thinking of DNSCrypt.

It unfortunately was recently abandoned by the original developer. Some folks have mirrored it here: https://github.com/DNSCrypt


I'm wondering if libcurl supporting custom DNS resolvers could make it easier to explore alternative DNS systems like namecoin.


As a mac user, how can I set this up locally to run all my DNS over TLS?


So what use does this have?

DNSSEC already gives us validation of the records, and thanks to SNI, this doesn't give us any privacy.

It is more complex, more centralized, and ends up slower than using actual DNS, and doesn't seem to provide any benefits.

Am I missing something?


There is no good reason DNSSEC shouldn't protect query privacy. It doesn't because the most important Internet users --- developers and operations engineers --- are letting them standardize it that way. If it annoys you that we have to resort to elaborate hacks to keep ISPs from monitoring our queries while at the same time undertaking huge projects to migrate to a new DNS protocol, you're not crazy, and you should make some noise. DNSSEC sees barely any use in the real world, and it is not too late to change it.


Easier to prevent censoring. DNSSEC doesn't hide the fact that you're making a DNS call, correct? With HTTPS, censors/MITMs can only see the domain you go to. The censor/MITM won't know whether I went to google.com to search or perform DNS query. Plenty of countries have blocked DNS providers they don't like, now that's harder w/out also blocking the site as a whole.


Easier to prevent censoring from the ISP, but you're now just reliant on another company's systems.

If you just want a solution for having a remote server execute DNS queries for you, you can just use any of the existing VPN protocols purely for DNS resolution, or even make a better performing version.

All that overhead HTTP adds is wasted for this.


> All that overhead HTTP adds is wasted for this.

When you're in an area where your DNS provider is limited by the state, you won't consider it wasted. But if you're not and you do consider it wasted, don't use it. But saying it doesn't seem to provide any benefit is wrong and just telling people to use a VPN for DNS resolution is also wrong (at least until the ergonomics improve).


It's more a question of why a company like Google would pay several engineers six-figure salaries to build this when, for the same cost or less, they could build much more powerful solutions.


It all depends on your threat model. Certainly DNS over HTTPS is not a panacea - but it certainly will help when your ISP is doing crazy stuff with its DNS firewall (anyone here from the UK?). The other (darker) side of the coin is that corporate tools that identify e.g. rogue IoT devices by their DNS signatures will have difficulties...


I suspect it wouldn't be too hard for a nation state or similar to determine the signature of DNS over HTTPS requests to see if someone is making a DNS request, unless you add a bunch of noise to the transaction.


Doubt it. How can a DNS HTTP GET look that much different than a favicon.ico GET over TLS? MTU/size alone is all they can use to shape, but there are many small web requests.


Perhaps for a single request, but I wonder if you could infer something by watching longer term patterns.


"... thanks to SNI, this doesn't give us any privacy."

Assumption: All sites use SNI or will use SNI.

True?

Experiment: List all domains posted to HN (pages 1-20) that require https on any given day. Try accessing each one without SNI.

Result: Most do not require SNI.


Wrong.

SNI has to be sent without any previous negotiation. Therefore the client does not know if a site will require SNI or not.

Assumption 1: browser vendors do not want to break sites relying on SNI

(Confirmed by WHATWG)

Assumption 2: at least one major site will continue to use SNI

Conclusion: browsers will continue to send SNI.


"Therefore the client does not know if a site will require SNI or not."

The client that I use assumes no SNI required. (It intentionally does not support SNI.) If it fails because the website is on a shared host and requires SNI, then it retries via a local SNI-enabled proxy bound to localhost.

"Conclusion: browsers will continue to send SNI."

Some clients/browsers will continue to send the domainname in the clear for every https url, even when it is not required.

Some users might consider that as sacrificing their privacy even when it is not necessary.

But not the client I use. It assumes no SNI is required, by default. It never sends the domainname in the clear for https when it is not necessary.


That is awesome (and I personally also always err on the side of safety/privacy there), but I don't think this will help much.

Anyone that can set your setup up can also just openvpn to a remote server, and redirect all DNS queries over that connection (which is quite easily doable, actually).

Everyone that can't do this would still use SNI, so DNS-over-HTTPS wouldn't provide any security win for them.


don't you think SNI leaks will get fixed eventually?


No. All actors currently involved with the web are ideologically opposed to backwards incompatible changes, and, as a result, they'd never make a change that prevents a browser from accessing a legacy site.

As result, SNI is here to stay.


This isn't how backwards compatibility works.

A hypothetical encrypted SNI would be "optional" in the sense that your Firefox 57 wouldn't use it, but a site could implement it, and Firefox 73 could too and then the actual user of that actual site is protected by upgrading to FF 73.

If you were right everything would still be HTML 4 over HTTP 1.1


The problem with SNI is that it happens before all that negotiation.

This is why today your browser sends the SNI info even if the site will never need it.

And that also means for eternity all TLS1.3 or lower connections will continue to use SNI.



I hate the idea of ad networks using this ...


Must everything be over HTTPS? Must all diversity really be killed like this? It saddens me.



To understand DNS, you must first understand DNS.


DNS is one of those things I have to reference every time. Definitely feels like this.


Isn’t this a chicken vs egg problem?


Yes and no. I think Google's DNS over HTTPS service is run on 8.8.8.8, the same IP as their main DNS service, so you should always know where it is.

Edit: Looks like it's changed and it's now a URL, so you're right, it is a bit chicken and egg.


Apparently:

> "The DOH server is given with a host name that itself needs to be resolved. This initial resolve needs to be done by the native resolver before DOH kicks in."


You can remember a small set of resolved hosts for this purpose, not unlike remembering DNS servers or CAs to trust. A quick search online also says it's possible to issue a cert to a public IP address, so you can also do HTTPS to a numbered IP instead of a host name.
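That bootstrap cache can be sketched very simply (the pinned entries here are made up): consult a small table of remembered resolver addresses first, and only fall back to the native resolver for anything else:

```python
import socket

# Hypothetical pinned addresses, remembered ahead of time,
# much like a list of trusted DNS servers or CA certificates.
PINNED = {
    "doh.example": "192.0.2.53",
}

def bootstrap_resolve(name, native=socket.gethostbyname):
    """Return a pinned address if we have one; otherwise ask the native resolver."""
    return PINNED.get(name) or native(name)
```

Only the DoH server's own name ever hits the native (plaintext) resolver path, and even that can be avoided by pinning it.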


I haven't heard of any respectable CAs that would issue certs for IP addresses...


They're rare but they do exist. The CA needs to ensure you really have long term control over the address.

Most people would never need one, but a few people have a real use for them.


Just stick the address of the DNS resolver in /etc/hosts.


Where does the privacy advantage come from? Once you resolve a hostname privately, don't you still need to use its IP address publicly for your traffic to be routed there?


One case where it might help is shared hosting: you don't know which of the domains hosted on that IP you're accessing. Narrows it down to just a few though, so IMHO not a big improvement.


> One case where it might help is shared hosting: you don't know which of the domains hosted on that IP you're accessing.

That depends on your level of access/monitoring: if it's just a firewall log that shows source/destination IP, then yes - but as soon as you have any kind of packet monitoring, then the domain can be easily sniffed from the SNI header.


You'll need to mask page size if you're trying to hide your access; even on a shared host with 100 domains I doubt you'll have many pages whose byte-size isn't unique.

I think with https the results were something like 90% of page accesses could be guessed using metadata (see e.g. https://web.archive.org/web/20090308103611/http://sysd.org/s...). That's going to drop when you don't know the site, but you're going to need more counter-measures to hide your access effectively.
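To illustrate the attack, a toy version in Python (all the domains and sizes are invented): if each page's transfer size is unique among the candidates on a shared host, a single observed size identifies the page:

```python
# Toy model: a precomputed size -> page table for the domains on one host,
# matched against an observed TLS transfer size.
PAGES = {
    "a.example/index": 14302,
    "b.example/index": 8120,
    "c.example/blog": 51233,
}

def identify(observed_size, tolerance=0):
    """Return pages whose known size is within `tolerance` bytes of the observation."""
    return sorted(
        page for page, size in PAGES.items()
        if abs(size - observed_size) <= tolerance
    )
```

When `identify` returns a single page, the eavesdropper has won despite the encryption; padding responses out to common bucket sizes is the usual counter-measure.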


Yes, but reverse DNS is not totally trivial (think AWS). Also current DNS can easily be MitM'd allowing attackers to insert JavaScript into web pages etc.



