Internet Draft: Let 'localhost' be localhost (ietf.org)
594 points by beyang on Aug 7, 2017 | 167 comments



   First, the lack of confidence that "localhost" actually resolves to
   the loopback interface encourages application developers to hard-code
   IP addresses like "127.0.0.1" in order to obtain certainty regarding
   routing.  This causes problems in the transition from IPv4 to IPv6
   (see problem 8 in [draft-ietf-sunset4-gapanalysis]).
That does remind me of the times I was dealing with weird connection issues in some critical services.

It turned out to be related to the use of "localhost" in the configuration. It resolves to IPv6 on some systems, and that breaks everything because the target app is only listening on the IPv4 address.

Went as far as removing all references to localhost and added lint errors to the configuration system so that no one could ever set localhost as a value for anything.


> and that breaks everything

If you're on a POSIX system, I'd argue that this is a bug in the client. Typically, the client should call getaddrinfo(3); as part of that, the application would either specify directly that it's only interested in AF_INET results, or just filter out non-AF_INET results.

(Further, if you support IPv6 in the client, and thus request such results from getaddrinfo, you should skip to the next result if the connection fails.)
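
A minimal sketch of that client pattern in C (error handling trimmed; the function name and port string are just placeholders), assuming a plain POSIX sockets environment:

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Resolve "localhost" and try each returned address until one connects.
       Set hints.ai_family = AF_INET instead of AF_UNSPEC if the program
       genuinely only supports IPv4. */
    int connect_localhost(const char *port)
    {
        struct addrinfo hints, *res, *rp;
        int fd = -1;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;      /* accept both IPv4 and IPv6 results */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("localhost", port, &hints, &res) != 0)
            return -1;

        for (rp = res; rp != NULL; rp = rp->ai_next) {
            fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
            if (fd == -1)
                continue;
            if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
                break;                    /* connected */
            close(fd);                    /* this address failed; try the next */
            fd = -1;
        }

        freeaddrinfo(res);
        return fd;                        /* -1 if every address failed */
    }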

On the server, you can also bind to both the IPv4 and the IPv6 addresses. If you listen to ::, you should get IPv4 connections too. (Through this[1] mechanism.)

[1]: https://en.wikipedia.org/wiki/IPv6_address#Transition_from_I...


I agree about getaddrinfo(). Applications should have been explicit about wanting AF_INET results in the old gethostbyname() days, and then, when updating to getaddrinfo(), either handled IPv6 correctly or skipped it.

The IPv4-mapped IPv6 addresses thing is a terrible idea that ends up turned off everywhere it makes a difference. Like all of these "transition" ideas, it tries to help, but it just hurts by causing admins/ops/devs to just make sure ipv6 is off everywhere they run into it.

If all IPv6 was just opt-in everywhere, we wouldn't need all significant applications to detect and work around broken ipv6 (in the local network, in the ISP, in the server, in the peer application). If IPv6 was only explicitly enabled, and fully consciously handled when it was, then practically everything would have opted into IPv6 years ago. As it is, things started getting IPv6 support around 2005/2006, and then disabling it around 2007/2008 because it was fake/broken ipv6 in too many places (windows teredo, 6in4, etc).

OS X had to disable IPv6 in one release and then re-enable it after implementing happy-eyeballs in everything. Firefox had to do that. I know about these cases in particular, but I bet all significant network applications had to have a bunch of "detect broken IPv6" code added. Major websites had to disable IPv6 on their main domains (and sometimes added an "ipv6." subdomain). The reason was that some people had broken IPv6 and their computer would try to use it to access their site and fail (but other ipv4 only websites worked). Things are getting better now, but it was a 10-year setback. So much time and effort could have been saved if people with good intentions didn't add automagic transition technologies and just waited for IPv6 to be explicitly added on both ends.


Transition mechanisms suffer from the same problem as the upgrade to IPv6: the need for an upgrade.

So in this sense they were all doomed to fail.

Remarkably, CGN devices are actually the best practical idea for this: fake IPv4 addresses for the dumb, ossified clients and that's it. If something important enough for the end user doesn't work, they'll finally upgrade.


Is it fair to say IPv6 has been generally a failure? Or is it too early for that?


I'd say this growth curve looks healthy and robust to me: https://www.google.com/intl/en/ipv6/statistics.html


Yeah, you're right, it certainly does. I would be curious whether it will go mainstream enough to replace IPv4 in the foreseeable future, though, which was its intention.


It's been quite mainstream in certain contexts and geographies for a number of years now. As an example - most handsets on modern LTE (and newer) networks have been strong-majority v6 for quite a while. The fact that this hasn't been obvious is an argument in favor of v6's success.


Hah, I didn't know, precisely because in my case I've always seen an IPv4 address on mobile...


I'm not sure that's the case. Certainly not in Europe anyway, although it is seeing wider adoption now (particularly 464XLAT based solutions).


The expectation is that networks will switch to IPv6 only internally, and eventually the IPv4-only remainder of the Internet decays until it's no longer an "IPv4 Internet" but just a handful of separate IPv4 networks that are connected to the (now IPv6 only) Internet by protocol converters.

Some US corporations did this already: rather than fuss with being "dual stack" and potentially introducing new IPv4-only services or systems, they switched wholesale to IPv6 and added converters at the edges. By choosing to do this they get most of the benefits of a future IPv6-only Internet today. For example, numbering internally is a breeze: they can auto-number almost everything because the address space is so vast there is no need to "plan" any of it.

Lots of other US corporations are still IPv4-only, indeed that's why the Google graph earlier has a distinct weekday vs weekends / holidays step change in it. At home a very large proportion of people in industrialised countries have IPv6, major ISPs supply it, common household routers understand how to use it, every modern OS groks it. But at work IPv6 is often disabled by policy, in favour of cumbersome IPv4 because that works and changing things at work is forbidden.


All that's needed is for Google to make it a factor in search ranking, and you can bet that we'll all finally be reading up on IPv6 and how to make it work well on our servers, and testing the hell out of it :-)


It's "too needed to fail" - and there's nothing to supplant it.

And it's finally starting to catch on, 10 years late: Google's primary web domains, Facebook, AWS, Comcast and Time Warner cable internet in the US, most LTE cell service in the US.


It's now embedded in huge chunks of internet so I wouldn't call it a failure. The transition could and should have been handled better, and the specification has its flaws (too machine-oriented) which unfortunately will never be fixed, but it's here to stay.


The mechanism you alluded to (dual binding--receiving IPv4-mapped IPv6 addresses on an IPv6 socket) requires explicitly disabling the IPV6_V6ONLY option on each socket. Some systems have IPV6_V6ONLY as the default; I think modern FreeBSD releases do this out-of-the-box. I don't think many Linux distributions enable IPV6_V6ONLY by default, but administrators can enable it globally, necessitating a per-socket reversion.

Some systems, like OpenBSD, don't even support disabling IPV6_V6ONLY and therefore don't support dual binding at all. OpenBSD contentiously argues that dual binding is likely to lead to security exploits, as applications that naively bind to "::" might not expect to de-queue IPv4-mapped IPv6 addresses, possibly breaking their local access control logic. For example, they may set up firewall rules that restrict access to the IPv6 port but forget to set the same restrictions on IPv4 ports.

I'm not sure I agree with OpenBSD's approach, but in any event applications should explicitly disable the IPV6_V6ONLY socket option if they're relying on dual binding. Ideally they should use two different sockets, one for each address family. If the application stack doesn't make it easy to listen on multiple sockets, that's a strong hint that the design is broken.
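
For concreteness, a rough sketch in C of what explicitly requesting dual binding looks like on platforms that permit it (error handling abbreviated; the function name and port handling are illustrative):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* One IPv6 socket bound to :: that also accepts IPv4 connections
       (delivered as IPv4-mapped IPv6 addresses), where the OS allows it. */
    int listen_dual_stack(unsigned short port)
    {
        int fd = socket(AF_INET6, SOCK_STREAM, 0);
        if (fd == -1)
            return -1;

        int off = 0;
        /* Don't rely on the system-wide default; ask for dual binding. */
        setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off));

        struct sockaddr_in6 addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin6_family = AF_INET6;
        addr.sin6_addr   = in6addr_any;   /* :: */
        addr.sin6_port   = htons(port);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1 ||
            listen(fd, SOMAXCONN) == -1) {
            close(fd);
            return -1;
        }
        return fd;
    }

On OpenBSD the setsockopt() call will simply fail and the socket stays IPv6-only, which is exactly why two separate sockets remain the more portable design.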


Does it matter if it's a bug in the client?

Make it work as expected.

Especially when changing infrastructure like IPv4 -> IPv6: don't break existing userbase code! (This is a fundamental precept of Linux development.)


Only in software development are we expected to go out of our way to support people who don't know what they're doing. Imagine a medical student complaining to the professor that the scalpel doesn't cut on the dull end, or a would-be airline pilot, upside down in their seat, complaining that they can't reach the controls.

And your comment about never breaking existing userbase code is as ridiculous as it is unreasonable. This would only be a reasonable expectation if only correct code were possible to run. Since it's possible to run broken code (as this anecdote demonstrates) then it must be possible to fix the system even if that breaks some poorly written, but popular systems.

Does anyone want a world of e.g. Windows where decades old bugs have to be replicated because we can't dare break programs so old all the authors are retired or dead? I don't. Backwards compatibility has a cost and I'm not willing to pay it unconditionally.


There's only so much the kernel can do to protect userspace from itself. When you have an interface which returns an explicitly extensible data structure, you have to assume userspace is going to at least ignore extended data it does not understand. Otherwise you cannot have such interfaces at all.


> Make it work like expected.

Expected by code or by the programmer(s)?

That said, strong API guarantees are the way: document the bug, introduce a fixed version of the API, maybe schedule deprecation in 10-15 years, and carry on with life.

It's not worth it to hunt down every client and make them fix your honest mistake.


Agreed that it's a bug in the client, but these bugs are going to be ever-present in the transition to IPv6.


Hello me from last week. Had exactly this bug: sometimes nginx couldn't connect to the backend (but very rarely, and not reproducible on demand), which I eventually tracked down to the fact that localhost sometimes resolved to ::1 instead of 127.0.0.1, which is what the backend was listening on. Still don't understand why it was only like 1 in 1000 requests, and not every or every other request. Just one more slice of IPv6 mystery.


I've had weird errors like that, where two DNS servers were giving answers to my query rather than just the one that I intended. This will never happen when using TCP, but when using UDP it may. Every now and then the packets would arrive in a different order, and then I'd be paged because some app fell over. Fun times.


Is the Nginx client using the happy eyeballs algorithm?

https://en.m.wikipedia.org/wiki/Happy_Eyeballs

Can be a source of race conditions.


I had a very similar problem recently with docker + nginx. Best I could figure out was the randomness of the problem was being caused by keep alive connection limits. If the connection was opened as IPv4 it would work until it hit the keep alive limit but the new connection might run into the IPv4/IPv6 lookup problem and fail. Never really figured it out for sure. It's definitely thrown some cold water on my plans to go dual stack everywhere all the time. Not sure it's worth the risk of running into these stupid bugs.


Ideally, a server binding to "localhost" should create a listening socket for each of its IP addresses (e.g. 127.0.0.1 and ::1), and a client connecting to "localhost" should try each IP address in order, until one succeeds.

But a lot of standard libraries (including parts of Java and Go) get this wrong, and pick exactly one IP address arbitrarily. When you combine a buggy client with a buggy server, and their preferences for IPv4/IPv6 disagree, then all hell breaks loose.
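
A rough sketch of the server half in C (no error reporting; the function name is illustrative), creating one listening socket per address the resolver returns for "localhost":

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Bind and listen on every address "localhost" resolves to
       (typically 127.0.0.1 and ::1). Returns how many sockets were
       stored into fds. */
    int listen_on_localhost(const char *port, int *fds, int max_fds)
    {
        struct addrinfo hints, *res, *rp;
        int n = 0;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("localhost", port, &hints, &res) != 0)
            return 0;

        for (rp = res; rp != NULL && n < max_fds; rp = rp->ai_next) {
            int fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
            if (fd == -1)
                continue;
            if (bind(fd, rp->ai_addr, rp->ai_addrlen) == 0 &&
                listen(fd, SOMAXCONN) == 0)
                fds[n++] = fd;            /* keep this listener */
            else
                close(fd);
        }

        freeaddrinfo(res);
        return n;
    }

The buggy libraries effectively take only the first entry of that loop instead of iterating over all of them.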

These are the open bugs I'm aware of:

https://bugs.openjdk.java.net/browse/JDK-8170568

https://github.com/golang/go/issues/9334


Or use a single socket for both protocols (set the IPV6_V6ONLY socket option to false).


That works when binding to :: (all interfaces), but it's irrelevant when binding to "localhost", because you can only bind one address per socket.


Yes, I thought it worked for ::1 too but you're right.


An interface on linux has only one ip address.

The "localhost" interface will either designate the ipv4 127.0.0.1 interface or the ipv6 ::1 interface. That's the realm of undefined behavior and system specifics.

This whole IETF draft looks like a mess. They should reserve the names localhost4 and localhost6.


> An interface on linux has only one ip address.

absolutely not true. Neither for IPv4 nor IPv6, where it's even the default to have a multitude of addresses on an interface.

> The "localhost" interface will either designate the ipv4 127.0.0.1 interface or the ipv6 ::1 interface. That's the realm of undefined behavior and system specifics.

no it won't. My loopback (lo) interface has both 127.0.0.1 and ::1 as its addresses.


> An interface on linux has only one ip address.

Eh?

    $ ifconfig lo
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)


> An interface on linux has only one ip address.

Just to point out: while the behaviour of any one OS is a useful data point for the discussion, it's not super useful to say "because OS 'foo' does things in a certain way, a well-used programming language should limit its design to that OS's way of doing things." :)

Hmmm, I guess I'm trying to say that (at least) Go should be fairly OS agnostic about this.


Some genius at my company decided that ~180,000 Windows endpoints needed "localhost" removed from their hosts file, which has resulted in millions of requests per minute for localhost hitting our resolvers just to return 127.0.0.1.

My guess is that it was some hack they tried to disable IPv6, but aside from the insane load it added to the DNS infrastructure, the other result is that if these machines talk to a malicious resolver, their traffic destined for the loopback interface could end up going anywhere and being captured by anyone.

Great job!


If the machines talk to a malicious resolving proxy DNS server, then more than traffic destined for loopback is at risk.

I suspect that removing the "localhost." record was nothing to do with IPv6 and everything to do with a corporate policy to not have anything other than the Microsoft default contents in hosts files, possibly because of concerns relating to malware prevention. The problem is possibly the result of the default hosts content changing in Windows NT 6.1.

* https://support.microsoft.com/en-gb/help/972034/

As of Windows NT 6.1, lookups of "localhost." are handled internally within (as I understand) the DNS Client, and never require inspecting a hosts file or sending a query to a DNS server. So the new default hosts file content no longer contains a "localhost." record. But use the Windows NT 6.1 or later default hosts file content on earlier versions of Windows NT, and one will see "localhost." queries being sent by the DNS Client to a server.

Handling "localhost." within the DNS Client is -- reportedly -- so that the DNS Client can inspect the local machine's protocol support and only return non-empty AAAA and A resource record sets if IPv6 or IPv4 is actually enabled on the machine.


The weirdest was a co-worker who had some simple webserver, which was listening on only IPv4 or IPv6 (but not both). When he went to "localhost" on Firefox it used IPv4 and he was able to see it. On Chrome "localhost" was IPv6 (or the other way around), and he got "Could not connect" error. It confused him no end how this simple web server worked on FF but not Chrome. :)


> It turned to be related to the use of "localhost" in the configuration. It resolves to ipv6 on some systems and that breaks everything because the target app is only listening to the ipv4 address.

You just found a major bug in the application and should complain to the developer. Applications that do not support IPv6 are simply broken and should be avoided at all cost by now.


I always have a local.mycompany.com DNS record that resolves to 127.0.0.1. I can get a valid cert that way too.


I hope you don't publish that record to the world.


It's already been done: https://git.daplie.com/Daplie/localhost.daplie.me-certificat...

Which I find to be a very practical solution for connecting to localhost over https: it frees you from having to install self-signed certificates/CAs on your machine.


Publishing private keys is a violation of the Let's Encrypt terms of service. We are revoking these certificates.

https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016...


Not a great idea to publish private keys for valid certificates. Anyone could probably submit a certificate revocation request to the CA, as the key would be considered compromised.


Why?


I guess anyone on 127.0.0.1 can pretend to be that address. Very unlikely to matter.



Interesting. Still, that requires the attacker to be already running a process on the victim's machine, even if with reduced privileges. Nowadays that's rare, since there's no reason not to give each user its own network namespace, at the very least.


Just a guess: CORS-related attacks


How would that work?


Lots of sites seem to be doing it now. Off the top of my head I know that Box and Spotify both do it.


Got to chuckle about this. The new generation is fearless and naked. Break it all, admit nothing, make it 'better'.

Standards, practices, tradition, culture. They mean nothing when a devops lead has commit rights to the ansible playbook and a will to deliver a fix in 5 seconds flat.


If this doesn't happen or takes too long, there's always lacolhost.com and *.lacolhost.com. I own this domain, have registered it out until 2026 and vow that the domain and all subdomains will always redirect to localhost.

It's easy to type and easy to remember and should always do a good job of expressing intent of usage.


I don't think that quite fits into the security model of many of the purposes for which people use localhost, as they often want to avoid all external dependencies (including Internet access!) and all external trust.

If they do use your domain name then they have to trust that nobody has subsequently seized the name in court, nobody has hacked or DDOSed your nameservers, there's not an interruption of network connectivity between them and your servers, and that the ISPs forwarding the DNS queries didn't substitute a different response for the one you intended to return (since currently your leaf records aren't DNSSEC-signed).


The issue they are trying to solve is that the DNS request might be sent somewhere at all (and thus manipulated). So your solution doesn't address that (as it guarantees a request is sent)


Could it be you were hanging out on #css on IRC, on a network I can't remember, more than 10 years ago? (There usually were only 5 of us there.) I seem to remember someone owning a domain like that.


i'm not really sure why i'm saying this, but you should put an AAAA record there too, so people can benefit from ipv6 loopback, haha


It would make more sense for someone to setup localhost.mydomain.com than to trust a third party.


I wonder if lacolhost.com is just an A record pointing to 127.0.0.1, what would be the use-case to use that instead of just using 127.0.0.1? Is it that some systems require a domain to be used/tested?


lacolhost.com should also resolve to the ipv6 record for localhost.

Being able to test locally while using the full internet name resolution system is a valuable thing as well. Though if this is your use case I wouldn't trust lacolhost: register your own domain.


It doesn't though. Unless I'm missing something obvious, it only has an A record, not a AAAA.


The problem with things like this is that they won't resolve without an internet connection.


You're amazing. I hope you're given more power than you already have.


If people use this in production services, he will have much more power over their services.


I don't think (?) I've ever made this typo. How many hits do you get?

(I feel like "ocalhost", "locahost", or "localhos" would be more common typos.)


I have no clue, it's all handled by the registrar's DNS. I know when it last expired I was getting a good number of emails from people begging for it to come back asap.

The other commentators here are right, it won't solve a good number of the issues in this proposal, but works well if you need something for developing against subdomains.


localhost would be a good name for a bar in SF.


I keep trying to go to this hip new bar called localhost, but somehow directions just lead me to drinking at home.


I lol'd


The anagram alcolhost would be a nice one too


"St Alcohol" and "Alcohol St" are also good anagrams


Did you think of that on your own? God I feel dumb. (I like being made to feel dumb) I googled it and didn't get that answer. I guess remembering ST can be a street or saint is a good rule.


There's the Network Operation Center Network Operation Center.

https://www.yelp.com/biz/noc-noc-san-francisco


Right next to a Foo Bar


It's a tech community center in Philadelphia: http://localhostphilly.com/


this is an actually good idea - it's still a good name if you have no idea where it comes from or what it means


There was the time that Keith Henson tried to explain the local loopback address to Scientology lawyers during a deposition...

http://www.cryonet.org/cgi-bin/dsp.cgi?msg=6289

Henson: (patiently) It's at 127.0.0.1. This is a loop back address. This is a troll.

Lieberman: what's a troll?

Henson: it comes from the fishing where you troll a bait along in the water and a fish will jump and bite the thing, and the idea of it is that the internet is a very humorous place and it's especially good to troll people who don't have any sense of humor at all, and this is a troll because an ftp site of 127.0.0.1 doesn't go anywhere. It loops right back around into your own machine.

https://en.wikipedia.org/wiki/Keith_Henson


What's the "'ho" referred to? An alias for Scientology/SeaOrg, or a particular individual?


I've had web browsers perform a web search for 'localhost', or even just redirect me to localhost.com.

Annoying.


Yeah. The ancients speak of a time when address and search bars were separated.

I think it's a legend.


I remember this. Also, if I remember correctly, seamlessly integrating search and address bar was the thing that really made Chrome take off. Sure, it was an all-around better product than IE/Safari, but to get people to actually download it over just using the default required it to be significantly more convenient, and it sure was.


>Also, if I remember correctly, seamlessly integrating search and address bar was the thing that really made Chrome take off.

That was part of it. What's weird is that the Mozilla suite had the same thing before firefox even existed, in many ways firefox was a step backward. I believe opera did this years earlier too.


Hey, Firefox's still alive and kicking. It's quite ancient, comparatively speaking. :-)

Old wisemen may refer to it as a fire-breathing reptile, others a bird clad in flames. And despite suffering from declining market share in recent years, prophecy foretold that it will one day rise from the red ashes, stronger and more resilient than ever!

(Apologies for running away with the joke)


Firefox has the awesome bar, which works almost exactly like Chrome's omnibox.


But Firefox has the option to disable search functionality in the URL bar - not sure if it still asks you or if it's hidden behind an about:config option these days though. It's also significantly better at matching parts of URLs and titles from your history in my experience.


Vivaldi has both the chrome omnibar and a separate search input.


That's probably due to shitty ISP DNS hijacking/redirecting requests if it can't resolve a name.


I assure you it's not. :)


If you leave a trailing slash it will prevent this problem.


Chrome does this. If I don't specify http or https before the domain, Chrome will try to search any .dev or .loc or ... domain with the Google search engine.

Annoying.


I think a trailing slash is what makes Chrome try to resolve instead of searching.


OMG thank you.


Add a slash to the end, less typing work than adding https:// to the start.


TIL. thx :)


At work someone once spent hours trying to resolve a network issue. Turns out he didn't have a localhost entry in his /etc/hosts and some sadistic person had created a VM named 'localhost' that registered a DNS entry via DHCP.


A similar, common issue is to not have the machine's hostname pointing to a valid IP address in /etc/hosts (99% of the time it should be loopback; some like to point it to a fixed eth0 address), which causes delays in various parts of an otherwise fine OS.
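
For example, a loopback-based layout looks something like this ("myhost" being a placeholder for the machine's actual hostname):

    127.0.0.1   localhost
    ::1         localhost
    127.0.0.1   myhost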


nss_myhostname fixes this without you having to modify /etc for every host.


At least on the OS I use, which is more IPv6 ready than most, /etc/hosts solves this "uncertainty" problem.

I have found that failing to include a localhost entry in the HOSTS file can lead to some strange behavior.

If there are "computers" out there that have no /etc/hosts or deny the computer's owner access to it, then maybe it might be time for an Internet Draft from Joe User.

There should always be a mechanism for the user to override the internet DNS. And applications should continue to respect it.


I remember in the late 90s running into a "mysterious" problem where www.hotmail.com would fail to load, but hotmail.com (without www.) worked just fine.

Spent the better part of a year just remembering to not type "www." until one day they made the domain redirect to the canonical name (breaking both).

After I asked around a bit, someone showed me where the Windows equivalent of /etc/hosts was, and lo and behold, there was an outdated entry for www.hotmail.com there. Deleting the offending line fixed the problem.

Desktop computers are actually the odd one out in letting people manually override DNS -- if you want to fix the DNS for a phone or smart tv or thermostat or video game console, you need to configure DHCP and a resolver on a router in the middle.


> the Windows equivalent of /etc/hosts

Which is, bizarrely, etc\hosts (somewhere in C:\Windows\). I've always wondered why they felt the need to make an 'etc' folder just to have the hosts file in it.


Software historians differ on the degree to which the Windows NT 3.1 TCP/IP stack & utilities were derived from BSD, but it's one possible reason.


A very diplomatic way of putting it.


C:\windows\system32\drivers\etc\hosts

and, if I remember correctly from my Windows days, not only did they make the whole 'etc' folder just to put the hosts file in, but the strangely-named 'drivers' folder didn't have anything else in it either.


/etc/hosts also seems to work on android https://android.stackexchange.com/a/110483


Only if you've rooted your phone, and $EMPLOYER doesn't let rooted phones access certain resources.


I discovered a few weeks back that my Android build has dnsmasq installed in the system directories. Apparently stock.

It's not running. But ... you could have fun with that.


We have two entries in our DNS which point to 127.0.0.1/::1 - localhost and elvis.

This enables the following on Solaris and similar systems:

  $ ping elvis
  elvis is alive


This reminds me of a class I went to at a major company in 1999. We had problems following the setup instructions, which included going to localhost/db-admin-path. After some sleuthing it turned out somebody 'in corporate' on the network we were using had named their computer localhost.


I would very much like to see this draft extended to cover SRV lookup as well.

Right now, section 3 of this draft would prohibit all SRV queries for localhost, which may hinder development and deployment of an SRV-based application. That's an immediate problem.

But not only are there existing applications to which it is immediately applicable - it is a design error in HTTP that plain address records are used for resolution. One day this will be corrected, in which case measures like this should continue to apply.


Indeed.

* http://jdebp.eu./FGA/dns-srv-record-use-by-clients.html

But what should such a standardized SRV lookup for _proto1._proto2.localhost. yield as the answer? For starters, what port numbers?


SRV lookup for localhost names should yield a (probably identical) localhost name that (by the rest of this draft) necessarily then resolves to a loopback address.

For ports we have the list of well-known services maintained by IANA, for which an extract appears on many systems in /etc/services. Local configuration can adjust as necessary.
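
So a hypothetical answer for an HTTP service could look something like the record below (the service labels, TTL and port are purely illustrative):

    _http._tcp.localhost. 86400 IN SRV 0 0 80 localhost.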


Local configuration cannot adjust as necessary. Remember: the headlined article is something that is being proposed to fix into an RFC, and indeed the whole point of it is to hardwire something that, in fact, is currently a matter of local configuration.


Also very important to point out: this same standardisation is missing at the TLD level.

Both for safeguarding internal use, and for reserving a TLD in the global DNS zones. You'll find organisations using .local, .dev (taken by Google on 2014-11-20, followed by .app in 2015) and .zone (taken by an LLC on 2014-01-09) in production as internal domains, with potential conflicts with the Internet's DNS resolution.

More importantly, .dev [1] and .zone [2] are now valid TLDs, so watch out people!

[1] https://www.iana.org/domains/root/db/dev.html

[2] https://www.iana.org/domains/root/db/zone.html


IMO, a lot of these vanity TLDs are stupid and harmful to the web.

macOS has been using .app as its application extension for decades. Now when you want to search for a specific app on the browser you'll have to be more careful.

The fact that they allowed .dev, which is a fairly common TLD for development, is pretty unbelievable.

I haven't seen any evidence that by allowing companies to register these TLDs we've brought forth some kind of improvement or benefit to users.

There's a recent issue with TLDs that makes me particularly angry. There's ongoing work on this homenet spec. Originally it proposed using .home to route exclusively within the local network. But since .home is already used by a large number of people for private purposes, they changed it to .home.arpa. How the heck have we gotten to the point where we can justify allowing .google as a TLD, but we can't reserve something nice and short for non-companies?


.local is reserved for mDNS too, so using it internally via an authoritative DNS is bound to result in issues.



Just add a line to your hosts file mapping lolcathost to 127.0.0.1 and you never have to worry about it again.

No that's not a typo


Does this mean that an entry in /etc/hosts assigning an IP to localhost will be ignored?


Well, the explicit intent of the proposal is to hardwire the localhost = 127.0.0.1 / ::1 mapping, so my guess is yes.


That is not the effect of the draft. The effect is that localhost = lo0. Other addresses are valid.


Not necessarily. The requirement for applications concerned about security properties of localhost names is that the resolved address is bound to a loopback interface, in order to satisfy the test in section 5.1 of this draft.

However, I expect many application developers to mistakenly assume that only 127/8 and ::1 are valid loopback interface addresses.


Or, indeed, to assume (as I did for many years) that only 127.0.0.1 itself is a valid loopback address. I was rather surprised when I looked it up and found that it was the entire 127 block.


I use multiple addresses in this block for local DNS servers, both authoritative and recursive, and various local proxies. Alternatively I can clone tap devices and assign them RFC 1918 addresses. However the loopback works better in my experience.

As an end user, I use /etc/hosts not only as a substitute for DNS but also in addition to it.

For example I may block/redirect a mostly noxious domain via wildcard in a customized root zone file on computer1 (an "authoritative nameserver" for a "recursive cache" that serves computer2) but then edit the HOSTS file on computer2 to make an impromptu single exception for a particular subdomain. Ideally I would prefer to run local DNS and local proxies on computer2 like computer1, but with many of today's "computers" this is infeasible; so computer2 might use computer1 for name lookups, routing, etc., as suggested by another commenter.

There are other ways to make these adjustments but editing a text file that is always present and in the same location is quick and dirty, immediate and particularly easy. This is only one use. HOSTS is quite useful in a variety of situations.

IMHO, the use of local computer1 as a gateway/server for another such as local computer2 only becomes even more important as we see a rise in "computers" that are resistant to manual control by the end user, such as those mentioned by another commenter.

Without having the ability to do IP forwarding, packet filtering, run localhost DNS servers and localhost proxies, as an end user I would struggle to control the internet traffic[1] of many of today's "computers" where the manufacturers have preconfigured them to serve their own interests and attempted to lock users out from making changes.

1. e.g., ad blocking, disabling incessant phone home behavior and other bandwidth conservation measures

In short, I need to be able to control the address for "localhost" without relying on or having to worry about DNS. HOSTS achieves that without the complexity and politics of DNS.


This can be useful for testing - especially before one could safely assume SNI support - but also for setting up various Web (or other network) servers locally, with an entry in /etc/hosts (so dev-local can point to a different local IP than test-local), etc.


On the one hand, this isn't exactly a new idea and in the real world has been happening for years now.

* dnscache from djbdns has handled "localhost." queries internally all along, since 1999. It maps "localhost." to 127.0.0.1 and back again. Various people, including me, have since added code to do the same thing with the mappings between "localhost." and ::1. (http://jdebp.eu./Softwares/djbwares/guide/dnscache.html) I implemented implicit localhost support in my proxy DNS servers for OS/2, as well.

* It is conventional good practice to have a db.127.0.0 and a master.localhost "zone" file on BIND that do this (a sketch of the latter follows this list). This is in Chapter 4 of the book by Albitz and Liu, for example.

* Unbound has built-in "local zone" defaults mapping between "localhost." and both 127.0.0.1 and ::1.
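
For reference, the kind of forward "zone" file the BIND item above refers to looks roughly like this (the timer values and serial are illustrative):

    ; master.localhost
    $TTL 86400
    @       IN  SOA   localhost. root.localhost. (
                      1        ; serial
                      3600     ; refresh
                      900      ; retry
                      604800   ; expire
                      86400 )  ; minimum
            IN  NS    localhost.
            IN  A     127.0.0.1
            IN  AAAA  ::1

with a matching db.127.0.0 reverse zone mapping 1.0.0.127.in-addr.arpa. back to localhost.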

On the other hand, this proposal explicitly rules out all of the aforementioned existing practice, by demanding that both proxy and content DNS servers instead return "no such domain" answers for the domain name "localhost.". That seems like a fairly pointless deviation from what is fast approaching two decades of existing practice, for which no rationale is given and none is apparent.


One time I was debugging a problem for a user of our desktop software (I work on https://expo.io) by sharing his screen and taking over his computer. And it turned out the reason the user was having problems was that in his /etc/hosts file, he had an entry pointing localhost at the IP address of some other computer on his network. Crazy. I have no idea how anything worked on his machine.

Took a while to track that down. It was both bewildering and sort of satisfying to figure it out in the end.


Surprised the more common .localdomain is omitted, with a .localhost domain reserved instead.


or ".lan"


Can anybody with more knowledge point out techniques that this would break?

Are there any software or networking patterns that currently rely on localhost _not_ resolving to the loopback?

EDIT: The RFC mentions that MySQL currently differentiates between the two, but that's it.


Anyone using DNS search lists to have localhost resolve to e.g. localhost.example.com would potentially have problems. Obviously this is a pretty weird thing to do intentionally, but it currently gives you a quick way to get around certain issues. It's ugly, but in some ways that actually makes breaking it more problematic: things which this breaks are likely to be things that are difficult to fix.


Within ".localhost."...

www.localhost.mydomain.com?


The trailing "." signifies the end of the domain name, so your example domain would be unaffected.


Thanks, I misread the draft as '<star>.localhost.<star>'(!)


> Application software MUST NOT use a searchlist to resolve a localhost name.


Hmm I guess people may use IPs other than 127.0.0.1 like 127.0.0.2 for Docker or maybe VPN/VM shenanigans...


This remains valid under the draft.


   The domain "localhost.", and any names falling within ".localhost.",
   are known as "localhost names".  Localhost names are special in the
   following ways […]
Is this not implemented on macOS or am I just misunderstanding?

     ~ ping test.localhost
   ping: cannot resolve test.localhost: Unknown host
     ~ ping localhost.test
   ping: cannot resolve localhost.test: Unknown host


As I understood it, that simply states that if any of those domains were implemented, they should loop back, as opposed to saying that all domains containing "localhost" ARE implemented.


Thank you for the clarification!


To clarify a bit further than what the other comments said (abc.localhost. isn't implemented on macOS):

The RFC draft is talking about domains ending in ".localhost.", not any domain containing ".localhost." (so abc.localhost.example.com wouldn't be localhost).

All domains have to end with a period, or they're not fully qualified. "example.com" could mean "example.com.my.domain." if whatever you're using uses "my.domain." as the root. But "example.com." is always "example.com."

Nowadays that feature isn't used very often, as the root of domains not ending in a period is usually assumed to be in the root zone.


It would be rather surprising if a brand new draft RFC from a google employee were already implemented in macOS...


Yeah of course, sorry, I misread the document. I took that part as to describe the current state.


Sounds reasonable, but would probably break a ton of stuff. Does this provide enough benefits to outweigh the downsides?


What do you think might break?


Badly written software will break.

Or software relying on badly written DNS resolvers.


There was no RFC for localhost yet?! That's pretty surprising... Does this RFC have any practical meaning? People didn't actually register the localhost. domain, did they? Is there an actual line of code that this should change? Are they just trying to promote writing localhost instead of 127.0.0.1?


From the actual draft:

> First, the lack of confidence that "localhost" actually resolves to the loopback interface encourages application developers to hard-code IP addresses like "127.0.0.1" in order to obtain certainty regarding routing. This causes problems in the transition from IPv4 to IPv6 (see problem 8 in [draft-ietf-sunset4-gapanalysis]).

>Second, HTTP user agents sometimes distinguish certain contexts as "secure"-enough to make certain features available. Given the certainty that "127.0.0.1" cannot be maliciously manipulated or monitored, [SECURE-CONTEXTS] treats it as such a context. Since "localhost" might not actually map to the loopback address, that document declines to give it the same treatment. This exclusion has (rightly) surprised some developers, and exacerbates the risks of hard-coded IP addresses by giving developers positive encouragement to use an explicit loopback address rather than a localhost name.

>This document hardens [RFC6761]'s recommendations regarding "localhost" by requiring that DNS resolution work the way that users assume: "localhost" is the loopback interface on the local host. Resolver APIs will resolve "localhost." and any names falling within ".localhost." to loopback addresses, and traffic to those hosts will never traverse a remote network.


To be clear, this is not an RFC yet. It's not even adopted by a working group, although I hope it will be.

Mods: can RFC be removed from the title? [Edit: thanks for updating the title!]


Thank you, we've replaced “RFC” with ”Internet Draft”.


So, an RFRFC?


It's called an I-D, for "Internet Draft."

Very roughly, a Standards Track document in the IETF goes through four stages:

1. Someone submits an I-D as an individual. Literally anyone can do this and such a document carries no weight.

2. A working group (WG) "adopts" the I-D and proceeds to work on the document as a group. The document still carries no weight at this stage.

3. The I-D is published as an RFC. When this happens, the document finally has real weight. The document can no longer be changed, although it can be updated or obsoleted by another RFC.

4. When the RFC matures, it can be promoted to an Internet Standard.

Let localhost be localhost is still in the first stage.


To be fair this is the fourth draft of "Let localhost be localhost". Internet Drafts deliberately "expire" unless you write an updated version every so often proving it's still interesting and being worked on. @agwa says they have "no weight" but pragmatically what matters is always whether people do what the document says, not its supposed "weight" within a standards body.

For example many of you will have used Let's Encrypt, the whole of Let's Encrypt is built upon a standard issuance and management protocol, ACME. But ACME is still only an Internet Draft, albeit it's likely to go to RFC before the end of this year. So it didn't matter that in your view it "has no weight", it had millions of people using it.

If this I-D takes off and is widely accepted by, say, Microsoft and Apple, even if it never becomes an RFC it has a real effect. On the other hand, if it becomes a standards track RFC but Apple decides to ignore it and never promise that "localhost is localhost" in their operating systems then it's worthless despite the "Standard" tag.


The one that I liked to point to was TSIG. ISC and Nominum got RFC 2845 through in a matter of months. Microsoft submitted its first Internet Draft of GSS TSIG in 1998, and it did not get through the RFC process for about 5 years, despite already being in real world use by Windows.


Pretty sure it's already called a "draft."


Hm, I'm not familiar with that acronym.


It stands for Document Requesting All Faults and Technicalities. (Source: I have a PhD in speaking out of my ass.)


I bet you're an excellent party ventriloquist. ;-)


Localhost resolving to IPv6 basically breaks with Docker, which, unless you give special instructions, only listens on IPv4. With curl, for instance, you can use the -4 parameter, but it's probably best we start saying "test the site on 127.0.0.1" in tutorials.


Sorry, but none of that is correct.

First, if you publish a container's port in docker, such as with the -p flag, e.g.,

    docker run --rm -p 8080:80/tcp nginx:latest
Docker will listen, on the host, on ::; it will accept IPv4 connections on that bind. (Through IPv4-mapped IPv6 addresses[1], which is a transition mechanism.)

But even if we force Docker to bind to only IPv4, curl will still work:

    docker run --rm -p 127.0.0.1:8080:80/tcp nginx:latest
curl will attempt an IPv6 connection first in this case (since localhost resolves to ::1), and fall back when it fails (since localhost also resolves to 127.0.0.1). If you pass -v to curl, it will tell you as much:

  *   Trying ::1...
  * TCP_NODELAY set
  * connect to ::1 port 8080 failed: Connection refused
  *   Trying 127.0.0.1...
  * TCP_NODELAY set
  * Connected to localhost (127.0.0.1) port 8080 (#0)
Perhaps there is some pathological case where Docker and curl and IPv6 and localhost don't all work together, but one would need more information to tell. But it doesn't "basically break" Docker.

[1]: https://en.wikipedia.org/wiki/IPv6_address#Transition_from_I...


I'm sorry but none of what you've said is correct - I'm speaking from experience from the Docker community.

Watch this ASCII recording to see the issue

https://asciinema.org/a/xM8m0iqOepkSwCRBTIP9hYXGU

We run into the issue on RPi/Raspbian - curl hangs indefinitely. I had someone report to me that he couldn't access localhost:8080 in a web-browser using a Docker container for FaaS because it was resolving to this IPv6 - he was on Arch Linux.

Perhaps you can reverse your hasty down-voting?


You're right; that opening line was unduly harsh. My apologies.

It's unfortunate, however, that the creator of the video did not capture the output of curl -v, or perhaps even an strace. The logic presented in my first post is nonetheless what curl does, and should apply, so something else is going wrong here. I still think there's a wide difference between "in this particular case, something is causing the IPv6 address to hang" and "Localhost resolving to IPv6 basically breaks with Docker".


Nobody said it broke Docker (it was the resolution that "broke"). But the resolution clearly did not work, and we had people running through tutorials only to find curl would hang and time out unless switching to 127.0.0.1 or passing the -4 flag.


hang == block. Docker is badly written and basically broken. With those corrections, carry on.


This sucks. I have registered, and have been actively using, a 'localhost' domain name under one of the new generic TLDs for emails and account signups for quite some time now.


This wouldn't affect that: it applies to the bare TLD localhost and its subdomains -- not localhost.<something>


Why couldn't they just redirect "localhost" at the DNS level to 127.0.0.1?


Well, because that IP isn't what localhost is supposed to be pointed to. It's supposed to point at 127.0.0.1. The IP you mentioned is more likely to be the IP of the router you're connected to.

Usually 127.0.0.1 is limited to the computer's internal network via a loopback network interface, and `localhost` should resolve to just that; but this RFC makes it so that, no matter what, `localhost` will point locally instead of somewhere else, which helps people who want to bind to just the internal loopback and nothing else.


For context, as the parent has edited their post: the original question was asking why localhost couldn't have a DNS entry for 192.something.


Trust: there's no “DNS level” which you can reason about reliably across the wide range of networks people use. Developers would still get bug reports because some ISP resolved localhost to the IP address of their search / ad page, or a dodgy home router returned its setup page, etc. Lest that seem contrived, there are major ISPs – national level in Europe – which ran transparent HTTP proxy-caches which stored pages for years beyond their expiration date. I have zero confidence that some ISP wouldn't through incompetence or marketing have localhost resolve to something you don't expect.

Building it into the network stack means that entire class of errors stops happening.


oh, screw off. Run your own servers and mandate your configurations for search and acceptable servers or go tsig and penalize your providers. this isn't hard. In fact for a scalable|good business it should be mandatory.



