This makes me think of some AD best practices I read a long time ago. One of the practices was to never use made up local TLDs like .internal or .local because some day they might be real and get picked up by someone.
Essentially you should always use a domain you control both outside and inside, like a regular gTLD or ccTLD.
Pretty much every single company I've worked for with AD has broken this rule.
> Essentially you should always use a domain you control both outside and inside, like a regular gTLD or ccTLD.
Yep. We use two domains - everything on A is "public" and everything on "B" is internal. The root of "B" is a static page on AWS that is gated behind our VPN so it serves as a quick "can you hit our infra at all" check when troubleshooting, and catches the people who are convinced they've enabled their VPN.
One thing to note here that we haven't solved (We've only got two "infra" people, we're a small company) is how you handle an internal portal to an external service. If our public domain is maccard.com and private is maccard.dev, where does internal-admin.maccard.com _actually_ live? Our solve for this is we have an internal.maccard.com for this very specific use case, but I'd much rather it was admin.live.maccard.dev
We have enough issues with DNS that adding split DNS into the mix is a ball ache I don't want to contend with.
We actually have the DNS for our private domain set publicly, and all the actual work happens on a load balancer which is on the network. We're fully remote so this avoids the "my communal WiFi provider seems to have issues with the VPN" (which is what we had when we used split DNS)
We use split DNS and the admins can't even do it right, they keep fucking it up and configuring one DNS view but not the other, so when I'm on VPN I randomly can't use certain domain names.
Also as another commenter mentioned, it is impossible to tell based on the name if it is an internal or external resource
I'm curious what split DNS offers that a separate internal zone wouldn't.
And having a website on the domain.tld adds shenanigans.
One of many examples I had: when Outlook loses its connection to Exchange (e.g. the S2S VPN is down), it starts the autodiscovery process, hits domain.tld (because users have email@domain.tld, duh) and complains to the user with scary messages (which also block the process until the user clicks something). Which is totally understandable, because the website is on some public hosting, so the CN in the cert is from the public host at best and != domain.tld.
Using corp.domain.tld or even techdomain.tld solves this completely and also lets you use public certs (LE in the current era) even on the 'local' side of the network.
Aside from all the technical issues, the biggest problem I have seen with such an approach is that it is really hard for employees to remember what is external and what is internal that way. Distinct domains help there.
Made me think this wasn't such a great idea, particularly the part about Facebook employees not being able to use their keycards to enter the buildings during the site outage.
The ".local" domain specifically is a bad choice as many platforms use MDNS instead of DNS for looking up those names. Leading to issues resolving names on some client devices. It's also very common due to Microsoft suggesting it as best practise in the early days of AD.
I'm wondering about one thing, now that I've read a few "may cause issues" and "is used for mDNS" replies: what the F is mDNS actually doing in the background?
Is it really going to assign "lancelot.roundtable.local" to my washing machine on a whim, which leaves the microwave unresolvable?
Can't I instruct the mDNS server running on my machine to respond to a particular name ending in .local?
Can't eg. dnsmasq insert itself into a conversation on 224.0.0.251 saying "Let me answer this question" for certain queries?
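To the last two questions: at least with Avahi, the answer is yes. It reads static mappings from /etc/avahi/hosts and answers mDNS queries for them on 224.0.0.251. A sketch (addresses hypothetical; note that mDNS names are conventionally a single label directly under .local, so multi-label names like the roundtable.local ones above may not resolve everywhere):

```
# /etc/avahi/hosts: static mDNS entries Avahi will answer for
192.168.0.20 lancelot.local
192.168.0.21 microwave.local
```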
I'm a little fuzzy on this, but my understanding is that for mDNS to be reliable it is required that all .local hosts implement mDNS to allow for conflict resolution.
If you’ve set up x.local in your DNS for your dryer but your laptop uses mDNS, it’s possible that your laptop’s mDNS will get a response from your microwave that it’s reachable at x.local. The solution (not an expert, please check this) is to set up the dryer in DNS as x.domain-that-you-own or x.home.arpa
> RFC 6762 was authored by Apple Inc. employees Stuart Cheshire and Marc Krochmal, and Apple's Bonjour zeroconf networking software implements mDNS. That service will automatically resolve the private IP addresses of link-local Macintosh computers running MacOS and mobile devices running iOS if .local is appended to their hostnames. In addition, Bonjour devices will use those .local hostnames when advertising services to DNS Service Discovery clients.
> Most Linux distributions also incorporate and are configured to use zeroconf.
> ..The connection of Macintosh and Linux computers or zeroconf peripherals to Windows networks can be problematic if those networks include name servers that use .local as a search domain for internal devices.
Kinda weird to blame the people who made an RFC, instead of the industry leader who recommended using .local completely on their own, without support from the wider industry. This is explained in the next couple paragraphs, where you stopped copying.
You're right, the confusion about the use of .local domain seems to be more due to Microsoft going back-and-forth about it.
> At one time, Microsoft at least suggested the use of .local as a pseudo-TLD for small private networks with internal DNS servers.
> ..However, more recent articles have cautioned or advised against such use of the .local TLD.
> Microsoft TechNet article 708159[7] suggested .local
> ..but later recommended against it.
> The Microsoft Learn article "Selecting the Forest Root Domain"[8] cautioned against using .local
> By default, a freshly installed Windows Server 2016 Essentials also adds .local as the default dns-prefix when a user doesn't select the advanced option, resulting in a domain with .local extension.
Test is best used in temporary setups that you use for testing things.
Example is best used in documentation only.
Invalid is weird and confusing.
Localhost as a TLD should still be on the machine itself. Keeping in mind that 127.0.0.1 is not the only loopback address at your disposal – you have the whole /8. You could bind different services on different loopback ip addresses and then assign host names in the .localhost tld to those.
So for example you could run different Wordpress instances locally on your machine on ips 127.87.80.2, 127.87.80.3, and 127.87.80.4, so that each can run on port 80 without colliding with one another, and without resorting to having non-80/non-443 ports.
Then have corresponding entries in /etc/hosts
127.87.80.2 dogblog.localhost
127.87.80.3 cooking.localhost
127.87.80.4 travel.localhost
And use those domains to access each of those from your browser. Then you don’t even need to keep all services behind the same Nginx instance, for example, as you otherwise would if you had different domain names but were using 127.0.0.1 and port 80 for all of them.
Whereas having the localhost tld refer to hosts elsewhere on a physical network.. that’s about equally as weird and confusing as “invalid”.
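The scheme above can be sketched with nothing but the Python standard library (high ports here to avoid needing root for port 80; on Linux every address in 127/8 answers on the loopback interface by default; addresses and names are the hypothetical ones from the /etc/hosts example):

```python
import http.server
import threading

# Hypothetical sites, each bound to its own loopback address.
SITES = {
    "127.87.80.2": b"dogblog",
    "127.87.80.3": b"cooking",
}

def make_handler(body):
    """Build a handler class that always returns the given body."""
    class Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        def log_message(self, *args):
            pass  # keep the demo quiet
    return Handler

servers = []
for addr, body in SITES.items():
    # Each server listens on the same port but a different loopback IP,
    # so there is no collision between them.
    srv = http.server.HTTPServer((addr, 8080), make_handler(body))
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    servers.append(srv)
```

With the /etc/hosts entries in place, http://dogblog.localhost:8080/ and http://cooking.localhost:8080/ then reach different instances with no reverse proxy involved.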
> Keeping in mind that 127.0.0.1 is not the only loopback address at your disposal – you have the whole /8.
Tell that to Google Chrome developers, who are so arrogant they think they know better than the operating system, and force-resolve *.localhost. to 127.0.0.1
‘The ".localhost" TLD has traditionally been statically defined in host DNS implementations as having an A record pointing to the loop back IP address and is reserved for such use. Any other use would conflict with widely deployed code which assumes this use.’
Reading that sentence implies a single loopback address. I appreciate that IPv6 “fixed” this confusion by having a single loopback address of ::1. While you are correct that IPv4 reserved the entire /8, in practice the expectation is generally that 127.0.0.1 is the loopback address, and using other addresses in the 127/8 space tends to lead to unexpected issues across many pieces of software. The intention of the .localhost domain is to ensure that DNS resolution never resolves to something external, for security reasons.
Yes, but who would want .test, .example or .invalid as an internal domain? Also, they are too long (yes, that matters).
What I've seen lately is '.int' for internal usage. While this is a valid TLD, it is only for international organizations, and it is not possible for "normal" people to reserve a domain with that TLD, so unless your company is called "WHO" or similar, you shouldn't have any problems...
People thought .dev was safe until it wasn't. Bear in mind they can still do something like "enroll all of .int in HSTS preload" (like was done for .dev), and suddenly your browsers will permanently refuse to load any of your internal sites.
No one competent would ever have thought dev. was safe. It's quite simple: if you don't own the domain, or it hasn't been reserved (e.g., home.arpa.), don't use it!
The only one of those appropriate for accessing actual hosts would be .test, and obviously using .test for non-testing purposes would also not be appropriate.
I had a very large client that was squatting someone else's address space in the pre-WWW days. When I pointed out the problem this would eventually cause and that RFC1918 was the (then) answer, the CTO said "no one here needs to get to the University of Tokyo". I was long gone by the time the SHTF.
I had another, much smaller client whose leadership insisted that because they were using it, the owners would just have to not use it. Some sort of imagined IP squatters' rights, I guess. I doubt the DoD accommodated them.
Why not use the designated private IP ranges? There are more than enough addresses there, unless it's some crazy application with millions of instances each needing an IP. But then use IPv6?
I saw a successful attack on a .dev domain doing exactly this. Links on PCs worked correctly, but phones showed a scam site, so emailed links were attacked.
It was hard to fix because they couldn’t get the spoofed domain, and there were so many copies of bad links everywhere.
My work used .local which means mDNS and service discovery etc doesn't work. Very annoying. What are they teaching these network admins? Why don't they even know what a domain name is for?
This happened with the .dev situation. I switched to .d ever since. I’ve read somewhere that new TLDs are required to be at least 2 letters so I guess I’m safe this way.
Devil's advocate: using an Internet-public domain for internal purposes will publicly expose your internal hostnames if you enable DNSSEC on the Internet-public domain. This is a problem if you're required to enable DNSSEC, e.g. for FedRAMP compliance.
The number of cases where this is actually a legitimate concern, IMO, is extremely small, and I'm personally of the opinion that using Internet-public domains for internal purposes is generally fine. But it's still important to point out that the number of cases is not zero.
No it wasn't! NSEC3 is crackable the same way a 1990s Unix password file is. This was such a big problem that two competing approaches were introduced to defeat it: "whitelies", which I perceive as the "best practices standard" answer, requires servers to operate as online signers (they should have been all along) so they can generate dynamic chaff records to foil enumeration, and NSEC5.
>Pretty much every single company I've worked for with AD has broken this rule.
It's a lot better now. Ever since companies started moving from on-prem Exchange to O365 in droves I've noticed that most orgs I work with (painfully) updated their domain so their user principals align w/ their O365 mailbox.
There's only one customer I have that still uses a ".local" domain for AD, and they got bought out last year. (By an org that uses a real FQDN.)
Hah that's a funny idea that you'd be relatively safer if you used something outrageous like godsavethequeen or crappymcfartlegs. But you never know what promotional TLDs are created in the future. Or how free the process might become, as remembering names becomes less and less important to a majority of internet users.
Oh, it's worse. The original request for the TLD was only granted because in the application Google specifically mentioned that it should be reserved due to its unofficial use by developers, and that if anyone else got it then they might put real domains on it.
I specifically use some .dev domains because of HSTS. Some of us don't cling to http and I prefer an error rather than transparent fallback to unsafe protocol if I screw up the config.
The first point is valid but that is mostly ICANN's fault, they should have proposed it as reserved instead of selling it.
Just my personal experience, but I got my relatively uncommon name on .dev for $12/yr so I'm pretty happy with that. While the situation worked in my favor, I agree .dev probably should have been the official internal-only TLD.
The last time I set up DNS at home, I decided to use a fictitious and undelegated subdomain under my ISP's domain name. This structure did not create any "extra" problems in the short term.
But I suppose that I still ran the risk that the subdomain could "become real", or draw attention from security admins, or I would change ISPs.
How? I’m not saying this is great practice — there are certainly better options — but no one outside your network will ever know about it. It also won’t matter if you switch ISPs.
It's not a problem (well, most of the time), but you would see the requests for 'internal' resources in DNS (ie your machine is not on your network but tries to resolve the internal DNS records) and in certificate checks even for non-public PKIs
The .local bullshit caused so much headache. When I was working on Ubuntu-based POS systems we had to make sure avahi or any kind of mDNS wouldn’t be installed. It became a running joke ("we got another MS MVP here") every time we had issues with a client with a .local domain.
.corp, .home and .mail should also be perfectly viable for private use after ICANN eventually decided to cease all processing of applications for those TLDs.
"Whereas, on 30 July 2014, the ICANN Board New gTLD Program Committee adopted the Name Collision Management Framework. In the Framework, .CORP, .HOME, and .MAIL were noted as high-risk strings whose delegation should be deferred indefinitely".[1]
Deferred indefinitely is not quite the same thing as reserved for private use. For home use cases it's probably good enough, but a corporation will want more assurance than "the current ICANN has stopped processing applications for this TLD."
Actually, if history is any indicator, we can’t trust any of these companies with this. Remember the org tld private equity fiasco? ICANN only did something after public outcry and EFF interest.
Another possibility is `.zz`, which technically can be a ccTLD but is a user-assigned ISO 3166-1 alpha-2 code, and its last position makes it extremely unlikely ever to be repurposed as a valid code even in that setting. In comparison, some user-assigned codes like `XZ` are often used for temporary country codes, so `.xz` would be less appropriate.
It seems that ICANN did consider this choice among others, but rejected it for lack of meaningfulness:
> The qualitative assessment on the latter properties was performed in the six United Nations languages (Arabic, Chinese, English, French, Russian and Spanish). [...] Many candidate strings were deemed unsuitable due to their lack of meaningfulness. [...] In this evaluation, only two candidates emerged as broadly meeting the assessment criteria across the assessed languages. These were “INTERNAL” and “PRIVATE”. Some weaknesses were identified for both of these candidates. [...]
I wonder if this means that they only scored the highest among others and all candidate strings were indeed unsuitable, but that they had to pick one anyway. I'm not even sure that laypersons can relate `.internal` with the stuff for "internal" uses.
I'm opposed to using `.zz` for this. Instead, I think it should be assigned as a new TLD for people living in any country: because it is a reserved ISO 3166-1 alpha-2 code, it will never be assigned to any country. It's the most convenient TLD for any human on earth.
There is not much to gain from having a new two-letter gTLD instead of existing all-purpose three-letter TLDs (say, `.com`, `.ooo` or `.xyz`). Especially given that every TLD needs a registry, and `.zz` would be a particularly extra-special gTLD that needs a special procedure to select the registry. Better to make it a reserved TLD if it is ever going to be used.
It's so sad this is still necessary. We've had service discovery via avahi/zeroconf for years. Why does it seem like uPNP has been and gone? It shouldn't be necessary to type hostnames any more.
After reading through the threads, I still think that '.lan' is a better non-reserved suffix to use for this than '.internal'; however, my opinion rarely has significant weight in the grand scheme of things.
And when you're using it to connect to devices not physically on your LAN, like servers reached via tailscale? `.internal` implies your internal network, however it may be set up; `.lan` implies your "local" area network.
It would be nice to see this paired with more widespread support for the Name Constraints TLS extension, which would in theory allow internal CAs to be restricted to issuing certificates for .internal domains. That would open up a lot of very interesting applications in terms of streamlining HTTPS on local networks, for example, ACME on openWRT routers.
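As a sketch of what that pairing could look like: with OpenSSL, a name-constrained internal CA is just an extensions section applied at CA creation time (section name hypothetical):

```
# x509v3 extensions for a CA restricted to issuing under .internal
[ v3_internal_ca ]
basicConstraints = critical, CA:TRUE
keyUsage         = critical, keyCertSign, cRLSign
nameConstraints  = critical, permitted;DNS:.internal
```

Clients that honor name constraints would then reject anything this CA signs outside .internal, which limits the blast radius of trusting it.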
Absolutely. We're getting closer, but it's hard to measure what actually supports it as bettertls (the caniuse equivalent in this space) doesn't track it.
As others have mentioned there already is the ".home.arpa" TLD but I definitely think ".internal" is a step up in terms of clarity. That said, for my internal network I just put things under a subdomain of a domain I own so I can use HTTPS with a proper SSL cert
> I just put things under a subdomain of a domain I own
Yup, same here. Great in combination with ACME DNS-01 so your DNS server can request all those certificates and then push them out to your devices. (Otherwise the hostnames need to be externally accessible, which means either exposing the internal devices, or mucking around with split-view DNS. The former is a terrible idea, the latter is also DNS server complexity and worse than doing DNS-01 IMHO.)
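A sketch of why DNS-01 avoids exposing the devices (names, addresses, and the token value here are hypothetical placeholders): the ACME server only ever queries a TXT record, so the A record can point at a private address or live only in internal DNS:

```
; what the ACME server queries during a DNS-01 challenge
; (the TXT value is a placeholder for the real challenge token)
_acme-challenge.nas.home.example.com.  300 IN TXT "TOKEN-FROM-ACME-SERVER"
; the host record itself never needs to be publicly reachable
nas.home.example.com.                  300 IN A   192.168.1.20
```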
IMHO if you are already doing some process of "push certificates out to devices," you'll likely be much happier with getting a wildcard cert using DNS-01 and change that update process from "all devices all the time according to their schedule" over to "all devices but once every 80 days"
I do appreciate the threat model of one device getting owned leaks all your certs but security is always a trade-off between security and convenience. It also lowers the load upon the LE servers, for what that's worth
Not sure everything updating at the same time is more "convenient" than staggered failures. For one, if multiple things break at the same time, it's easier to lock yourself out of things in more complicated ways. Also it's generally the first refresh that breaks, and everything at once only helps when you freshly roll out certs to a whole bunch of devices… if you add things incrementally (e.g. either because you finally get around to it, or you just bought something new) it makes no difference if it's all in the same cycle. Except now you have a wildcard cert floating around…
Counterpoint: .internal is much easier to understand; .intra sounds like a reference to "intranet" as opposed to "internet". That's terminology networking people would use, but .internal is likely something both non-tech and tech people would intuitively understand.
I don't think `.internal` is intuitively understandable either. In spite of other shortcomings, `.home` and `.corp` convey their private-use nature much better than `.internal`. (But they won't be suitable for anything other than that.)
Isn’t it possible to set up a default search domain? Then in most places you could just type something like https://site/ and your DNS config knows to look for https://site.internal/
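On Linux that's the resolver search list; a sketch (nameserver address hypothetical):

```
# /etc/resolv.conf (or Domains=internal with systemd-resolved)
search internal
nameserver 10.0.0.53
```

Single-label names like `site` then get `internal` appended before lookup (subject to the resolver's `ndots` option).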
I liked this idea at first, but the more I think about it the less I'm sure it works. Intra seems like a word which denotes physical boundaries to be within. It prefixes words like intramural, intramuscular, and intracellular. It's used for intranet as well, and while this usage of the prefix works, it has more broad use-cases than that. On the other hand, internal seems very clear and direct, and might suit a more digital, organizational context with less of a physical or spatial aspect to it.
Also, since intra is typically used as a prefix, it seems strange to use it bare. People familiar with intranets will probably make (or assume) the connection, but others might find it unclear while "internal" would likely not be.
ICANN: [Proposed Top-Level Domain String for Private Use](https://www.icann.org/en/public-comment/proceeding/proposed-...) "The Internet Assigned Numbers Authority (IANA) has made a provisional determination that “.INTERNAL” should be reserved for private-use and internal network applications(...)"
Didn't .local start out this way until it was co-opted by Apple for some network abomination? Any new private domain is just going to get co-opted by something else soon enough. Browser authors and network service authors are going to start using it for random, incompatible purposes and break everything.
If you need DNS, register and use a real domain name. Everything else is going to be a hack. Anyone tech-savvy enough to know what an internal, unroutable TLD is, and have a use for one, is going to be just as comfortable and capable of managing a real domain.
I support the idea of something like .internal, but I'm certain it will be made useless for its intended purpose in short order.
Zeroconf is only considered an abomination by folks who view any naming system other than DNS to be an abomination.
Domain names are not slaved to DNS, they predate DNS by over a decade, and there is room for more than one naming system.
If DNS has to surrender one of their precious, precious, TLDs (lord knows there are soooo few to choose from) which, again, existed BEFORE DNS was even a twinkle in Paul Mockapetris' eye, so be it.
Messing up your .internal configuration won’t result in leaking queries to the public.
And maybe secondarily this may encourage tool development for supporting internal names and make it easier for setting up informal or per department configurations.
I recently ran into this, after using .local for a long time and installing something with mDNS. nslookup gave the correct IP, but ping got confused.
A quick google did not deliver a decent reserved domain, but multiple people suggested .home
I use .localnet to go with the name of localhost, as this has been suggested by ... one of the RFCs, but I can't remember which.
If .localnet ever becomes a real TLD, well, I'm pretty sure the entire global infra is going to collapse and not necessarily be my problem.
Edit: And to be clear, I'm doing this for my house, not some enterprise setup; using real actual FQDN for internal services at a company, especially one that is multi-site/cloud, is still the best advice.
This is exactly how a committee would design it if none of the participants had actually used an internal domain.
For example, in Google, https://go/foo had "go" as technically a TLD, and the memorable suffix that followed was already part of the path and not the domain name. It made it easy to type or include anywhere, including chats, posters, presentation slides, etc.
If they were to follow this proposal instead, you'd be typing or including https://go.internal/foo , which while more explicit largely defeats the point of the short URL.
This was very common in Windows shops back in the NT and even post-NT4 days to leverage the hostname as the URL (http://exchange, http://sharepoint, and so on).
If you have a customer and DNS seems to work strangely, check whether you have a so-called single-label domain for Active Directory, i.e. the whole domain name is just "internal" (so hosts appear as host1.internal, or crusty.internal if you have more creative admins) or, even worse, "local" (db.local).
On Windows, name resolution on single-label domains behaves a bit differently, using NetBIOS resolution, which can prevent you from e.g. adding a new host to the domain from a different subnet; you might see this as a DNS failure. Of course, it is not DNS's fault if it wasn't asked in the first place.
The server authentication story is fairly weak. Multiple companies may use the same .internal domain name and none of them can get a TLS certificate from a public certificate authority. This means they'll each need to operate a private CA if they want to authenticate connections to the server (and encrypt with HTTPS). A major problem with this approach is that computers (especially laptops) travel between networks and can end up trusting more than one private CA. This means that you can have multiple servers using the same domain name, but operated by different orgs, and each appears valid to the end user. Session cookies and other data can leak when this happens.
I think the right solution is that we should require domain registration (google.internal, microsoft.internal, etc.) to avoid these conflicts. A public CA may be able to verify ownership, avoiding the need for private CAs.
I built a service [1] that does this and is compatible with Let's Encrypt. The trick is that I only allow users to set ACME-DNS01 TXT records, not A/AAAA/CNAME records. So you'll still need to run internal DNS for those.
What's the benefit over just buying a regular domain for my internal stuff? The result would be about the same, and I wouldn't be dependent on you never going out of business.
Even with a TLD, going out of business is a concern. Take Freenom as an example, they had rampant abuse and ran into funding issues. People who previously used that service for free internal domains have been looking for a new home.
What's different is that the public suffixes I operate cannot publicly host content, which should protect the service from the abuse concerns that plagued Freenom and other free public suffixes. That reduces cost and should keep the site running.
I recommend buying your own domain if you don't mind the cost. A free solution for domains with TLS on internal networks is valuable to many.
To be fair, .dev TLD seems to imply developer or development. .internal is a broader name and even if .dev was an option it probably wouldn't be selected.
Also good to reserve localhost since some system resolvers will actually resolve any subdomain of localhost as 127.0.0.1. (I think systemd-resolved does, but I know for sure glibc NSS with the nss-myhostname module does.)
The GP describes resolver software, which corresponds to item 6.3.3, not a caching server. This does specify the same behavior.
In RFC terminology, "MUST" > "SHOULD" > "MAY", so there is some wiggle room there.
6.3.2 permits Chromium and other apps to hardcode localhost names as such, instead of using a resolver.
A very popular vector for adware/malware is to take over the system resolver, or replace the DNS client configuration, so this is one reason Chromium jealously guards 127.0.0.0/8
That's too long. I just bastardize an existing TLD on the local network, like home.net. Some browsers don't even allow made-up names. "Internal" is too long to type.
The answer is simple and straightforward: nothing like this is going to happen, because short abbreviations and any potential short alternatives can be sold, and sweet, sweet money can be extracted.
Wait, so all the '.io' domains are actually registered by people operating in the British Indian Ocean Territory?
Seriously, .lan is just a convention. For all I care they could come up with any three-letter thingy, as long as there's a mutual understanding that no global DNS will ever resolve it.
2. ".local has since been designated for use in link-local networking, in applications of multicast DNS (mDNS) and zero-configuration networking (zeroconf) so that DNS service may be established without local installations of conventional DNS infrastructure on local area networks." https://en.wikipedia.org/wiki/.local
That's a good list of reasons, however it seems the biggest concern is if you run a dedicated DNS service on your network.
For a simple home network setup, as long as naming conflicts can be managed, it looks like mDNS is quite handy.
On a side note, I find .local to be best suited for the purpose, since from the language perspective it's easier on international users than .localhost
The newly proposed .internal comes close, but .local still looks more semantically flexible or maybe this is a cognitive bias of mine.
.local is used by mdns so your .local machines can conflict with discoverable devices and services. I guess they wanted something that wouldn't conflict?
It shouldn't. There is no way to prove ownership of a domain, because everyone owns it. Both a genuine company and their attacker have the right to use the .internal TLD, so both should be granted a certificate. This makes it completely trivial for the attacker to MitM the company's TLS connections.
The only option to somewhat-securely run TLS would be to have the company run their own internal CA, and trust its root certificate on all internal clients.
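For scale, a minimal sketch of such an internal CA with the openssl CLI (all names hypothetical; a real deployment would add SANs, name constraints, and proper key handling):

```shell
# 1. Create a self-signed root for the internal CA
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=Example Internal CA"

# 2. Generate a key and CSR for an internal host
openssl req -newkey rsa:2048 -nodes \
  -keyout host.key -out host.csr -subj "/CN=wiki.internal"

# 3. Sign the host cert with the internal CA
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 90 -out host.crt
```

Every internal client then has to be told to trust ca.crt, which is exactly the distribution problem described above.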
I suspect you'd need to generate your own, unless they intend on allowing people to register them. It's hard to provide a SSL for the 100,000 different "tv.internal".
(to be fair, you generally can't get an .int domain registered. "int is considered to have the strictest application policies of all TLDs, as it implies that the holder is a subject of international law.")
… now that I think about it, "foo.in/ternal" makes so much more sense …
There's an existing TLD that is a string prefix of the new TLD.
Apart from lookalike attacks I'm also wondering if this will do weird things while you type in an address, e.g. if you try to type "foo.internal" and pause for a second after "foo.int"… your application may run off and do lookups or even prefetches.
That happens pretty regularly anyway, e.g. `example.com` vs. `example.co/m` or `example.net` vs. `example.ne/t` (and both `.co` and `.ne` allow SLD registrations).
Crypto is/could be still fine, it will still make it harder for others to see the contents of the conversation that is going on between the two machines.
What you are losing is the modicum of confidence that a website is the real deal because another party (that browser and OS manufacturers trust) took the time to, at the very least, verify ownership of the domain.
Automatically trusting any cert raises the bar for attackers from passive snooping to MITM. On a home LAN/wifi, MITM is pretty much as likely as passive listening. At the very least you need TOFU (trust-on-first-use) for any kind of real attack prevention.
Currently you can get encryption if you care to configure it, so some have crypto and some don't; but if all certs are trusted, then everybody gets compromised crypto, which isn't much better than no crypto, so it's IMHO a downgrade.
The problem is ownership and trust roots. If you don't uniquely own a domain name then you can't get a TLS cert for it from a public CA. Private CAs still work, but are challenging.
I've been exploring private-use-only domain registration at https://www.getlocalcert.net/, which is compatible with LetsEncrypt.
Alternatively I use a map file loaded into the memory of a loopback-bound forward proxy. No DNS.
I also use loopback-bound authoritative DNS to a limited extent as it provides wildcards.
There are ways to avoid using DNS.
Most web developers do not understand DNS, or at least dislike it, and some get annoyed by the HOSTS file. Quite funny. But I'm not a developer. DNS is something I understand well enough, I like it, and, in addition, the HOSTS file is useful for me. But sometimes it's most useful for me to avoid DNS.
You can set up a server listening to port 53 that will return the corresponding entry from a "hosts file" if you query it for an A record. To avoid the file growing uncomfortably big, it can be split up amongst hierarchically arranged servers.
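A minimal sketch of that idea in Python, assuming made-up hostnames and using an unprivileged port (a real deployment would bind port 53 and handle more record types than A):

```python
import socket
import struct

# Hypothetical "hosts file" contents, loaded into a plain dict.
HOSTS = {"printer.lan": "192.168.1.50", "nas.lan": "192.168.1.60"}

def parse_qname(data, offset=12):
    """Extract the query name from a DNS packet, starting after the header."""
    labels = []
    while True:
        length = data[offset]
        if length == 0:
            break
        labels.append(data[offset + 1:offset + 1 + length].decode("ascii"))
        offset += 1 + length
    return ".".join(labels), offset + 1  # skip the terminating zero byte

def build_response(query, hosts):
    """Answer an A-record query from the hosts mapping, or return None."""
    name, end = parse_qname(query)
    qtype, qclass = struct.unpack("!HH", query[end:end + 4])
    ip = hosts.get(name)
    if qtype != 1 or qclass != 1 or ip is None:  # only A/IN, known names
        return None
    # Header: same ID, "standard response" flags, 1 question, 1 answer.
    header = query[:2] + struct.pack("!HHHHH", 0x8180, 1, 1, 0, 0)
    question = query[12:end + 4]
    # Answer: pointer to the name in the question, type A, class IN, TTL, IP.
    answer = struct.pack("!HHHIH", 0xC00C, 1, 1, 300, 4) + socket.inet_aton(ip)
    return header + question + answer

def serve(hosts, addr=("127.0.0.1", 5353)):
    """Loop forever, answering UDP queries from the hosts mapping."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(addr)
    while True:
        query, client = sock.recvfrom(512)
        reply = build_response(query, hosts)
        if reply:
            sock.sendto(reply, client)
```

Splitting the file amongst several such servers then just means pointing each one's dict at its own slice of the namespace.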
While I have no need to look up an address in local computer A's HOSTS file from local computer B, as this is not how I use the HOSTS file, in the event that I did want computer B to look up addresses in computer A's HOSTS file, there seem to be many possible options; I could not even come close to listing all of them. (The discussion here is about computers on a local network. Is there really a need for a "distributed hierarchical system"?)
But I'm not interested in using the HOSTS file in this way on the local network. I'm more interested in IP addresses than "domain names". I am not a fan of web browsers; I make HTTP requests from the shell command prompt and from shell scripts. For example, I like to create shortcuts for certain IP addresses so I do not have to type them, e.g., when using netcat. For me, the HOSTS file works perfectly for that purpose. I use this functionality every day.
Not every computer on my local network has the same ability to lookup names and IP addresses. Most have zero access to DNS data. No lookups. Some may only be able to lookup a few remote addresses. I might put those in the computer's HOSTS file.
There is a fear or hatred of /etc/hosts amongst web developers: a regurgitated origin story about DNS and a perpetuated myth about the HOSTS file, having to do with constantly changing IP addresses. But the truth is that the conditions of the internet have changed. As someone who uses static, stored DNS data, and as such possesses a large chunk of historical DNS data, I have proof that, for the websites I may visit, most IP addresses do not change frequently.
Domain names are overrated. Web marketing hype. For example, no one uses a domain name to log into their router. But no one at home is getting internet access without typing an IP address at least once to set up a router. If I want to type a short, memorable name instead of an IP number to reach a computer on the local network I can make an entry in /etc/hosts. Using computers that have no /etc/hosts and no control over DNS sucks. Let web developers use those computers.
How many times have I seen developers copy entire portions of RFC 1035 into their code as a "comment". Too many to count. They will always struggle to understand it.
Why not tell us the name of the router so we can learn about something different? It would be interesting to see a router with a built-in DNS server. Most home routers I have seen require people to type 192.168.0.1, 10.0.0.1 or whatever to set it up or to change settings. If anyone wants to argue otherwise, I could provide links to countless PDFs of manuals online showing this step in the setup instructions.
The router I'm referring to is specific to a provider. Wouldn't be very useful. I've also observed Fritzbox routers using the fritz.box domain for internal networking out of the box.
Typing "fritz.box" into a browser is useless unless one already has an internet connection.
If it's a remote DNS query, then is that really "internal networking"?
Looks like there's an ad for NFTs at fritz.box along with some links to Javascript files and nothing else. No content. I think I'd rather just use a local address.
Somehow you skipped step 2, which says: "Enter the address http://fritz.box". That is the expected way to connect to the device, and it worked for me even before the internet connection was set up. What you quote is the alternative method.
It does look like the domain fritz.box is not owned by AVM, the manufacturer of the device. Apparently they didn't manage to register this domain once .box became available. So in the future they might want to use fritzbox.internal, if this proposal gets approved.
Ask yourself: why would an alternative method be required?
Typing "fritz.box" into a browser without an internet connection will not accomplish anything. To even get an IP address for "fritz.box" there needs to be either (a) an appropriate entry in /etc/hosts assuming the browser does not ignore /etc/hosts, (b) a DNS server on a loopback address or (c) a DNS server on the local network, _and_ the DNS server needs to have the IP address for "fritz.box" (so the person would have to know the address already, before connecting to the internet) _and_ the browser has been configured to use that local DNS server.
To demonstrate, assuming there is not a DNS server listening on the loopback address 127.23.59.88, change the DNS settings in the operating system to 127.23.59.88. Then try typing "fritz.box" to set up the router.
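The dependency on a reachable, knowledgeable resolver can be sketched in a few lines of Python: build a raw A-record query for a name and send it to whatever resolver address is configured. If nothing answers at that address (or the resolver doesn't know the name), the browser has no IP to connect to. The resolver addresses in the comments are assumptions for illustration:

```python
import socket
import struct

def build_query(name, query_id=0x1234):
    """Build a raw DNS A-record query packet for the given name."""
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN

def lookup(name, resolver, timeout=2.0):
    """Send the query to resolver (ip, port); return the raw reply or None."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_query(name), resolver)
        reply, _ = sock.recvfrom(512)
        return reply
    except OSError:
        return None  # timeout or port unreachable: no resolver, no address
    finally:
        sock.close()

# lookup("fritz.box", ("127.23.59.88", 53)) should return None: nothing
# is listening there, so the name can never become an IP address.
# lookup("fritz.box", ("192.168.178.1", 53)) would only work if the router
# itself answers DNS and knows the name.
```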
The reason an "alternative" method is provided in the instructions is because without an internet connection, typing "fritz.box" will not accomplish anything except generating an error.
The reason the company suggests that people type "fritz.box" instead of an IP address is likely because the operator of the "fritz.box" website is advertising NFTs for sale. Not to mention the data the operator of fritz.box, and potentially their marketing partners, will collect about people who own these routers. For example, every time someone configuring their router types "fritz.box", the operator of the "fritz.box" website gets to know about it.
The "alternative" method is the most reliable method, and the most private one. If we review the manuals for thousands of routers, we learn it is the most common method.
Yea sure if people change their resolver away from the one provided by DHCP, they're on their own. But when they get their connection configured by the router, it works, because the router will act as resolver and resolve fritz.box to itself. As I said, worked fine for me.