This reasoning is all based on valid, but purely technical, issues on the hosting side.
A general rule for any customer-facing business is to put the customer first. I could list 200 different reasons why you should make the customer register an account with their email address before they can purchase something from you. However, if you put the customer first, in many cases it is easier for them if they don't have to do that. Having the www before the domain name adds unnecessary visual clutter and, from the customer's point of view, an unnecessary redirect before they can get to your site. A lot of sites use a minimalist style everywhere, and it's great. Having the www there for technical reasons is putting the user second in those cases.
I think that www vs. no-www, as a matter of "putting the customer first" is so INCREDIBLY insignificant, compared to the thousands of other decisions that go into a product, that it's ridiculous we're even having this conversation. This is looking for optimization in the wrong places at its finest.
Ever since the first web browsers, way back in the early 1990s, it has been commonplace to leave out the port number. The web browser adds it automatically.
Similar logic would lead us to leave off the "http". And similar logic would lead us to leave off the "www". The trend has been to simplify the URL as much as possible.
Nobody is suggesting the user should be forced to type in the protocol, the subdomain, or the port number. If the user types in:
apple.com
It should lead to where the user wants to go.
However, there are good reasons for using the www subdomain as the canonical URL, and it is also worth noting that some users will habitually type in www anyway.
If you don't want to include the subdomain in marketing material, then there's nothing stopping you from leaving it out, just as there's nothing stopping you from leaving out the protocol.
Redirection ensures that visitors who type in your URL reach you regardless of which form they use, and also ensures that search engines index your canonical URLs properly.
So I don't think you're advocating anything they don't.
Annoyingly, this can be difficult to do if the A record for your domain doesn't point to a webserver. There are other legitimate things it could point to, like an authentication domain controller or a session border controller. Do you really want to be running a webserver there, even if it's only redirecting?
I'm not familiar with session border controllers, but DCs can be and usually are placed somewhere at the apex of the domain, via SRV records.
If you really have trouble, you can tell your firewall to route ports 80 and 443 to another machine and everything else to the device you need. (I've run a hackier version of this that used netcat inside inetd: we didn't want a web server on the machine that owned the domain name, but there was another nearby web server cluster that we added a virtual host to. And we were fine running inetd and netcat on the machine.)
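For the record, the firewall approach can be a couple of rules at the gateway. A sketch in iptables-restore format, assuming a Linux gateway where eth0 is the WAN interface and 192.168.1.20 is the separate web host (both are made-up placeholders):

```
*nat
# Send only web traffic to the dedicated web host; everything else
# continues on to the device that owns the domain's A record.
-A PREROUTING -i eth0 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 192.168.1.20
COMMIT
```

The same shape works with any NAT-capable firewall; only the syntax differs.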
I've done something similar but using iptables and dnsmasq to route DNS differently depending on where the requests were coming from and what domain they were asking about.
Your argument that eliding the www is better for users is logical, but you failed to make any argument whatsoever that it is significant. Given that you were responding to a statement that the difference isn't significant, you're basically attacking a straw man.
> www vs. no-www, as a matter of "putting the customer first" is so INCREDIBLY insignificant
I don't know, I've always been fascinated by the premium of domains that are just one character shorter, and all of the startups that exclude a vowel to get a compressed (or maybe just available) name. That could reflect actual user preferences.
UX theorists convinced me over the last decade that user behavior is shaped by tiny moments and irritations that seem insignificant at first glance. An extra few hundred milliseconds of delay may seem barely perceptible, but it can kill a site. It's not implausible that a few extra keystrokes could do the same.[0]
On the other hand, redirects seem like a happy medium, so long as they're fast enough. nasa.gov uses a redirect, and that seems fine. Note that they were driven to that (from 'www'-only) after confused fans kept writing in to complain that "http://nasa.gov" was a dead end and that they didn't "even know the basics of running a website."[1]
[1] https://blogs.nasa.gov/nasadotgov/2011/05/31/post_1306860816...
NASA's case provides a real example of something Jakob Nielsen pointed out in the first link: usability is a slave to expectations. So if enough popular sites are using naked domains, and your naked domain just 404s, some users will dismiss your site as unreliable.
I was going to write the exact same thing as you. But then I thought about Chrome removing "http://" to avoid displaying useless and dense information to the user. What about www?
Well, if the org hosting the content is being nice and following convention, sure.
But, there's nothing stopping me from running a webserver listening on port 80 (so, accessed in the browser at http://example.com) that serves a picture of a baseball.
I can also run a webserver listening on port 443 only (with SSL/TLS set up, so accessed in a browser at https://example.com), on the same machine, that serves a picture of a dog instead.
This sort of breaks the rules/conventions though because you expect the resource to be the same by nature of the URL you're using to access it. But nobody has to follow that rule
That's one way to set it up, but there's no guarantee of that. Servers may switch anything based on TLS status or really any other property of a client request.
I agree with this. The people saying it's technically possible to get a different webpage at that resource address are missing the point - it's trivial to do A/B with the same site, same port, same protocol, based on user IP, time of day or a RNG.
The point of URL is to identify a single address owned by one guy. Removing the www subdomain means you have two addresses, possibly owned by two guys.
The protocol name is part of the URL, so it is possible and technically valid that the https version leads to different content than the http version. Some things are certainly different, for example if there are ads and other linked resources on the https page, they all need to go through https, while on http page, there is no such requirement.
Technically I guess you could make it a subdomain, but no one would actually do that on a production site; in my 14 years I have never seen it used, not to mention that major search engine bots will look up both. More often than not, the general user will simply type in the domain name without the www.
`www` used to literally be a different host in a network (and in some cases, I'm sure still is) specifically designated for WWW traffic. Think of universities in the 90s which had their existing infrastructure and an Internet-facing host on their primary domain, and they want to add a web server. They may have had a firewall, probably no load balancers, so routing port 80 around their primary host was much more complicated than just throwing up a new host and DNS entry.
I regularly come across sites that only work with "www.". Common with university sites that use the subdomain hierarchy a lot. For extra fun, make the behavior reversed depending on if you are inside or outside their network.
Same with my uni. And then you have sites that only work with, and others that only work without the www.
But it makes sense. The first subdomain before the uni domain specifies the faculty, many of which have their own datacenters. Then many of those in turn have their own servers on their network, and www. is often one added later on.
Oh yeah. And then someone enables HSTS and a subdomain doesn't support HTTPS, so now you have to keep a browser around that never ever is allowed to contact the parent site...
A redirect is also completely invisible to the user, and should only add a few microseconds to the page load. But you should always redirect www. if you are not using www. Many people instinctively type that into the address bar when you tell them a domain name.
$ time (curl -L www.reddit.com > /dev/null 2>&1)
real 0m0.410s
user 0m0.040s
sys 0m0.008s
$ time (curl -L reddit.com > /dev/null 2>&1)
real 0m0.389s
user 0m0.036s
sys 0m0.012s
So for Reddit, I'm going to make the cost of the redirect 21 milliseconds.
I cleared Firefox's cache, opened the waterfall diagram, and looked at the time between when I hit enter (assuming that's 0ms) and the time when the request for the www.* address went out. I was planning to subtract off DNS time if necessary but in all cases it hit the cache and contributed 0ms.
I didn't have wireshark open so I don't really know what happened with google. It surprised me too. Maybe something had to be re-transmitted? Now it seems to take 90-100ms. Perhaps I should have done best-of-three, but my point wasn't about precise numbers, it was about orders of magnitude, and "tens to hundreds of ms" is definitely more in line with what I expected than "a few µs".
So, first off, what this does is expand to the expression:
'-o /dev/null' '-o /dev/null '
Even if we remove the latter space by using `{,}` instead of `{,\ }`, curl still returns error code 23 for me -- CURLE_WRITE_ERROR.
curl seems to interpret `'-o /wtf'` as a command to write to the file ` /wtf`, so this only makes sense if you have a directory called ` ` in the folder you're running from.
You can therefore do this correctly with:
-o/dev/null{,}
and that correctly writes the contents to /dev/null without issuing a curl write error.
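The expansion itself is easy to check without involving curl at all; printf shows exactly what words the shell hands over:

```shell
# Brace expansion with an empty alternative duplicates the word verbatim,
# so curl would receive one -o/dev/null per URL, with no stray space.
printf '%s\n' -o/dev/null{,}
```

Run under bash; POSIX sh has no brace expansion, which is another reason the `{,\ }` form behaves surprisingly.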
Thanks, it sure looks less ugly with -o/dev/null{,}
I couldn't find any other way to get curl to stay silent and still output redirect times. Hence the crude hack.
(Obviously my bash and curl versions had no problem with the spaces or I wouldn't have posted it)
You're calculating load time the wrong way: you shouldn't measure from outside the process by calling 'time' as another process from the shell. Consider using curl's profiling options next time.
I'm aware I'm nitpicking and maybe being too theoretical now, but the processing time would vary with whatever application the user is using. I'm with negus here, who basically means the same thing, I guess. The generic danger here (regarding benchmarking) is that you're also benchmarking curl itself. If curl/"the web client" handled 30x redirects super inefficiently, these results could lead to wrong assumptions.
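A sketch of measuring with curl's own timers instead, against a throwaway local redirect server so network jitter and curl startup are out of the picture (port 8301 and the /dest path are arbitrary choices for this demo):

```shell
# Start a tiny server that 301-redirects / to /dest, then ask curl itself
# how long the redirect hop took.
python3 - <<'EOF' &
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/dest':
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'ok')
        else:
            self.send_response(301)
            self.send_header('Location', '/dest')
            self.end_headers()
    def log_message(self, *args):  # silence per-request logging
        pass

HTTPServer(('127.0.0.1', 8301), Handler).serve_forever()
EOF
server_pid=$!
sleep 1

# %{time_redirect} covers only the redirect step(s); %{time_total} is everything.
out=$(curl -sL -o /dev/null \
  -w 'hops=%{num_redirects} redirect=%{time_redirect}s total=%{time_total}s' \
  http://127.0.0.1:8301/)
echo "$out"

kill "$server_pid"
```

This way the redirect cost is reported by the client that actually followed it, rather than inferred by diffing two whole-process timings.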
There are plenty of places in the world where internet latency is a big issue, not to mention mobile networks everywhere. There's no reason to add a roundtrip unless absolutely necessary.
HTTP/301 redirects are "permanent" per the RFC, and therefore cacheable. Subsequent requests for the apex zone by the user should cause the browser to skip the first request entirely.
microseconds? That's definitely not the case. It takes 10s of milliseconds just to leave your internet router on busy wifi home networks. A 301 redirect is an extra network roundtrip for no gain and much more (perceived) latency.
That is actually false. The majority of people no longer type in www.
For example, over the last 10 years branding has completely removed the www, and user behavior has followed suit. Unless you are targeting an older crowd, the vast majority ignore www in a search. With Chrome being the major browser now, and with browsers allowing search from the address bar, lookup without www is pretty much standard - again, assuming your users are under 35.
The www serves as a clue that it's a URL. Most people would recognize example.com as a domain even if given no other context. Contrast that with example.design. I think to most, www.example.design would be more recognizable.
My domain is not immediately obvious as an URL: hisham.hm (also my username in many places, or variants of it, like @hisham_hm). Sometimes I do need to make it more obvious that it's an URL (e.g. when printed, or on presentation slides). In those cases I write it as http://hisham.hm
If people don't look at the URL there's no usability issue either way, so you might as well use the www version for technical reasons. This (sub-)discussion pertains to people who do look at the url. (And personally I agree that the www helps to denote a url, and so it's a net positive for users, or at least not a net negative.)
I saw a poster on a bus with domain.sydney sitting on its own line with no other context. I'd expect the majority of people had no idea it was a web address.
Yea, there should be an http:// before it on print media. Otherwise no one is going to type it into their phone. I've done that for a couple of projects that use non-standard domains
This really depends on the demographic. Someone more technologically inclined would understand; the broader general public would not. However, that example has more to do with the domain extension than with www. Another example is the .co extension: in the western part of the country it might be easily recognized, but in the Midwest the demographic is much different and the general public may see a .co domain as merely a spelling error. It's all about your user demographic and their perception. Go with non-www and then a redirect. Unless your demographic runs on 2G, either way isn't going to drastically affect your users. I would be more concerned about your file and image sizes than the ~100 ms of a redirect.
I bet the sign itself serves as search engine optimization. If everybody starts searching for "foo" and clicks on my site in the result set then a reasonable search engine might bump my site up higher for searches for "foo."
That's an interesting hypothesis. It makes sense for the company both as optimization of their web pages for search engines and as making good impression on customers.
For loading two-factor authentication secrets. Or anything else you can think of that has too much information to convey with a short link. If you are directing people to a web site using QR codes then you're doing it wrong. Well, wrong now that everyone has realized just how limited QR codes are.
I do for any printed media. It makes it easier for people to look up the corresponding electronic version of a document. QR codes have their uses, but tend to fail when misused. A QR code is best when interacting with it is strictly optional.
RedLaser works well. Really well, actually. I'm not a QR fan, but hey every now and again I do scan one (usually out of some weird curiosity) and RedLaser is the best app for the job.
When we did some research for RBI (part of Reed Elsevier) we found that about 8-10% of our inbound links pointed to the wrong version of the home page.
So correctly handling all the standard home page synonyms is a good start for any site.
I had mine on http://[subdomain].rya.nc/wedding/ and had it printed that way with the wedding invitations and even my less technical relatives didn't seem to have problems with it. Did you have a bare domain or did you include http:// and the trailing slash?
> This reasoning is all based on valid, but technical issues for the hosting side. A general rule for any customer-facing business is to put the customer first.
The article says:
"Should I redirect no-www to www?
Yes.
Redirection ensures that visitors who type in your URL reach you regardless of which form they use, and also ensures that search engines index your canonical URLs properly."
You will want the naked and www subdomains to point to the same address in your DNS configuration, and you will want to configure your webserver to issue a 301 redirect to the canonical subdomain at the HTTP layer. There are several reasons for this, but the most pressing is that it does not make sense to use a CNAME record for the naked domain.
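As a sketch, the HTTP-layer half of that setup in nginx (hostnames are placeholders, and the article's choice of www as the canonical form is assumed):

```nginx
# Catch requests for the naked domain and 301 them to the canonical
# www host, preserving the rest of the URL.
server {
    listen 80;
    server_name example.com;
    return 301 http://www.example.com$request_uri;
}

server {
    listen 80;
    server_name www.example.com;
    # ... actual site configuration ...
}
```

The `$request_uri` variable keeps the path and query string intact, so deep links survive the redirect.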
There's a slight advantage to doing it at the registrar because it keeps the hit off your servers. This is assuming, of course, that http://foo.com/bar will get redirected to http://www.foo.com/bar rather than just having everything redirected to http://www.foo.com.
That's interesting. Currently I have 2 different domains at 2 different registrars. In each case I have the bare name, www and other subdomains as "A" records, not CNAME.
So it doesn't matter if people use the www prefix or not, since both point to the same IP. And with this configuration I've had no trouble with MX records.
If the goal is www/no-www transparency, there are other ways to do it, but using a simple DNS setup seems like the least hassle.
Great point. And candidly I never actually do this myself (CNAME on the bare domain), I just point it at the server and let apache/nginx handle the redirect.
For very little effort you can simply redirect your main domain, and gain the technical benefits/simplicity of having a www subdomain, all while not inconveniencing your users. This pretty much makes the parent's point of hosting actual content on the main domain for customer convenience moot.
Further, most lay users are familiar with large web properties (amazon, google, apple), and most of those large properties redirect to www (due to aforementioned technical reasons). So if such a user even happens to notice, if anything, they may find it odd to not see a www prefix.
I think that this is exactly right. I have personally observed users become confused over a URL not starting with www. Occasionally I have even seen them add it, just because it was assumed that all addresses should have it at the beginning.
This might change eventually, but it will change slowly if so.
Aesthetics aside, one benefit of naked domains is that shorter URLs are better for sharing in chat programs. 'www.' takes up a nontrivial amount of space in certain environments (e.g., a gchat window within gmail).
It sucks that this is the state of things, because of the technical infrastructure that's supporting these systems. But it's absolutely not being advocated because it's technically more efficient, or because it's cute for branding, or because it helps the marketing team collect customer data. It's entirely based on being able to provide the highest level of consistent service to a customer.
Doing it any other way is prioritizing brand over customer.
It doesn't just suck, it's bad for almost everyone.
> Should I redirect no-www to www?
My gut reaction is still no: fix the technology, which was never designed for this mass consumer use. DNS could support CNAMEs for naked domains, but the standards groups only care about backward compatibility and zero risk, while consumers still don't understand how DNS works at all, and now we have developers who apparently need articles spoon-feeding them why it's bad one way and not another.
You're describing a world where everybody expected a website to end in .com, and if not that, then maybe a handful of others. That world is changing; what will people's expectations be in the future?
Uptime and performance are typically among the most important things for customers. Definitely way more important in general than whether there's a www prefix or not...
I agree. Microsoft, Apple, and Google have the ability to modify user behavior without much downside. The rest of us are better off making the user experience as fast, efficient, and simple as possible.
By coincidence, I just put a 301 redirect on my personal site to go from www.xpda.com to xpda.com two days ago. If I could only get ctrl-enter to submit https://domain.com instead of http://www.domain.com in Firefox, I'd be happy. (There was a bug that prevented this last time I checked.)
Being able to serve your website from a CDN helps the customer: it means the website is up when they want to visit it, and it usually means it's faster for them.
Having static content not get cookies helps the customer: it keeps HTTP traffic down, which means that pages load faster.
Separating cookies between different websites helps the customer: it means that when, inevitably, blog.example.com gets broken into, the customer's account info at www.example.com is not compromised. (Of course, this assumes that you're doing least-privilege between your various websites, but you're already doing that because you care about your customer and don't want to send them a letter about how you lost their personal information.)
Not using a separate top-level domain for static content helps the customer: they know that www.example.com and static.example.com are from the same company they trust, Example Industries, Inc. If you caused their browser to show that it's loading files from "static.excdn.com" or something, they might worry.
The customer isn't personally following the redirect; their browser does, so I'm not sure I understand how the redirect deprioritizes the user. As far as visual clutter goes, get an EV cert, which supplements or replaces the URL in the address bar with something much more useful.
What!? You saying we can't make absolute rules? We have to consider the context? Horrible person!
On that note, those minimalist designs are quite hard for older or less tech-savvy users to understand at times. Not the www part, of course, but the designs in general. Something to consider when putting the customer first, maybe.
Remember that HTTP vs HTTPS adds yet another dimension you have to take into account. This bit me the other day.
I run a site at example.com (I prefer a naked domain, for no technical reasons whatsoever), with a CNAME record for the www subdomain. But I only want to serve the site over HTTPS. So http://example.com redirects to https://example.com, as does http://www.example.com. Simple enough, right?
I however started receiving some spurious reports that the Google Account login option wasn't working on the site, which was quite puzzling at first. Turns out, some users were manually entering https://www.example.com as the address (it's not indexed or linked to anywhere in this form that I could find), which was being handled by the Nginx default_server directive on port 443, causing the site itself to appear to work just fine at https://www.example.com as well. But the Google OAuth service checks the authorized origins for any client side requests, saw www.example.com and was expecting example.com so simply failed silently. Doh!
TL;DR: If you redirect to HTTPS by default, check that all 4 combinations ([www, naked] * [http, https]) work correctly, and that all redirect to just ONE canonical name to keep things sane. And make sure the 3 redirects preserve any request URI parts after the domain as well.
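For what it's worth, a sketch of the nginx shape that avoids the default_server trap described above (hostnames and certificate paths are placeholders), redirecting all three non-canonical forms to the naked HTTPS origin:

```nginx
# Both HTTP forms 301 to the single canonical origin.
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

# Explicitly catch https://www so it never falls through to default_server.
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/ssl/example.pem;   # cert must cover www too
    ssl_certificate_key /etc/ssl/example.key;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl default_server;
    server_name example.com;
    # ... actual site configuration ...
}
```

The key point is the dedicated `www.example.com` server block on 443: without it, the catch-all serves the site under the wrong name instead of redirecting.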
The OP could have avoided a lot of confusion if he began his article like this:
> www. is not deprecated for webmasters (but users don't need to type it)
> This page is intended for webmasters who are looking for information about whether or not to use www in their canonical web site URLs; however, the website can still be advertised without the www and the user never needs to type it.
Even in this Hacker News discussion with technically knowledgeable people, I see a lot of discussion stemming from this misunderstanding of what the OP is trying to say. (Example: "I should take the toll by typing www every time? life is too short.")
Yeah, this would have helped a lot to avoid discussion. I suppose the author of the website expects mostly webmasters to come and read what he writes, but here there are a lot of other people as well. So probably it's not the author who should adapt his text (why should he even know we exist); rather, the title of the link on HN itself should make it clear.
Well, I agree that users shouldn't need to type it, but it should still be advertised, especially in print.
www makes it immediately obvious that you're looking at a URL, which is especially important with gTLDs. A line at the bottom that says dog.spa is meaningless. www.dog.spa fixes that with only 4 characters.
You could print http://dog.spa, but now you've added 7 characters and it looks even more technical, which is stupid if the goal of removing www is to simplify things.
I'm perplexed by the cookie claim. I have a naked domain (foo.com), and a static domain with the same domain name. (static.foo.com) and the cookies of foo.com are not being sent to static.foo.com if the path is configured as /. (I can see that cookies are not sent from the dev tools' network tab. The cookies for google analytics, which set the domain to .foo.com as opposed to foo.com are being sent to the static subdomain though.)
Could someone enlighten me on this? Seems like the article might be spreading misinformation.
Edit: Seems like the issue is "host only cookies" are just sent to foo.com, not to static.foo.com
A cookie, unless the domain is explicitly set, is host-only. So if you set up a naked domain, cookies you set for authentication will probably not be sent to subdomains by default.
The third party cookies on your application, like Google Analytics on the other hand, have to have specified a domain name and are not host only, so you will see your Google Analytics cookie being sent to the static subdomain.
So, this statement from the article seems to be wrong:
"If you use the naked domain, the cookies get sent to all subdomains (by recent browsers that implement RFC 6265), slowing down access to static content, and possibly causing caching to not work properly."
It should be
"If you use the naked domain, the cookies which are not host-only and have domain set get sent to all subdomains (by recent browsers that implement RFC 6265), slowing down access to static content, and possibly causing caching to not work properly."
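The distinction is visible in the Set-Cookie headers themselves (the cookie names here are illustrative):

```
# Host-only: no Domain attribute; browsers send it to foo.com only.
Set-Cookie: session=abc123; Path=/

# Domain cookie: sent to foo.com and all subdomains, static.foo.com included.
Set-Cookie: _ga=GA1.2.12345; Domain=foo.com; Path=/
```

Under RFC 6265, `Domain=foo.com` matches the domain and every subdomain, whereas omitting the attribute restricts the cookie to the exact host that set it.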
Is anyone else really bothered that 2 sites advocating for "best practices" fail to use HTTPS? People are willing to write paragraphs arguing the technical merits of subdomains but nobody could take 10 minutes to put their site behind CloudFlare?
Not really. These are old sites built before CloudFlare even existed or the encrypt-ALL-the-traffic movement started.
I myself prefer non-HTTPS over CloudFlare for sites like this. As for best practice:
> Back in 2003, Lee Holloway and I started Project Honey Pot as an open-source project to track online fraud and abuse. The Project allowed anyone with a website to install a piece of code and track hackers and spammers. We ran it as a hobby and didn't think much about it until, in 2008, the Department of Homeland Security called and said, 'Do you have any idea how valuable the data you have is?' That started us thinking about how we could effectively deploy the data from Project Honey Pot, as well as other sources, in order to protect websites online. That turned into the initial impetus for CloudFlare. -- Matthew Prince, CloudFlare CEO
I'm using CloudFlare for GH Pages based web-sites with custom domains. However, Tor user in me says that's not really a "good practice". Looks like there isn't a way to turn off that annoying "Attention Required!" "One more step" checks, is there?
It doesn't accomplish the same thing. Cookies are added to every web request, while you have to supply your localstorage values manually if you want to use them. At the least, this requires JavaScript. It's also impossible to do for regular clicks on links and when loading images and scripts. Sometimes you want to authenticate those too.
OK, I'll rephrase: only use cookies when you want to send them with requests to static files. But don't forget - you will not be able to use a CDN in that case.
You could also set up a cookie-free domain and have your static assets accelerated by the CDN while keeping your files (which send cookies) on a separate domain.
You can still use a CDN in this case; however, by default, files with cookies cannot be cached on a CDN's edge servers. Some CDNs have the ability to exclude the cookie from the static file, such as KeyCDN, which allows you to ignore the cookie and/or strip it entirely (https://www.keycdn.com/support/pull-zone-settings/). This ensures that your file is now cacheable, which is important, for instance, if you are using Cloudflare, which adds a Set-Cookie header to domains routed through them.
Nothing stops you from using a CDN with cookies as long as your app generates proper caching directives and the CDN obeys them. The only reason this would ever be a problem is if your app/server does not correctly mark pages that may contain private/user-specific content accordingly.
Further, a common method to reduce the risk of this is to place purely static public assets and user-specific private data on different domains.
A CDN is just a reverse proxy with caching. The "proxy" part means anything, including cookies, can be transferred from the end-user to your origin server.
Yes, that's what the letters CDN stand for, and it's just a marketing term.
Technically, they are reverse proxies with a focus on caching, however that is not all they do. You can use them for various other features like security and front-end optimization and they work fine with cookies.
It's common to use a CDN for the entire site - caching static files at the edge while sending page requests to the origin server with all the cookies, especially important as many sites are now dynamic and customized to the individual. There is no issue in using a CDN to proxy all requests.
Maybe you haven't noticed, but we were talking about authorized access to static files using cookies. And I was saying there's no point in using a CDN if EVERY request will be sent to the main server (it will actually be slower). Now you are trying to explain to me that a CDN can send SOME requests to the main server (for dynamic pages) while delivering static files from the point nearest to the user, without cookie authentication.
I don't see where in this thread that became the topic; rather, it's been about cookies in CDNs. You still seem to think that CDNs are only used for caching, when that is just one of their features. You can also use them for security, for example, without using any caching at all.
If you need to authorize every single request (even to a static file) which means that every single request is unique, then this is obviously not a good use case for an edge server cache. You can still cache things in the browser with cache headers and continue using the CDN to proxy the full request with cookies to the origin. This doesn't add much latency and can sometimes decrease it because the CDN will keep faster connections open to the origin.
However, most CDNs today also offer their own access controls either with cookies or url tokens so you can do authorization at the CDN edge instead of the origin.
And yes, you can use the CDN to cache static files and just proxy requests to pages. Usually private information that requires authentication is in the webpage, and the static files like JavaScript, CSS, and images don't need protection.
In short: LocalStorage is supported by all browsers, doesn't have the security issues cookies have, has a much bigger storage limit and a much handier API, and doesn't need to be sent with each request.
They're not the same thing. Cookies add state to HTTP which is very useful, compared to LocalStorage which is really just client-side storage that can also be used to store state but at the higher web application level rather than HTTP.
If you need state you typically use JS + XHR and can set your own headers. Cookies are just header values with some predefined behaviour wrapped around them.
If SRV[1] records had been a thing sooner, we could have had our cake and eaten it, too. SRV records encode the protocol into the DNS entry. If you wanted the HTTP server for example.com, for example, you'd lookup the SRV record for _http._tcp.example.com. You get back the IP and port of the host to connect to.
If you had a hosting provider, you could CNAME _http._tcp.example.com to your hosting provider. Naked domains + CNAME works as expected.
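In zone-file terms that would look something like this (the names and targets are hypothetical, and no mainstream browser actually consults SRV records for HTTP):

```
; SRV fields: priority weight port target
_http._tcp.example.com.  3600 IN SRV   0 5 80 webfarm.hosting-provider.net.

; or, delegating to a provider as described above:
_http._tcp.example.com.  3600 IN CNAME _http._tcp.hosting-provider.net.
```

Because the SRV owner name is a subdomain (`_http._tcp.example.com`), the CNAME restriction on the zone apex never comes into play.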
No. The main issue here is cookie control, or, to be exact, the complete lack of one.
You can't tell user-agents "the cookie I set must be valid for example.org and websocket.example.org but no others" (the example is crude and non-scalable; the real-world semantics of this would have to be different), which leads to all sorts of problems: heavier static media requests, mixed cookie state if you use staging.example.org for a pre-production environment, inability to give third parties subdomains for their UGC, etc. All of this can be solved, but none of it is really convenient.
If there were a way to have fine-grained control over cookie scoping, a lot of hacks would go away and the www/non-www distinction would be purely cosmetic in most cases. Granted, not all: SRV records are still a good idea.
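A small Python sketch of the only two scopes RFC 6265 actually offers (hostnames are illustrative): a host-only cookie, or a Domain cookie that covers every subdomain; there is no way to enumerate just a few of them.

```python
from http.cookies import SimpleCookie

# Host-only cookie: with no Domain attribute, RFC 6265 browsers return it
# only to the exact host that set it (example.org, not websocket.example.org).
host_only = SimpleCookie()
host_only["session"] = "abc123"

# Domain cookie: setting Domain=example.org widens the scope to example.org
# AND every subdomain -- there's no middle ground between these two options.
domain_wide = SimpleCookie()
domain_wide["session"] = "abc123"
domain_wide["session"]["domain"] = "example.org"

print(domain_wide.output())  # Set-Cookie: session=abc123; Domain=example.org
```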
I'm using a naked domain. The biggest reason I'm using it is that my domain name is as minimalistic as it can get (six characters long, seven if you count the dot), and I sure as hell love telling people to just type in my alias and add .me at the end.
Although I did not know about these issues, I have to say that the source really didn't give me strong enough arguments to convince me to go through the hassle of making the switch.
With that being said, I opened up a couple of links from the source and I will look through them and see if they'll change my mind.
Yeah, I use a naked domain too because I got a short and nice one, and I like it this way. All this stuff about cookies is a non-argument for me—I don't set any cookies and don't want to. My DNS host does happen to support ALIAS records, and I find the non-technical argument that the www prefix "serves as a gentle reminder that there are other services than the Web on the Internet" kind of silly. I don't host any other services except ssh, and that's just at the naked domain, too. This whole thing is a non-problem for me and all my users...
The advice being given by the OP is outdated. The only relevant point is the CNAME for load balancing which, unless you are on AWS which permits CNAMEs on root domains, can be a restriction.
Everything else is advice from someone who still thinks it's 1995. The norm now is for naked domains. www is a concept that has vanished from the "real world" that most of us live in here in 2016. It's natural to hang onto things that you grew up with, and the "I should use www" is one of those things. It used to be the norm. This is no longer the case, and there are few arguments to support keeping it other than "it's what I originally learned to use".
The thing is, www prefix was never a requirement. It came about as a convention to very specifically indicate that www.example.com = web server, and ftp.example.com = ftp server, etc. There's no real reason to do it, other than convention from the early days. The CNAME restriction is really the only technical gotcha, and this is slowly becoming a non-issue with DNS services like Route 53 that now allow CNAME on roots.
>There's no real reason to do it, other than convention from the early days. //
It wasn't convention; it was that the naked domain wasn't assumed to have web content, as not all domains had web servers on them. If you're used to gophering in to balrog.example.com, then a www probably was required to get a browser to reach the web pages.
Supposing VR takes off and people have a VR space as their primary internet presence, then we might be having the conversation about why we bother with the vr. subdomain on all the URLs.
The protocol only tells your computer which program and protocol should be used to communicate with the server. The server name itself is the destination. If you tell your FTP client to connect to ftp://www.example.com, it's going to try to connect to the IP address returned by an A lookup for www.example.com. It has no way of knowing you actually want the IP returned by an A query for ftp.example.com.
SRV records could clean this all up, but there seems to be stubborn resistance by web browsers against doing srv lookups.
The cookie argument breaks down into two possible situations:
1. Serving of static content from a CDN on a separate hostname than the web servers. First of all, 99% of companies do not fall under the umbrella where saving the few bytes of the cookie request header is going to save you many Mbps/Gbps/packets of bandwidth. For most of us, having our example.com cookies unnecessarily sent to static.example.com isn't a big deal. Secondly, this is remedied by using a separate domain name for your CDN rather than a subdomain. Example: Facebook uses fbcdn.net rather than a subdomain on facebook.com - even though they do still use the www prefix.
2. You want separate "sections" to your website, akin to Google's mail.google.com, news.google.com, images.google.com, maps.google.com, etc. Maybe you want the cookies for your root domain to only be sent to that root domain and not all your subdomains. This is not completely ignorable, but I have personally not seen a situation where cookies on the root domain or www cannot/should not be able to be shared across subdomains.
I just find it odd that www is still considered a convention by some. If you're redirecting the root to www anyway, there is no reason to use www. You may as well use hi.example.com as your primary domain and just redirect example.com to hi.example.com. www has no intrinsic meaning, it's just a regular subdomain with DNS records like any other subdomain.
It only matters if you intend (or require) to CNAME your main site off to some other DNS name. If you're serving your entire site from S3 for example, you can't just alias yoursite.com to your-bucket.s3.amazonaws.com, but you can CNAME www.yoursite.com and have a small server sitting on yoursite.com sending a 301 redirect for every request.
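A minimal sketch of that small redirect server's logic, with illustrative domain names:

```python
# Sketch of the tiny apex server's job: bounce everything to www with a 301,
# preserving the path so deep links survive. Domain names are illustrative.
def apex_redirect(host: str, path: str, scheme: str = "https"):
    """Return (status, Location) for requests hitting the apex, else None."""
    if host == "yoursite.com":
        return 301, f"{scheme}://www.yoursite.com{path}"
    return None  # already on www (or elsewhere): serve normally
```

The www host itself can then be a CNAME to your-bucket.s3.amazonaws.com, since the CNAME restriction only applies at the zone apex.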
Github and Twitter are large enough to not care. And if you're using something like Cloudflare, they can just take over your IP address with BGP, no DNS trickery needed.
I've worked as a sysadmin for more than 10 years now, and never realized that it's not possible to CNAME the domain root. It's good to keep in mind, but in most cases there are other workarounds.
This is also why "naked domains" set up a whole other domain for static files rather than "static.yoursite.com", to avoid the "top-level" cookie (megacookie?) being sent with every request.
> I've worked as a sysadmin for more than 10 years now, and never realized that it's not possible to CNAME the domain root. It's good to keep in mind, but in most cases there are other workarounds.
The common approach is called CNAME flattening, where you can specify a CNAME at the root and your DNS provider 'flattens' that by resolving the A/AAAA record at the end of the CNAME chain.
It's not a true CNAME, but rather a sort of server-side translator to make it behave as a CNAME while really just returning A/AAAA records. It's more like the 'ALIAS' record.
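A toy sketch of how such a flattening resolver behaves; the zone data, names, and addresses here are invented for illustration:

```python
# Toy zone data mapping (name, record type) -> value; all values are made up.
ZONE = {
    ("yoursite.com", "CNAME"): "your-bucket.s3.amazonaws.com",
    ("your-bucket.s3.amazonaws.com", "CNAME"): "s3-edge.amazonaws.com",
    ("s3-edge.amazonaws.com", "A"): "192.0.2.10",
}

def flatten(name: str, max_hops: int = 8) -> str:
    """Follow the CNAME chain server-side and return the final A record,
    the way an ALIAS/ANAME-style 'flattening' resolver does: clients only
    ever see the resulting A record, never the CNAME at the apex."""
    for _ in range(max_hops):
        if (name, "A") in ZONE:
            return ZONE[(name, "A")]
        name = ZONE[(name, "CNAME")]  # hop one link down the chain
    raise RuntimeError("CNAME chain too long")
```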
As PG says, when you're starting out, do things that don't scale.
My advice is don't over optimize _anything_ until you actually need it.
In the early stages a naked domain is a branding decision that looks nicer than old school www, to my eyes, at least.
You can avoid the cookie issue by not using any subdomains and sending all your calls to single sub URIs such as yoursite.com/api/, yoursite.com/blog/
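A minimal sketch of that kind of path-prefix routing; the prefixes and backend names are illustrative:

```python
# Hypothetical prefix routes on a single host, avoiding subdomains entirely,
# so every request shares the same cookie scope.
ROUTES = {"/api/": "api_backend", "/blog/": "blog_backend"}

def route(path: str) -> str:
    """Dispatch a request path to a backend by its URI prefix."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend
    return "main_site"  # everything else falls through to the main app
```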
If you're running your own infrastructure on AWS, for example, you can start your scaling efforts simply with load balancers and multiple instances all pointing to your naked domain. That's going to get you pretty far until you're so big that you need geo scaling & distribution.
Then, if you need geo scaling, such as Akamai or other geo load balancing solutions, you can start to redirect your traffic away from the naked domain to www or whatever.
I frequently use naked domains as they "read back" better on every site I use them on, remove visual cruft, and require customers to remember fewer characters.
Other websites I frequent that use naked domains include:
I'm surprised that they really don't redirect. It would be interesting to see how they deal with the problems mentioned. As always, there are probably different solutions to the same problem.
They do redirect, but it's actually to go from the www to the naked domain. This always leaves the naked domain in the URL bar and is, to me, a clean look.
Unfortunately, some hosting providers make this very difficult or impossible. I struggled to get this working on a Google Site hosted at a Google Domain using their DNS tools; it seems Google Domains doesn't have a basic "redirect to www" feature. Eventually I gave up and used Dreamhost's nameservers instead; they offer this feature (and it's free; you don't need to pay for hosting).
Or, just use a DNS nameserver that can emulate an apex CNAME, if you are that concerned about letting a third-party renumber their servers at the drop of a hat. I know CloudFlare can do this and it's a feature that standalone DNS nameservers should support.
When you reach their size, it's not a big deal to build and maintain your own DNS serving infrastructure; in that case, you can do as you please, which might include serving up a pool of A records round-robin style or using an expensive magic DNS geodirector thingamabob.
While the original post was not specific to InfoSec, it does raise the question of whether there are any security implications to using or not using "www" in your domain names. Does a naked domain pose any risk? I haven't seen any suggestion to that effect in the discussions on the topic so far, but it's a good question to ask in case those of us who prefer naked domains are missing something.
That's good news. I chose no-www because www seemed redundant and have been wondering if it's a problem. Turns out no, unless it's very big, which I don't expect my site to be (niche market, not a web app). It can even be changed later by redirecting no-www to www.
Is this limitation a problem in the design of the domain name system, or something quite natural and necessary?
I can't for the life of me figure out why web browsers don't yet do a query for an SRV record named _http._tcp.example.com when the user browses to example.com.
For me, this would allow me to have domain names that point to various HTTP/HTTPS based services behind my single outward-facing IPv4 address. As-is, I have to memorize what non-standard ports they're on.
The servers are located in different parts of the continent, and the only one that is capable of acting as a proxy has metered bandwidth, while one is interactive and another is a media server.
I suppose I could set up a system of HTTP redirects, but combined with HTTPS I feel that will get hairy quickly. The service-level redirection SRV records provide exactly matches my needs.
Where is the extra latency? You can do DNS queries for example.com and _http._tcp.example.com concurrently so that you get both answers in the time of one round trip to the DNS server.
If there actually is an SRV record and the target isn't locally cached, then you would need to do another query, but that is faster than establishing an HTTP session with example.com, getting the 301 redirect, and then having to do the query anyway.
They didn't want to wait for multiple queries to finish. If you're interested in a detailed rationale you can take a look through the mailing list archives (though it'll be frustrating reading).
And that's because the major DNS servers don't bother with answering more than one question per query. Nothing in the DNS specification limits the number of questions to one. The only down side might be that the response to multiple questions might not fit in the standard UDP DNS packet.
DNS servers can't answer more than one question per query because NXDOMAIN is signalled in the message header and so when QDCOUNT is greater than one the response becomes ambiguous.
Regarding defunctness, there's nothing magical about the 'www' string - it can be anything, as long as it's not the DNS zone apex (~='base domain'). There are limitations on what you can do with the zone apex entry (such as no CNAMEing it), but no limitations on other entries. 'www' is just convention.
As to your question, it's a bit of both. I'm not a network admin, though, so I'll let someone a bit more skilled pipe up. I have run into difficulties with migrating 'bare domains' in a small business environment due to these limitations, though. Not insurmountable, but required more work to cover the issues. Of course, when you're 'big', you'll be dealing with these kinds of issues a lot more.
> If you are using www, then this is no problem; your site’s cookies won’t be sent to the static subdomain (unless you explicitly set them up to do so). If you use the naked domain, the cookies get sent to all subdomains (by recent browsers that implement RFC 6265)
From my understanding, pretty much all browsers DON'T send google.com cookies to subdomains -- they only send .google.com cookies to all subdomains.
This seems backed up by their cited RFC 6265[1]:
> Unless the cookie's attributes indicate otherwise, the cookie is returned only to the origin server (and not, for example, to any subdomains)
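That matching behavior can be sketched as a simplified version of the RFC 6265 domain-matching algorithm (this ignores the public-suffix checks that real browsers also apply):

```python
def domain_matches(request_host: str, cookie_domain: str, host_only: bool) -> bool:
    """Simplified RFC 6265 matching: a host-only cookie (no Domain attribute)
    is returned only to the exact origin host; a Domain cookie matches the
    domain itself and any subdomain of it."""
    request_host = request_host.lower()
    cookie_domain = cookie_domain.lower().lstrip(".")
    if host_only:
        return request_host == cookie_domain
    return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

# A host-only google.com cookie is NOT sent to static.google.com,
# but a Domain=.google.com cookie is.
```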
Got any proof of a case where this happens? I'm using it to redirect to https://, and it redirects deep links to the right deep links; nothing goes to the home page.
If your service requires scaling because it's expensive dynamic content, then apache could easily be fast enough and not the bottleneck requiring a switch to something else.
It doesn't matter if nginx/whatever can handle 1000+ reqs/sec if your app consumes 300MB of ram per request and takes 1 second to process.
I've lost track over the years of how often I've heard major media outlets give addresses as "double-you double-you double-you ourdomain dot com". Typing what they actually say will get you nothing, unless they were sharp enough to also register wwwourdomain.com.
Exactly! It's a little questionable that the linked website doesn't mention the existence of these types of records considering how widespread they are these days. Every DNS host I've used in the last five years has supported something like "ANAME". For example:
Brand awareness and recall are king for small niche companies. We removed the www to bring our brand, a unique but simple domain, into focus. IMHO, if you have a trade and/or trademarked domain name and wish to get more organic traffic, using a naked domain is the best choice.
yes-www is clearly the more practical solution based on current technology and its limitations. And no-www (http://no-www.org/) is clearly the more aesthetic solution.
Either is perfectly valid, and depending on who you ask you'll get reasonable arguments in favor of both. That said, neither choice will ultimately make any difference. Period.
Even if your site becomes the next Google, the minor hurdles of a no-www domain vs the technical advantages of a yes-www domain will not make an ounce of difference. Anyone who tells you otherwise is straight up fooling themselves.
I found this article hard to read. It seems to boil down to three points:
1. Some hosts might want you to use a CNAME record to send them your web traffic, and CNAME records are undesirable for apex domain names because they can't coexist with other records you might need (like MX).
2. Cookies set on the apex domain will be sent to subdomains.
3. Older browsers may not let the apex domain read cookies set by subdomains.
Is that right?
(CloudFlare deals with (1) by just hosting your DNS, and I don't have enough experience with (2) and (3) to have strong opinions on them.)
> 1. Some hosts might want you to use a CNAME record to send them your web traffic, and CNAME records are undesirable for apex domain names because they can't coexist with other records you might need (like MX).
Nitpick: With the exception of RRSIGs, a CNAME record must be the only record at a given owner name. Since the apex of a zone has a SOA record, a CNAME cannot exist there.
Many, many websites fail to display anything if www is not prepended to the URL, particularly those of small businesses. If anyone is looking for a business development niche, selling basic consulting to small and medium-sized companies on this is it.
I've always hated the www prefix. But understand the technical gains of using one. If you are a domain owner, subdomains give you quite a bit of flexibility. You can always use a different prefix than www.
Don't get why blocking cookies to static.domain.com is a desired feature, especially when sharing cookies between app.domain.com and login.domain.com would be desirable.
With DNS and CAs as broken as they are, I'm skipping this and wondering why not just go back to using IP addresses. (I am also the resident contrarian, so...)
Empirically, www seems to have won -- a large fraction of the top consumer sites online have picked it. One could make UI/UX-type arguments for no-www, but sites like Facebook and Pinterest have some of the best UI/UX people in the world, and they have still picked www.
Why is this the case? What are the top reasons for www? Is it the cookie thing? I've heard that www lets you play DNS tricks too, but haven't seen more details on this.
I'd love to hear more about this choice from people who understand the decisions at top Internet companies.
I'm a convert from non-www to www. Just avoids security implications re: cookies and no need to buy secondary domain for static assets. A simple redirect mitigates the "ux" argument that the domain looks less nice to type or read. Simple implementation with more pros than cons.
Should add it's not a hard and fast rule, and depends on the use case. But the default question for me nowadays is "why no-www" and not "why www".
A www. prefix just makes life easier for everybody running the site, so there's really not much reason not to have it. Supporting the redirect is easy, and people generally don't seem to care.
Both of these things seem to have been designed with "www" in mind.
So the reason boils down to: original specs of various technologies are optimized around the assumption that your site will have a www (or more accurately: a subdomain rather than a naked domain).
I find "www" aesthetically unpleasant. If one must use a subdomain, how about "web" instead? It's still three characters, but much cleaner.
No you shouldn't, the article suggests server-side redirection so you don't have to type www in front of the url. Many of the highest-traffic websites already do that, Twitter being a notable exception.
www is a relic of a different time, when networks were client-server first and HTTP was an afterthought. Now it is expected that the root domain at minimum bring up a single-page description of the org.
Add the www cname with a redirect to the root, start phasing www out.