Hacker News
Why use www? (yes-www.org)
601 points by wfunction on Jan 30, 2016 | 267 comments



This reasoning is all based on valid but technical issues on the hosting side. A general rule for any customer-facing business is to put the customer first. I could list 200 different reasons why you should make the customer register an account with their email address before they can purchase something from you. However, if you put the customer first, in many cases it is easier for them if they don't have to do that. Having the www before the domain name adds unnecessary visual clutter and, from the customer's point of view, an unnecessary redirect before they can get to your site. A lot of sites use a minimalist style everywhere, and it's great. Having the www there for technical reasons is putting the user second in those cases.


I think that www vs. no-www, as a matter of "putting the customer first" is so INCREDIBLY insignificant, compared to the thousands of other decisions that go into a product, that it's ridiculous we're even having this conversation. This is looking for optimization in the wrong places at its finest.


Do you think a user should have to write:

http://www.apple.com:80/

or:

http://www.apple.com/

Ever since the first web browsers, way back in the early 1990s, it has been commonplace to leave out the port number. The web browser adds it automatically.

Similar logic would lead us to leave off the "http". And similar logic would lead us to leave off the "www". The trend has been to simplify the URL as much as possible.


Nobody is suggesting the user should be forced to type in the protocol, the subdomain, or the port number. If the user types in:

apple.com

It should lead to where the user wants to go.

However, there are good reasons for using the www subdomain as the canonical URL, and it is also worth noting that some users will habitually type in www anyway.

If you don't want to include the subdomain in marketing material, then there's nothing stopping you from leaving it out, just as there's nothing stopping you from leaving out the protocol.


And even just typing:

apple

Should lead to where they want to go.


Which leads to 127.0.53.53 on Iceweasel 38.5.0 https://www.icann.org/namecollision


From the article:

Should I redirect no-www to www?

Yes.

Redirection ensures that visitors who type in your URL reach you regardless of which form they use, and also ensures that search engines index your canonical URLs properly.

So I don't think you're advocating anything they don't.


Annoyingly, this can be difficult to do if the A record for your domain doesn't point to a webserver. There are other legitimate things it could point to, like an authentication domain controller or a session border controller. Do you really want to be running a webserver there, even if it's only redirecting?


I'm not familiar with session border controllers, but DCs can be and usually are placed somewhere at the apex of the domain, via SRV records.

If you really have trouble, you can tell your firewall to route ports 80 and 443 to another machine and everything else to the device you need. (I've run a hackier version of this that used netcat inside inetd: we didn't want a web server on the machine that owned the domain name, but there was another nearby web server cluster that we added a virtual host to. And we were fine running inetd and netcat on the machine.)
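
Something like this iptables sketch would do the port routing (untested; 192.0.2.10 stands in for the separate web host, and this box has to be forwarding the traffic):

    # send inbound web traffic to the dedicated web host
    iptables -t nat -A PREROUTING  -p tcp --dport 80  -j DNAT --to-destination 192.0.2.10:80
    iptables -t nat -A PREROUTING  -p tcp --dport 443 -j DNAT --to-destination 192.0.2.10:443
    # rewrite the source so replies come back through this box even if it isn't the web host's gateway
    iptables -t nat -A POSTROUTING -p tcp -d 192.0.2.10 -m multiport --dports 80,443 -j MASQUERADE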


I've done something similar but using iptables and dnsmasq to route DNS differently depending on where the requests were coming from and what domain they were asking about.


Your argument that eliding the www is better for users is logical, but you failed to make any argument whatsoever that it is significant. Given that you were responding to a statement that the difference isn't significant, you're basically attacking a straw man.


> www vs. no-www, as a matter of "putting the customer first" is so INCREDIBLY insignificant

I don't know, I've always been fascinated by the premium of domains that are just one character shorter, and all of the startups that exclude a vowel to get a compressed (or maybe just available) name. That could reflect actual user preferences.

UX theorists convinced me over the last decade that user behavior is shaped by tiny moments and irritations that we think are insignificant at first glance. A few hundred extra milliseconds of delay may seem barely perceptible, but they can kill a site. It's not implausible that a few extra keystrokes could do the same.[0]

On the other hand, redirects seem like a happy medium, so long as they're fast enough. nasa.gov uses a redirect, that seems fine. Note that they were driven to that (from 'www'-only) after confused fans kept writing in to complain that "http://nasa.gov" was a dead end and that they didn't "even know the basics of running a website."[1]

[0] https://www.nngroup.com/articles/response-times-3-important-... N.B.: In that link, Jakob Nielsen recommended making "www" optional through redirects. It's been a while, so not sure his current thoughts, but the same reasons would apply today. https://www.nngroup.com/articles/compound-domain-names/

[1] https://blogs.nasa.gov/nasadotgov/2011/05/31/post_1306860816... NASA's case provides a real example of something Jakob Nielsen pointed out in the first link: usability is a slave to expectations. So if enough popular sites are using naked domains, and your naked domain just 404s, some users will dismiss your site as unreliable.


I was going to write the exact same thing as you. But then I thought about Chrome removing "http://" to avoid displaying useless and dense information to the user. What about www?


That would be misleading as example.com and www.example.com are not guaranteed to be the same site.


You could say the same for http:// and https://.


I don't think chrome (or any browser) removes https:// from url bars. They only remove http://.


Ah you're right! For some reason I had it in my head that Chrome was hiding that. I really just hate that it hides any part of the full URL.


Doesn’t safari mobile do that?


Safari desktop as well.


No you couldn't, with http vs https you are accessing the exact same resource, just through a different protocol.


Not necessarily. I've written my share of rewrite rules that change behavior based on https vs http.

If that were true, you would never be redirected to an https version of a page.


Well, if the org hosting the content is being nice and following convention, sure.

But, there's nothing stopping me from running a webserver listening on port 80 (so, accessed in the browser at http://example.com) that serves a picture of a baseball. I can also run a webserver listening on port 443 only (with SSL/TLS set up, so accessed in a browser at https://example.com), on the same machine, that serves a picture of a dog instead.

This sort of breaks the rules/conventions though because you expect the resource to be the same by nature of the URL you're using to access it. But nobody has to follow that rule


That's one way to set it up, but there's no guarantee of that. Servers may switch anything based on TLS status or really any other property of a client request.


I agree with this. The people saying it's technically possible to get a different webpage at that resource address are missing the point - it's trivial to do A/B with the same site, same port, same protocol, based on user IP, time of day or a RNG.

The point of a URL is to identify a single address owned by one guy. Removing the www subdomain means you have two addresses, possibly owned by two guys.


The protocol name is part of the URL, so it is possible and technically valid that the https version leads to different content than the http version. Some things are certainly different, for example if there are ads and other linked resources on the https page, they all need to go through https, while on http page, there is no such requirement.


Technically I guess you could make it a subdomain, but no one would actually do that on a production site; in my 14 years I have never seen it used, not to mention that major search engine bots will look up both. The general user will simply type in the domain name without the www more often than not.


`www` used to literally be a different host in a network (and in some cases, I'm sure still is) specifically designated for WWW traffic. Think of universities in the 90s which had their existing infrastructure and an Internet-facing host on their primary domain, and then they want to add a web server. They may have had a firewall, probably no load balancers, so routing port 80 around their primary host was much more complicated than just throwing up a new host and DNS entry.


I regularly come across sites that only work with "www.". Common with university sites that use the subdomain hierarchy a lot. For extra fun, make the behavior reversed depending on if you are inside or outside their network.


Same with my uni. And then you have sites that only work with, and others that only work without the www.

But it makes sense. The first subdomain before the uni domain specifies the faculty, many of which have their own datacenters. Then many of those have yet their own servers in their network, and often www. is one that was added later on.


My uni makes this extra fun with different sites on http and https.


Oh yeah. And then someone enables HSTS and a subdomain doesn't support HTTPS, so now you have to keep a browser around that never ever is allowed to contact the parent site...


Apple actually does this on the iPhone.


I know IE used to do this too.


A redirect is also completely invisible to the user, and should only add a few microseconds to the page load. But you should always redirect www. if you are not using www. Many people instinctively type that into the address bar when you tell them a domain name.


A few microseconds, huh? I decided to do some experiments. Here's how long some 301s on common sites took:

    google.com:    529ms
    apple.com:     261ms
    microsoft.com: 142ms
    reddit.com:     61ms
Which is not only 100,000 times longer than a few microseconds but more importantly well above the perception threshold.


Where in the world are you that google.com's 301 takes over half a second ? It's under 100ms for me.


I don't get the parent's numbers either. I did:

    $ time (curl -L www.reddit.com > /dev/null 2>&1)
    real	0m0.410s
    user	0m0.040s
    sys	0m0.008s

    $ time (curl -L reddit.com > /dev/null 2>&1)
    real	0m0.389s
    user	0m0.036s
    sys	0m0.012s
So for Reddit, I'm going to make the cost of the redirect 21 milliseconds.


I cleared Firefox's cache, opened the waterfall diagram, and looked at the time between when I hit enter (assuming that's 0ms) and the time when the request for the www.* address went out. I was planning to subtract off DNS time if necessary but in all cases it hit the cache and contributed 0ms.

I didn't have wireshark open so I don't really know what happened with google. It surprised me too. Maybe something had to be re-transmitted? Now it seems to take 90-100ms. Perhaps I should have done best-of-three, but my point wasn't about precise numbers, it was about orders of magnitude, and "tens to hundreds of ms" is definitely more in line with what I expected than "a few us".


This is because reddit.com and www.reddit.com get you the http versions, which are both redirected to https.

Try this: curl -sL https://{www.,}reddit.com -o\ /dev/null{,\ } -w "%{time_redirect}\n"


Can I ask you to explain that "-o\ /dev/null{,\ }" magic ?


It's not magic -- it's some form of crude error.

So first off what this does is, it expands to the expression:

    '-o /dev/null' '-o /dev/null '
Even if we remove the latter space by just using `{,}` instead of `{,\ }` curl still returns for me an error code 23 -- CURLE_WRITE_ERROR.

curl seems to interpret `'-o /wtf'` as a command to write to the file ` /wtf`, so this only makes sense if you have a directory called ` ` in the folder you're running from.

You can therefore do this correctly with:

    -o/dev/null{,}
and that correctly writes the contents to /dev/null without issuing a curl write error.


Thanks, it sure looks less ugly with -o/dev/null{,}. I couldn't find any other way to get curl to stay silent and still output redirect times. Hence the crude hack. (Obviously my bash and curl versions had no problem with the spaces or I wouldn't have posted it.)


Using the curl'ing tips up this branch of comments I programmed a highly sophisticated script and dropped it on github. :)

https://github.com/dougsimmons/301debate


Nice!

I would have gone for a simpler loop:

echo -e "sec\tmethod\turl";for url in {https,http}://{www.,}{en.wikipedia.org,{google,reddit,facebook,youtube,netflix,amazon,twitter,linkedin,msn}.com,google.co.in}; do curl -sL "$url" -w "%{time_redirect}\t${url%:}\t${url#//}\n" -o/dev/null; done|sort


Wow. Thanks for teaching me that. Yeah, yours is better, I'll lay it on github with a thank you and maybe "embrace and extend" it.

Or learn python and port it to python, I could swing it that way.. Cheers.


You're calculating load time the wrong way - you shouldn't measure the time from outside the process, the way you do by calling 'time' from the shell. Consider using curl's profiling options next time.


With `time` you're also counting the processing time curl needs to evaluate the 30x response and reissue the http request to www.


Which is valid since this processing time will be included in whatever application the user is using to access the website.


I'm aware I'm nitpicking and maybe being too theoretical now, but the processing time would vary with whatever application the user is using. I'm with negus here, who basically means the same thing, I guess. The generic danger here (regarding benchmarking) is that you're also benchmarking curl itself. If curl (or "the web client") handled 30x redirects super inefficiently, these results could lead to wrong conclusions.


There are plenty of places in the world where internet latency is a big issue, not to mention mobile networks everywhere. There's no reason to add a roundtrip unless absolutely necessary.


In most cases it should stay low too: your browser should retain a keep-alive to the web server, so you're not throwing away the connection.

Future requests will auto-resolve due to caching of the 301.


Mediocre Wifi can easily add seconds of latency.


HTTP/301 redirects are "permanent" per the RFC, and therefore cacheable. Subsequent requests for the apex zone by the user should cause the browser to skip the first request entirely.


Touché.


And this is EXACTLY why you should use www.


Microseconds? That's definitely not the case. It takes tens of milliseconds just to leave your internet router on busy home wifi networks. A 301 redirect is an extra network round trip for no gain and much more (perceived) latency.


That is actually false. The majority of people no longer type in www. For example, in the last 10 years branding has completely removed the www, and user behavior has followed suit. Unless you are targeting an older crowd, the vast majority omit www in a search. With Chrome being the major browser now and with browsers allowing search from the address bar, lookup without www is pretty much standard - again, assuming your users are under 35 years of age.


"many people" != "the majority". Having the site not work with www. would be very stupid.


The www serves as a clue that it's a URL. Most people would recognize example.com as a domain even if given no other context. Contrast that with example.design. I think to most, www.example.design would be more recognizable.


My domain is not immediately obvious as a URL: hisham.hm (also my username in many places, or variants of it, like @hisham_hm). Sometimes I do need to make it more obvious that it's a URL (e.g. when printed, or on presentation slides). In those cases I write it as http://hisham.hm


What looks less technical and more user friendly on a business card?

http://hisham.hm or www.hisham.hm

Personally, I think www is. And it's shorter.


How about @hisham.hm? People are used to seeing it as part of email but it conveys that it's an internet address.


I would think that's a twitter handle and I don't even use twitter.


I think few people look at URLs. They click on whatever is underlined or looks like a button.


If people don't look at the URL there's no usability issue either way, so you might as well use the www version for technical reasons. This (sub-)discussion pertains to people who do look at the url. (And personally I agree that the www helps to denote a url, and so it's a net positive for users, or at least not a net negative.)


I saw a poster on a bus with domain.sydney sitting on its own line with no other context. I'd expect the majority of people had no idea it was a web address.


Yea, there should be an http:// before it on print media. Otherwise no one is going to type it into their phone. I've done that for a couple of projects that use non-standard domains


This would really depend on the demographic. Someone who is more technologically inclined would understand; across the broad spectrum, however, the general public would not. That example, though, has more to do with the domain extension than with www. Another example would be the .co domain extension. In the western part of the country it could be easily recognized, but in the Midwest the demographic is much different and the general public may feel that a .co domain is merely a spelling error. It's all about your user demographic and their perception. Go with non-www and then a redirect. Unless your demographic runs on 2G, either way isn't going to drastically affect your users. I would be more concerned about your file and image sizes than 100ms of a redirect.


There are a lot of offline ads (billboards, magazines, TV spots) that give out a URL.


Even as a technical person, I hardly use URLs apart from sites I know well.

I've noticed more and more recently a trend of billboards and things that just say 'Search [name]' instead of putting their URL there.


I've seen it too. That's a lot of confidence in their SEO.


I bet the sign itself serves as search engine optimization. If everybody starts searching for "foo" and clicks on my site in the result set then a reasonable search engine might bump my site up higher for searches for "foo."


That's an interesting hypothesis. It makes sense for the company both as optimization of their web pages for search engines and as making good impression on customers.


Not really, if I were doing this, I'd buy ads for the search term so I can guarantee (within reason) I'm at the top.


More likely a QR code these days



Thanks for sharing. I was a bit dense for a few minutes.


Haha, yeah I get it. But I scan QR codes all the time, maybe I'll take a photo and submit it.


People still use QR codes?


For loading two-factor authentication secrets. Or anything else you can think of that has too much information to convey with a short link. If you are directing people to a web site using QR codes then you're doing it wrong. Well, wrong now that everyone has realized just how limited QR codes are.


I do for any printed media. It makes it easier for people to look up the corresponding electronic version of a document. QR codes have their uses, but tend to fail when misused. A QR code is best when interacting with it is strictly optional.


They ever did?


Well I think people did, once, but then immediately discovered it wasn't worth the effort.


They're great for managing parts in a manufacturer's storeroom. A QR code is more information-dense than a barcode.


I use them at bus stops to pull up live tracking


Right. By the time most people remember how to scan a QR code, they could've typed in the URL.


Is there a good (or even native) app to scan these on iPhone?

Everything I have tried is lackluster. I'm to the point where if something is qr only I just don't care enough and move on


RedLaser works well. Really well, actually. I'm not a QR fan, but hey every now and again I do scan one (usually out of some weird curiosity) and RedLaser is the best app for the job.


Opera mini does its job, and approaches QR codes the right way, being an input alternative to the keyboard reachable from the omnibox


Scanbot. Great app for scanning documents, but when you scan a QR code, it handles it.


In my experience from my wedding site, which used the .dance TLD, no matter how hard you try people will add .com to the end.


When we did some research for RBI (part of Reed Elsevier) we found that about 8 - 10% of our inbound links linked to the wrong version of the home page.

So correctly handling all standard HP synonyms is a good start for all sites.


HP synonyms?


Apparently it's too hard to type "Home Page".


I had mine on http://[subdomain].rya.nc/wedding/ and had it printed that way with the wedding invitations and even my less technical relatives didn't seem to have problems with it. Did you have a bare domain or did you include http:// and the trailing slash?


I had http://supermarried.dance/ on our invitations. That really apparently confused our older relatives.


Maybe having a subdirectory helped.


Why not http://? To me it looks more obviously like a web address. www just looks weird


> This reasoning is all based on valid, but technical issues for the hosting side. A general rule for any customer-facing business is to put the customer first.

The article says:

"Should I redirect no-www to www?

Yes.

Redirection ensures that visitors who type in your URL reach you regardless of which form they use, and also ensures that search engines index your canonical URLs properly."


Where is the best place to do that? A redirect at the domain registrar, in my webserver (e.g. Apache), or in my app?


You will want the naked and www subdomains to point to the same address in your DNS configuration, and you will want to configure your webserver to issue a 301 redirect to the canonical subdomain at the HTTP layer. There are several reasons for this, but the most pressing is that it does not make sense to use a CNAME record for the naked domain.
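
As a rough sketch of the HTTP-layer part (nginx here, example.com as a placeholder; other servers have equivalents):

    server {
        listen 80;
        server_name example.com;
        # send every request for the naked domain to the www host, preserving the path
        return 301 http://www.example.com$request_uri;
    }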


If you have a larger setup, you’d want to already serve the 301 on your load balancer.


There's a slight advantage to doing it at the registrar because it keeps the hit off your servers. This is assuming of course that http://foo.com/bar will get redirected to http://www.foo.com/bar rather than just having everything redirected to the http://www.foo.com.

Edit: I'm wrong. See tadfisher's comment below.


Note that the usual method for doing so (a CNAME on the bare domain) prevents you from adding other resource records (such as MX) on the domain.

1. http://joshstrange.com/why-its-a-bad-idea-to-put-a-cname-rec...
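
To illustrate the conflict (names are placeholders): RFC 1034 forbids a CNAME from coexisting with any other data at the same name, and the zone apex always needs SOA and NS records, so a zone fragment like this is invalid:

    example.com.   IN  CNAME  hosting-provider.example.net.
    example.com.   IN  MX     10 mail.example.com.           ; not allowed alongside the CNAME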


That's interesting. Currently I have 2 different domains at 2 different registrars. In each case I have the bare name, www and other subdomains as "A" records, not CNAME.

So it doesn't matter if people use the www prefix or not, since both point to the same IP. And with this configuration I've had no trouble with MX records.

If the goal is www/no-www transparency, there are other ways to do it, but using a simple DNS setup seems like the least hassle.


The main reason to redirect to one or the other is to have a canonical URL scheme for search engines.


Great point. And candidly I never actually do this myself (CNAME on the bare domain), I just point it at the server and let apache/nginx handle the redirect.


Yes, this is the problem at my registrar if I setup a redirect there. Everything gets redirected to www.foo.com instead of www.foo.com/bar.


What is your point? I mean, everything the parent wrote still applies.


For very little effort you can simply redirect your main domain, and gain the technical benefits/simplicity of having a www subdomain, all while not inconveniencing your users. This pretty much makes the parent's point of hosting actual content on the main domain for customer convenience moot.


Find me this customer cohort that gives even a moment's hesitation about whether they're at a naked domain or not.


Further, most lay users are familiar with large web properties (amazon, google, apple), and most of those large properties redirect to www (due to aforementioned technical reasons). So if such a user even happens to notice, if anything, they may find it odd to not see a www prefix.


I think that this is exactly right. I have personally observed users become confused over a URL not starting with www. Occasionally I have even seen them add it, just because it was assumed that all addresses should have it at the beginning.

This might change eventually, but it will change slowly if so.


Aesthetics aside, one benefit of naked domains is that shorter URLs are better for sharing in chat programs. 'www.' takes up a nontrivial amount of space in certain environments (e.g., a gchat window within gmail).


I'm not sure I understand why this creates any real issues.


I personally dislike it. It's annoying. Links can be long as it already.


Hence the purpose of the redirect so links to the naked domain work just as well.


But most users will copy/paste whatever's in their URL bar, which will include www. if the site redirected them to it.


When computers were relatively new to me, I used to suspect naked domains of phishing. So there's that.


It could add to the overall experience, just like tiny details in a logo or a well-considered font choice could.


It sucks that this is the state of things, because of the technical infrastructure that's supporting these systems. But it's absolutely not being advocated because it's technically more efficient, because it's cute for branding, because it helps the marketing team collect customer data. It's entirely based on being able to provide the highest level of consistent service to a customer.

Doing it any other way is prioritizing brand over customer.


It doesn't just suck, it's bad for almost everyone.

> Should I redirect no-www to www?

My gut reaction is still no: fix the technology that was never designed for this mass consumer use. DNS could support CNAMEs for naked domains, but the standards groups only care about backward compatibility and zero risk, while consumers still don't understand how DNS works at all, and now we have developers who apparently need articles spoon-feeding them why it's bad one way and not another.


I think most people are used to having www. in front of a url. If they even notice it, I think it will confuse them not having that.


You're describing a world where everybody expected a website to end in .com, and if not that, then maybe a handful of others. That world is changing; what will people's expectations be in the future?


My blog has the URL http://boston.conman.org/ (leaving http://www.conman.org/ for other stuff on my website). I had to add a redirect for http://www.boston.conman.org/ because I had a non-zero number of requests.


This is a good point. Any subdomain that you host web pages on should itself have a www sub that redirects, for this reason.


I've seen some sites that redirect to m.example.com if you're accessing them from a phone. Do you think people are getting confused by that?


Uptime and performance are typically among the most important things for customers. Definitely way more important in general than whether there's a www prefix or not...


I agree. Microsoft, Apple, and Google have the ability to modify user behavior without much downside. The rest of us are better off making the user experience as fast, efficient, and simple as possible.

By coincidence, I just put a 301 redirect on my personal site to go from www.xpda.com to xpda.com two days ago. If I could only get ctrl-enter to submit https://domain.com instead of http://www.domain.com in Firefox, I'd be happy. (There was a bug that prevented this last time I checked.)


Using www puts the customer first.

Being able to serve your website from a CDN helps the customer: it means the website is up when they want to visit it, and it usually means it's faster for them.

Having static content not get cookies helps the customer: it keeps HTTP traffic down, which means that pages load faster.

Separating cookies between different websites helps the customer: it means that when, inevitably, blog.example.com gets broken into, the customer's account info at www.example.com is not compromised. (Of course, this assumes that you're doing least-privilege between your various websites, but you're already doing that because you care about your customer and don't want to send them a letter about how you lost their personal information.)

Not using a separate top-level domain for static content helps the customer: they know that www.example.com and static.example.com are from the same company they trust, Example Industries, Inc. If you caused their browser to show that it's loading files from "static.excdn.com" or something, they might worry.

The customer isn't personally following the redirect; their browser does, so I'm not sure I understand how the redirect deprioritizes the user. As far as visual clutter goes, get an EV cert, which either supplants or replaces the address bar with something much more useful than a URL:

https://www.expeditedssl.com/assets/browser-ssl/extended-val...


What!? You saying we can't make absolute rules? We have to consider the context? Horrible person!

On that note.. those minimalist designs are quite hard for older or less tech-savvy users to understand at times. Not the www part, of course, but the designs in general. Something to consider when putting the customer first, maybe..


Remember that HTTP vs HTTPS adds yet another dimension you have to take into account. This bit me the other day.

I run a site at example.com (I prefer a naked domain, for no technical reasons whatsoever), with a CNAME record for the www subdomain. But I only want to serve the site over HTTPS. So http://example.com redirects to https://example.com, as does http://www.example.com. Simple enough, right?

I however started receiving some spurious reports that the Google Account login option wasn't working on the site, which was quite puzzling at first. Turns out, some users were manually entering https://www.example.com as the address (it's not indexed or linked to anywhere in this form that I could find), which was being handled by the Nginx default_server directive on port 443, causing the site itself to appear to work just fine at https://www.example.com as well. But the Google OAuth service checks the authorized origins for any client side requests, saw www.example.com and was expecting example.com so simply failed silently. Doh!

TL;DR: If you redirect to HTTPS by default, check that all 4 options ([www, naked] * [http, https]) work correctly, and that all redirect to just ONE canonical name to keep things sane. And make sure the 3 redirects preserve any request URI parts after the domain as well.
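
For what it's worth, a minimal nginx sketch of that setup (example.com as a placeholder, ssl_certificate directives omitted; the canonical https://example.com server block then serves the site itself):

    # http://example.com and http://www.example.com -> https://example.com
    server {
        listen 80;
        server_name example.com www.example.com;
        return 301 https://example.com$request_uri;
    }

    # https://www.example.com -> https://example.com (the certificate must cover www as well)
    server {
        listen 443 ssl;
        server_name www.example.com;
        return 301 https://example.com$request_uri;
    }

All three redirects preserve the path and query string via $request_uri.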


> And make sure the 3 redirects preserve any request URI parts after the domain as well.

I intentionally break such requests by dropping anything beyond the host name.

If someone is sending data in the open, I don't want their clients to keep working thanks to built-in support for redirects.


The OP could have avoided a lot of confusion if he began his article like this:

> www. is not deprecated for webmasters (but users don't need to type it)

> This page is intended for webmasters who are looking for information about whether or not to use www in their canonical web site URLs; however, the website can still be advertised without the www and the user never needs to type it.

Even in this HackerNews discussion with technically knowledgeable people I see a lot of discussion stemming from this misunderstanding of what the OP is trying to say. (Example: "I should take the toll by typing www every time? life is too short.")


Yeah, this would have helped a lot to avoid confusion. I suppose the author of the website expects mostly webmasters to come and read what he writes. But there are a lot of other people here as well. So probably it's not the author who should adapt his text (why should he even know we exist); rather, the title of the link on HN itself should make it clear.


Well, I agree that users shouldn't need to type it, but it should still be advertised, especially in print.

www makes it immediately obvious that you're looking at a url, which is especially important with gtlds. A line at the bottom that says dog.spa is meaningless. www.dog.spa fixes that with only 4 characters.

You could print http://dog.spa, but now you've added 7 characters and it looks even more technical, which is stupid if the goal of removing www is to simplify things.


I'm perplexed by the cookie claim. I have a naked domain (foo.com) and a static subdomain on the same domain (static.foo.com), and the cookies of foo.com are not being sent to static.foo.com even if the path is configured as /. (I can see that cookies are not sent from the dev tools' network tab. The cookies for Google Analytics, which set the domain to .foo.com as opposed to foo.com, are being sent to the static subdomain though.)

Could someone enlighten me on this? Seems like the article might be spreading misinformation.

Edit: Seems like the issue is "host only cookies" are just sent to foo.com, not to static.foo.com

A cookie, unless the domain is explicitly set, is already host-only. So you will see that if you set up a naked domain, cookies you set for authentication will probably not be sent to subdomains by default.

Third-party cookies on your application, like Google Analytics, on the other hand, have a domain name specified and are not host-only, so you will see your Google Analytics cookie being sent to the static subdomain.

So, this statement from the article seems to be wrong:

"If you use the naked domain, the cookies get sent to all subdomains (by recent browsers that implement RFC 6265), slowing down access to static content, and possibly causing caching to not work properly."

It should be

"If you use the naked domain, the cookies which are not host-only and have domain set get sent to all subdomains (by recent browsers that implement RFC 6265), slowing down access to static content, and possibly causing caching to not work properly."


Well, the obligatory counterpart:

http://no-www.org/


http://www.www.extra-www.org/ used to be around, but the domain expired at least two years ago.


I know some websites that even prefix all their subdomains with www. For HN it would look like this: https://www.news.ycombinator.com

On the other hand, http://www.bbc.co.uk is similar, but co.uk is sort of the ".com" of GB.


I tried to pull it up on archive.org, but the archive page seems to redirect in a loop, even though the response has a 200 status: https://web.archive.org/web/20100906050755/http://extra-www....

Edit: Looks like the tag <meta http-equiv="refresh" content="0"> will trigger an immediate client-side refresh. Why do browsers support this?


Is anyone else really bothered that 2 sites advocating for "best practices" fail to use HTTPS? People are willing to write paragraphs arguing the technical merits of subdomains but nobody could take 10 minutes to put their site behind CloudFlare?


Not really. These are old sites built before CloudFlare even existed or the encrypt-ALL-the-traffic movement started.

I myself prefer non-HTTPS over CloudFlare for sites like this. As for best practice:

> Back in 2003, Lee Holloway and I started Project Honey Pot as an open-source project to track online fraud and abuse. The Project allowed anyone with a website to install a piece of code and track hackers and spammers. We ran it as a hobby and didn't think much about it until, in 2008, the Department of Homeland Security called and said, 'Do you have any idea how valuable the data you have is?' That started us thinking about how we could effectively deploy the data from Project Honey Pot, as well as other sources, in order to protect websites online. That turned into the initial impetus for CloudFlare. -- Matthew Prince, CloudFlare CEO


I'm using CloudFlare for GH Pages based web-sites with custom domains. However, Tor user in me says that's not really a "good practice". Looks like there isn't a way to turn off that annoying "Attention Required!" "One more step" checks, is there?


"Technical part" is not as convincing.

P.S., About cookies: Don't use cookies, LocalStorage is much better.


It doesn't accomplish the same thing. Cookies are added to every web request, while you have to supply your localstorage values manually if you want to use them. At the least, this requires JavaScript. It's also impossible to do for regular clicks on links and when loading images and scripts. Sometimes you want to authenticate those too.


> "Cookies are added to every web request, while you have to supply your localstorage values manually if you want to use them. "

And that's actually an advantage of local storage, because there's less garbage traffic.


Making use of local storage requires javascript, which definitely adds to the amount of garbage traffic!


I don't talk with people who are saying web in 2016 is possible without JS. They are either trolls or fanatics.


Or they still see the difference between documents on the web and web applications. There is zero need for JS to be mandatory in the first case.


And you want to say "documents" need to send cookies to control authorization?


Ok, I'll rephrase: only use cookies when you want to send them with requests to static files. But don't forget - you will not be able to use a CDN in this case.


You could also set up a cookie-free domain and have your static assets accelerated by the CDN while keeping your files (which send cookies) on a separate domain.

You can still use a CDN in this case; however, by default files with cookies cannot be cached on a CDN's edge servers. Some CDNs, such as KeyCDN, have the ability to exclude the cookie from the static file, allowing you to ignore the cookie and/or strip it entirely (https://www.keycdn.com/support/pull-zone-settings/). This ensures that your file is cacheable, which is important, for instance, if you are using Cloudflare, which adds a Set-Cookie header to domains routed through them.


Nothing stops you from using a CDN with cookies as long as your app generates proper caching directives and the CDN obeys them. The only reason this would ever be a problem is if your app/server does not correctly mark pages that may contain private/user-specific content accordingly.

Further, a common method to reduce the risk of this is to place purely static public assets and user-specific private data on different domains.


And who will handle the authentication process? The CDN? And different domains can be an issue too.

> Further, a common method to reduce the risk of this is to place purely static public assets and user-specific private data on different domains.

In other words, "don't use a CDN when you need cookies", fine.


A CDN is just a reverse proxy with caching. The "proxy" part means anything, including cookies, can be transferred from the end-user to your origin server.

There is no issue using CDNs with cookies.


No, a CDN is a content delivery network, and it will not wait for your server's response on each request; otherwise it would defeat the whole point of a CDN's existence.


Yes, that's what the letters CDN stand for and is just a marketing term.

Technically, they are reverse proxies with a focus on caching, however that is not all they do. You can use them for various other features like security and front-end optimization and they work fine with cookies.

It's common to use a CDN for the entire site - caching static files at the edge while sending page requests to the origin server with all the cookies, especially important as many sites are now dynamic and customized to the individual. There is no issue in using a CDN to proxy all requests.


Maybe you haven't noticed, but we were talking about authorized access to static files using cookies. And I was telling you there's no point in using a CDN if EVERY request will be sent to the main server (it will be even slower). Now you are trying to explain to me that a CDN can send SOME requests to the main server (for dynamic pages), while delivering static files from the point nearest to the user, without authentication by cookies.


I don't see where in this thread that became the topic; rather, it's been about cookies in CDNs. You still seem to think that CDNs are only used for caching, when that is just one of their features. You can also use them for security, for example, without using any caching at all.

If you need to authorize every single request (even to a static file) which means that every single request is unique, then this is obviously not a good use case for an edge server cache. You can still cache things in the browser with cache headers and continue using the CDN to proxy the full request with cookies to the origin. This doesn't add much latency and can sometimes decrease it because the CDN will keep faster connections open to the origin.

However, most CDNs today also offer their own access controls either with cookies or url tokens so you can do authorization at the CDN edge instead of the origin.

And yes, you can use the CDN to cache static files and just proxy requests to pages. Usually private information that requires authentication is in the webpage, and the static files like javascript, css and images don't need protection.


> Don't use cookies, LocalStorage is much better.

Source? Not saying I agree or disagree, but a statement like that without any supporting arguments or discussion isn't particularly helpful.


Have you tried to find something? Let me help you: http://caniuse.com/#feat=namevalue-storage http://stackoverflow.com/a/3220802/680786

In short: LocalStorage is supported by all browsers, has none of the security issues that cookies have, has a much bigger storage limit and a much handier API, and doesn't need to be sent on each request.


They're not the same thing. Cookies add state to HTTP which is very useful, compared to LocalStorage which is really just client-side storage that can also be used to store state but at the higher web application level rather than HTTP.


If you need state you typically use JS + XHR and can set your own headers. Cookies are just header values with some predefined behaviour wrapped around them.


Sure, which again means it's on a web application level rather than working at the HTTP level.

There are lots of uses for cookies without having JS or a full browser available.


If SRV[1] records had been a thing sooner, we could have had our cake and eaten it, too. SRV records encode the protocol into the DNS entry. If you wanted the HTTP server for example.com, for example, you'd lookup the SRV record for _http._tcp.example.com. You get back the IP and port of the host to connect to.

If you had a hosting provider, you could CNAME _http._tcp.example.com to your hosting provider. Naked domains + CNAME works as expected.

(And you can weight records, assign priorities…)

[1]: https://en.wikipedia.org/wiki/SRV_record
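
A zone-file sketch of what that might look like (hosts, ports, weights and TTLs are placeholders); the four SRV fields are priority, weight, port and target:

    _http._tcp.example.com.  3600 IN SRV 10 60 8080 web1.hosting-provider.example.
    _http._tcp.example.com.  3600 IN SRV 10 40 8080 web2.hosting-provider.example.
    _http._tcp.example.com.  3600 IN SRV 20  0   80 fallback.example.com.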


No. The main issue here is cookie control, or, to be exact, the complete lack of one.

You can't tell user-agents "the cookie I set must be valid for example.org and websocket.example.org but no others" (the example is crude and non-scalable; real-world semantics of this would have to be different), which leads to all sorts of problems: heavier static media requests, mixed cookie state if you use staging.example.org for a pre-production environment, inability to give third parties subdomains for their UGC, etc. All of this can be solved, but it's not really convenient.

If there were a way to have good control over cookie scoping, a lot of hacks would go away and the www/non-www distinction would be purely cosmetic in most cases. Granted, not all; SRV records are still a good idea.


Consul service registry uses SRV records in this manner and it's fantastic.

However, on the wider internet with third party services it's much easier to manage and debug dedicated port numbers.


I'm using a naked domain. The biggest reason why I'm using it is because my domain name is as minimalistic as it can get (six characters long, seven if you count the dot), and I sure as hell love telling people to just type in my alias and add .me at the end.

Although I did not know about these issues, I have to say that the source really didn't give me strong enough arguments to convince me to go through the hassle of making the switch.

With that being said, I opened up a couple of links from the source and I will look through them and see if they'll change my mind.


Yeah, I use a naked domain too because I got a short and nice one, and I like it this way. All this stuff about cookies is a non-argument for me—I don't set any cookies and don't want to. My DNS host does happen to support ALIAS records, and I find the non-technical argument that the www prefix "serves as a gentle reminder that there are other services than the Web on the Internet" kind of silly. I don't host any other services except ssh, and that's just at the naked domain, too. This whole thing is a non-problem for me and all my users...


301 redirect the naked domain to the www and you can keep telling people the same thing - they will be redirected to the www version seamlessly.


You can do that, but the redirect will affect initial load times. Might be worth it, might not be.


I used to host at lowmag.net (my old username, still used in places like hn for laziness purposes).

I now use the bare version of my name plus .com.


We also strongly discourage users from using naked domains, unless they have a DNS host that supports ALIAS records or CNAME flattening.

I wrote a post with all the details around why it's best to use www and why naked domains can be really bad for performance and uptime:

https://www.netlify.com/blog/2016/01/12/ddos-attacks-and-dns...


Your post is grey, light-weight text, which are two reasons people won't read it.


And the background isn't even white. Come on!


And "font-weight: 300", naturally. Argh!


> You should use www because today you have a small web site, and tomorrow you want a big web site. Really big.

So Twitter, Pocket, Github, Trello...are all doing it wrong?

I really don't think this matters much, and I think that the non-www version makes a web address so much more readable.


The advice being given by the OP is outdated. The only relevant point is the CNAME for load balancing which, unless you are on AWS which permits CNAMEs on root domains, can be a restriction.

Everything else is advice from someone who still thinks it's 1995. The norm now is for naked domains. www is a concept that has vanished from the "real world" that most of us live in here in 2016. It's natural to hang onto things that you grew up with, and the "I should use www" is one of those things. It used to be the norm. This is no longer the case, and there are few arguments to support keeping it other than "it's what I originally learned to use".


Boy, I hope someone informs Amazon, Apple, Google, Facebook, and Microsoft that it's not 1995 anymore--they're still using www.

But, I guess those are more like "old man" companies, not cool new 2016 companies like AirBnB, Uber, and Snapchat. Oh wait, they use www too.


Google are really inconsistent in this:

[typed in] -> [sent to]

maps.google.co.uk -> www.google.co.uk/maps/

www.maps.google.co.uk -> www.google.co.uk/maps/

calendar.google.co.uk -> server not found error

calendar.google.com -> https://calendar.google.com/calendar/...

www.google.co.uk/calendar -> 404 error

www.google.com/calendar -> https://calendar.google.com/calendar/...

google.com/calendar -> https://calendar.google.com/calendar/...

google.co.uk/calendar -> 404 error

But then they probably know that most of their users simply type "google" in the search box and then click on "maps" or whatever ...


The thing is, www prefix was never a requirement. It came about as a convention to very specifically indicate that www.example.com = web server, and ftp.example.com = ftp server, etc. There's no real reason to do it, other than convention from the early days. The CNAME restriction is really the only technical gotcha, and this is slowly becoming a non-issue with DNS services like Route 53 that now allow CNAME on roots.


>There's no real reason to do it, other than convention from the early days. //

It wasn't convention; it was that the naked domain wasn't assumed to have web content, as not all domains had web servers on them. If you're used to gophering in to balrog.example.com, then a www probably was required to get a browser to reach the www pages.

Supposing VR takes off and people have a VR space as their primary internet presence, then we might be having the conversation as to why we bother with the vr. subdomain on all the URLs.


> If you're used to gophering in to balrog.example.com then a www probably was required to get a browser to reach the www pages.

Wouldn't the protocol before the domain handle that ? Same goes for ftp etc.


The protocol only tells your computer which program and protocol should be used to communicate with the server. The server name itself is the destination. If you tell your ftp client to connect to ftp://www.example.com it's going to try and connect to the IP address returned by an A lookup to www.example.com. It has no way of knowing you actually want the IP returned by an A query to ftp.example.com.

SRV records could clean this all up, but there seems to be stubborn resistance by web browsers against doing srv lookups.


> The advice being given by the OP is outdated. The only relevant point is the CNAME for load balancing

> Everything else is advice from someone who still thinks it's 1995

What about the part about cookies and static content? Why do you consider that inapplicable nowadays?


Cookies are broken down into two possible situations:

1. Serving of static content from a CDN on a separate hostname than the web servers. First of all, 99% of companies do not fall under the umbrella where saving the few bytes of the cookie request header is going to save you many Mbps/Gbps/packets of bandwidth. For most of us, having our example.com cookies unnecessarily sent to static.example.com isn't a big deal. Secondly, this is remedied by using a separate domain name for your CDN rather than a subdomain. Example: Facebook uses fbcdn.net rather than a subdomain on facebook.com - even though they do still use the www prefix.

2. You want separate "sections" to your website, akin to Google's mail.google.com, news.google.com, images.google.com, maps.google.com, etc. Maybe you want the cookies for your root domain to only be sent to that root domain and not all your subdomains. This is not completely ignorable, but I have personally not seen a situation where cookies on the root domain or www cannot/should not be able to be shared across subdomains.

I just find it odd that www is still considered a convention by some. If you're redirecting the root to www anyway, there is no reason to use www. You may as well use hi.example.com as your primary domain and just redirect example.com to hi.example.com. www has no intrinsic meaning, it's just a regular subdomain with DNS records like any other subdomain.


It only matters if you intend (or require) to CNAME your main site off to some other DNS name. If you're serving your entire site from S3 for example, you can't just alias yoursite.com to your-bucket.s3.amazonaws.com, but you can CNAME www.yoursite.com and have a small server sitting on yoursite.com sending a 301 redirect for every request.

Github and Twitter are large enough to not care. And if you're using something like Cloudflare, they can just take over your IP address with BGP, no DNS trickery needed.

I've worked as a sysadmin for more than 10 years now, and never realized that it's not possible to CNAME the domain root. It's good to keep in mind, but in most cases there are other workarounds.

This is also why "naked domains" set up a whole other domain for static files rather than "static.yoursite.com", to avoid the "top-level" cookie (megacookie?) being sent with every request.


> If you're serving your entire site from S3 for example, you can't just alias yoursite.com to your-bucket.s3.amazonaws.com

You can if you also use Amazon's Route 53 service which can resolve the A record of your naked domain directly to a S3 bucket or CloudFront endpoint. They call it an Alias record: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/res...


> I've worked a sysadmin for more than 10 years now, and never realized that it's not possible to CNAME the domain root. It's good to keep in mind, but in most cases there are other workarounds.

The common approach is called CNAME flattening, where you can specify a CNAME at the root and your DNS provider 'flattens' that by resolving the A/AAAA record at the end of the CNAME chain.

e.g. CloudFlare's approach: https://support.cloudflare.com/hc/en-us/articles/200169056-C...


By the way, even the CNAME on root domain is no longer a concern if you are already using AWS. Route 53 has supported CNAME on root domain for quite some time now: https://aws.amazon.com/blogs/aws/root-domain-website-hosting...


It's not a true CNAME, but rather a sort of server-side translator to make it behave as a CNAME while really just returning A/AAAA records. It's more like the 'ALIAS' record.


> And if you're using something like Cloudflare, they can just take over your IP address with BGP, no DNS trickery needed.

Why can't Heroku and such say 'put this IP as an A record' and do the same if they have issues with that particular server?


Not to mention archive.org. If they don't care about archival URL's, I'm not sure who does.

Edit: the answer is probably http://longbets.org/


As PG says, when you're starting out, do things that don't scale.

My advice is don't over optimize _anything_ until you actually need it.

In the early stages a naked domain is a branding decision that looks nicer than old school www, to my eyes, at least.

You can avoid the cookie issue by not using any subdomains and sending all your calls to single sub URIs such as yoursite.com/api/, yoursite.com/blog/

If you're running your own infrastructure on AWS, for example, you can start your scaling efforts simply with load balancers and multiple instances all pointing to your naked domain. That's going to get you pretty far until you're so big that you need geo scaling & distribution.

Then, if you need geo scaling, such as Akamai or other geo load balancing solutions, you can start to redirect your traffic away from the naked domain to www or whatever.


I frequently use naked domains as they "read back" better in every site I use them in, remove visual cruft, and require recalling fewer characters from customers' memories.

Other websites I frequent that uses naked domains include:

  - stackoverflow.com
  - github.com
  - twitter.com
  - stripe.com


I'm surprised that they really don't redirect. Would be interesting to see how they deal with the problems mentioned. As always, there are probably different solutions to the same problem.


They do redirect, but it's actually to go from the www to the naked domain. This always leaves the naked domain in the URL bar and is, to me, a clean look.


See also these directions for redirecting naked domain to www,

http://www.yes-www.org/redirection/

Unfortunately, some hosting providers make this very difficult or impossible. I struggled to get this working on a Google Site hosted at a Google Domain using their DNS tools; it seems Google Domains doesn't have a basic "redirect to www" feature. Eventually I gave up and used Dreamhost's nameservers instead; they offer this feature (and it's free; you don't need to pay for hosting).



I guess they don't provide DNS ALIAS records. I normally see subdomain redirects implemented as an HTTP 301 redirect in the server configuration.


Or, just use a DNS nameserver that can emulate an apex CNAME, if you are that concerned about letting a third-party renumber their servers at the drop of a hat. I know CloudFlare can do this and it's a feature that standalone DNS nameservers should support.


> The technical reasons to use www primarily apply to the largest web sites which receive millions (or more) of page views per day

How do GitHub and Twitter for example deal with this? Do they have to go through a lot of hoops in order to not use 'www'?


> Do they have to go through a lot of hoops in order to not use 'www'?

No, they don't. The claim that apex domains can't be used with CDNs is badly out of date.

Even a free-tier Cloudflare account supports CNAME flattening, which solves the problem just fine.

https://support.cloudflare.com/hc/en-us/articles/200169056-C...


When you reach their size, it's not a big deal to build and maintain your own DNS serving infrastructure; in that case, you can do as you please, which might include serving up a pool of A records round-robin style or using an expensive magic DNS geodirector thingamabob.
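
For what it's worth, round-robin at the apex is just multiple A records for the same name, e.g. (addresses from the documentation range):

  example.com.  300 IN A 203.0.113.10
  example.com.  300 IN A 203.0.113.11
  example.com.  300 IN A 203.0.113.12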



While the original post was not specific to InfoSec, it does raise the question of whether there are any security implications to using or not using "www" in your domain names. Does a naked domain pose any risk? I haven't seen any suggestion to this effect in the discussions on the topic so far, but it's a good question to ask in case those of us who prefer naked domains are missing something.


As mentioned on the site, there are some stipulations about cookies that are fairly important if you're doing something sophisticated.


Missing reason: provides an anchor for weird domains.

Imagine being confronted with something.nyc, or something.cool, or something.repair. WTF is it?

That www gives you some context.


Why not just http://?


Exactly! http:// makes a lot more sense, and people are less likely to tack a .com on the end.


That's good news. I chose no-www because www seemed redundant and have been wondering if it's a problem. Turns out no, unless it's very big, which I don't expect my site to be (niche market, not a web app). It can even be changed later by redirecting no-www to www.

Is this limitation a problem in the design of the domain name system, or something quite natural and necessary?


I can't for the life of me figure out why web browsers don't yet do a query for an SRV record named _http._tcp.example.com when the user browses to example.com.


There are old open bugs for this against most major browsers. See e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=14328

For me, this would allow me to have domain names that point to various HTTP/HTTPS based services behind my single outward-facing IPv4 address. As-is, I have to memorize what non-standard ports they're on.
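
For the curious, such records would look something like this in a zone file (hypothetical names and ports), if browsers actually honored them:

  ; one public address, different services on non-standard ports
  _http._tcp.media.example.com.  IN SRV 0 0 8080 home.example.com.
  _http._tcp.wiki.example.com.   IN SRV 0 0 8081 home.example.com.
  home.example.com.              IN A   203.0.113.5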


Why not just use a reverse proxy and virtualhost entries? A straightforward proxy will take care of the hostnames, and SNI will handle https.
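
A rough sketch of that suggestion, assuming Apache with mod_proxy and mod_ssl, hypothetical hostnames and internal addresses (certificate directives omitted):

  <VirtualHost *:443>
      ServerName media.example.com
      SSLEngine on
      # SNI selects this vhost by hostname; proxy to the internal media server
      ProxyPass        / http://10.0.0.20:8080/
      ProxyPassReverse / http://10.0.0.20:8080/
  </VirtualHost>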


The servers are located in different parts of the continent, and the only one that is capable of acting as a proxy has metered bandwidth, while one is interactive and another is a media server.

I suppose I could set up a system of HTTP redirects, but combined with HTTPS I feel that will get hairy quickly. The service-level redirection that SRV records provide would exactly match my needs.


This came up again during HTTP/2. The browser vendors just aren't willing to take on the extra latency at the start of the request.


Where is the extra latency? You can do DNS queries for example.com and _http._tcp.example.com concurrently so that you get both answers in the time of one round trip to the DNS server.

If there actually is a SRV record and the target isn't locally cached then you would need to do another query, but that is faster than establishing an HTTP session with example.com, getting the 301 redirect and then having to do the query anyway.


They didn't want to wait for multiple queries to finish. If you're interested in a detailed rationale you can take a look through the mailing list archives (though it'll be frustrating reading).


And that's because the major DNS servers don't bother answering more than one question per query. Nothing in the DNS specification limits the number of questions to one. The only downside might be that the response to multiple questions might not fit in the standard UDP DNS packet.


DNS servers can't answer more than one question per query because NXDOMAIN is signalled in the message header and so when QDCOUNT is greater than one the response becomes ambiguous.


Regarding defunctness, there's nothing magical about the 'www' string - it can be anything, as long as it's not the DNS zone apex (~='base domain'). There are limitations on what you can do with the zone apex entry (such as no CNAMEing it), but no limitations on other entries. 'www' is just convention.

As to your question, it's a bit of both. I'm not a network admin, though, so I'll let someone a bit more skilled pipe up. I have run into difficulties with migrating 'bare domains' in a small business environment due to these limitations, though. Not insurmountable, but required more work to cover the issues. Of course, when you're 'big', you'll be dealing with these kinds of issues a lot more.


Use a CNAME. Here's another good explanation: https://www.netlify.com/blog/2016/01/12/ddos-attacks-and-dns...


> If you are using www, then this is no problem; your site’s cookies won’t be sent to the static subdomain (unless you explicitly set them up to do so). If you use the naked domain, the cookies get sent to all subdomains (by recent browsers that implement RFC 6265)

From my understanding, pretty much all browsers DON'T send google.com cookies to subdomains -- they only send .google.com cookies to all subdomains.

This seems backed up by their cited RFC 6265[1]: > Unless the cookie's attributes indicate otherwise, the cookie is returned only to the origin server (and not, for example, to any subdomains)

Am I just confused?

[1] http://tools.ietf.org/html/rfc6265#section-4.1.2
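
For reference, the distinction the RFC draws is roughly this (hypothetical cookie, with example.com as the origin server):

  Set-Cookie: session=abc123                       <- host-only: returned to example.com only
  Set-Cookie: session=abc123; Domain=example.com   <- returned to example.com and all its subdomains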


I dislike www simply because it is the world's worst acronym: it has three times as many syllables as the words it replaces.


dub dub dub ?


you don't need to do a mod_rewrite thing, btw, just do:

  <VirtualHost *:80>
      ServerName whatever.com
      Redirect permanent / http://www.whatever.com/
  </VirtualHost>


That either doesn't redirect "deep" links, or it redirects them to the start page.


Got any proof of a case where this happens? I'm using it to redirect to https:// and it redirects deep links to the right deep links, and nothing goes to the home page


I'm running it on my personal site and all links work as intended


Honestly, if you're running on a simple Apache config, you almost certainly don't need to worry about any of the technical issues the article raises.


If your service requires scaling because it's expensive dynamic content, then Apache could easily be fast enough and not be the bottleneck requiring a switch to something else.

It doesn't matter if nginx/whatever can handle 1000+ reqs/sec if your app consumes 300MB of ram per request and takes 1 second to process.


www bothers me for mainly one reason. It is a pain to say.

It is one of the few abbreviations that takes longer and more effort to say than the unabbreviated words themselves: world wide web.


And... few people actually pronounce it out.

I've lost track over the years of hearing major media outlets give addresses as "double-you double-you double-you ourdomain dot com" Typing what they actually say will get you nothing, unless they were sharp enough to also get wwwourdomain.com


Some DNS services are providing "ANAME" records: http://www.dnsmadeeasy.com/services/anamerecords/ for instance.


Exactly! It's a little questionable that the linked website doesn't mention the existence of these types of records considering how widespread they are these days. Every DNS host I've used in the last five years has supported something like "ANAME". For example:

* CloudFlare's "CNAME Flattening": https://blog.cloudflare.com/introducing-cname-flattening-rfc...

* DNSimple's ALIAS record: https://support.dnsimple.com/articles/alias-record/

* Route 53's alias records: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/res... (although these need to go to an ELB, CloudFront distribution, S3 bucket, or Elastic Beanstalk environment)


Those are non-standard proprietary extensions that aren't universally available. Omitting them is not questionable.


What a bizarre advocacy site. If you need the subdomain, you'll use it. Otherwise, why make such an impassioned plea for 'www'?


Brand awareness and recall are king for small niche companies. We removed the www to bring our brand, a unique but simple domain, into focus. IMHO, if you have a trade and/or trademarked domain name and wish to get more organic traffic, using a naked domain is the best choice.


yes-www is clearly the more practical solution based on current technology and its limitations. And no-www (http://no-www.org/) is clearly the more aesthetic solution.

Either is perfectly valid, and depending on who you ask you'll get reasonable arguments in favor of both. That said, neither choice will ultimately make any difference. Period.

Even if your site becomes the next Google, the minor hurdles of a no-www domain vs the technical advantages of a yes-www domain will not make an ounce of difference. Anyone who tells you otherwise is straight up fooling themselves.


I found this article hard to read. It seems to boil down to three points:

1. Some hosts might want you to use a CNAME record to send them your web traffic, and CNAME records are undesirable for apex domain names because they can't coexist with other records you might need (like MX).

2. Cookies set on the apex domain will be sent to subdomains.

3. Older browsers may not let the apex domain read cookies set by subdomains.

Is that right?

(CloudFlare deals with (1) by just hosting your DNS, and I don't have enough experience with (2) and (3) to have strong opinions on them.)


> 1. Some hosts might want you to use a CNAME record to send them your web traffic, and CNAME records are undesirable for apex domain names because they can't coexist with other records you might need (like MX).

Nitpick: With the exception of RRSIGs, a CNAME record must be the only record at a given owner name. Since the apex of a zone has a SOA record, a CNAME cannot exist there.
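
In zone-file terms the conflict looks like this (hypothetical provider target):

  example.com.      IN SOA    ns1.example.com. hostmaster.example.com. ( 1 7200 900 1209600 300 )
  example.com.      IN MX 10  mail.example.com.
  example.com.      IN CNAME  lb.hosting-provider.net.   ; invalid: CNAME cannot coexist with the records above
  www.example.com.  IN CNAME  lb.hosting-provider.net.   ; fine: nothing else lives at this name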


Welp, stupidly I dropped the www with zero research because I saw others do it. Love the explanation of why keeping the www is useful.


I did too. Then I wanted to host a webpage out of Google cloud drive. Oops, it doesn't support naked domains. Back to www in the future.


Interesting that I responded to this in 2007 and it still rings true today!

http://wade.be/yes-www/

Somewhat ironically, in 2016 I've since moved to no www on my none business critical blog.


Pedantic correction: that should be "non business-critical".


Many, many websites fail to display anything if www is not prepended to the URL, particularly those of small businesses. If anyone is looking for a business development niche selling basic consulting to small and medium-sized companies, it is this.


I've always hated the www prefix, but I understand the technical gains of using one. If you are a domain owner, subdomains give you quite a bit of flexibility. You can always use a different prefix than www.


Having run a website on an apex domain for over 5 years, I can tell you all the mentioned issues are easy to overcome.

But if you are a hobbyist or a novice at running a website (most startups), then this is good advice.


... Or you could just let your CDN control DNS.

I think www is a relic from the days when you only had one host per server ...

Or do you want to slap www in front of everything, like www.news.ycombinator.com!?


I don't get why blocking cookies to static.domain.com is a desired feature, especially when sharing cookies between app.domain.com and login.domain.com would be desirable.


With DNS and CAs as broken as they are, I'm skipping this and wondering why we don't just go back to using IP addresses. (I am also the resident contrarian, so...)


The "having to buy different domain names" to server static content seems like a really minor downside. Cookie point is a good point though.


Empirically, www seems to have won -- a large fraction of the top consumer sites online have picked it. One could make UI/UX-type arguments for no-www, but sites like Facebook and Pinterest have some of the best UI/UX people in the world, and they have still picked www.

Why is this the case? What are the top reasons for www? Is it the cookie thing? I've heard that www lets you play DNS tricks as well, but haven't seen more details on this.

I'd love to hear more about this choice from people who understand the decisions at top Internet companies.


I'm a convert from non-www to www. Just avoids security implications re: cookies and no need to buy secondary domain for static assets. A simple redirect mitigates the "ux" argument that the domain looks less nice to type or read. Simple implementation with more pros than cons.

I should add that it's not a hard-and-fast rule, and it depends on the use case. But the default question for me nowadays is "why no-www" and not "why www".


A www. prefix just makes life easier for everybody running the site, so there is really not much reason for not having it. Supporting the redirect is easy, and people generally don't seem to care.


Your users won't save 4 chars...


"www." is just 4 more characters I have to put on the flyer. no-www til I die!!


I still don't think this makes sense. You can just get a static IP and point it at a load balancer.

How would this help if your site fails? You can CNAME undercarriage services like api and blog.

What do you gain from www?


Yes, but what about the trailing slash ...


Two reasons are listed:

- DNS records

- Cookies

Both of these things seem to have been designed with "www" in mind.

So the reason boils down to: original specs of various technologies are optimized around the assumption that your site will have a www (or more accurately: a subdomain rather than a naked domain).

I find "www" aesthetically unpleasant. If one must use a subdomain, how about "web" instead? It's still three characters, but much cleaner.


"web" would also satisfy all of the requirements, at the expense of not following convention.


Whatever the reasons, should I take the toll of typing www every time? Life is too short.


No, you shouldn't; the article suggests server-side redirection so you don't have to type www in front of the URL. Many of the highest-traffic websites already do that, Twitter being a notable exception.


You're a robot. What do you care?


www is a relic of a different time, when networks were client-server first and HTTP was an afterthought. Now it is expected that the root domain, at minimum, brings up a single-page description of the org.

Add the www CNAME with a redirect to the root, and start phasing www out.


I am glad to have some rando explaining how Google's domain names are wrong.



