
Always use subdomains. Businesses only ever need a single $10 domain for their entire existence.





Not true. If you are hosting user content, you want users' content on a completely separate domain, not a subdomain. This is why GitHub uses githubusercontent.com.

https://github.blog/engineering/githubs-csp-journey/


interesting, why is this?

I can think of two reasons:

1. It's immediately clear to users that they're seeing content that doesn't belong to your business but to your business's users. Maybe less relevant for GitHub, but imagine if someone uploaded something phishing-y and it was visible at a URL like google.com/uploads/asdf.

2. If a user uploaded something like an HTML file, you wouldn't want it to be able to run JavaScript on google.com (because then you can steal cookies and do bad stuff). CSP rules exist, but it's a lot easier to sandbox users' content entirely like this.
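For the record, the kind of policy that locks down a user-content response can be sketched with two standard CSP directives (an illustrative policy, not GitHub's actual one):

```
Content-Security-Policy: default-src 'none'; sandbox
```

`default-src 'none'` blocks every subresource load, and `sandbox` applies an iframe-style sandbox (no script, no forms, a unique opaque origin) to the document itself.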


> if a user uploaded something like an html file, you wouldn't want it to be able to run javascript on google.com (because then you can steal cookies and do bad stuff)

Cookies are the only problem here; as far as I know, everything else should be sequestered by origin, which includes the full domain name (and port and protocol). Cookies predate the same-origin policy, so browsers scope them using their best guess at what the topmost single-owner domain name is, using (I kid you not) a compiled-in list[1]. (It's as terrifying as it sounds.)

[1] https://publicsuffix.org/
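The scoping rule itself is easy to sketch. A minimal hand-rolled version of RFC 6265 cookie domain-matching (ignoring the public-suffix check that browsers layer on top):

```python
# RFC 6265 domain-matching, minus the public-suffix check: a cookie
# with Domain=example.com is sent to example.com and to every
# subdomain of it -- which is why sibling subdomains can read it.
def domain_matches(host: str, cookie_domain: str) -> bool:
    host = host.lower().rstrip(".")
    cookie_domain = cookie_domain.lower().lstrip(".")
    return host == cookie_domain or host.endswith("." + cookie_domain)

# A cookie scoped to github.com reaches any subdomain...
print(domain_matches("usercontent.github.com", "github.com"))  # True
# ...but a separate registrable domain is out of reach entirely.
print(domain_matches("github.com", "githubusercontent.com"))   # False
```

That second result is the whole point of githubusercontent.com: no Domain attribute a script could choose there ever matches github.com.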


There's another reason: other parties might block your user content.

3. If someone uploads something bad, it could potentially get your entire base domain blocklisted by various services, firewalls, anti-malware software, etc.

I'm wondering: many SaaS products offer companyname.mysaas.com. Is that totally secure?

If it's on the PSL, it gets treated similarly to a second-level "TLD" like co.uk.

PSL = Public Suffix List

https://publicsuffix.org/
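For context, the list itself is just one suffix per line; github.io is a well-known real entry (comment line paraphrased):

```
// GitHub, Inc.
github.io
```

Being listed means browsers treat each label under it (username.github.io) as its own "site": nobody can set a cookie for the shared parent, much like nobody can set a cookie for all of co.uk.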


Wouldn't usercontent.github.com work just as well?

Script running on usercontent.github.com:

- is allowed to set cookies scoped to *.github.com, interfering with cookie mechanisms on the parent domain and its other subdomains, potentially resulting in session fixation attacks

- will receive cookies scoped to *.github.com. In IE, cookies set from a site with address "github.com" will by default be scoped to *.github.com, resulting in session-stealing attacks. (Which is why it's traditionally a good idea to prefer keeping 'www.' as the canonical address from which apps run, if there might be any other subdomains at any point.)

So if you've any chance of giving an attacker scripting access into that origin, best it not be a subdomain of anything you care about.
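To make the first point concrete, here's a sketch of the header such a script could cause to be stored, using Python's http.cookies just to build the header (in the real attack this would be JavaScript writing document.cookie):

```python
from http.cookies import SimpleCookie

# A page on usercontent.github.com may set a cookie whose Domain
# attribute names the parent domain -- the browser will then send it
# to github.com and every sibling subdomain (session fixation).
fixated = SimpleCookie()
fixated["session"] = "attacker-chosen-value"
fixated["session"]["domain"] = "github.com"  # parent domain: allowed!
print(fixated.output())
# prints: Set-Cookie: session=attacker-chosen-value; Domain=github.com
```

A completely separate registrable domain closes this off: the browser rejects any Domain attribute that isn't a suffix of the setting host.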


A completely separate domain is more secure because it's impossible to mess up. From the browser's point of view githubusercontent.com is completely unrelated to github.com, so there's literally nothing github could accidentally do or a hacker could maliciously do with the usercontent site that would grant elevated access to the main site. Anything they could do is equally doable with their own attacker-controlled domain.

I think one reason is that a subdomain of github.com (like username.github.com) might be able to read and set cookies that are shared with the main github.com domain. There are ways to control this but using a different domain (github.io is the one I'm familiar with) creates wider separation and probably helps reduce mistakes.

I read about this a while back but I can't find the link anymore (and it's not the same one that op pointed to).


Client browsers have no "idea" of subdomains, either. If I have an example.com login saved, and also a one.example.com and a two.example.com, a lot of my browsers and plugins will get weird about wanting to save that two.example.com login as a separate entity. I run ~4 domains, so I use a lot of subdomains, and the root domain (example.com) now has dozens of passwords saved. I stand up a new service on three.example.com and it will suggest some arbitrary subset of those passwords from example.com, one.example.com, and two.example.com.

Imagine if eg.com allowed user subdomains and some users added logins on their subdomains for whatever reason; an adversarial user with a subdomain could just record all attempted logins, because browsers will automagically autofill into any subdomain.

If you need proof I can take a screenshot; it's ridiculous, and I blame Google. User subdomains used to be the standard way of having users on your service, and then PHP and Apache rewrite-style usage made example.com/user1 more common than user1.example.com.


> client browsers have no "idea" of subdomains, either.

They have. That's why the PSL exists: browsers use it for cookie scoping and related same-site rules.

> if i have example.com login saved,

That's the password-wallet thing. It uses different rules and has no standard.


Because there's stuff out there (software, and entities such as Google) that assumes the same level of trust in a subdomain as in its parent and siblings. If something bad ends up being served on one subdomain, they can distrust the whole tree, and that can be very bad. So you isolate user-provided content on its own SLD to reduce the blast radius.

I've read that if a user uploads content that gets you on a list that blocks your domain, you can switch your hosting to a fresh user-content domain after purging the bad content. If it's hosted under your primary domain, your primary domain is still going to be on that blocklist.

The example I have: I run a domain that allows users to upload images, and some people abuse that. If Google delists the user-content domain, my primary domain hasn't lost any SEO.


This is probably the best reason. I had a project where it went in reverse. It was a type of content that was controlled in certain countries. We launched a new feature and suddenly started getting reports from users in one country that they couldn't get into the app anymore. After going down a ton of dead ends, we realized that in this country, the ISPs blocked our public web site domain, but not the domain the app used. The new feature had been launched on a subdomain of the web site as part of a plan to consolidate domains. We switched the new feature to another domain, and the problems stopped.

CDNs can be easier to configure, it's easier to colocate them in POPs when they're segregated, and you have more options for geo-aware routing and name resolution.

Also, in the case of HTTP/1, browsers will limit the number of simultaneous connections per host or domain name, and this was a technique for doubling those parallel connections. With the rise of HTTP/2 this is becoming moot, though I'm not sure of the exact rules in modern browsers, so I don't know whether it still holds anyway.


There are historical reasons regarding browsers' per-host connection limits. You would put your images, scripts, etc. each on their own subdomain for the sake of increased parallelization of content retrieval; then came CDNs after that. I feel like I was taught in my support role at a webhost that this was _the_ reasoning for subdomains initially, but that may have been someone's opinion.

Search engines, anti-malware software, etc track sites' reputations. You don't want users' bad behavior affecting the reputation of your company's main domain.

Subdomains can also set cookies on parent domains, which in turn causes a security problem between sibling subdomains.

I presume browsers have reduced this issue over the years as part of the third-party-cookie crackdown...?

Definitely was a bad security problem.


Another aspect: HSTS (HTTP Strict Transport Security) headers, which can extend to subdomains.

If your main web page is available at example.com, and the CMS starts sending HSTS headers, stuff on subdomain.example.com can suddenly break.
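Concretely, it's the `includeSubDomains` directive that causes this; once the browser has seen a header like the following on example.com (max-age value illustrative), it refuses plain HTTP for every subdomain until the timer expires:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```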


I actually think they need two: you usually need a second domain/setup for failover, especially if the primary domain is on a novelty TLD like .io, which showed that things can happen to a TLD at random. If the website is down it's fine, but if you have systems calling back to subdomains on that domain, you're out of luck. A good failover will help mitigate/minimize these issues. I'd also keep it at a separate registrar.

Domains are really cheap; I try to pay for 5-10 year blocks (as many years as I can) just to reduce the issues.


And a second for when your main domain gets banned for spam for innocuous reasons.

In addition to (shall we say) foo-bar.nl, I felt the need to get foobar.nl, foo-bar.com and foobar.com, because I don't want a competitor picking those up, and customers might type the name like that.

Don't forget about infrastructure domains, static-asset domains, separation of product domains from corporate domains ... there are plenty of good reasons to use multiple domains, especially if you're doing anything with the web where domain hierarchies and the same-origin policy are so critical to the overall security model.

For whatever it's worth, subdomain takeovers are also a thing, and bug bounty hunters have been exploiting them for years.

A lot of interesting and informative rebuttals to this comment, but no one anticipated the obvious counterargument.

Businesses only ever need two $10 domains, usercompany.com and company.com, just in case they ever want to host user generated content.


I think it's a sane practice to keep the marketing landing page on a separate domain from the product, in the case of SaaS.

Why? I always get frustrated when I end up in some parallel universe of a website (like support or marketing) and I can't easily click back to the main site.

The non-technical reason is that these are usually owned by different teams in your org (after you mature beyond a 5-person startup).

The technical perspective is that things like wildcard subdomains (e.g. to support yourcustomername.example.com), or DNSSEC if your compliance requires it, etc. become an extra burden when one domain has to serve both of these use cases.
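For illustration, customer subdomains are typically served with a wildcard DNS record; a sketch in BIND zone-file syntax (all names hypothetical):

```
; every customer subdomain resolves to the product's load balancer
*.example.com.   300   IN   CNAME   lb.product-infra.example.net.
```

Keeping that wildcard on the product domain, away from the marketing site, limits how much of the zone the riskier setup touches.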

> can't easily click

Web pages have no problem linking to example.net from within example.com, or the opposite. Seems like an unrelated problem.


One potential reason is that marketing teams often want to do things that are higher-risk than you may want on your main application domain: for example, hosting content on a third-party platform (possibly involving a CNAME pointing to a domain outside your control), using a framework that may be less secure and hardened than your main application (for example WordPress or Drupal with a ton of plugins), or using third-party JavaScript for analytics.

Could you elaborate on why? The companies I have worked for have pretty much all used domain.com for marketing and app.domain.com for the actual application. What's wrong with this approach?

If there's any scope for a user to inject JavaScript, then this potentially gives a vector of attack against other internal things (e.g. admin.domain.com, operations.domain.com, etc.)

Also, if for example the SaaS you’re running sends a lot of system emails that really shouldn’t end up in spam filters, you can’t afford to let things like marketing campaigns negatively influence your domain’s spam score.

Easier and safer to have separate domains.




