Hacker News

Maybe it's just me, but I hate how current web technologies force people to register separate domains for static content. This is not how domains are supposed to work.


They don't. You could serve your site off of www.domain.com and your CDN off of cdn.domain.com easily.


But then you still have to remain vigilant against a clueless dev or random JS lib on www.domain.com setting a cookie for .domain.com, which your browser will helpfully include with requests to cdn.domain.com. With the completely separate root you're protected from that.
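You can watch that leak happen with nothing but Python's stdlib cookie jar (hosts here are hypothetical): a cookie scoped to .domain.com domain-matches cdn.domain.com, so the jar dutifully attaches it to a request for a static asset.

```python
# Sketch with Python's stdlib (hypothetical hosts): shows that a
# wildcard ".domain.com" cookie gets sent along to cdn.domain.com.
from http.cookiejar import Cookie, CookieJar
from urllib.request import Request

jar = CookieJar()
# Simulate www.domain.com (or a careless JS lib) setting a cookie
# with Domain=.domain.com:
jar.set_cookie(Cookie(
    version=0, name="session", value="abc123",
    port=None, port_specified=False,
    domain=".domain.com", domain_specified=True, domain_initial_dot=True,
    path="/", path_specified=True,
    secure=False, expires=None, discard=True,
    comment=None, comment_url=None, rest={}, rfc2109=False))

# A request for a static asset on the supposedly cookieless CDN host:
req = Request("http://cdn.domain.com/logo.png")
jar.add_cookie_header(req)
print(req.get_header("Cookie"))  # session=abc123 -- the cookie leaks
```

With a separate root like domaincdn.com, no Domain value settable from www.domain.com can ever match, which is the protection described above.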

http://en.wikipedia.org/wiki/HTTP_cookie#Domain_and_Path


So basically it doesn't have to be that way, but to protect yourself from cluelessness/stupidity, you do it.

Kind of like Unix permissions vs. jails/virtual machines. Both are secure, but one is more secure against incompetence than the other.


I wouldn't agree. There are completely legitimate reasons to set cookies on *.domain.com -- it isn't "clueless/stupid" to do so, just less ideal.


I wasn't making a statement, just trying to clarify and then making an analogy to verify my understanding. I think I omitted a question mark where I should have had one.

Certainly, it is not automatically a bad idea to set such cookies. I see that.


We use wildcard cookies to let users log in to www.domain and peek at beta.domain without logging in again. This isn't incompetence so much as reducing user drag. Perhaps I should have set it up to use www.domain/beta/ instead.

Unfortunately, this means our cookies are sent to static.domain. Worse, once we get rid of beta.domain there's no going back on wildcard cookies - there's no way to force clients to expunge cookies.
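Strictly speaking, you can ask returning clients to drop the cookie: re-set it with the same Domain and Path and an expiry in the past. What you can't do is reach clients that never come back. A stdlib sketch (cookie name hypothetical):

```python
# Sketch: asking clients to expunge a wildcard cookie. "Deleting" a
# cookie is just re-setting it, with matching Domain and Path, to an
# expiry in the past. It only takes effect on the client's next
# response, so clients that never return keep the cookie.
from http.cookies import SimpleCookie

kill = SimpleCookie()
kill["session"] = ""
kill["session"]["domain"] = ".domain.com"
kill["session"]["path"] = "/"
kill["session"]["expires"] = "Thu, 01 Jan 1970 00:00:00 GMT"

# Emit the deletion header a server would send:
print(kill.output())
```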


You could use beta.www.domain.com.


This makes a ton of sense, but it might be unorthodox enough to scare off some users thinking it was a scam of some sort.


Correct, but only if you don't set any *.domain.com cookies, which, as it turns out, most sites do.


Right -- if you're serving your website off of the root, then you're going to have this problem. If you serve it off of www (which, as it turns out, is how domains were originally intended to be used!) then you don't.

It's not an inherent flaw of the tech. It's a flaw in how we use it.


+1 Use of a subdomain prefix is absolutely the right thing to do, trendy root-domain-only sites notwithstanding.

CNAMEs are inherently more flexible and more resilient in the face of various load challenges or DoS attacks:

"Root domains are aesthetically pleasing, but the nature of DNS prevents them from being a robust solution for web apps. Root domains don't allow CNAMEs, which requires hardcoding IP addresses, which in turn prevents flexibility on updates to IPs which may need to change over time to handle new load or divert denial-of-service attacks. We strongly recommend against using root domains. Use a subdomain that can be CNAME aliased... " - Heroku [https://status.heroku.com/incident/156]


Could always use cookie paths and require all secured or stateful actions to hit some path (domain.tld/a). As long as no cookies are set at root, same benefit.
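The path trick looks like this with the stdlib jar (paths and hosts hypothetical): a cookie scoped to Path=/a goes out with requests under /a but not with requests for assets served from the root.

```python
# Sketch (hypothetical host and paths): a cookie with Path=/a is only
# sent on requests whose path falls under /a.
from http.cookiejar import Cookie, CookieJar
from urllib.request import Request

jar = CookieJar()
jar.set_cookie(Cookie(
    version=0, name="session", value="abc",
    port=None, port_specified=False,
    domain="domain.com", domain_specified=False, domain_initial_dot=False,
    path="/a", path_specified=True,
    secure=False, expires=None, discard=True,
    comment=None, comment_url=None, rest={}, rfc2109=False))

stateful = Request("http://domain.com/a/login")   # secured/stateful action
static = Request("http://domain.com/style.css")   # static asset at the root
jar.add_cookie_header(stateful)
jar.add_cookie_header(static)
print(stateful.get_header("Cookie"))  # session=abc
print(static.get_header("Cookie"))    # None -- /a doesn't cover /style.css
```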

However, having static assets spread across multiple host names also helps, because browsers cap the number of parallel connections they will open to any one host; I think most allow somewhere around 4-6 concurrent connections per host. In this case, it's just one additional host.
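If you do spread assets across hosts for parallelism, the usual trick is deterministic sharding, so each asset always resolves to the same host and stays cacheable. A rough sketch, with made-up static1/static2 hostnames:

```python
# Sketch of domain sharding (hostnames are hypothetical): hash each
# asset path to one of a few static hosts so browsers can open more
# parallel connections overall.
import zlib

SHARDS = ["static1.domain.com", "static2.domain.com"]

def asset_url(path: str) -> str:
    # Hash the path (rather than picking randomly) so a given asset
    # always maps to the same host and browser caches stay warm.
    shard = SHARDS[zlib.crc32(path.encode()) % len(SHARDS)]
    return "//" + shard + path

print(asset_url("/css/site.css"))
```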


and I hate how Google (and others) force websites to serve small files from *.googleapis.com (and other domains)... some websites end up loading files from 10-15 different domain names... I wish this would stop, though I say that with some sarcasm, because I know those sites could easily host the files locally.



