Hmm, looks like Chrome isn't respecting the Public Suffix List for setting cookies at the moment, even though the site for the list claims that it does.[0]
For example, view [1] and [2] in Chrome, and note that cookies set by [1] are viewable by [2], even though this shouldn't be allowed given blogspot's entry in the Public Suffix List. Firefox doesn't exhibit this behaviour, so I suspect this is a recent regression in Chrome, but who knows.
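To make the expected behaviour concrete, here's a minimal sketch (stdlib only, hypothetical names, and a tiny hand-picked excerpt standing in for the real multi-thousand-entry list) of the check a browser is supposed to apply before accepting a cookie's Domain attribute:

```python
# Sketch of the Public Suffix List check a browser should apply before
# accepting a cookie's Domain attribute. PUBLIC_SUFFIXES is a tiny
# excerpt for illustration; the real list lives at publicsuffix.org.
PUBLIC_SUFFIXES = {"com", "co.uk", "blogspot.com"}

def cookie_domain_allowed(request_host: str, cookie_domain: str) -> bool:
    """Reject a cookie whose Domain attribute is a public suffix, or
    which is not a suffix of the requesting host."""
    cookie_domain = cookie_domain.lstrip(".").lower()
    request_host = request_host.lower()
    if cookie_domain in PUBLIC_SUFFIXES:
        return False  # e.g. Domain=blogspot.com must be refused
    return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

print(cookie_domain_allowed("evil.blogspot.com", "blogspot.com"))  # False
print(cookie_domain_allowed("www.example.com", "example.com"))     # True
```

Per this check, [1] setting a Domain=blogspot.com cookie should simply be refused, so [2] would never see it.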
> Also look at translate.googleusercontent.com, if you bomb it, Google Translate will stop working.
I haven't taken a look at it, but would it make sense to add dynamic <original>.translate.googleusercontent.com subdomains for translated sites and add the base domain to the public suffix list?
> it should be solved by browsers too & length should be limited
IMO this is only going to be solved by a revision to the spec that resolves the ambiguity. The core issue is that browsers and servers disagree about what a "reasonable" cookie jar size is, and servers are rejecting requests with "unreasonably" large cookie jars.
Until those limits are actually part of a spec that people follow, someone's going to be sending too much or allowing too little and legitimate requests will get rejected.
I don't know if you've read Michal's "The Tangled Web" or the Browser Security Handbook but they both go into it a little.[3]
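To put rough numbers on the mismatch: RFC 6265 asks clients to support at least 50 cookies of 4096 bytes each per domain, while common server defaults cap a single request header field at around 8 KB (the 8190-byte figure below is an assumed server-side default, not from the spec). A back-of-the-envelope sketch:

```python
# Rough illustration of the browser/server disagreement on cookie jar size.
# RFC 6265 lower bounds for what a client SHOULD accept per domain:
MIN_COOKIES_PER_DOMAIN = 50
MIN_BYTES_PER_COOKIE = 4096
# Assumed server-side limit on one request header field (~8 KB default
# in several popular servers):
SERVER_HEADER_LIMIT = 8190

# A spec-compliant client can therefore build a Cookie header far larger
# than what a typical server will accept before rejecting the request.
worst_case = MIN_COOKIES_PER_DOMAIN * MIN_BYTES_PER_COOKIE
print(worst_case)                       # 204800 bytes, ~200 KB
print(worst_case > SERVER_HEADER_LIMIT) # True
```

So a fully spec-compliant cookie jar can exceed a typical server's header limit by more than an order of magnitude, which is exactly the gap a bombing attack exploits.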
>I haven't taken a look at it, but would it make sense to add dynamic <original>.translate.googleusercontent.com subdomains for translated sites and add the base domain to the public suffix list?
A random hash, as in sandbox.<hash>.guc.com, will work.
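A sketch of that scheme, with hypothetical names ("guc.com" stands in for googleusercontent.com, and the base would itself be on the Public Suffix List so sibling sandboxes can't share cookies): derive a stable label from the original origin so each translated site gets its own host.

```python
import hashlib

# Hypothetical sketch: give each translated origin its own sandbox
# subdomain so its cookies can't collide with other translated sites.
def sandbox_host(original_origin: str, base: str = "guc.com") -> str:
    digest = hashlib.sha256(original_origin.encode()).hexdigest()[:16]
    return f"sandbox.{digest}.{base}"

print(sandbox_host("https://example.com"))
# Stable for the same origin, distinct across origins:
print(sandbox_host("https://example.com") == sandbox_host("https://example.com"))  # True
print(sandbox_host("https://example.com") == sandbox_host("https://other.org"))    # False
```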
That's a semi-solution. How is it going to help when mysite.cdn.com/file1 bombs mysite.cdn.com/other-files...
Also look at translate.googleusercontent.com, if you bomb it, Google Translate will stop working.
I think the public suffix list is a great and useful idea, but it should be solved by browsers too, and cookie length should be limited.