
Thinking about it further, it's an interesting question. A typical HTTP system has three roles: the client, the intermediaries (proxies, CDNs, caches), and the application (the server plus any edge devices with knowledge and behaviour specific to that application).

Currently, the business rules are governed by a complex interrelationship between the request method, the request headers, and the response status and headers.
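
To make that concrete, here's a minimal Go sketch (nowhere near a full RFC 7234 implementation) of the kind of decision an intermediary can make from the method, status, and headers alone:

    package sketch

    import (
        "net/http"
        "strings"
    )

    // cacheable sketches the decision a shared cache makes from the request
    // method, response status, and response headers alone - the bodies
    // never enter into it.
    func cacheable(req *http.Request, status int, resp http.Header) bool {
        if req.Method != http.MethodGet && req.Method != http.MethodHead {
            return false // non-safe methods short-circuit everything
        }
        cc := resp.Get("Cache-Control")
        if strings.Contains(cc, "no-store") || strings.Contains(cc, "private") {
            return false // response headers can veto caching outright
        }
        // the status matters too: only some statuses are cacheable by default
        return status == http.StatusOK && strings.Contains(cc, "max-age")
    }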

Although request and response bodies may be present, I've not come across any system where the contents of the body affect the business logic of intermediaries.

Yes, this could be simplified. But chucking out the request methods is low-hanging fruit that yields only a short-term saving. Much of the complexity is in the request and response headers (such as Vary), whilst the request method provides a consistent and simple community standard.
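
To show where the header-side complexity lives, here's a rough Go sketch of how a cache derives its secondary key from Vary - every header named in the response's Vary contributes the request's value for it:

    package sketch

    import (
        "net/http"
        "strings"
    )

    // varyKey sketches secondary cache-key construction: the request's
    // value of every header named in the response's Vary joins the key,
    // so any difference in those headers stores (and serves) a separate copy.
    func varyKey(req *http.Request, vary string) string {
        parts := []string{req.Method, req.URL.RequestURI()}
        for _, name := range strings.Split(vary, ",") {
            name = strings.TrimSpace(name)
            parts = append(parts, name+"="+req.Header.Get(name))
        }
        return strings.Join(parts, "|")
    }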

One of the common limitations I come across is caching content that varies by individual user (or by the role(s) that user holds within the site).

Most web systems send a plethora of cookies - for Google Analytics, web tracking, advertising, a dozen other things, and eventually for the session. But the proxy controls available are limited to "Vary: Cookie", which varies on the entire header and massively reduces the cache potential. If I were to request one improvement in the HTTP/2 protocol, it would be the ability to vary according to a particular named cookie, rather than the whole Cookie header.
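
To illustrate what I'm asking for (the mechanism below is hypothetical, not in any spec): a cache key derived from one named cookie, rather than the whole Cookie header, would collapse all the analytics noise into one entry per session:

    package sketch

    import "net/http"

    // Hypothetical: if a response could declare something like
    // "Vary-Cookie: sessionid" (invented syntax), the cache key would use
    // only that cookie's value. Two requests differing only in their _ga
    // or advertising cookies would then share the same cache entry.
    func namedCookieKey(req *http.Request, name string) string {
        value := ""
        if c, err := req.Cookie(name); err == nil {
            value = c.Value
        }
        return req.Method + " " + req.URL.RequestURI() + " " + name + "=" + value
    }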

Oh, and yes, I am aware of the ability to parse the Cookie header in a proxy, extract the relevant key-value pairs, and vary according to those (sketched below)…but it adds unnecessary complexity to the application, and you can't currently expect uncontrolled downstream proxies to accommodate the practice.
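
For anyone curious, that workaround looks roughly like this in a Go proxy hook (the cookie and header names are illustrative): promote the one cookie that matters into a synthetic header before the cache lookup, and have the application send Vary on that header instead of on Cookie:

    package sketch

    import "net/http"

    // normalise runs in the proxy before the cache lookup: it copies the
    // one cookie that matters into a synthetic header, then drops the
    // Cookie header so tracking cookies can't fragment the cache. The
    // application then responds with "Vary: X-Session-Token" rather than
    // "Vary: Cookie".
    func normalise(req *http.Request) {
        if c, err := req.Cookie("sessionid"); err == nil {
            req.Header.Set("X-Session-Token", c.Value)
        }
        req.Header.Del("Cookie")
    }

Promoting the cookie (rather than merely stripping the others) keeps the session value visible to the application without exposing the full Cookie header to the cache key.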


