This page is an attempt to document some discrepancies between browsers and RFC 2068 (and its successor, RFC 2616) because the HTTP WG seems unwilling to resolve those issues. Hopefully one day someone writes HTTP5 and takes this into account.
Under certain conditions this header needs to be stripped: http://hg.mozilla.org/mozilla-central/file/366b5c0c02d3/netwerk/protocol/http/nsHttpChannel.cpp#l4042
Not raised. Monkey-patched in Fetch.
Pretty sure I (Anne) raised this at some point. A trailing ";" after a MIME type is considered invalid, but works fine in all implementations.
mnot: relevant spec - http://httpwg.github.io/specs/rfc7231.html#media.type I don't remember this being raised; we can either record it as errata or work it into the next revision.
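To illustrate the lenient behavior implementations converge on, here is a minimal sketch of media-type parsing that tolerates a trailing ";". This is a hypothetical helper, not code from any spec or browser, and it deliberately skips quoted-string parameter values:

```python
def parse_media_type(value):
    """Split a Content-Type value into (type/subtype, params dict).

    Lenient sketch: a trailing ";" (invalid per the RFC 7231 grammar)
    simply produces an empty segment that we skip.
    Does not handle quoted parameter values.
    """
    parts = [p.strip() for p in value.split(";")]
    essence = parts[0].lower()
    params = {}
    for part in parts[1:]:
        if not part:  # the empty segment left by a trailing ";"
            continue
        name, _, val = part.partition("=")
        params[name.strip().lower()] = val.strip()
    return essence, params

print(parse_media_type("text/html;"))  # ('text/html', {})
print(parse_media_type("text/html; charset=utf-8"))
```

A strict parser following the ABNF would have to reject `text/html;`, but as noted above, it works fine everywhere in practice.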
For 301 and 302 redirects, browsers uniformly ignore the HTTP specification and use GET for the subsequent request if the initial request used an unsafe method such as POST. (The user is not prompted, either.)
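The behavior above can be sketched as follows. This is a hypothetical helper summarizing what browsers actually do (real browsers also drop the request body when switching to GET):

```python
def redirect_method(status, method):
    """Return the method browsers use for the request after a redirect."""
    if status in (301, 302) and method not in ("GET", "HEAD"):
        return "GET"   # contrary to RFC 2616, and with no user prompt
    if status == 303 and method != "HEAD":
        return "GET"   # 303 explicitly asks for GET
    return method      # 307 (and later 308) preserve the original method

print(redirect_method(301, "POST"))  # GET
print(redirect_method(307, "POST"))  # POST
```

This GET-rewriting behavior for 301/302 is what Fetch later standardized, codifying what pages had long depended on.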
Browsers handle relative URIs and URIs with invalid characters in an interoperable fashion.
mnot: see note in: http://httpwg.github.io/specs/rfc7231.html#header.location
If an updated URL spec is available to reference when RFC 7231 is revised, we can point to that.
Browsers cannot support the Content-Location header as specified.
This has apparently been fixed by making Content-Location have no UA conformance criteria. (It's not clear what it's good for at this point.)
The Accept header should preferably be sent without spaces after the commas.
(Not raised. odinho: I came across a site that didn't like the spaces; the developer said he'd gotten the header value off php.net or Stack Overflow. He fixed the site. This could be disputed.)
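A minimal sketch of serializing an Accept header without spaces, matching what browsers send and what the fragile site above expected. The function and its input shape are illustrative assumptions, not any real API:

```python
def serialize_accept(preferences):
    """preferences: list of (media_type, q) pairs; q=1.0 omits the parameter."""
    parts = []
    for media_type, q in preferences:
        if q == 1.0:
            parts.append(media_type)
        else:
            parts.append("%s;q=%s" % (media_type, q))
    return ",".join(parts)  # join with "," alone; some servers choke on ", "

print(serialize_accept([("text/html", 1.0), ("application/xhtml+xml", 1.0),
                        ("*/*", 0.8)]))
# text/html,application/xhtml+xml,*/*;q=0.8
```

Per the spec's grammar, optional whitespace after the comma is allowed, so servers that reject it are in the wrong; avoiding the space is merely the defensive choice.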
Requiring two interoperable browser implementations
To prove that RFC 2616 can be implemented, there should be two interoperable browser implementations.
mnot: That'll happen when RFC723x go to full Standard.
Assume Vary: User-Agent
UAs and intermediary caches should act as if every response had Vary: User-Agent specified, since many pages on the Web serve different content depending on the User-Agent header but do not bother specifying Vary: User-Agent.
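Here is a sketch of what this proposal would mean for a cache's secondary key. The cache-key function is hypothetical, purely to make the effect concrete:

```python
def cache_key(url, request_headers, vary_header=""):
    """Build a cache key, always folding User-Agent in as if it were varied on.

    request_headers: dict with lowercase header names.
    vary_header: the response's Vary value, e.g. "Accept-Encoding".
    """
    varied = {h.strip().lower() for h in vary_header.split(",") if h.strip()}
    varied.add("user-agent")  # the proposal: act as if every response varied on UA
    extra = tuple(sorted((h, request_headers.get(h, "")) for h in varied))
    return (url, extra)

ua1 = {"user-agent": "Mozilla/5.0 (X11; Linux) Firefox/17.0"}
ua2 = {"user-agent": "Mozilla/5.0 (Windows NT 6.1) Firefox/17.0"}
# Same URL, different UA strings: distinct cache entries, so a shared
# cache almost never gets a hit.
print(cache_key("http://example.com/", ua1) == cache_key("http://example.com/", ua2))  # False
```

Since full User-Agent strings differ across minor browser versions and OS builds, folding them into the key fragments a shared cache into near-useless per-UA partitions.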
- You may as well not have a cache if you do this. It's hard to find two users with the same User-Agent string if you try: it varies by minor browser version and major OS version, and in old IE didn't it vary based on installed plugins? Yes, some pages will break if you run a transparent caching proxy and don't vary based on UA, but it will be a small and somewhat random minority, and generally they'll fix themselves if you force-refresh. (Browsers send Cache-Control: no-cache when you force-refresh, which will skip a normally-configured cache.) Even if you vary based on UA, caching proxies will break some pages, because some sites serve incorrect caching headers, and a caching proxy will make you hit these more often even in the single-user case. (E.g., hitting refresh will skip the browser cache for the current page but not the proxy cache, right?)
So basically, this is a performance vs. correctness tradeoff, and the correct answer for the vast majority of users is not to have a caching proxy at all. Some will want a caching proxy that serves them some incorrect pages. No one wants a caching proxy that varies based on UA, because then the cache will be useless. The only case I could think of where this might make sense is in an office with a homogeneous browser environment, which wants caching for its standard browsers (which all have the same UA string), but still wants to be relatively correct for people using Wi-Fi on their laptops with different browsers. But it's not something that makes any sense to require across the board. Aryeh Gregor 08:45, 17 October 2012 (UTC)
mnot: Yeah, that's a really bad idea.