From WHATWG Wiki
Revision as of 18:26, 4 August 2014 by Annevk (talk | contribs)

This page is an attempt to document some discrepancies between browsers and RFC 2068 (and its successor, RFC 2616) because the HTTP WG seems unwilling to resolve those issues. Hopefully one day someone writes HTTP5 and takes this into account.

Header parsing: newlines


Header parsing: handling "duplicates"



Under certain conditions this header needs to be stripped: http://hg.mozilla.org/mozilla-central/file/366b5c0c02d3/netwerk/protocol/http/nsHttpChannel.cpp#l4042

Not raised. Monkey-patched in Fetch.


Content-Length header

In cases where the Content-Length value does not match the actual body length, browsers truncate the body to the Content-Length value if it is smaller, but behaviour varies when the Content-Length value is larger than the actual body. Test results: https://github.com/slightlyoff/ServiceWorker/issues/362#issuecomment-49011736
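A minimal sketch of the truncation behaviour described above (the helper name is made up, not from any spec; the larger-than-body case is left as "return what was received" because browsers disagree there):

```python
def apply_content_length(body: bytes, content_length: int) -> bytes:
    # If the declared Content-Length is smaller than the bytes actually
    # received, browsers truncate the body to the declared length.
    if content_length < len(body):
        return body[:content_length]
    # If the declared length is larger, behaviour varies between
    # browsers; this sketch simply returns whatever was received.
    return body
```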

Content-Type parsing

Pretty sure I (Anne) raised this at some point. A trailing ";" after a MIME type is considered invalid, but works fine in all implementations.

mnot: relevant spec - http://httpwg.github.io/specs/rfc7231.html#media.type I don't remember this being raised; we can either record it as errata or work it into the next revision.

Raised: http://www.rfc-editor.org/errata_search.php?rfc=7231&eid=4031

Potential replacement: http://mimesniff.spec.whatwg.org/#parsing-a-mime-type
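For illustration, a lenient parse in the spirit of the MIME Sniffing approach. This is a sketch, not the spec's actual algorithm, and the function name is invented; the point is that a trailing ";", which RFC 7231's grammar rejects, just produces an empty parameter segment that can be skipped:

```python
def parse_mime_type(value):
    # Split on ";"; the first segment is the type/subtype "essence".
    parts = value.split(";")
    essence = parts[0].strip().lower()
    if "/" not in essence:
        return None
    params = {}
    for segment in parts[1:]:
        segment = segment.strip()
        # A trailing ";" yields an empty segment: skip it instead of
        # treating the whole value as invalid.
        if not segment or "=" not in segment:
            continue
        name, _, val = segment.partition("=")
        params[name.strip().lower()] = val.strip()
    return essence, params
```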


Redirects

For 301 and 302 redirects, browsers uniformly ignore the HTTP specification and use GET for the subsequent request if the initial request used an unsafe method such as POST. (And the user is not prompted.)

Raised: http://lists.w3.org/Archives/Public/ietf-http-wg/2007JanMar/thread.html#msg225

mnot: See http://httpwg.github.io/specs/rfc7231.html#status.3xx

(Seems this is mostly solved now. Would still be good to explicitly require behavior here. Maybe in Fetch.)
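Roughly the method-rewriting behaviour Fetch later pinned down, as a sketch (hypothetical function name):

```python
def redirect_method(status, method):
    # 301/302: browsers rewrite POST to GET; Fetch's eventual
    # definition leaves other unsafe methods alone here.
    if status in (301, 302) and method == "POST":
        return "GET"
    # 303: anything other than GET/HEAD becomes GET.
    if status == 303 and method not in ("GET", "HEAD"):
        return "GET"
    # 307/308 preserve the request method.
    return method
```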

Location header: URLs

Browsers handle relative URIs, and URIs containing invalid characters, in an interoperable fashion, even though RFC 2616 required the Location value to be an absolute URI.

Raised: http://lists.w3.org/Archives/Public/ietf-http-wg/2009JanMar/thread.html#msg276

mnot: see note in: http://httpwg.github.io/specs/rfc7231.html#header.location If there's an updated URL spec that's able to be referenced when 7231 is revised, we can point at that.
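For the common cases, Python's standard library performs the same resolution browsers do when the Location value is relative (urljoin follows RFC 3986 rather than the WHATWG URL spec, so edge cases involving invalid characters differ):

```python
from urllib.parse import urljoin

# A relative Location value is resolved against the request URL.
print(urljoin("http://example.com/a/b", "/c"))     # http://example.com/c
print(urljoin("http://example.com/a/b", "c?x=1"))  # http://example.com/a/c?x=1
```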

Location header: duplicates

Nothing defines what happens when multiple Location headers are present. Apparently, if their values match it is okay; otherwise the result is a network error.
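The reported behaviour as a sketch (hypothetical helper; returning None stands in for a network error):

```python
def pick_location(values):
    # A single Location header, or duplicates that all agree,
    # yields a usable value.
    if values and len(set(values)) == 1:
        return values[0]
    # Differing duplicates: treated as a network error (None here).
    return None
```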

Location header: fragment


Content-Location header

Browsers cannot support this header.

Raised: http://lists.w3.org/Archives/Public/ietf-http-wg/2006OctDec/thread.html#msg190

This has apparently been fixed by making Content-Location have no UA conformance criteria. (It's not clear what it's good for at this point.)

Accept header

The Accept header value should preferably be serialized without spaces between its components.

(Not raised. odinho: I came across a site that didn't like the spaces; the developer said he'd gotten the value off php.net or Stack Overflow. He fixed the site. This could be disputed.)
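A sketch of the safer serialization (hypothetical helper): join the media ranges with bare commas rather than ", ". Both forms are valid HTTP, but the note above reports at least one deployed server choking on the spaced form.

```python
def serialize_accept(media_ranges):
    # Bare commas, no spaces after them.
    return ",".join(media_ranges)
```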

Requiring two interoperable browser implementations

To prove that RFC 2616 can be implemented, there should be two compatible implementations in browsers.

Raised: http://lists.w3.org/Archives/Public/ietf-http-wg/2007JanMar/0222.html

mnot: That'll happen when RFC723x go to full Standard.

[Not an issue] Assume Vary: User-Agent

UAs and intermediary caches should act as if all responses had Vary: User-Agent specified since many pages on the Web serve different content depending on the User-Agent header but do not bother specifying Vary: User-Agent.

Raised: http://lists.w3.org/Archives/Public/ietf-http-wg/2012OctDec/0114.html
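What the proposal amounts to, as a sketch (hypothetical cache-key function): the full User-Agent value becomes part of every cache key, so two differing UA strings can never share an entry. That consequence is precisely the objection in the discussion that follows.

```python
def cache_key(url, user_agent):
    # Acting as if every response carried Vary: User-Agent means the
    # User-Agent value joins the URL in the cache key, so requests
    # with different UA strings never share a cached entry.
    return (url, user_agent)
```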

You may as well not have a cache if you do this. It's hard to find two users with the same User-Agent string even if you try: it varies based on minor browser version and major OS version, and in old IE didn't it also vary based on installed plugins? Yes, some pages will break if you run a transparent caching proxy and don't vary based on UA, but they will be a small and somewhat random minority, and generally they'll fix themselves if you force-refresh. (Browsers send Cache-Control: no-cache when you force-refresh, which will skip a normally-configured cache.) Even if you do vary based on UA, caching proxies will break some pages, because some sites serve incorrect caching headers and a caching proxy will make you hit these more often even in the single-user case. (E.g., hitting refresh will skip the browser cache for the current page but not the proxy cache, right?)

So basically, this is a performance vs. correctness tradeoff, and the correct answer for the vast majority of users is not to have a caching proxy at all. Some will want a caching proxy that serves them some incorrect pages. No one wants a caching proxy that varies based on UA, because then the cache will be useless. The only case I could think of where this might make sense is in an office with a homogeneous browser environment, which wants caching for its standard browsers (which all have the same UA string), but still wants to be relatively correct for people using Wi-Fi on their laptops with different browsers. But it's not something that makes any sense to require across the board. Aryeh Gregor 08:45, 17 October 2012 (UTC)

mnot: Yeah, that's a really bad idea.