Why not conneg
Revision as of 12:20, 17 January 2013
The purpose of this page is to explain what's wrong with HTTP content negotiation and why you should not suggest HTTP content negotiation as a solution to a problem.
HTTP content negotiation has four axes: negotiation by format (Accept), negotiation by character encoding (Accept-Charset), negotiation by natural language (Accept-Language) and negotiation by compression (Accept-Encoding). These axes need to be discussed separately. The only one of these that shouldn't be classified as a failure is negotiation by compression, but even it shouldn't be taken as a role model, considering its verbosity for negotiating basically one bit of information.
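For concreteness, here is a minimal sketch of what the four axes look like on the wire, as a Node.js request with all four headers set explicitly (the header values are illustrative, not taken from any particular browser):

    // Illustrative request showing all four content negotiation axes.
    // Header values are made-up examples, not from any real browser.
    import * as http from "node:http";

    const req = http.request({
      host: "example.com",
      path: "/",
      headers: {
        "Accept": "text/html,application/xhtml+xml", // by format
        "Accept-Charset": "utf-8;q=1.0, *;q=0.5",    // by character encoding
        "Accept-Language": "fi, en;q=0.7",           // by natural language
        "Accept-Encoding": "gzip",                   // by compression
      },
    }, (res) => {
      console.log(res.statusCode, res.headers["content-encoding"]);
      res.resume(); // drain the body
    });
    req.end();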
Negotiation by compression
In practice, in the years HTTP has existed, a wide variety of compression methods has not cropped up. For practical purposes, there are two compression modes: uncompressed and gzipped. Google experimented with Shared Dictionary Compression over HTTP (SDCH), but it went nowhere on the wider Web. Even the Google-designed HTTP replacement SPDY uses gzip.
These days, all major browsers support gzipped responses. In that sense, the feature is a success. However, it is terribly wasteful that each request ends up containing 23 bytes of boilerplate. It would have been much more efficient if HTTP/1.1 had made the ability to accept gzipped responses a mandatory feature, so that servers would have known that they were eligible to send a gzipped response if the request says HTTP/1.1 (or higher) rather than HTTP/1.0.
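(To check the arithmetic, assuming the 23 bytes refers to the minimal form of the header: the header line plus its terminating CRLF comes out to exactly 23 bytes. Browsers that also advertise deflate send even more.)

    // Byte count of the boilerplate header line, including the CRLF
    // that terminates a header line on the wire.
    const headerLine = "Accept-Encoding: gzip\r\n";
    console.log(new TextEncoder().encode(headerLine).length); // 23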
One might argue that opting into new features one by one was important for getting from the HTTP/1.0 world to the world we have now, and that format and protocol versions are an anti-pattern on the Web anyway. In any case, it's sad for one bit of information to take 23 bytes, especially if HTTP is to have a version number anyway with some features tied to it (like support for chunked responses).
Negotiation by character encoding
UTF-8 was introduced in 1993. By the time Accept-Charset was introduced, it was already obsolete in the sense that any server capable of converting to multiple encodings could have used its conversion capabilities to just convert to UTF-8. (Granted, UTF-8 support in browsers didn't appear right away, but supporting UTF-8 sooner would have been more worthwhile than sending Accept-Charset.) Fortunately, we have been able to get rid of this one. Of the major browsers, only Chrome still sends Accept-Charset.
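To illustrate the point about conversion capabilities, a minimal sketch (assuming a Node.js server; the page content is made up): a server that can convert to UTF-8 has no use for Accept-Charset, because it can simply always send UTF-8 and label it as such.

    import * as http from "node:http";

    // A server that converts everything to UTF-8 can ignore
    // Accept-Charset entirely and just declare the encoding it sends.
    http.createServer((req, res) => {
      res.setHeader("Content-Type", "text/html; charset=utf-8");
      res.end("<!DOCTYPE html><title>Hi</title><p>Motörhead & naïve café"); // UTF-8 body
    }).listen(8080);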
Negotiation by natural language
Negotiation by natural language is a failure for multiple reasons.
First, it is actually very, very rare for a Web site to provide content (as opposed to application user interface) in multiple languages such that the different language versions are equally good and one could be selected without human judgment. When sites do have multiple language versions, the versions often are not equal. Typically, one language is the primary language for the site and the other language versions are incomplete, out of date or otherwise of low quality. Therefore, if the user can read more than one of the languages offered by the site, the user is most often best off choosing the language version manually, by judging which of the language versions the user is able to read is most likely to be of the highest quality. The negotiation algorithm tries to make this choice automatic, but in practice the algorithm is no substitute for the user's own judgment.
It is slightly more common for Web applications to provide their UI strings in multiple languages. In that case, automatic choice might have a chance of working, but the user has a larger time commitment to a particular application anyway, so it is more acceptable for an application to provide an explicit language choice in its UI than for a content site to require the user to choose a language (which, per the previous paragraph, multi-language content sites need to do anyway), making automatic negotiation less necessary for an application. Even though the initial value for the language preference of a Web application UI could be taken from what the browser suggests, the user starts using a new Web application so rarely, relative to all the HTTP requests the user's browser makes, that it is terribly wasteful for every HTTP request to broadcast the user's preferred language just in case the request happens to be one whose headers end up seeding the preferences of a Web application. (And Web applications these days use JavaScript anyway and, therefore, could request the data from the browser instead of the browser broadcasting it with every HTTP request.)
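As a sketch of that parenthetical (browser-side; navigator.languages is missing from some older browsers, hence the fallback): the application can pull the preference once, at the rare moment it actually needs it.

    // Ask the browser for the user's language preferences only when the
    // application needs to seed its own language setting.
    const preferred: readonly string[] =
      navigator.languages ?? [navigator.language];
    console.log(preferred[0]); // e.g. "fi-FI"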
Also, it's a problem that negotiation by natural language negotiates on a characteristic of the human user as opposed to a characteristic of the software. That is, the browser doesn't really know the characteristics of the human user unless the human user configures the browser. Since negotiation by natural language is so rarely useful, it doesn't really make sense for the browser to advertise the configuration option prominently or to insist that the user make the configuration before browsing the Web. As a result, the browser doesn't really know about the user's language skills beyond guessing that the user might be able to read the UI language of the browser. And that's a pretty bad guess. It doesn't give any information about the other languages the user is able to read, and the user might not even be able to actually read meaningful prose in the language of the browser UI. (You can get pretty far with the browser UI simply by knowing that you can type addresses into the location bar and that the arrow to the left takes you back to the previous page.)
Finally, there is an actual disincentive to configuring the browser: even if everyone configured their language preferences in their browser, the combination of languages a person can read would partition the population of the world into rather small buckets, in some cases making the language combination a way to identify a particular user or a smallish group of users. Since people so rarely configure their language preferences, when someone does, chances are that the configuration becomes uniquely or almost uniquely identifying. This can be seen as a privacy problem.
Negotiation by format
Negotiating by format is a failure because:
- It is actually rather rare that format alternatives can be chosen without knowing the user's intent. You might be able to choose automatically between different ways of compressing bitmaps, but choosing between, e.g., a PDF and a Microsoft Word file may depend on whether the user intends to print without lines reflowing due to different fonts on the system (PDF) or to edit the document (Word), even if the user has software for reading both formats. Therefore, it makes sense to provide links to both the PDF and the Word file and let the user choose which link to follow.
- You can't rely on format negotiation as a Web author, because there are always clients that accept something they don't declare.
- Due to the previous point, if you are a browser vendor and another vendor has shipped a browser that doesn't declare support for something it actually supports, it doesn't make sense for you to waste bytes declaring it either, because Web authors can't rely (solely) on the declaration anyway.
- Negotiated responses are so rare relative to all HTTP responses that the request bloat resulting from advertising the supported formats is almost always waste.
- Past experience from the GIF-to-PNG transition suggests that authors are more likely to deploy one format, either the old one (with worse capabilities) or the new one (forgoing support for old browsers), than to actually bother with negotiation (as a generalization at Web scale; there may be individual counterexamples that round to practical nothingness).
- It never pays to advertise a format once all current browsers support it but there are still old browsers that don't. (E.g. SVG as of 2013.) Since there are browsers that support the format without advertising it, the earlier point about not being able to rely on the declaration applies. However, since non-support is a past phenomenon, it is safe to UA-sniff for the legacy browsers (see the sketch after this list). If all current browsers support the format, there probably cannot be successful future browsers that don't support it, so the usual problem with UA sniffing, where you assume that a current defect stays present in future versions of a given browser, does not apply.
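As an illustration of that last point, a sketch of sniffing for the legacy set only (this assumes, per the 2013 situation above, that pre-9 Internet Explorer is the SVG-less legacy set; the file names are hypothetical):

    // Serve the PNG fallback only to known legacy browsers and assume
    // that every current and future browser can render SVG.
    function logoUrl(userAgent: string): string {
      const legacyIE = /MSIE [1-8]\./.test(userAgent); // IE 8 and earlier: no SVG
      return legacyIE ? "/logo.png" : "/logo.svg";
    }

    console.log(logoUrl(navigator.userAgent));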