Revision as of 07:25, 5 June 2012 by Hsivonen (→Browsers should accept byte code from the network instead of other languages having to be compiled into JS: Expand on byte code problems)
The purpose of this page is to collect explanations why commonly-proposed bad ideas for the Web are bad, so that they don't need to be explained over and over again.
It should be possible to escape from CSS to a Turing-complete language like JS
This would introduce at least one of these problems:
- Styling couldn't be applied incrementally if the styling program needs to run from start to finish with the entire document as its input. XSLT suffers from this problem.
- Content changes couldn't be efficiently reflected in layout by doing partial updates if the style system had to analyze a Turing-complete program to see if it affects the styling of a particular subtree.
- If the Turing-complete language allowed side effects, the style system couldn't decide when to re-resolve styles. Instead, style re-resolution times would have to be specified in the standard in order to get interoperable side effects.
- Even now, some oft-requested selectors (e.g. a parent selector) aren't supported due to time-complexity issues. It would be hard to curb bad time complexity if arbitrary programs were allowed to run as part of styling.
- The environment in which the Turing-complete program would run would have to be specified and would constrain the implementation of the style system.
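The side-effect problem can be made concrete with a minimal JS sketch. The styleCallback hook below is hypothetical (no such CSS escape hatch exists): because the callback mutates state, the computed style of an unchanged node depends on how many times the engine happens to re-resolve styles, so re-resolution timing would have to be standardized.

```javascript
// Hypothetical Turing-complete style callback with a side effect.
let calls = 0;
function styleCallback(node) {
  calls += 1;                // side effect: mutates global state
  return { order: calls };   // style depends on resolution count
}

function resolveStyles(nodes) {
  return nodes.map(styleCallback);
}

const nodes = ["a", "b"];
const first = resolveStyles(nodes);
// An engine that re-resolves (e.g. after an unrelated DOM change)
// now computes different styles for the same unchanged nodes:
const second = resolveStyles(nodes);
// first[0].order === 1, but second[0].order === 3
```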
Browsers should accept byte code from the network instead of other languages having to be compiled into JS
- Java's byte code compatibility limited where the language could go. Java needed a complex byte code verifier to make sure the low-level operations don't combine into something dangerous.
- JS is already supported, so compiling to JS enjoys better network effects than launching a new byte code.
- Using gzip compression on JS produces an already supported binary representation of JS.
- The language needs to be able to call some APIs in the environment. In the browser, those are the DOM APIs, which are single-threaded. In order to call those APIs, new languages are limited by JS's concurrency model anyway. Otherwise, there's a need to reinvent the world on the API side as well. The JDK was a huge system parallel to the browser-native APIs. Pepper as available to NaCl reinvents a lot of APIs that have Web-native analogs.
- You can't standardize an existing JS VM byte code: SpiderMonkey and SquirrelFish both have byte code but they are of very different designs (stack-based vs. register-based) and neither is designed to be ingested from untrusted sources. V8 doesn't have a byte code.
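To illustrate the design split, here is the same expression (a + b) * c run on a toy stack machine versus a toy register machine. Both instruction encodings are invented for this sketch; they only illustrate why the SpiderMonkey-style and SquirrelFish-style formats can't be unified trivially.

```javascript
// Stack-based: operands are pushed and popped implicitly.
function runStack(code, env) {
  const stack = [];
  for (const [op, arg] of code) {
    if (op === "push") stack.push(env[arg]);
    else if (op === "add") stack.push(stack.pop() + stack.pop());
    else if (op === "mul") stack.push(stack.pop() * stack.pop());
  }
  return stack.pop();
}

// Register-based: every operation names its operand registers.
function runRegister(code, env) {
  const r = {};
  for (const [op, dst, a, b] of code) {
    if (op === "load") r[dst] = env[a];
    else if (op === "add") r[dst] = r[a] + r[b];
    else if (op === "mul") r[dst] = r[a] + 0 === r[a] ? r[a] * r[b] : r[a] * r[b];
  }
  return r.r3;
}

const env = { a: 2, b: 3, c: 4 };
const stackCode = [["push", "a"], ["push", "b"], ["add"],
                   ["push", "c"], ["mul"]];
const registerCode = [["load", "r0", "a"], ["load", "r1", "b"],
                      ["add", "r2", "r0", "r1"], ["load", "r4", "c"],
                      ["mul", "r3", "r2", "r4"]];
// Both compute (2 + 3) * 4 = 20, but the encodings are incompatible.
```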
- Adding a few features to JS that make it a better compilation target is a smaller step from the status quo than launching something completely new.
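As a concrete example of such a small feature, compilers that target JS (such as Emscripten) already lean on the `|0` coercion idiom to get C-like 32-bit integer semantics out of JS numbers; the sketch below shows the pattern.

```javascript
// A compiler targeting JS can emit |0 coercions so that additions
// behave like 32-bit integer arithmetic, which engines can optimize.
function add32(x, y) {
  x = x | 0;           // coerce argument to a 32-bit int
  y = y | 0;
  return (x + y) | 0;  // result wraps like C int addition
}

// add32(0x7fffffff, 1) wraps to -2147483648 instead of producing
// the double 2147483648 that plain JS addition would give.
```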
- A new byte code design wouldn't be unbiased toward any particular kind of language anyway. (See e.g. Java byte code and its later additions to better support dynamic languages.) Might as well accept the biases of JS as the interchange format.
- There's a lot of JS out there, so browsers need to compete on JS performance anyway. Using JS as the interchange intermediate language leverages that work instead of requiring parallel work on a parallel system.
- See this talk by Brendan Eich from 22:30 onwards. (Should transcribe the points here in due course.)