<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.whatwg.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Junov</id>
	<title>WHATWG Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.whatwg.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Junov"/>
	<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/wiki/Special:Contributions/Junov"/>
	<updated>2026-04-30T08:04:28Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.3</generator>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=OffscreenCanvas.requestAnimationFrame&amp;diff=10148</id>
		<title>OffscreenCanvas.requestAnimationFrame</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=OffscreenCanvas.requestAnimationFrame&amp;diff=10148"/>
		<updated>2017-01-18T17:11:34Z</updated>

		<summary type="html">&lt;p&gt;Junov: Adding the commit solution to the proposal&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;This proposal aims to provide a reliable mechanism for driving animations using OffscreenCanvas in a Worker&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
An OffscreenCanvas is used in a worker to produce a sequence of frames that constitute an animation. The frames need to be produced at regular intervals that match the frame rate of the display.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
Currently, it is possible to use setTimeout or setInterval in a worker to invoke an animation callback at a regular interval. However, since this mechanism is not driven by the display, the following issues arise:&lt;br /&gt;
* The display device&#039;s refresh rate needs to be guessed.&lt;br /&gt;
* Even if the correct rate is guessed, drift in the timing will cause dropped frames and skipped frames.&lt;br /&gt;
* In GPU-accelerated use cases, it is possible for the rendering script to run at a rate that the GPU cannot keep up with, which may cause the accumulation of a multi-frame rendering backlog, resulting in high latency and eventually OOM crashes.  A possible solution is for the user agent to implement a throttling mechanism, in which case the worker&#039;s event loop may be periodically de-scheduled while the GPU catches up.  Such de-scheduling is undesirable because it prevents the worker from doing other work.&lt;br /&gt;
&lt;br /&gt;
Another solution is to have a requestAnimationFrame loop in the browsing context&#039;s event loop that posts a message to the worker at each animation iteration.&lt;br /&gt;
* This mechanism may add undue latency to the signal, especially when the browsing context&#039;s event loop is busy, which completely destroys one of the key advantages of using OffscreenCanvas in a worker.&lt;br /&gt;
* The frame rate in the browsing context&#039;s event loop may be higher than the worker can keep up with, which requires a throttling mechanism to be implemented in script.&lt;br /&gt;
* As with setTimeout/setInterval, it is possible for the rendering script to run at a rate that the GPU cannot keep up with.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
A new API to drive OffscreenCanvas animations can ensure an optimal frame rate, minimize animation jank, prevent overdraw, and minimize display latency.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://github.com/w3ctag/spec-reviews/issues/141 W3C TAG review for ImageBitmapRenderingContext]&amp;lt;/cite&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;(...) there seems to be a good bit of concern about lack of things like requestAnimationFrame (...)  -- L. David Baron, Mozilla&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://lists.w3.org/Archives/Public/public-whatwg-archive/2015Aug/0019.html whatwg mailing list thread &amp;quot;Worker requestAnimationFrame&amp;quot;]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;For OffscreenCanvas we need a way for a Worker to draw once per composited frame. --Robert O&#039;Callahan, Mozilla&lt;br /&gt;
&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== OffscreenCanvas.requestAnimationFrame ===&lt;br /&gt;
&lt;br /&gt;
Works much like window.requestAnimationFrame, except that the scheduling of callbacks is independent of the browsing context event loop, and therefore is not necessarily synchronized with graphics updates from the browsing context.&lt;br /&gt;
&lt;br /&gt;
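A minimal usage sketch (illustrative only: this API is a proposal, and the example assumes the 2D context exposes the proposed commit() method):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;// In a worker: &#039;canvas&#039; is an OffscreenCanvas received via postMessage,&lt;br /&gt;
 // created on the main thread with transferControlToOffscreen()&lt;br /&gt;
 var ctx = canvas.getContext(&#039;2d&#039;);&lt;br /&gt;
 function frame(time) {&lt;br /&gt;
   // draw stuff using ctx&lt;br /&gt;
   ctx.commit();&lt;br /&gt;
   canvas.requestAnimationFrame(frame); // schedule the next frame&lt;br /&gt;
 }&lt;br /&gt;
 canvas.requestAnimationFrame(frame);&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;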
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
The requestAnimationFrame() and cancelAnimationFrame() methods shall be spec&#039;ed almost identically to their Window interface counterparts, except that the callback list would be stored in the OffscreenCanvas object.&lt;br /&gt;
&lt;br /&gt;
The main difference with respect to the Window.requestAnimationFrame processing model is in how the callbacks are scheduled. In a browsing context, the animation callbacks are coordinated with the graphics update in the [https://html.spec.whatwg.org/multipage/webappapis.html#event-loop-processing-model event loop processing model]. Such coordination shall not exist in workers.  &lt;br /&gt;
&lt;br /&gt;
For OffscreenCanvases, the user agent will schedule an &amp;quot;animation task&amp;quot; to run all of the OffscreenCanvas&#039;s pending animation callbacks in a way that respects the following constraints:&lt;br /&gt;
* No overdraw: An animation frame committed from an animation task shall not replace an animation frame from a previous animation task until that previous frame has been rendered to the display device.&lt;br /&gt;
* Animation tasks shall be scheduled at the highest possible rate that can be maintained without going into overdraw and without accumulating a backlog of more than one pending frame.&lt;br /&gt;
&lt;br /&gt;
===== Special Cases =====&lt;br /&gt;
&lt;br /&gt;
* When rAF is invoked on an OffscreenCanvas that does not have a placeholder canvas and is not linked to a VRLayer, throw an InvalidStateError. The OffscreenCanvas content must be composited for the rAF processing model to make sense.&lt;br /&gt;
* Attempting to transfer an OffscreenCanvas object with a non-empty animation callback list throws an InvalidStateError.&lt;br /&gt;
* Attempting to construct a VRLayer using an OffscreenCanvas object with a non-empty animation callback list throws an InvalidStateError.&lt;br /&gt;
* When the OffscreenCanvas is associated with a VRLayer, all calls to {request|cancel}AnimationFrame must be forwarded to the VRLayer&#039;s VRDisplay&#039;s {request|cancel}AnimationFrame methods.  This implies that when the OffscreenCanvas simultaneously is visible through a placeholder canvas and a VR device, the animation loop is driven by the VR device.&lt;br /&gt;
* The animation tasks for different OffscreenCanvas objects that live in the same event loop are not necessarily synchronized.&lt;br /&gt;
&lt;br /&gt;
==== Open issues ==== &lt;br /&gt;
Calling commit() on a given OffscreenCanvas multiple times in the same animation frame is problematic.  Possible ways of handling the situation:&lt;br /&gt;
* Drop all commits but the last one (or the first one?)&lt;br /&gt;
* Queue multiple frames and wait for all of them to have been displayed before scheduling the next animation task&lt;br /&gt;
* Throw an exception&lt;br /&gt;
&lt;br /&gt;
What to do if commit() is not called from within the animation callback?  Possible ways of handling the situation:&lt;br /&gt;
* Do an implicit commit()&lt;br /&gt;
* Repeat the previous frame&lt;br /&gt;
* Schedule the next animation frame immediately.&lt;br /&gt;
* Prevent this from ever happening: Let OffscreenCanvas object have a needsCommit flag that is initially false. Set needsCommit to true at the beginning of an animation task. Set needsCommit to false when commit is called. When requestAnimationFrame is called, throw an exception if needsCommit is true.&lt;br /&gt;
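&lt;br /&gt;
The last option could be sketched as follows (illustrative pseudocode only, using the flag described above):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;// Illustrative pseudocode for the needsCommit option&lt;br /&gt;
 beginAnimationTask:        set needsCommit = true, then run the callbacks&lt;br /&gt;
 commit():                  set needsCommit = false&lt;br /&gt;
 requestAnimationFrame(cb):&lt;br /&gt;
     if needsCommit is true, throw an InvalidStateError&lt;br /&gt;
     otherwise, append cb to the callback list&amp;lt;/nowiki&amp;gt;&lt;br /&gt;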
&lt;br /&gt;
Should it be possible to commit() the contents of other canvases from within a rAF callback? &lt;br /&gt;
&lt;br /&gt;
=== OffscreenCanvas.commit() to return a promise ===&lt;br /&gt;
&lt;br /&gt;
An alternate solution would be to have commit() return a promise that gets resolved when it is time to begin rendering the next frame.  This single API entry-point provides the necessary flexibility to handle continuous animations as well as sporadic updates.&lt;br /&gt;
&lt;br /&gt;
Continuous animation example:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;function animationLoop() {&lt;br /&gt;
  // draw stuff&lt;br /&gt;
  (...)&lt;br /&gt;
  ctx.commit().then(animationLoop);&lt;br /&gt;
  // do post commit work&lt;br /&gt;
  (...)&lt;br /&gt;
}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another possibility is to use the async/await syntax:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;async function animationLoop() {&lt;br /&gt;
  var promise;&lt;br /&gt;
  do {&lt;br /&gt;
    // draw stuff&lt;br /&gt;
    (...)&lt;br /&gt;
    promise = ctx.commit();&lt;br /&gt;
    // do post commit work&lt;br /&gt;
    (...)&lt;br /&gt;
  } while (await promise);&lt;br /&gt;
}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To animate multiple canvases in lock-step, one could do the following, for example:&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;nowiki&amp;gt;function animationLoop() {&lt;br /&gt;
  // draw stuff&lt;br /&gt;
  (...)&lt;br /&gt;
  Promise.all([ctx1.commit(), ctx2.commit()]).then(animationLoop);&lt;br /&gt;
}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For occasional update use cases, it is just a matter of ignoring the promise returned by commit() and driving the animation using another signal, for example a network event.  In the case where multiple calls to commit() are made in the same frame interval, the user agent skips frames in order to avoid accumulating a multi-frame backlog, as described in the processing model below.&lt;br /&gt;
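&lt;br /&gt;
For example, a sporadic, network-driven update might look like this (sketch; assumes &#039;&#039;ctx&#039;&#039; is a context with the proposed promise-returning commit(), and drawScene is an application-defined function):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;self.onmessage = function (evt) {&lt;br /&gt;
   // redraw only when new data arrives&lt;br /&gt;
   drawScene(ctx, evt.data);&lt;br /&gt;
   ctx.commit(); // returned promise intentionally ignored&lt;br /&gt;
 };&amp;lt;/nowiki&amp;gt;&lt;br /&gt;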
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
An OffscreenCanvas object has a &#039;&#039;pendingFrame&#039;&#039; internal slot that stores a reference to the frame that was captured by the last call to commit(). The reference is held until the frame is actually committed. &#039;&#039;pendingFrame&#039;&#039; is initially unset.&lt;br /&gt;
&lt;br /&gt;
An OffscreenCanvas object has a &#039;&#039;pendingPromise&#039;&#039; internal slot that stores a reference to the promise that was returned by the last call to commit(). &#039;&#039;pendingPromise&#039;&#039; is initially unset, and its reference is only retained while the promise is in the unresolved state.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;BeginFrame&#039;&#039; signal is dispatched by the user agent to a specific OffscreenCanvas when it is time to render the next animation frame for that OffscreenCanvas.&lt;br /&gt;
&lt;br /&gt;
When commit() is called:&lt;br /&gt;
# Let &#039;&#039;frame&#039;&#039; be a copy of the current contents of the canvas.&lt;br /&gt;
# If &#039;&#039;pendingPromise&#039;&#039; is set, then run these substeps:&lt;br /&gt;
## Set &#039;&#039;pendingFrame&#039;&#039; to be a reference to &#039;&#039;frame&#039;&#039;.&lt;br /&gt;
## Return &#039;&#039;pendingPromise&#039;&#039;.&lt;br /&gt;
# Set &#039;&#039;pendingPromise&#039;&#039; to be a newly created unresolved promise object.&lt;br /&gt;
# Run the steps to &#039;&#039;&#039;commit a frame&#039;&#039;&#039;, passing &#039;&#039;frame&#039;&#039; as an argument.&lt;br /&gt;
# Return &#039;&#039;pendingPromise&#039;&#039;.&lt;br /&gt;
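&lt;br /&gt;
Informally, these steps amount to the following (illustrative pseudocode of the user agent&#039;s internal logic, not an author-visible API):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;function commit() {&lt;br /&gt;
   var frame = captureCurrentCanvasContents();&lt;br /&gt;
   if (pendingPromise) {&lt;br /&gt;
     pendingFrame = frame;  // overwrite: at most one frame of backlog&lt;br /&gt;
     return pendingPromise; // coalesce callers onto the same promise&lt;br /&gt;
   }&lt;br /&gt;
   pendingPromise = newUnresolvedPromise();&lt;br /&gt;
   commitAFrame(frame);&lt;br /&gt;
   return pendingPromise;&lt;br /&gt;
 }&amp;lt;/nowiki&amp;gt;&lt;br /&gt;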
&lt;br /&gt;
When the &#039;&#039;BeginFrame&#039;&#039; signal is to be dispatched to an OffscreenCanvas object, the user agent must queue a task on the OffscreenCanvas object&#039;s event loop that runs the following steps:&lt;br /&gt;
# If &#039;&#039;pendingFrame&#039;&#039; is set, then run the following substeps:&lt;br /&gt;
## Run the steps to &#039;&#039;&#039;commit a frame&#039;&#039;&#039;, passing &#039;&#039;pendingFrame&#039;&#039; as an argument.&lt;br /&gt;
## Unset &#039;&#039;pendingFrame&#039;&#039;.&lt;br /&gt;
## Abort these steps.&lt;br /&gt;
# If &#039;&#039;pendingPromise&#039;&#039; is not set then abort these steps.&lt;br /&gt;
# Resolve the promise referenced by &#039;&#039;pendingPromise&#039;&#039;.&lt;br /&gt;
# Unset &#039;&#039;pendingPromise&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
When the user agent is required to run the steps to &#039;&#039;&#039;commit a frame&#039;&#039;&#039;, it must do what is currently spec&#039;ed as the steps for commit().&lt;br /&gt;
&lt;br /&gt;
This processing model takes care of the unresolved issues with the OffscreenCanvas.requestAnimationFrame solution because it makes it safe to call commit() at any time by providing the following guarantees:&lt;br /&gt;
# In cases of overdraw (commit() called at a rate higher than can be displayed), frames may be dropped to ensure low latency (no more than one frame of backlog).&lt;br /&gt;
# The frame captured by the last call to commit after the end of an animation sequence is never dropped. In other words, when animation stops, it is always the most recent frame that is displayed.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ====&lt;br /&gt;
&lt;br /&gt;
The idea of the commit API was discussed at a meeting of the WebVR working group and has support from multiple browser vendors.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=OffscreenCanvas.requestAnimationFrame&amp;diff=10118</id>
		<title>OffscreenCanvas.requestAnimationFrame</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=OffscreenCanvas.requestAnimationFrame&amp;diff=10118"/>
		<updated>2016-12-02T18:58:14Z</updated>

		<summary type="html">&lt;p&gt;Junov: Created page with &amp;quot;:&amp;#039;&amp;#039;This proposal aims to provide a reliable mechanism for driving animations using OffscreenCanvas in a Worker&amp;#039;&amp;#039;  == Use Case Description == An OffscreenCanvas is used in a wo...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;This proposal aims to provide a reliable mechanism for driving animations using OffscreenCanvas in a Worker&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
An OffscreenCanvas is used in a worker to produce a sequence of frames that constitute an animation. The frames need to be produced at regular intervals that match the frame rate of the display.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
Currently, it is possible to use setTimeout or setInterval in a worker to invoke an animation callback at a regular interval. However, since this mechanism is not driven by the display, the following issues arise:&lt;br /&gt;
* The display device&#039;s refresh rate needs to be guessed.&lt;br /&gt;
* Even if the correct rate is guessed, drift in the timing will cause dropped frames and skipped frames.&lt;br /&gt;
* In GPU-accelerated use cases, it is possible for the rendering script to run at a rate that the GPU cannot keep up with, which may cause the accumulation of a multi-frame rendering backlog, resulting in high latency and eventually OOM crashes.  A possible solution is for the user agent to implement a throttling mechanism, in which case the worker&#039;s event loop may be periodically de-scheduled while the GPU catches up.  Such de-scheduling is undesirable because it prevents the worker from doing other work.&lt;br /&gt;
&lt;br /&gt;
Another solution is to have a requestAnimationFrame loop in the browsing context&#039;s event loop that posts a message to the worker at each animation iteration.&lt;br /&gt;
* This mechanism may add undue latency to the signal, especially when the browsing context&#039;s event loop is busy, which completely destroys one of the key advantages of using OffscreenCanvas in a worker.&lt;br /&gt;
* The frame rate in the browsing context&#039;s event loop may be higher than the worker can keep up with, which requires a throttling mechanism to be implemented in script.&lt;br /&gt;
* As with setTimeout/setInterval, it is possible for the rendering script to run at a rate that the GPU cannot keep up with.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
A new API to drive OffscreenCanvas animations can ensure an optimal frame rate, minimize animation jank, prevent overdraw, and minimize display latency.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://github.com/w3ctag/spec-reviews/issues/141 W3C TAG review for ImageBitmapRenderingContext]&amp;lt;/cite&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;(...) there seems to be a good bit of concern about lack of things like requestAnimationFrame (...)  -- L. David Baron, Mozilla&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://lists.w3.org/Archives/Public/public-whatwg-archive/2015Aug/0019.html whatwg mailing list thread &amp;quot;Worker requestAnimationFrame&amp;quot;]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;For OffscreenCanvas we need a way for a Worker to draw once per composited frame. --Robert O&#039;Callahan, Mozilla&lt;br /&gt;
&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solution ==&lt;br /&gt;
&lt;br /&gt;
=== OffscreenCanvas.requestAnimationFrame ===&lt;br /&gt;
&lt;br /&gt;
Works much like window.requestAnimationFrame, except that the scheduling of callbacks is independent of the browsing context event loop, and therefore is not necessarily synchronized with graphics updates from the browsing context.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
The requestAnimationFrame() and cancelAnimationFrame() methods shall be spec&#039;ed almost identically to their Window interface counterparts, except that the callback list would be stored in the OffscreenCanvas object.&lt;br /&gt;
&lt;br /&gt;
The main difference with respect to the Window.requestAnimationFrame processing model is in how the callbacks are scheduled. In a browsing context, the animation callbacks are coordinated with the graphics update in the [https://html.spec.whatwg.org/multipage/webappapis.html#event-loop-processing-model event loop processing model]. Such coordination shall not exist in workers.  &lt;br /&gt;
&lt;br /&gt;
For OffscreenCanvases, the user agent will schedule an &amp;quot;animation task&amp;quot; to run all of the OffscreenCanvas&#039;s pending animation callbacks in a way that respects the following constraints:&lt;br /&gt;
* No overdraw: An animation frame committed from an animation task shall not replace an animation frame from a previous animation task until that previous frame has been rendered to the display device.&lt;br /&gt;
* Animation tasks shall be scheduled at the highest possible rate that can be maintained without going into overdraw and without accumulating a backlog of more than one pending frame.&lt;br /&gt;
&lt;br /&gt;
===== Special Cases =====&lt;br /&gt;
&lt;br /&gt;
* When rAF is invoked on an OffscreenCanvas that does not have a placeholder canvas and is not linked to a VRLayer, throw an InvalidStateError. The OffscreenCanvas content must be composited for the rAF processing model to make sense.&lt;br /&gt;
* Attempting to transfer an OffscreenCanvas object with a non-empty animation callback list throws an InvalidStateError.&lt;br /&gt;
* Attempting to construct a VRLayer using an OffscreenCanvas object with a non-empty animation callback list throws an InvalidStateError.&lt;br /&gt;
* When the OffscreenCanvas is associated with a VRLayer, all calls to {request|cancel}AnimationFrame must be forwarded to the VRLayer&#039;s VRDisplay&#039;s {request|cancel}AnimationFrame methods.  This implies that when the OffscreenCanvas simultaneously is visible through a placeholder canvas and a VR device, the animation loop is driven by the VR device.&lt;br /&gt;
* The animation tasks for different OffscreenCanvas objects that live in the same event loop are not necessarily synchronized.&lt;br /&gt;
&lt;br /&gt;
==== Open issues ==== &lt;br /&gt;
Calling commit() on a given OffscreenCanvas multiple times in the same animation frame is problematic.  Possible ways of handling the situation:&lt;br /&gt;
* Drop all commits but the last one (or the first one?)&lt;br /&gt;
* Queue multiple frames and wait for all of them to have been displayed before scheduling the next animation task&lt;br /&gt;
* Throw an exception&lt;br /&gt;
&lt;br /&gt;
What to do if commit() is not called from within the animation callback?  Possible ways of handling the situation:&lt;br /&gt;
* Do an implicit commit()&lt;br /&gt;
* Repeat the previous frame&lt;br /&gt;
* Schedule the next animation frame immediately.&lt;br /&gt;
* Prevent this from ever happening: Let OffscreenCanvas object have a needsCommit flag that is initially false. Set needsCommit to true at the beginning of an animation task. Set needsCommit to false when commit is called. When requestAnimationFrame is called, throw an exception if needsCommit is true.&lt;br /&gt;
&lt;br /&gt;
Should it be possible to commit() the contents of other canvases from within a rAF callback? &lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:&#039;&#039;There is no other way to guarantee optimal animation smoothness and low latency while not wasting CPU cycles on overdraw, so this API should be the method of choice for driving OffscreenCanvas.commit()&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10116</id>
		<title>OffscreenCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10116"/>
		<updated>2016-11-30T18:14:50Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;This proposal is no longer active. OffscreenCanvas is now in the HTML specification [https://html.spec.whatwg.org/multipage/scripting.html#the-offscreencanvas-interface here].&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Provides more control over how canvases are rendered. This is a follow-on to the [[WorkerCanvas]] proposal and will be merged once agreement is reached.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
Feedback from web application authors using canvases has shown the need for the following controls:&lt;br /&gt;
&lt;br /&gt;
* (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.&lt;br /&gt;
* (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.&lt;br /&gt;
* (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.&lt;br /&gt;
* (From adopters of the Push API): need to be able to dynamically create images to use as notification icons, such as compositing avatars, or adding an unread count&lt;br /&gt;
* (From the Google Docs team): need to be able to layout and render text from a worker using CanvasRenderingContext2D and display those results on the main thread.&lt;br /&gt;
* (From the Google Slides team): want to layout and render the slide thumbnails from a worker. During initial load and heavy collaboration these update frequently, and currently cause slowdowns on the main thread.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
&lt;br /&gt;
* [https://html.spec.whatwg.org/multipage/scripting.html#proxying-canvases-to-workers CanvasProxy] does not provide sufficient control to allow synchronization between workers&#039; rendering and DOM updates on the main thread. Keeping this rendering in sync is a requirement from Google&#039;s Maps team.&lt;br /&gt;
* [[CanvasInWorkers]] does not allow a worker to render directly into a canvas on the main thread without running code on the main thread. Allowing completely unsynchronized rendering is a requirement from Mozilla and users of Emscripten such as Epic Games and Unity, in which the desire is to execute all of the game&#039;s rendering on a worker thread.&lt;br /&gt;
* [[WorkerCanvas]] mostly addresses these two use cases, but some implementers objected to the mechanism for displaying the rendering results in image elements. The specific objection was that image elements already have complex internal state (for example, the management of the image&#039;s &amp;quot;loaded&amp;quot; state), and this would make it more complex. It also did not precisely address the use case of producing new frames both on the main thread and in workers.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
[https://blog.mozilla.org/research/2014/07/22/webgl-in-web-workers-today-and-faster-than-expected/ WebGL in Web Workers] details some work attempted in the Emscripten toolchain to address the lack of WebGL in workers. Due to the high volume of calls and large amount of data that is transferred to the graphics card in a typical high-end WebGL application, this approach is not sustainable. It&#039;s necessary for workers to be able to call the WebGL API directly, and present those results to the screen in a manner that does not introduce any copies of the rendering results.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Making canvas rendering contexts available to workers will increase parallelism in web applications, leading to increased performance on multi-core systems.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
See the abovementioned use cases:&lt;br /&gt;
&lt;br /&gt;
* Google&#039;s Maps team&lt;br /&gt;
* Emscripten users such as Epic Games and Unity&lt;br /&gt;
* Many others&lt;br /&gt;
&lt;br /&gt;
== Web IDL ==&lt;br /&gt;
&lt;br /&gt;
 typedef (OffscreenCanvasRenderingContext2D or&lt;br /&gt;
          WebGLRenderingContext or&lt;br /&gt;
          WebGL2RenderingContext) OffscreenRenderingContext;&lt;br /&gt;
 &lt;br /&gt;
 [Constructor(unsigned long width, unsigned long height),&lt;br /&gt;
  Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvas {&lt;br /&gt;
   attribute unsigned long width;&lt;br /&gt;
   attribute unsigned long height;&lt;br /&gt;
   OffscreenRenderingContext? getContext(DOMString contextId, any... arguments); &lt;br /&gt;
 &lt;br /&gt;
   // OffscreenCanvas, like HTMLCanvasElement, maintains an origin-clean flag.&lt;br /&gt;
   // ImageBitmaps created by calling this method also have an&lt;br /&gt;
   // origin-clean flag which is set to the value of the OffscreenCanvas&#039;s&lt;br /&gt;
   // flag at the time of their construction. Uses of the ImageBitmap&lt;br /&gt;
   // in other APIs, such as CanvasRenderingContext2D or&lt;br /&gt;
   // WebGLRenderingContext, propagate this flag like other&lt;br /&gt;
   // CanvasImageSource types do, such as HTMLImageElement.&lt;br /&gt;
   ImageBitmap transferToImageBitmap();&lt;br /&gt;
 &lt;br /&gt;
   // Throws a SecurityError if the OffscreenCanvas&#039;s origin-clean flag&lt;br /&gt;
   // is set to false.&lt;br /&gt;
   Promise&amp;lt;Blob&amp;gt; convertToBlob(optional ImageEncodeOptions options);   &lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 dictionary ImageEncodeOptions {&lt;br /&gt;
   DOMString type = &amp;quot;image/png&amp;quot;;&lt;br /&gt;
   unrestricted double quality = 1.0; // Defaults to 1.0 if value is outside 0:1 range&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 OffscreenCanvas implements Transferable;&lt;br /&gt;
 &lt;br /&gt;
 partial interface HTMLCanvasElement {&lt;br /&gt;
   OffscreenCanvas transferControlToOffscreen();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 typedef (HTMLOrSVGImageElement or&lt;br /&gt;
          HTMLVideoElement or&lt;br /&gt;
          HTMLCanvasElement or&lt;br /&gt;
          ImageBitmap or&lt;br /&gt;
          OffscreenCanvas) CanvasImageSource;&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvasRenderingContext2D {&lt;br /&gt;
   // commit() can only be used when HTMLCanvasElement has transferred Control&lt;br /&gt;
   // to OffscreenCanvas. Otherwise, an InvalidStateError will be thrown.&lt;br /&gt;
   // commit() can be invoked on main thread or worker thread. When it is invoked,&lt;br /&gt;
   // it is expected to see the image drawn to OffscreenCanvasRenderingContext2D &lt;br /&gt;
   // be displayed in the associated HTMLCanvasElement.&lt;br /&gt;
   void commit();&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute OffscreenCanvas canvas;&lt;br /&gt;
 };&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasState;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTransform;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasCompositing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageSmoothing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFillStrokeStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasShadowStyles;&lt;br /&gt;
 // Reference filters (e.g. &#039;url()&#039;) are not expected to work in Workers&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFilters;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasRect;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawPath;&lt;br /&gt;
 // Text support in workers poses very difficult technical challenges.&lt;br /&gt;
 // Open issue: should we forgo text support in OffscreenCanvas v1?&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasText;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawImage;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageData;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPathDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTextDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPath; &lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasPattern {&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=Window, Worker]&lt;br /&gt;
 partial interface CanvasGradient {&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface WebGLRenderingContextBase {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;&lt;br /&gt;
 &lt;br /&gt;
   // If this context is associated with an OffscreenCanvas that was&lt;br /&gt;
   // created by HTMLCanvasElement&#039;s transferControlToOffscreen method,&lt;br /&gt;
   // causes this context&#039;s current rendering results to be pushed&lt;br /&gt;
   // to that canvas element. This has the same effect as returning&lt;br /&gt;
   // control to the main loop in a single-threaded application. Otherwise,&lt;br /&gt;
   // an InvalidStateError will be thrown.&lt;br /&gt;
   void commit();&lt;br /&gt;
 };&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== This Solution ===&lt;br /&gt;
&lt;br /&gt;
This proposed API can be used in several ways to satisfy the use cases described above:&lt;br /&gt;
&lt;br /&gt;
* It supports zero-copy transfer of canvases&#039; rendering results between threads, for example from a worker to the main thread. In this model, the main thread controls when to display new frames produced by the worker, so synchronization with other DOM updates is achieved.&lt;br /&gt;
&lt;br /&gt;
* It supports fully asynchronous rendering by a worker into a canvas displayed on the main thread. This satisfies certain Emscripten developers&#039; full-screen use cases.&lt;br /&gt;
&lt;br /&gt;
* It supports using a single WebGLRenderingContext or Canvas2DRenderingContext to efficiently render into multiple regions on the web page.&lt;br /&gt;
&lt;br /&gt;
* It introduces ImageBitmapRenderingContext, a new canvas context type whose sole purpose is to efficiently display ImageBitmaps. This supersedes the [[WorkerCanvas]] proposal&#039;s use of HTMLImageElement for this purpose.&lt;br /&gt;
&lt;br /&gt;
* It supports asynchronous encoding of OffscreenCanvases&#039; rendering results into Blobs which can be consumed by various other web platform APIs.&lt;br /&gt;
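As a sketch of the Blob-encoding path: the helper below (&amp;lt;code&amp;gt;encodeThumbnail&amp;lt;/code&amp;gt; is an illustrative name, not part of the proposal) calls &amp;lt;code&amp;gt;convertToBlob&amp;lt;/code&amp;gt; with the ImageEncodeOptions defined in the Web IDL above. It only does useful work where OffscreenCanvas is available (a browser window or worker).&lt;br /&gt;

```javascript
// Hypothetical helper around the proposal's convertToBlob; the function name
// and option values are illustrative, not part of the specification.
function encodeThumbnail(offscreen) {
  // Returns a Promise<Blob>; per the proposal, rejects with a SecurityError
  // if the OffscreenCanvas's origin-clean flag is false.
  return offscreen.convertToBlob({ type: 'image/jpeg', quality: 0.8 });
}

// Example consumer: hand the encoded image to another web platform API.
// encodeThumbnail(offscreen).then(blob => {
//   const url = URL.createObjectURL(blob);
//   registration.showNotification('New message', { icon: url });
// });
```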
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
This proposal introduces two primary processing models. The first involves &#039;&#039;synchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The application generates new frames using the RenderingContext obtained from the OffscreenCanvas. When the application is finished rendering each new frame, it calls transferToImageBitmap to &amp;quot;tear off&amp;quot; the most recently rendered image from the OffscreenCanvas -- like a Post-It note. The resulting ImageBitmap can then be used in any API receiving that data type; notably, it can be displayed in a second canvas without introducing a copy. An ImageBitmapRenderingContext is obtained from the second canvas by calling &amp;lt;code&amp;gt;getContext(&#039;bitmaprenderer&#039;)&amp;lt;/code&amp;gt;. Each frame is displayed in the second canvas using the &amp;lt;code&amp;gt;transferImageBitmap&amp;lt;/code&amp;gt; method on this rendering context. Note that the threads producing and consuming the frames may be the same, or they may be different. Note also that a single OffscreenCanvas may transfer frames into an arbitrary number of other ImageBitmapRenderingContexts.&lt;br /&gt;
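The synchronous model can be sketched as follows. This is an illustrative sketch only, using the method names as proposed here (&amp;lt;code&amp;gt;transferToImageBitmap&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;transferImageBitmap&amp;lt;/code&amp;gt;); browsers later shipped the latter as &amp;lt;code&amp;gt;transferFromImageBitmap&amp;lt;/code&amp;gt;.&lt;br /&gt;

```javascript
// Pure helper: derive a red channel value in [0, 1] from a timestamp.
function redChannel(t) {
  return Math.sin(t / 1000) * 0.5 + 0.5;
}

// Worker side: render one frame, then "tear off" the result as an ImageBitmap
// and transfer it (zero-copy) to the main thread.
function produceFrame(offscreen, gl, t) {
  gl.clearColor(redChannel(t), 0, 0, 1);
  gl.clear(gl.COLOR_BUFFER_BIT);
  const bitmap = offscreen.transferToImageBitmap();
  postMessage({ bitmap }, [bitmap]); // second argument transfers ownership
}

// Main-thread side: display each received frame without introducing a copy.
// const ctx = canvas.getContext('bitmaprenderer');
// worker.onmessage = (e) => ctx.transferImageBitmap(e.data.bitmap);
```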
&lt;br /&gt;
The second processing model involves &#039;&#039;asynchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The main thread instantiates an HTMLCanvasElement and calls &amp;lt;code&amp;gt;transferControlToOffscreen&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;getContext&amp;lt;/code&amp;gt; is used to obtain a rendering context for that OffscreenCanvas, either on the main thread or on a worker. The application calls &amp;lt;code&amp;gt;commit&amp;lt;/code&amp;gt; against that rendering context to push frames to the original HTMLCanvasElement. In this model, it is not defined when those frames become visible in the original canvas element. However, if both of the following conditions hold:&lt;br /&gt;
&lt;br /&gt;
* commit() is being called from a worker thread, and&lt;br /&gt;
* the worker is calling commit() repeatedly against exactly one rendering context,&lt;br /&gt;
&lt;br /&gt;
then it is required that the user agent synchronize the calls to commit() to the vsync interval. Calls to commit() conceptually enqueue frames for display, and after an implementation-defined number of frames have been enqueued, further calls to commit() will block until earlier frames have been presented to the screen. (This requirement allows porting of applications which drive their own main loop rather than using an event-driven loop.)&lt;br /&gt;
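The asynchronous model above can be sketched like this; it is a sketch under the proposal&#039;s semantics, where a worker owns its main loop (as in code ported via Emscripten) and relies on commit() blocking for back-pressure. &amp;lt;code&amp;gt;drawScene&amp;lt;/code&amp;gt; is a placeholder for application rendering code.&lt;br /&gt;

```javascript
// Worker side: an application-owned render loop paced by commit().
// drawScene is a placeholder for the application's rendering, not a real API.
function workerMain(offscreen) {
  const gl = offscreen.getContext('webgl');
  let frame = 0;
  for (;;) {
    drawScene(gl, frame++); // application-defined rendering (placeholder)
    gl.commit();            // enqueues the frame for display; blocks once an
                            // implementation-defined number are queued, pacing
                            // the loop to the display's refresh interval
  }
}

// Main-thread side (per this proposal):
// const offscreen = canvas.transferControlToOffscreen();
// worker.postMessage({ offscreen }, [offscreen]);
```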
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
* A known good way to drive an animation loop from a worker is needed. requestAnimationFrame or a similar API needs to be defined on worker threads.&lt;br /&gt;
* Some parts of the CanvasRenderingContext2D interface shall not be supported due to OffscreenCanvas objects having no relation to the DOM or to a frame: hit regions, scrollPathIntoView, drawFocusIfNeeded.&lt;br /&gt;
* Due to technical challenges, some implementors [https://bugzilla.mozilla.org/show_bug.cgi?id=801176#c29 (Google and Mozilla)] have expressed a desire to ship without initially supporting text rendering in 2D contexts. Open Issue: Should text support be formally excluded from the specification until implementors are prepared to ship it (or until a more feasible API is designed)?&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
&lt;br /&gt;
This proposal has been vetted by developers of Apple&#039;s Safari, Google&#039;s Chrome, Microsoft&#039;s Internet Explorer, and Mozilla&#039;s Firefox browsers. All vendors agreed on the basic form of the API, so it is likely to be implemented widely and compatibly.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
&lt;br /&gt;
Web page authors have demanded increased parallelism support from the web platform for multiple years. If support for multithreaded rendering is added, it is likely it will be rapidly adopted.&lt;br /&gt;
&lt;br /&gt;
==== Example code ====&lt;br /&gt;
&lt;br /&gt;
Jeff Gilbert from Mozilla has written example code using this API:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jdashg/snippets/tree/master/webgl-from-worker Rendering WebGL from a worker using the commit() API]&lt;br /&gt;
* [https://github.com/jdashg/snippets/blob/master/webgl-one-to-many/index.html Using one WebGL context to render to many Canvas elements]&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10101</id>
		<title>OffscreenCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10101"/>
		<updated>2016-10-14T15:12:54Z</updated>

		<summary type="html">&lt;p&gt;Junov: Added link to pull request&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;The specification is currently under review in this [https://github.com/whatwg/html/pull/1876 GitHub pull request].&#039;&#039;&#039; &lt;br /&gt;
:&#039;&#039;Provides more control over how canvases are rendered. This is a follow-on to the [[WorkerCanvas]] proposal and will be merged once agreement is reached.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
Feedback from web application authors using canvases has shown the need for the following controls:&lt;br /&gt;
&lt;br /&gt;
* (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.&lt;br /&gt;
* (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.&lt;br /&gt;
* (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.&lt;br /&gt;
* (From adopters of the Push API): need to be able to dynamically create images to use as notification icons, such as by compositing avatars or adding an unread count.&lt;br /&gt;
* (From the Google Docs team): need to be able to lay out and render text from a worker using CanvasRenderingContext2D and display those results on the main thread.&lt;br /&gt;
* (From the Google Slides team): want to lay out and render the slide thumbnails from a worker. During initial load and heavy collaboration these update frequently, and currently cause slowdowns on the main thread.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
&lt;br /&gt;
* [https://html.spec.whatwg.org/multipage/scripting.html#proxying-canvases-to-workers CanvasProxy] does not provide sufficient control to allow synchronization between workers&#039; rendering and DOM updates on the main thread. Keeping this rendering in sync is a requirement from Google&#039;s Maps team.&lt;br /&gt;
* [[CanvasInWorkers]] does not allow a worker to render directly into a canvas on the main thread without running code on the main thread. Allowing completely unsynchronized rendering is a requirement from Mozilla and users of Emscripten such as Epic Games and Unity, in which the desire is to execute all of the game&#039;s rendering on a worker thread.&lt;br /&gt;
* [[WorkerCanvas]] mostly addresses these two use cases, but some implementers objected to the mechanism for displaying the rendering results in image elements. The specific objection was that image elements already have complex internal state (for example, the management of the image&#039;s &amp;quot;loaded&amp;quot; state), and this would make it more complex. It also did not precisely address the use case of producing new frames both on the main thread and in workers.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
[https://blog.mozilla.org/research/2014/07/22/webgl-in-web-workers-today-and-faster-than-expected/ WebGL in Web Workers] details some work attempted in the Emscripten toolchain to address the lack of WebGL in workers. Due to the high volume of calls and large amount of data that is transferred to the graphics card in a typical high-end WebGL application, this approach is not sustainable. It&#039;s necessary for workers to be able to call the WebGL API directly, and present those results to the screen in a manner that does not introduce any copies of the rendering results.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Making canvas rendering contexts available to workers will increase parallelism in web applications, leading to increased performance on multi-core systems.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
See the abovementioned use cases:&lt;br /&gt;
&lt;br /&gt;
* Google&#039;s Maps team&lt;br /&gt;
* Emscripten users such as Epic Games and Unity&lt;br /&gt;
* Many others&lt;br /&gt;
&lt;br /&gt;
== Web IDL ==&lt;br /&gt;
&lt;br /&gt;
 typedef (OffscreenCanvasRenderingContext2D or&lt;br /&gt;
          WebGLRenderingContext or&lt;br /&gt;
          WebGL2RenderingContext) OffscreenRenderingContext;&lt;br /&gt;
 &lt;br /&gt;
 [Constructor(unsigned long width, unsigned long height),&lt;br /&gt;
  Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvas {&lt;br /&gt;
   attribute unsigned long width;&lt;br /&gt;
   attribute unsigned long height;&lt;br /&gt;
   OffscreenRenderingContext? getContext(DOMString contextId, any... arguments); &lt;br /&gt;
 &lt;br /&gt;
   // OffscreenCanvas, like HTMLCanvasElement, maintains an origin-clean flag.&lt;br /&gt;
   // ImageBitmaps created by calling this method also have an&lt;br /&gt;
   // origin-clean flag which is set to the value of the OffscreenCanvas&#039;s&lt;br /&gt;
   // flag at the time of their construction. Uses of the ImageBitmap&lt;br /&gt;
   // in other APIs, such as CanvasRenderingContext2D or&lt;br /&gt;
   // WebGLRenderingContext, propagate this flag like other&lt;br /&gt;
   // CanvasImageSource types do, such as HTMLImageElement.&lt;br /&gt;
   ImageBitmap transferToImageBitmap();&lt;br /&gt;
 &lt;br /&gt;
   // Throws a SecurityError if the OffscreenCanvas&#039;s origin-clean flag&lt;br /&gt;
   // is set to false.&lt;br /&gt;
   Promise&amp;lt;Blob&amp;gt; convertToBlob(optional ImageEncodeOptions options);   &lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 dictionary ImageEncodeOptions {&lt;br /&gt;
   DOMString type = &amp;quot;image/png&amp;quot;;&lt;br /&gt;
   unrestricted double quality = 1.0; // Defaults to 1.0 if value is outside 0:1 range&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 OffscreenCanvas implements Transferable;&lt;br /&gt;
 &lt;br /&gt;
 partial interface HTMLCanvasElement {&lt;br /&gt;
   OffscreenCanvas transferControlToOffscreen();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 typedef (HTMLOrSVGImageElement or&lt;br /&gt;
          HTMLVideoElement or&lt;br /&gt;
          HTMLCanvasElement or&lt;br /&gt;
          ImageBitmap or&lt;br /&gt;
          OffscreenCanvas) CanvasImageSource;&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvasRenderingContext2D {&lt;br /&gt;
   // commit() can only be used when an HTMLCanvasElement has transferred control&lt;br /&gt;
   // to the OffscreenCanvas. Otherwise, an InvalidStateError will be thrown.&lt;br /&gt;
   // commit() can be invoked on the main thread or on a worker thread. When it is&lt;br /&gt;
   // invoked, the image drawn to the OffscreenCanvasRenderingContext2D is&lt;br /&gt;
   // expected to be displayed in the associated HTMLCanvasElement.&lt;br /&gt;
   void commit();&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute OffscreenCanvas canvas;&lt;br /&gt;
 };&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasState;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTransform;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasCompositing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageSmoothing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFillStrokeStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasShadowStyles;&lt;br /&gt;
 // Reference filters (e.g. &#039;url()&#039;) are not expected to work in Workers&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFilters;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasRect;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawPath;&lt;br /&gt;
 // Text support in workers poses very difficult technical challenges.&lt;br /&gt;
 // Open issue: should we forgo text support in OffscreenCanvas v1?&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasText;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawImage;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageData;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPathDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTextDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPath; &lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasPattern {&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasGradient {&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 partial interface WebGLRenderingContextBase {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;&lt;br /&gt;
 &lt;br /&gt;
   // If this context is associated with an OffscreenCanvas that was&lt;br /&gt;
   // created by HTMLCanvasElement&#039;s transferControlToOffscreen method,&lt;br /&gt;
   // causes this context&#039;s current rendering results to be pushed&lt;br /&gt;
   // to that canvas element. This has the same effect as returning&lt;br /&gt;
   // control to the main loop in a single-threaded application. Otherwise,&lt;br /&gt;
   // an InvalidStateError will be thrown.&lt;br /&gt;
   void commit();&lt;br /&gt;
 };&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== This Solution ===&lt;br /&gt;
&lt;br /&gt;
This proposed API can be used in several ways to satisfy the use cases described above:&lt;br /&gt;
&lt;br /&gt;
* It supports zero-copy transfer of canvases&#039; rendering results between threads, for example from a worker to the main thread. In this model, the main thread controls when to display new frames produced by the worker, so synchronization with other DOM updates is achieved.&lt;br /&gt;
&lt;br /&gt;
* It supports fully asynchronous rendering by a worker into a canvas displayed on the main thread. This satisfies certain Emscripten developers&#039; full-screen use cases.&lt;br /&gt;
&lt;br /&gt;
* It supports using a single WebGLRenderingContext or Canvas2DRenderingContext to efficiently render into multiple regions on the web page.&lt;br /&gt;
&lt;br /&gt;
* It introduces ImageBitmapRenderingContext, a new canvas context type whose sole purpose is to efficiently display ImageBitmaps. This supersedes the [[WorkerCanvas]] proposal&#039;s use of HTMLImageElement for this purpose.&lt;br /&gt;
&lt;br /&gt;
* It supports asynchronous encoding of OffscreenCanvases&#039; rendering results into Blobs which can be consumed by various other web platform APIs.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
This proposal introduces two primary processing models. The first involves &#039;&#039;synchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The application generates new frames using the RenderingContext obtained from the OffscreenCanvas. When the application is finished rendering each new frame, it calls transferToImageBitmap to &amp;quot;tear off&amp;quot; the most recently rendered image from the OffscreenCanvas -- like a Post-It note. The resulting ImageBitmap can then be used in any API receiving that data type; notably, it can be displayed in a second canvas without introducing a copy. An ImageBitmapRenderingContext is obtained from the second canvas by calling &amp;lt;code&amp;gt;getContext(&#039;bitmaprenderer&#039;)&amp;lt;/code&amp;gt;. Each frame is displayed in the second canvas using the &amp;lt;code&amp;gt;transferImageBitmap&amp;lt;/code&amp;gt; method on this rendering context. Note that the threads producing and consuming the frames may be the same, or they may be different. Note also that a single OffscreenCanvas may transfer frames into an arbitrary number of other ImageBitmapRenderingContexts.&lt;br /&gt;
&lt;br /&gt;
The second processing model involves &#039;&#039;asynchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The main thread instantiates an HTMLCanvasElement and calls &amp;lt;code&amp;gt;transferControlToOffscreen&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;getContext&amp;lt;/code&amp;gt; is used to obtain a rendering context for that OffscreenCanvas, either on the main thread or on a worker. The application calls &amp;lt;code&amp;gt;commit&amp;lt;/code&amp;gt; against that rendering context to push frames to the original HTMLCanvasElement. In this model, it is not defined when those frames become visible in the original canvas element. However, if both of the following conditions hold:&lt;br /&gt;
&lt;br /&gt;
* commit() is being called from a worker thread, and&lt;br /&gt;
* the worker is calling commit() repeatedly against exactly one rendering context,&lt;br /&gt;
&lt;br /&gt;
then it is required that the user agent synchronize the calls to commit() to the vsync interval. Calls to commit() conceptually enqueue frames for display, and after an implementation-defined number of frames have been enqueued, further calls to commit() will block until earlier frames have been presented to the screen. (This requirement allows porting of applications which drive their own main loop rather than using an event-driven loop.)&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
* A known good way to drive an animation loop from a worker is needed. requestAnimationFrame or a similar API needs to be defined on worker threads.&lt;br /&gt;
* Some parts of the CanvasRenderingContext2D interface shall not be supported due to OffscreenCanvas objects having no relation to the DOM or to a frame: hit regions, scrollPathIntoView, drawFocusIfNeeded.&lt;br /&gt;
* Due to technical challenges, some implementors [https://bugzilla.mozilla.org/show_bug.cgi?id=801176#c29 (Google and Mozilla)] have expressed a desire to ship without initially supporting text rendering in 2D contexts. Open Issue: Should text support be formally excluded from the specification until implementors are prepared to ship it (or until a more feasible API is designed)?&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
&lt;br /&gt;
This proposal has been vetted by developers of Apple&#039;s Safari, Google&#039;s Chrome, Microsoft&#039;s Internet Explorer, and Mozilla&#039;s Firefox browsers. All vendors agreed on the basic form of the API, so it is likely to be implemented widely and compatibly.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
&lt;br /&gt;
Web page authors have demanded increased parallelism support from the web platform for multiple years. If support for multithreaded rendering is added, it is likely it will be rapidly adopted.&lt;br /&gt;
&lt;br /&gt;
==== Example code ====&lt;br /&gt;
&lt;br /&gt;
Jeff Gilbert from Mozilla has written example code using this API:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jdashg/snippets/tree/master/webgl-from-worker Rendering WebGL from a worker using the commit() API]&lt;br /&gt;
* [https://github.com/jdashg/snippets/blob/master/webgl-one-to-many/index.html Using one WebGL context to render to many Canvas elements]&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10093</id>
		<title>OffscreenCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10093"/>
		<updated>2016-09-27T16:58:03Z</updated>

		<summary type="html">&lt;p&gt;Junov: Removed definition of ImageBitmapSource because it already includes OffscreenCanvas via CanvasImageSource&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;Provides more control over how canvases are rendered. This is a follow-on to the [[WorkerCanvas]] proposal and will be merged once agreement is reached.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
Feedback from web application authors using canvases has shown the need for the following controls:&lt;br /&gt;
&lt;br /&gt;
* (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.&lt;br /&gt;
* (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.&lt;br /&gt;
* (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.&lt;br /&gt;
* (From adopters of the Push API): need to be able to dynamically create images to use as notification icons, such as by compositing avatars or adding an unread count.&lt;br /&gt;
* (From the Google Docs team): need to be able to lay out and render text from a worker using CanvasRenderingContext2D and display those results on the main thread.&lt;br /&gt;
* (From the Google Slides team): want to lay out and render the slide thumbnails from a worker. During initial load and heavy collaboration these update frequently, and currently cause slowdowns on the main thread.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
&lt;br /&gt;
* [https://html.spec.whatwg.org/multipage/scripting.html#proxying-canvases-to-workers CanvasProxy] does not provide sufficient control to allow synchronization between workers&#039; rendering and DOM updates on the main thread. Keeping this rendering in sync is a requirement from Google&#039;s Maps team.&lt;br /&gt;
* [[CanvasInWorkers]] does not allow a worker to render directly into a canvas on the main thread without running code on the main thread. Allowing completely unsynchronized rendering is a requirement from Mozilla and users of Emscripten such as Epic Games and Unity, in which the desire is to execute all of the game&#039;s rendering on a worker thread.&lt;br /&gt;
* [[WorkerCanvas]] mostly addresses these two use cases, but some implementers objected to the mechanism for displaying the rendering results in image elements. The specific objection was that image elements already have complex internal state (for example, the management of the image&#039;s &amp;quot;loaded&amp;quot; state), and this would make it more complex. It also did not precisely address the use case of producing new frames both on the main thread and in workers.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
[https://blog.mozilla.org/research/2014/07/22/webgl-in-web-workers-today-and-faster-than-expected/ WebGL in Web Workers] details some work attempted in the Emscripten toolchain to address the lack of WebGL in workers. Due to the high volume of calls and large amount of data that is transferred to the graphics card in a typical high-end WebGL application, this approach is not sustainable. It&#039;s necessary for workers to be able to call the WebGL API directly, and present those results to the screen in a manner that does not introduce any copies of the rendering results.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Making canvas rendering contexts available to workers will increase parallelism in web applications, leading to increased performance on multi-core systems.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
See the abovementioned use cases:&lt;br /&gt;
&lt;br /&gt;
* Google&#039;s Maps team&lt;br /&gt;
* Emscripten users such as Epic Games and Unity&lt;br /&gt;
* Many others&lt;br /&gt;
&lt;br /&gt;
== Web IDL ==&lt;br /&gt;
&lt;br /&gt;
 typedef (OffscreenCanvasRenderingContext2D or&lt;br /&gt;
          WebGLRenderingContext or&lt;br /&gt;
          WebGL2RenderingContext) OffscreenRenderingContext;&lt;br /&gt;
 &lt;br /&gt;
 [Constructor(unsigned long width, unsigned long height),&lt;br /&gt;
  Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvas {&lt;br /&gt;
   attribute unsigned long width;&lt;br /&gt;
   attribute unsigned long height;&lt;br /&gt;
   OffscreenRenderingContext? getContext(DOMString contextId, any... arguments); &lt;br /&gt;
 &lt;br /&gt;
   // OffscreenCanvas, like HTMLCanvasElement, maintains an origin-clean flag.&lt;br /&gt;
   // ImageBitmaps created by calling this method also have an&lt;br /&gt;
   // origin-clean flag which is set to the value of the OffscreenCanvas&#039;s&lt;br /&gt;
   // flag at the time of their construction. Uses of the ImageBitmap&lt;br /&gt;
   // in other APIs, such as CanvasRenderingContext2D or&lt;br /&gt;
   // WebGLRenderingContext, propagate this flag like other&lt;br /&gt;
   // CanvasImageSource types do, such as HTMLImageElement.&lt;br /&gt;
   ImageBitmap transferToImageBitmap();&lt;br /&gt;
 &lt;br /&gt;
   // Throws a SecurityError if the OffscreenCanvas&#039;s origin-clean flag&lt;br /&gt;
   // is set to false.&lt;br /&gt;
   Promise&amp;lt;Blob&amp;gt; convertToBlob(optional ImageEncodeOptions options);   &lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 dictionary ImageEncodeOptions {&lt;br /&gt;
   DOMString type = &amp;quot;image/png&amp;quot;;&lt;br /&gt;
   unrestricted double quality = 1.0; // Defaults to 1.0 if value is outside 0:1 range&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 OffscreenCanvas implements Transferable;&lt;br /&gt;
 &lt;br /&gt;
 partial interface HTMLCanvasElement {&lt;br /&gt;
   OffscreenCanvas transferControlToOffscreen();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 typedef (HTMLOrSVGImageElement or&lt;br /&gt;
          HTMLVideoElement or&lt;br /&gt;
          HTMLCanvasElement or&lt;br /&gt;
          ImageBitmap or&lt;br /&gt;
          OffscreenCanvas) CanvasImageSource;&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvasRenderingContext2D {&lt;br /&gt;
   // commit() can only be used when an HTMLCanvasElement has transferred control&lt;br /&gt;
   // to the OffscreenCanvas. Otherwise, an InvalidStateError will be thrown.&lt;br /&gt;
   // commit() can be invoked on the main thread or on a worker thread. When it is&lt;br /&gt;
   // invoked, the image drawn to the OffscreenCanvasRenderingContext2D is&lt;br /&gt;
   // expected to be displayed in the associated HTMLCanvasElement.&lt;br /&gt;
   void commit();&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute OffscreenCanvas canvas;&lt;br /&gt;
 };&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasState;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTransform;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasCompositing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageSmoothing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFillStrokeStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasShadowStyles;&lt;br /&gt;
 // Reference filters (e.g. &#039;url()&#039;) are not expected to work in Workers&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFilters;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasRect;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawPath;&lt;br /&gt;
 // Text support in workers poses very difficult technical challenges.&lt;br /&gt;
 // Open issue: should we forgo text support in OffscreenCanvas v1?&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasText;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawImage;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageData;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPathDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTextDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPath; &lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasPattern {&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasGradient {&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 partial interface WebGLRenderingContextBase {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;&lt;br /&gt;
 &lt;br /&gt;
   // If this context is associated with an OffscreenCanvas that was&lt;br /&gt;
   // created by HTMLCanvasElement&#039;s transferControlToOffscreen method,&lt;br /&gt;
   // causes this context&#039;s current rendering results to be pushed&lt;br /&gt;
   // to that canvas element. This has the same effect as returning&lt;br /&gt;
   // control to the main loop in a single-threaded application. Otherwise,&lt;br /&gt;
   // an InvalidStateError will be thrown.&lt;br /&gt;
   void commit();&lt;br /&gt;
 };&lt;br /&gt;
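 &lt;br /&gt;
The IDL above can be exercised as sketched below. This is a non-normative example assuming a browser that implements this proposal; the function name and the shape of the posted message are placeholders.&lt;br /&gt;

```javascript
// Sketch: handing a canvas off to a worker using the IDL above.
// transferControlToOffscreen and the Transferable behavior are per this
// proposal; handOffCanvas and the message shape are placeholders.
function handOffCanvas(canvasElement, worker) {
  // Detach rendering control from the element; the element becomes a
  // placeholder that displays whatever the OffscreenCanvas commits.
  const offscreen = canvasElement.transferControlToOffscreen();
  // OffscreenCanvas implements Transferable, so listing it in the transfer
  // array moves it to the worker without a copy.
  worker.postMessage({ canvas: offscreen }, [offscreen]);
  return offscreen;
}
```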
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== This Solution ===&lt;br /&gt;
&lt;br /&gt;
This proposed API can be used in several ways to satisfy the use cases described above:&lt;br /&gt;
&lt;br /&gt;
* It supports zero-copy transfer of canvases&#039; rendering results between threads, for example from a worker to the main thread. In this model, the main thread controls when to display new frames produced by the worker, so synchronization with other DOM updates is achieved.&lt;br /&gt;
&lt;br /&gt;
* It supports fully asynchronous rendering by a worker into a canvas displayed on the main thread. This satisfies certain Emscripten developers&#039; full-screen use cases.&lt;br /&gt;
&lt;br /&gt;
* It supports using a single WebGLRenderingContext or Canvas2DRenderingContext to efficiently render into multiple regions on the web page.&lt;br /&gt;
&lt;br /&gt;
* It introduces ImageBitmapRenderingContext, a new canvas context type whose sole purpose is to efficiently display ImageBitmaps. This supersedes the [[WorkerCanvas]] proposal&#039;s use of HTMLImageElement for this purpose.&lt;br /&gt;
&lt;br /&gt;
* It supports asynchronous encoding of OffscreenCanvases&#039; rendering results into Blobs which can be consumed by various other web platform APIs.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
This proposal introduces two primary processing models. The first involves &#039;&#039;synchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The application generates new frames using the RenderingContext obtained from the OffscreenCanvas. When the application is finished rendering each new frame, it calls transferToImageBitmap to &amp;quot;tear off&amp;quot; the most recently rendered image from the OffscreenCanvas -- like a Post-It note. The resulting ImageBitmap can then be used in any API receiving that data type; notably, it can be displayed in a second canvas without introducing a copy. An ImageBitmapRenderingContext is obtained from the second canvas by calling &amp;lt;code&amp;gt;getContext(&#039;bitmaprenderer&#039;)&amp;lt;/code&amp;gt;. Each frame is displayed in the second canvas using the &amp;lt;code&amp;gt;transferImageBitmap&amp;lt;/code&amp;gt; method on this rendering context. Note that the threads producing and consuming the frames may be the same, or they may be different. Note also that a single OffscreenCanvas may transfer frames into an arbitrary number of other ImageBitmapRenderingContexts.&lt;br /&gt;
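 &lt;br /&gt;
The synchronous model can be sketched as follows. This is a non-normative example: the two context objects would come from an OffscreenCanvas and from calling getContext(&#039;bitmaprenderer&#039;) on the displaying canvas, and the function name is a placeholder.&lt;br /&gt;

```javascript
// Sketch of the synchronous display model described above. The method names
// (transferToImageBitmap, transferImageBitmap) are as proposed in this
// document; presentFrame itself is a placeholder.
function presentFrame(offscreen, bitmapCtx) {
  // "Tear off" the most recently rendered image; the OffscreenCanvas
  // continues rendering into a fresh buffer afterwards.
  const frame = offscreen.transferToImageBitmap();
  // Display the frame in the second canvas without copying pixel data.
  bitmapCtx.transferImageBitmap(frame);
  return frame;
}
```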
&lt;br /&gt;
The second processing model involves &#039;&#039;asynchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The main thread instantiates an HTMLCanvasElement and calls &amp;lt;code&amp;gt;transferControlToOffscreen&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;getContext&amp;lt;/code&amp;gt; is used to obtain a rendering context for that OffscreenCanvas, either on the main thread or on a worker. The application calls &amp;lt;code&amp;gt;commit&amp;lt;/code&amp;gt; against that rendering context in order to push frames to the original HTMLCanvasElement. In this rendering model, it is not defined when those frames become visible in the original canvas element. However, if both of the following conditions hold:&lt;br /&gt;
&lt;br /&gt;
* A worker thread is calling commit(), and&lt;br /&gt;
* The worker is calling commit() repeatedly against exactly one rendering context&lt;br /&gt;
&lt;br /&gt;
then it is required that the user agent synchronize the calls to commit() to the vsync interval. Calls to commit() conceptually enqueue frames for display, and after an implementation-defined number of frames have been enqueued, further calls to commit() will block until earlier frames have been presented to the screen. (This requirement allows porting of applications which drive their own main loop rather than using an event-driven loop.)&lt;br /&gt;
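 &lt;br /&gt;
A worker-side render loop under this model can be sketched as follows. This is non-normative: commit() is as proposed above, while runFrames and renderScene stand in for application code.&lt;br /&gt;

```javascript
// Sketch of a worker-side render loop built on commit(), per the model above.
// Because the user agent throttles commit() to the display refresh interval,
// the loop needs no timer of its own. renderScene is a hypothetical
// application drawing callback.
function runFrames(gl, renderScene, frameCount) {
  for (let i = 0; i !== frameCount; i += 1) {
    renderScene(gl, i); // draw frame i
    gl.commit();        // enqueue it for display; may block if the queue is full
  }
}
```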
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
* A known good way to drive an animation loop from a worker is needed. requestAnimationFrame or a similar API needs to be defined on worker threads.&lt;br /&gt;
* Some parts of the CanvasRenderingContext2D interface shall not be supported because OffscreenCanvas objects have no relation to the DOM or to a frame: hit regions, scrollPathIntoView, drawFocusIfNeeded.&lt;br /&gt;
* Due to technical challenges, some implementors [https://bugzilla.mozilla.org/show_bug.cgi?id=801176#c29 (Google and Mozilla)] have expressed a desire to ship without initially supporting text rendering in 2D contexts. Open Issue: Should text support be formally excluded from the specification until implementors are prepared to ship it (or until a more feasible API is designed)?&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
&lt;br /&gt;
This proposal has been vetted by developers of Apple&#039;s Safari, Google&#039;s Chrome, Microsoft&#039;s Internet Explorer, and Mozilla&#039;s Firefox browsers. All vendors agreed on the basic form of the API, so it is likely to be implemented widely and compatibly.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
&lt;br /&gt;
Web page authors have demanded increased parallelism support from the web platform for years. If support for multithreaded rendering is added, it is likely to be adopted rapidly.&lt;br /&gt;
&lt;br /&gt;
==== Example code ====&lt;br /&gt;
&lt;br /&gt;
Jeff Gilbert from Mozilla has crafted some example code utilizing this API:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jdashg/snippets/tree/master/webgl-from-worker Rendering WebGL from a worker using the commit() API]&lt;br /&gt;
* [https://github.com/jdashg/snippets/blob/master/webgl-one-to-many/index.html Using one WebGL context to render to many Canvas elements]&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10091</id>
		<title>OffscreenCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10091"/>
		<updated>2016-08-25T15:44:19Z</updated>

		<summary type="html">&lt;p&gt;Junov: Bikeshed: getAsBlob -&amp;gt; convertToBlob, which sounds more async&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;Provides more control over how canvases are rendered. This is a follow-on to the [[WorkerCanvas]] proposal and will be merged once agreement is reached.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
Feedback from web application authors using canvases has shown the need for the following controls:&lt;br /&gt;
&lt;br /&gt;
* (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.&lt;br /&gt;
* (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.&lt;br /&gt;
* (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.&lt;br /&gt;
* (From adopters of the Push API): need to be able to dynamically create images to use as notification icons, such as compositing avatars or adding an unread count.&lt;br /&gt;
* (From the Google Docs team): need to be able to lay out and render text from a worker using CanvasRenderingContext2D and display those results on the main thread.&lt;br /&gt;
* (From the Google Slides team): want to lay out and render the slide thumbnails from a worker. During initial load and heavy collaboration these update frequently, and currently cause slowdowns on the main thread.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
&lt;br /&gt;
* [https://html.spec.whatwg.org/multipage/scripting.html#proxying-canvases-to-workers CanvasProxy] does not provide sufficient control to allow synchronization between workers&#039; rendering and DOM updates on the main thread. Keeping this rendering in sync is a requirement from Google&#039;s Maps team.&lt;br /&gt;
* [[CanvasInWorkers]] does not allow a worker to render directly into a canvas on the main thread without running code on the main thread. Allowing completely unsynchronized rendering is a requirement from Mozilla and users of Emscripten such as Epic Games and Unity, in which the desire is to execute all of the game&#039;s rendering on a worker thread.&lt;br /&gt;
* [[WorkerCanvas]] mostly addresses these two use cases, but some implementers objected to the mechanism for displaying the rendering results in image elements. The specific objection was that image elements already have complex internal state (for example, the management of the image&#039;s &amp;quot;loaded&amp;quot; state), and this would make it more complex. It also did not precisely address the use case of producing new frames both on the main thread and in workers.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
[https://blog.mozilla.org/research/2014/07/22/webgl-in-web-workers-today-and-faster-than-expected/ WebGL in Web Workers] details some work attempted in the Emscripten toolchain to address the lack of WebGL in workers. Due to the high volume of calls and large amount of data that is transferred to the graphics card in a typical high-end WebGL application, this approach is not sustainable. It&#039;s necessary for workers to be able to call the WebGL API directly, and present those results to the screen in a manner that does not introduce any copies of the rendering results.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Making canvas rendering contexts available to workers will increase parallelism in web applications, leading to increased performance on multi-core systems.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
See the above-mentioned use cases:&lt;br /&gt;
&lt;br /&gt;
* Google&#039;s Maps team&lt;br /&gt;
* Emscripten users such as Epic Games and Unity&lt;br /&gt;
* Many others&lt;br /&gt;
&lt;br /&gt;
== Web IDL ==&lt;br /&gt;
&lt;br /&gt;
 typedef (OffscreenCanvasRenderingContext2D or&lt;br /&gt;
          WebGLRenderingContext or&lt;br /&gt;
          WebGL2RenderingContext) OffscreenRenderingContext;&lt;br /&gt;
 &lt;br /&gt;
 [Constructor(unsigned long width, unsigned long height),&lt;br /&gt;
  Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvas {&lt;br /&gt;
   attribute unsigned long width;&lt;br /&gt;
   attribute unsigned long height;&lt;br /&gt;
   OffscreenRenderingContext? getContext(DOMString contextId, any... arguments); &lt;br /&gt;
 &lt;br /&gt;
   // OffscreenCanvas, like HTMLCanvasElement, maintains an origin-clean flag.&lt;br /&gt;
   // ImageBitmaps created by calling this method also have an&lt;br /&gt;
   // origin-clean flag which is set to the value of the OffscreenCanvas&#039;s&lt;br /&gt;
   // flag at the time of their construction. Uses of the ImageBitmap&lt;br /&gt;
   // in other APIs, such as CanvasRenderingContext2D or&lt;br /&gt;
   // WebGLRenderingContext, propagate this flag like other&lt;br /&gt;
   // CanvasImageSource types do, such as HTMLImageElement.&lt;br /&gt;
   ImageBitmap transferToImageBitmap();&lt;br /&gt;
 &lt;br /&gt;
   // Throws a SecurityError if the OffscreenCanvas&#039;s origin-clean flag&lt;br /&gt;
   // is set to false.&lt;br /&gt;
   Promise&amp;lt;Blob&amp;gt; convertToBlob(optional ImageEncodeOptions options);   &lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 dictionary ImageEncodeOptions {&lt;br /&gt;
   DOMString type = &amp;quot;image/png&amp;quot;;&lt;br /&gt;
   unrestricted double quality = 1.0; // Defaults to 1.0 if the value is outside the range [0, 1]&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 OffscreenCanvas implements Transferable;&lt;br /&gt;
 &lt;br /&gt;
 partial interface HTMLCanvasElement {&lt;br /&gt;
   OffscreenCanvas transferControlToOffscreen();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 typedef (HTMLOrSVGImageElement or&lt;br /&gt;
          HTMLVideoElement or&lt;br /&gt;
          HTMLCanvasElement or&lt;br /&gt;
          ImageBitmap or&lt;br /&gt;
          OffscreenCanvas) CanvasImageSource;&lt;br /&gt;
 &lt;br /&gt;
 typedef (CanvasImageSource or&lt;br /&gt;
          Blob or&lt;br /&gt;
          ImageData or&lt;br /&gt;
          OffscreenCanvas) ImageBitmapSource;&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvasRenderingContext2D {&lt;br /&gt;
   // commit() can only be used when an HTMLCanvasElement has transferred control&lt;br /&gt;
   // to an OffscreenCanvas; otherwise an exception will be raised.&lt;br /&gt;
   // commit() can be invoked on the main thread or on a worker thread. When it&lt;br /&gt;
   // is invoked, the image drawn to the OffscreenCanvasRenderingContext2D is&lt;br /&gt;
   // expected to be displayed in the associated HTMLCanvasElement.&lt;br /&gt;
   void commit();&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute OffscreenCanvas canvas;&lt;br /&gt;
 };&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasState;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTransform;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasCompositing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageSmoothing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFillStrokeStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasShadowStyles;&lt;br /&gt;
 // Reference filters (e.g. &#039;url()&#039;) are not expected to work in Workers&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFilters;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasRect;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawPath;&lt;br /&gt;
 // Text support in workers poses very difficult technical challenges.&lt;br /&gt;
 // Open issue: should we forgo text support in OffscreenCanvas v1?&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasText;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawImage;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageData;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPathDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTextDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPath; &lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasPattern {&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasGradient {&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface WebGLRenderingContextBase {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;&lt;br /&gt;
 &lt;br /&gt;
   // If this context is associated with an OffscreenCanvas that was&lt;br /&gt;
   // created by HTMLCanvasElement&#039;s transferControlToOffscreen method,&lt;br /&gt;
   // causes this context&#039;s current rendering results to be pushed&lt;br /&gt;
   // to that canvas element. This has the same effect as returning&lt;br /&gt;
   // control to the main loop in a single-threaded application. Otherwise,&lt;br /&gt;
   // this call has no effect.&lt;br /&gt;
   void commit();&lt;br /&gt;
 };&lt;br /&gt;
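 &lt;br /&gt;
Asynchronous encoding with convertToBlob, as declared in the IDL above, can be sketched as follows. This is a non-normative example; encodeCanvas and the chosen options are placeholders.&lt;br /&gt;

```javascript
// Sketch: asynchronously encoding an OffscreenCanvas via convertToBlob, per
// the IDL above. Returns a Promise resolving to a Blob; per the proposal it
// rejects with a SecurityError if the canvas is not origin-clean.
// encodeCanvas and the default options here are placeholders.
function encodeCanvas(offscreen, options) {
  // options is an ImageEncodeOptions; fall back to PNG when none is given.
  return offscreen.convertToBlob(options || { type: "image/png" });
}
```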
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== This Solution ===&lt;br /&gt;
&lt;br /&gt;
This proposed API can be used in several ways to satisfy the use cases described above:&lt;br /&gt;
&lt;br /&gt;
* It supports zero-copy transfer of canvases&#039; rendering results between threads, for example from a worker to the main thread. In this model, the main thread controls when to display new frames produced by the worker, so synchronization with other DOM updates is achieved.&lt;br /&gt;
&lt;br /&gt;
* It supports fully asynchronous rendering by a worker into a canvas displayed on the main thread. This satisfies certain Emscripten developers&#039; full-screen use cases.&lt;br /&gt;
&lt;br /&gt;
* It supports using a single WebGLRenderingContext or Canvas2DRenderingContext to efficiently render into multiple regions on the web page.&lt;br /&gt;
&lt;br /&gt;
* It introduces ImageBitmapRenderingContext, a new canvas context type whose sole purpose is to efficiently display ImageBitmaps. This supersedes the [[WorkerCanvas]] proposal&#039;s use of HTMLImageElement for this purpose.&lt;br /&gt;
&lt;br /&gt;
* It supports asynchronous encoding of OffscreenCanvases&#039; rendering results into Blobs which can be consumed by various other web platform APIs.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
This proposal introduces two primary processing models. The first involves &#039;&#039;synchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The application generates new frames using the RenderingContext obtained from the OffscreenCanvas. When the application is finished rendering each new frame, it calls transferToImageBitmap to &amp;quot;tear off&amp;quot; the most recently rendered image from the OffscreenCanvas -- like a Post-It note. The resulting ImageBitmap can then be used in any API receiving that data type; notably, it can be displayed in a second canvas without introducing a copy. An ImageBitmapRenderingContext is obtained from the second canvas by calling &amp;lt;code&amp;gt;getContext(&#039;bitmaprenderer&#039;)&amp;lt;/code&amp;gt;. Each frame is displayed in the second canvas using the &amp;lt;code&amp;gt;transferImageBitmap&amp;lt;/code&amp;gt; method on this rendering context. Note that the threads producing and consuming the frames may be the same, or they may be different. Note also that a single OffscreenCanvas may transfer frames into an arbitrary number of other ImageBitmapRenderingContexts.&lt;br /&gt;
&lt;br /&gt;
The second processing model involves &#039;&#039;asynchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The main thread instantiates an HTMLCanvasElement and calls &amp;lt;code&amp;gt;transferControlToOffscreen&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;getContext&amp;lt;/code&amp;gt; is used to obtain a rendering context for that OffscreenCanvas, either on the main thread or on a worker. The application calls &amp;lt;code&amp;gt;commit&amp;lt;/code&amp;gt; against that rendering context in order to push frames to the original HTMLCanvasElement. In this rendering model, it is not defined when those frames become visible in the original canvas element. However, if both of the following conditions hold:&lt;br /&gt;
&lt;br /&gt;
* A worker thread is calling commit(), and&lt;br /&gt;
* The worker is calling commit() repeatedly against exactly one rendering context&lt;br /&gt;
&lt;br /&gt;
then it is required that the user agent synchronize the calls to commit() to the vsync interval. Calls to commit() conceptually enqueue frames for display, and after an implementation-defined number of frames have been enqueued, further calls to commit() will block until earlier frames have been presented to the screen. (This requirement allows porting of applications which drive their own main loop rather than using an event-driven loop.)&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
* A known good way to drive an animation loop from a worker is needed. requestAnimationFrame or a similar API needs to be defined on worker threads.&lt;br /&gt;
* Some parts of the CanvasRenderingContext2D interface shall not be supported because OffscreenCanvas objects have no relation to the DOM or to a frame: hit regions, scrollPathIntoView, drawFocusIfNeeded.&lt;br /&gt;
* Due to technical challenges, some implementors [https://bugzilla.mozilla.org/show_bug.cgi?id=801176#c29 (Google and Mozilla)] have expressed a desire to ship without initially supporting text rendering in 2D contexts. Open Issue: Should text support be formally excluded from the specification until implementors are prepared to ship it (or until a more feasible API is designed)?&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
&lt;br /&gt;
This proposal has been vetted by developers of Apple&#039;s Safari, Google&#039;s Chrome, Microsoft&#039;s Internet Explorer, and Mozilla&#039;s Firefox browsers. All vendors agreed on the basic form of the API, so it is likely to be implemented widely and compatibly.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
&lt;br /&gt;
Web page authors have demanded increased parallelism support from the web platform for years. If support for multithreaded rendering is added, it is likely to be adopted rapidly.&lt;br /&gt;
&lt;br /&gt;
==== Example code ====&lt;br /&gt;
&lt;br /&gt;
Jeff Gilbert from Mozilla has crafted some example code utilizing this API:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jdashg/snippets/tree/master/webgl-from-worker Rendering WebGL from a worker using the commit() API]&lt;br /&gt;
* [https://github.com/jdashg/snippets/blob/master/webgl-one-to-many/index.html Using one WebGL context to render to many Canvas elements]&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10090</id>
		<title>OffscreenCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10090"/>
		<updated>2016-08-25T15:38:51Z</updated>

		<summary type="html">&lt;p&gt;Junov: /* Web IDL */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;Provides more control over how canvases are rendered. This is a follow-on to the [[WorkerCanvas]] proposal and will be merged once agreement is reached.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
Feedback from web application authors using canvases has shown the need for the following controls:&lt;br /&gt;
&lt;br /&gt;
* (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.&lt;br /&gt;
* (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.&lt;br /&gt;
* (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.&lt;br /&gt;
* (From adopters of the Push API): need to be able to dynamically create images to use as notification icons, such as compositing avatars or adding an unread count.&lt;br /&gt;
* (From the Google Docs team): need to be able to lay out and render text from a worker using CanvasRenderingContext2D and display those results on the main thread.&lt;br /&gt;
* (From the Google Slides team): want to lay out and render the slide thumbnails from a worker. During initial load and heavy collaboration these update frequently, and currently cause slowdowns on the main thread.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
&lt;br /&gt;
* [https://html.spec.whatwg.org/multipage/scripting.html#proxying-canvases-to-workers CanvasProxy] does not provide sufficient control to allow synchronization between workers&#039; rendering and DOM updates on the main thread. Keeping this rendering in sync is a requirement from Google&#039;s Maps team.&lt;br /&gt;
* [[CanvasInWorkers]] does not allow a worker to render directly into a canvas on the main thread without running code on the main thread. Allowing completely unsynchronized rendering is a requirement from Mozilla and users of Emscripten such as Epic Games and Unity, in which the desire is to execute all of the game&#039;s rendering on a worker thread.&lt;br /&gt;
* [[WorkerCanvas]] mostly addresses these two use cases, but some implementers objected to the mechanism for displaying the rendering results in image elements. The specific objection was that image elements already have complex internal state (for example, the management of the image&#039;s &amp;quot;loaded&amp;quot; state), and this would make it more complex. It also did not precisely address the use case of producing new frames both on the main thread and in workers.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
[https://blog.mozilla.org/research/2014/07/22/webgl-in-web-workers-today-and-faster-than-expected/ WebGL in Web Workers] details some work attempted in the Emscripten toolchain to address the lack of WebGL in workers. Due to the high volume of calls and large amount of data that is transferred to the graphics card in a typical high-end WebGL application, this approach is not sustainable. It&#039;s necessary for workers to be able to call the WebGL API directly, and present those results to the screen in a manner that does not introduce any copies of the rendering results.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Making canvas rendering contexts available to workers will increase parallelism in web applications, leading to increased performance on multi-core systems.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
See the above-mentioned use cases:&lt;br /&gt;
&lt;br /&gt;
* Google&#039;s Maps team&lt;br /&gt;
* Emscripten users such as Epic Games and Unity&lt;br /&gt;
* Many others&lt;br /&gt;
&lt;br /&gt;
== Web IDL ==&lt;br /&gt;
&lt;br /&gt;
 typedef (OffscreenCanvasRenderingContext2D or&lt;br /&gt;
          WebGLRenderingContext or&lt;br /&gt;
          WebGL2RenderingContext) OffscreenRenderingContext;&lt;br /&gt;
 &lt;br /&gt;
 [Constructor(unsigned long width, unsigned long height),&lt;br /&gt;
  Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvas {&lt;br /&gt;
   attribute unsigned long width;&lt;br /&gt;
   attribute unsigned long height;&lt;br /&gt;
   OffscreenRenderingContext? getContext(DOMString contextId, any... arguments); &lt;br /&gt;
 &lt;br /&gt;
   // OffscreenCanvas, like HTMLCanvasElement, maintains an origin-clean flag.&lt;br /&gt;
   // ImageBitmaps created by calling this method also have an&lt;br /&gt;
   // origin-clean flag which is set to the value of the OffscreenCanvas&#039;s&lt;br /&gt;
   // flag at the time of their construction. Uses of the ImageBitmap&lt;br /&gt;
   // in other APIs, such as CanvasRenderingContext2D or&lt;br /&gt;
   // WebGLRenderingContext, propagate this flag like other&lt;br /&gt;
   // CanvasImageSource types do, such as HTMLImageElement.&lt;br /&gt;
   ImageBitmap transferToImageBitmap();&lt;br /&gt;
 &lt;br /&gt;
   // Throws a SecurityError if the OffscreenCanvas&#039;s origin-clean flag&lt;br /&gt;
   // is set to false.&lt;br /&gt;
   Promise&amp;lt;Blob&amp;gt; getAsBlob(optional ImageEncodeOptions options);   &lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 dictionary ImageEncodeOptions {&lt;br /&gt;
   DOMString type = &amp;quot;image/png&amp;quot;;&lt;br /&gt;
   unrestricted double quality = 1.0; // Defaults to 1.0 if the value is outside the range [0, 1]&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 OffscreenCanvas implements Transferable;&lt;br /&gt;
 &lt;br /&gt;
 partial interface HTMLCanvasElement {&lt;br /&gt;
   OffscreenCanvas transferControlToOffscreen();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 typedef (HTMLOrSVGImageElement or&lt;br /&gt;
          HTMLVideoElement or&lt;br /&gt;
          HTMLCanvasElement or&lt;br /&gt;
          ImageBitmap or&lt;br /&gt;
          OffscreenCanvas) CanvasImageSource;&lt;br /&gt;
 &lt;br /&gt;
 typedef (CanvasImageSource or&lt;br /&gt;
          Blob or&lt;br /&gt;
          ImageData or&lt;br /&gt;
          OffscreenCanvas) ImageBitmapSource;&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvasRenderingContext2D {&lt;br /&gt;
   // commit() can only be used when an HTMLCanvasElement has transferred control&lt;br /&gt;
   // to an OffscreenCanvas; otherwise an exception will be raised.&lt;br /&gt;
   // commit() can be invoked on the main thread or on a worker thread. When it&lt;br /&gt;
   // is invoked, the image drawn to the OffscreenCanvasRenderingContext2D is&lt;br /&gt;
   // expected to be displayed in the associated HTMLCanvasElement.&lt;br /&gt;
   void commit();&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute OffscreenCanvas canvas;&lt;br /&gt;
 };&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasState;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTransform;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasCompositing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageSmoothing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFillStrokeStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasShadowStyles;&lt;br /&gt;
 // Reference filters (e.g. &#039;url()&#039;) are not expected to work in Workers&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFilters;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasRect;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawPath;&lt;br /&gt;
 // Text support in workers poses very difficult technical challenges.&lt;br /&gt;
 // Open issue: should we forgo text support in OffscreenCanvas v1?&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasText;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawImage;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageData;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPathDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTextDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPath; &lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasPattern {&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasGradient {&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface WebGLRenderingContextBase {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;&lt;br /&gt;
 &lt;br /&gt;
   // If this context is associated with an OffscreenCanvas that was&lt;br /&gt;
   // created by HTMLCanvasElement&#039;s transferControlToOffscreen method,&lt;br /&gt;
   // causes this context&#039;s current rendering results to be pushed&lt;br /&gt;
   // to that canvas element. This has the same effect as returning&lt;br /&gt;
   // control to the main loop in a single-threaded application. Otherwise,&lt;br /&gt;
   // this call has no effect.&lt;br /&gt;
   void commit();&lt;br /&gt;
 };&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== This Solution ===&lt;br /&gt;
&lt;br /&gt;
This proposed API can be used in several ways to satisfy the use cases described above:&lt;br /&gt;
&lt;br /&gt;
* It supports zero-copy transfer of canvases&#039; rendering results between threads, for example from a worker to the main thread. In this model, the main thread controls when to display new frames produced by the worker, so synchronization with other DOM updates is achieved.&lt;br /&gt;
&lt;br /&gt;
* It supports fully asynchronous rendering by a worker into a canvas displayed on the main thread. This satisfies certain Emscripten developers&#039; full-screen use cases.&lt;br /&gt;
&lt;br /&gt;
* It supports using a single WebGLRenderingContext or CanvasRenderingContext2D to efficiently render into multiple regions on the web page.&lt;br /&gt;
&lt;br /&gt;
* It introduces ImageBitmapRenderingContext, a new canvas context type whose sole purpose is to efficiently display ImageBitmaps. This supersedes the [[WorkerCanvas]] proposal&#039;s use of HTMLImageElement for this purpose.&lt;br /&gt;
&lt;br /&gt;
* It supports asynchronous encoding of OffscreenCanvases&#039; rendering results into Blobs which can be consumed by various other web platform APIs.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
This proposal introduces two primary processing models. The first involves &#039;&#039;synchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The application generates new frames using the RenderingContext obtained from the OffscreenCanvas. When the application is finished rendering each new frame, it calls transferToImageBitmap to &amp;quot;tear off&amp;quot; the most recently rendered image from the OffscreenCanvas -- like a Post-It note. The resulting ImageBitmap can then be used in any API receiving that data type; notably, it can be displayed in a second canvas without introducing a copy. An ImageBitmapRenderingContext is obtained from the second canvas by calling &amp;lt;code&amp;gt;getContext(&#039;bitmaprenderer&#039;)&amp;lt;/code&amp;gt;. Each frame is displayed in the second canvas using the &amp;lt;code&amp;gt;transferImageBitmap&amp;lt;/code&amp;gt; method on this rendering context. Note that the threads producing and consuming the frames may be the same, or they may be different. Note also that a single OffscreenCanvas may transfer frames into an arbitrary number of other ImageBitmapRenderingContexts.&lt;br /&gt;
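A minimal sketch of this synchronous model, using the method names as given in this proposal (transferToImageBitmap, getContext('bitmaprenderer'), transferImageBitmap); the canvas size and element id are illustrative assumptions, not part of the proposal:

```javascript
// Producer side: render a frame into an OffscreenCanvas, then "tear off"
// the result as an ImageBitmap (zero-copy under this proposal).
function produceFrame(offscreen) {
  const ctx = offscreen.getContext('2d');
  ctx.fillStyle = 'green'; // illustrative drawing
  ctx.fillRect(0, 0, offscreen.width, offscreen.height);
  return offscreen.transferToImageBitmap();
}

// Consumer side: display an ImageBitmap in a canvas without copying it.
function displayFrame(canvasElement, bitmap) {
  const ctx = canvasElement.getContext('bitmaprenderer');
  ctx.transferImageBitmap(bitmap); // method name per this proposal
}

// Guarded so the sketch is inert outside a browser environment.
if (typeof OffscreenCanvas !== 'undefined') {
  if (typeof document !== 'undefined') {
    const bitmap = produceFrame(new OffscreenCanvas(256, 256));
    displayFrame(document.getElementById('view'), bitmap);
  }
}
```

Note that the producer and consumer functions can run on different threads: only the ImageBitmap crosses the boundary.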
&lt;br /&gt;
The second processing model involves &#039;&#039;asynchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The main thread instantiates an HTMLCanvasElement and calls &amp;lt;code&amp;gt;transferControlToOffscreen&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;getContext&amp;lt;/code&amp;gt; is used to obtain a rendering context for that OffscreenCanvas, either on the main thread or on a worker. The application calls &amp;lt;code&amp;gt;commit&amp;lt;/code&amp;gt; against that rendering context to push frames to the original HTMLCanvasElement. In this rendering model, it is not defined when those frames become visible in the original canvas element. However, if both of the following conditions apply:&lt;br /&gt;
&lt;br /&gt;
* It is a worker thread which is calling commit(), and&lt;br /&gt;
* The worker is calling commit() repeatedly against exactly one rendering context&lt;br /&gt;
&lt;br /&gt;
then it is required that the user agent synchronize the calls to commit() to the vsync interval. Calls to commit() conceptually enqueue frames for display, and after an implementation-defined number of frames have been enqueued, further calls to commit() will block until earlier frames have been presented to the screen. (This requirement allows porting of applications which drive their own main loop rather than using an event-driven loop.)&lt;br /&gt;
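A worker-side sketch of this commit()-paced loop, under the proposal's semantics that repeated commit() calls from a worker are throttled to vsync and eventually block; the message shape (e.data.canvas) and the clear-color rendering are illustrative assumptions:

```javascript
// Worker-side render loop paced by commit(). Under this proposal the UA
// synchronizes repeated worker commit() calls to the vsync interval, so
// no requestAnimationFrame is needed to pace the loop.
function makeRenderLoop(gl) {
  let frame = 0;
  return function renderOnce() {
    frame += 1;
    gl.clearColor(0, 0, 0, 1); // illustrative rendering
    gl.clear(gl.COLOR_BUFFER_BIT);
    gl.commit(); // enqueue the frame; blocks once the queue is full
    return frame;
  };
}

// Guarded wiring: assumes the main thread posted a transferred
// OffscreenCanvas as e.data.canvas (hypothetical message shape).
if (typeof self !== 'undefined') {
  if (typeof OffscreenCanvas !== 'undefined') {
    self.onmessage = function (e) {
      const renderOnce = makeRenderLoop(e.data.canvas.getContext('webgl'));
      for (;;) renderOnce(); // application-driven main loop, paced by commit()
    };
  }
}
```

The unbounded for-loop is deliberate: it models the ported applications mentioned above that drive their own main loop, relying on the blocking behavior of commit() for pacing.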
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
* A reliable way to drive an animation loop from a worker is needed: requestAnimationFrame or a similar API must be defined on worker threads.&lt;br /&gt;
* Some parts of the CanvasRenderingContext2D interface will not be supported because OffscreenCanvas objects have no relation to the DOM or frame: HitRegions, scrollPathIntoView, drawFocusIfNeeded.&lt;br /&gt;
* Due to technical challenges, some implementers [https://bugzilla.mozilla.org/show_bug.cgi?id=801176#c29 (Google and Mozilla)] have expressed a desire to ship without initially supporting text rendering in 2D contexts. Open issue: should text support be formally excluded from the specification until implementers are prepared to ship it (or until a more feasible API is designed)?&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
&lt;br /&gt;
This proposal has been vetted by developers of Apple&#039;s Safari, Google&#039;s Chrome, Microsoft&#039;s Internet Explorer, and Mozilla&#039;s Firefox browsers. All vendors agreed upon the basic form of the API, so it is likely to be implemented widely and compatibly.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
&lt;br /&gt;
Web page authors have demanded increased parallelism support from the web platform for years. If support for multithreaded rendering is added, it is likely to be adopted rapidly.&lt;br /&gt;
&lt;br /&gt;
==== Example code ====&lt;br /&gt;
&lt;br /&gt;
Jeff Gilbert from Mozilla has written example code using this API:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jdashg/snippets/tree/master/webgl-from-worker Rendering WebGL from a worker using the commit() API]&lt;br /&gt;
* [https://github.com/jdashg/snippets/blob/master/webgl-one-to-many/index.html Using one WebGL context to render to many Canvas elements]&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10089</id>
		<title>OffscreenCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10089"/>
		<updated>2016-08-25T15:38:16Z</updated>

		<summary type="html">&lt;p&gt;Junov: renamed toBlob -&amp;gt; getAsBlob. Make it use a dictionary so we can easily add more options in the future, such as colorApace, bitDepth, alpha, ...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;Provides more control over how canvases are rendered. This is a follow-on to the [[WorkerCanvas]] proposal and will be merged once agreement is reached.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
Feedback from web application authors using canvases has shown the need for the following controls:&lt;br /&gt;
&lt;br /&gt;
* (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.&lt;br /&gt;
* (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.&lt;br /&gt;
* (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.&lt;br /&gt;
* (From adopters of the Push API): need to be able to dynamically create images to use as notification icons, such as compositing avatars or adding an unread count.&lt;br /&gt;
* (From the Google Docs team): need to be able to lay out and render text from a worker using CanvasRenderingContext2D and display those results on the main thread.&lt;br /&gt;
* (From the Google Slides team): want to lay out and render the slide thumbnails from a worker. During initial load and heavy collaboration these update frequently, and currently cause slowdowns on the main thread.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
&lt;br /&gt;
* [https://html.spec.whatwg.org/multipage/scripting.html#proxying-canvases-to-workers CanvasProxy] does not provide sufficient control to allow synchronization between workers&#039; rendering and DOM updates on the main thread. Keeping this rendering in sync is a requirement from Google&#039;s Maps team.&lt;br /&gt;
* [[CanvasInWorkers]] does not allow a worker to render directly into a canvas on the main thread without running code on the main thread. Allowing completely unsynchronized rendering is a requirement from Mozilla and users of Emscripten such as Epic Games and Unity, in which the desire is to execute all of the game&#039;s rendering on a worker thread.&lt;br /&gt;
* [[WorkerCanvas]] mostly addresses these two use cases, but some implementers objected to the mechanism for displaying the rendering results in image elements. The specific objection was that image elements already have complex internal state (for example, the management of the image&#039;s &amp;quot;loaded&amp;quot; state), and this would make it more complex. It also did not precisely address the use case of producing new frames both on the main thread and in workers.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
[https://blog.mozilla.org/research/2014/07/22/webgl-in-web-workers-today-and-faster-than-expected/ WebGL in Web Workers] details some work attempted in the Emscripten toolchain to address the lack of WebGL in workers. Due to the high volume of calls and large amount of data that is transferred to the graphics card in a typical high-end WebGL application, this approach is not sustainable. It&#039;s necessary for workers to be able to call the WebGL API directly, and present those results to the screen in a manner that does not introduce any copies of the rendering results.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Making canvas rendering contexts available to workers will increase parallelism in web applications, leading to increased performance on multi-core systems.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
See the aforementioned use cases:&lt;br /&gt;
&lt;br /&gt;
* Google&#039;s Maps team&lt;br /&gt;
* Emscripten users such as Epic Games and Unity&lt;br /&gt;
* Many others&lt;br /&gt;
&lt;br /&gt;
== Web IDL ==&lt;br /&gt;
&lt;br /&gt;
 typedef (OffscreenCanvasRenderingContext2D or&lt;br /&gt;
          WebGLRenderingContext or&lt;br /&gt;
          WebGL2RenderingContext) OffscreenRenderingContext;&lt;br /&gt;
 &lt;br /&gt;
 [Constructor(unsigned long width, unsigned long height),&lt;br /&gt;
  Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvas {&lt;br /&gt;
   attribute unsigned long width;&lt;br /&gt;
   attribute unsigned long height;&lt;br /&gt;
   OffscreenRenderingContext? getContext(DOMString contextId, any... arguments); &lt;br /&gt;
 &lt;br /&gt;
   // OffscreenCanvas, like HTMLCanvasElement, maintains an origin-clean flag.&lt;br /&gt;
   // ImageBitmaps created by calling this method also have an&lt;br /&gt;
   // origin-clean flag which is set to the value of the OffscreenCanvas&#039;s&lt;br /&gt;
   // flag at the time of their construction. Uses of the ImageBitmap&lt;br /&gt;
   // in other APIs, such as CanvasRenderingContext2D or&lt;br /&gt;
   // WebGLRenderingContext, propagate this flag like other&lt;br /&gt;
   // CanvasImageSource types do, such as HTMLImageElement.&lt;br /&gt;
   ImageBitmap transferToImageBitmap();&lt;br /&gt;
 &lt;br /&gt;
   // Throws a SecurityError if the OffscreenCanvas&#039;s origin-clean flag&lt;br /&gt;
   // is set to false.&lt;br /&gt;
   Promise&amp;lt;Blob&amp;gt; getAsBlob(optional ImageEncodeOptions options);   &lt;br /&gt;
 };&lt;br /&gt;
&lt;br /&gt;
 dictionary ImageEncodeOptions {&lt;br /&gt;
   DOMString type = &amp;quot;image/png&amp;quot;;&lt;br /&gt;
   unrestricted double quality = 1.0; // Defaults to 1.0 if the value is outside the [0, 1] range&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 OffscreenCanvas implements Transferable;&lt;br /&gt;
 &lt;br /&gt;
 partial interface HTMLCanvasElement {&lt;br /&gt;
   OffscreenCanvas transferControlToOffscreen();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 typedef (HTMLOrSVGImageElement or&lt;br /&gt;
          HTMLVideoElement or&lt;br /&gt;
          HTMLCanvasElement or&lt;br /&gt;
          ImageBitmap or&lt;br /&gt;
          OffscreenCanvas) CanvasImageSource;&lt;br /&gt;
 &lt;br /&gt;
 typedef (CanvasImageSource or&lt;br /&gt;
          Blob or&lt;br /&gt;
          ImageData or&lt;br /&gt;
          OffscreenCanvas) ImageBitmapSource;&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvasRenderingContext2D {&lt;br /&gt;
   // commit() may only be called when an HTMLCanvasElement has transferred&lt;br /&gt;
   // control to the OffscreenCanvas; otherwise an exception is thrown.&lt;br /&gt;
   // commit() may be invoked on the main thread or a worker thread. When it is&lt;br /&gt;
   // invoked, the image drawn to the OffscreenCanvasRenderingContext2D is&lt;br /&gt;
   // expected to be displayed in the associated HTMLCanvasElement.&lt;br /&gt;
   void commit();&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute OffscreenCanvas canvas;&lt;br /&gt;
 };&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasState;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTransform;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasCompositing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageSmoothing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFillStrokeStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasShadowStyles;&lt;br /&gt;
 // Reference filters (e.g. &#039;url()&#039;) are not expected to work in Workers&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFilters;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasRect;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawPath;&lt;br /&gt;
 // Text support in workers poses very difficult technical challenges.&lt;br /&gt;
 // Open issue: should we forgo text support in OffscreenCanvas v1?&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasText;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawImage;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageData;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPathDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTextDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPath; &lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasPattern {&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasGradient {&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface WebGLRenderingContextBase {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;&lt;br /&gt;
 &lt;br /&gt;
   // If this context is associated with an OffscreenCanvas that was&lt;br /&gt;
   // created by HTMLCanvasElement&#039;s transferControlToOffscreen method,&lt;br /&gt;
   // causes this context&#039;s current rendering results to be pushed&lt;br /&gt;
   // to that canvas element. This has the same effect as returning&lt;br /&gt;
   // control to the main loop in a single-threaded application. Otherwise,&lt;br /&gt;
   // this call has no effect.&lt;br /&gt;
   void commit();&lt;br /&gt;
 };&lt;br /&gt;
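A hedged usage sketch against the getAsBlob entry point in the IDL above. getAsBlob and ImageEncodeOptions are the names in this revision of the proposal (the shipped API later used a different name, convertToBlob); the JPEG type and 0.8 quality are illustrative assumptions:

```javascript
// Builds an ImageEncodeOptions-shaped object, mirroring the defaults in
// the dictionary above (type 'image/png', quality 1.0).
function encodeOptions(type, quality) {
  return {
    type: type === undefined ? 'image/png' : type,
    quality: quality === undefined ? 1.0 : quality
  };
}

// Guarded usage: inert outside a browser/worker environment.
if (typeof OffscreenCanvas !== 'undefined') {
  const oc = new OffscreenCanvas(64, 64);
  oc.getContext('2d').fillRect(0, 0, 64, 64);
  oc.getAsBlob(encodeOptions('image/jpeg', 0.8)).then(function (blob) {
    // The encoded Blob can be handed to other web platform APIs,
    // e.g. as a dynamically generated notification icon.
  });
}
```

Using a dictionary rather than positional arguments is what lets later revisions add options such as colorSpace or bitDepth without changing the method signature.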
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== This Solution ===&lt;br /&gt;
&lt;br /&gt;
This proposed API can be used in several ways to satisfy the use cases described above:&lt;br /&gt;
&lt;br /&gt;
* It supports zero-copy transfer of canvases&#039; rendering results between threads, for example from a worker to the main thread. In this model, the main thread controls when to display new frames produced by the worker, so synchronization with other DOM updates is achieved.&lt;br /&gt;
&lt;br /&gt;
* It supports fully asynchronous rendering by a worker into a canvas displayed on the main thread. This satisfies certain Emscripten developers&#039; full-screen use cases.&lt;br /&gt;
&lt;br /&gt;
* It supports using a single WebGLRenderingContext or CanvasRenderingContext2D to efficiently render into multiple regions on the web page.&lt;br /&gt;
&lt;br /&gt;
* It introduces ImageBitmapRenderingContext, a new canvas context type whose sole purpose is to efficiently display ImageBitmaps. This supersedes the [[WorkerCanvas]] proposal&#039;s use of HTMLImageElement for this purpose.&lt;br /&gt;
&lt;br /&gt;
* It supports asynchronous encoding of OffscreenCanvases&#039; rendering results into Blobs which can be consumed by various other web platform APIs.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
This proposal introduces two primary processing models. The first involves &#039;&#039;synchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The application generates new frames using the RenderingContext obtained from the OffscreenCanvas. When the application is finished rendering each new frame, it calls transferToImageBitmap to &amp;quot;tear off&amp;quot; the most recently rendered image from the OffscreenCanvas -- like a Post-It note. The resulting ImageBitmap can then be used in any API receiving that data type; notably, it can be displayed in a second canvas without introducing a copy. An ImageBitmapRenderingContext is obtained from the second canvas by calling &amp;lt;code&amp;gt;getContext(&#039;bitmaprenderer&#039;)&amp;lt;/code&amp;gt;. Each frame is displayed in the second canvas using the &amp;lt;code&amp;gt;transferImageBitmap&amp;lt;/code&amp;gt; method on this rendering context. Note that the threads producing and consuming the frames may be the same, or they may be different. Note also that a single OffscreenCanvas may transfer frames into an arbitrary number of other ImageBitmapRenderingContexts.&lt;br /&gt;
&lt;br /&gt;
The second processing model involves &#039;&#039;asynchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The main thread instantiates an HTMLCanvasElement and calls &amp;lt;code&amp;gt;transferControlToOffscreen&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;getContext&amp;lt;/code&amp;gt; is used to obtain a rendering context for that OffscreenCanvas, either on the main thread or on a worker. The application calls &amp;lt;code&amp;gt;commit&amp;lt;/code&amp;gt; against that rendering context to push frames to the original HTMLCanvasElement. In this rendering model, it is not defined when those frames become visible in the original canvas element. However, if both of the following conditions apply:&lt;br /&gt;
&lt;br /&gt;
* It is a worker thread which is calling commit(), and&lt;br /&gt;
* The worker is calling commit() repeatedly against exactly one rendering context&lt;br /&gt;
&lt;br /&gt;
then it is required that the user agent synchronize the calls to commit() to the vsync interval. Calls to commit() conceptually enqueue frames for display, and after an implementation-defined number of frames have been enqueued, further calls to commit() will block until earlier frames have been presented to the screen. (This requirement allows porting of applications which drive their own main loop rather than using an event-driven loop.)&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
* A reliable way to drive an animation loop from a worker is needed: requestAnimationFrame or a similar API must be defined on worker threads.&lt;br /&gt;
* Some parts of the CanvasRenderingContext2D interface will not be supported because OffscreenCanvas objects have no relation to the DOM or frame: HitRegions, scrollPathIntoView, drawFocusIfNeeded.&lt;br /&gt;
* Due to technical challenges, some implementers [https://bugzilla.mozilla.org/show_bug.cgi?id=801176#c29 (Google and Mozilla)] have expressed a desire to ship without initially supporting text rendering in 2D contexts. Open issue: should text support be formally excluded from the specification until implementers are prepared to ship it (or until a more feasible API is designed)?&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
&lt;br /&gt;
This proposal has been vetted by developers of Apple&#039;s Safari, Google&#039;s Chrome, Microsoft&#039;s Internet Explorer, and Mozilla&#039;s Firefox browsers. All vendors agreed upon the basic form of the API, so it is likely to be implemented widely and compatibly.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
&lt;br /&gt;
Web page authors have demanded increased parallelism support from the web platform for years. If support for multithreaded rendering is added, it is likely to be adopted rapidly.&lt;br /&gt;
&lt;br /&gt;
==== Example code ====&lt;br /&gt;
&lt;br /&gt;
Jeff Gilbert from Mozilla has written example code using this API:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jdashg/snippets/tree/master/webgl-from-worker Rendering WebGL from a worker using the commit() API]&lt;br /&gt;
* [https://github.com/jdashg/snippets/blob/master/webgl-one-to-many/index.html Using one WebGL context to render to many Canvas elements]&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10088</id>
		<title>OffscreenCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10088"/>
		<updated>2016-08-18T21:24:12Z</updated>

		<summary type="html">&lt;p&gt;Junov: Cleaning-out parts of the proposal that have already been added to the spec.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;Provides more control over how canvases are rendered. This is a follow-on to the [[WorkerCanvas]] proposal and will be merged once agreement is reached.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
Feedback from web application authors using canvases has shown the need for the following controls:&lt;br /&gt;
&lt;br /&gt;
* (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.&lt;br /&gt;
* (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.&lt;br /&gt;
* (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.&lt;br /&gt;
* (From adopters of the Push API): need to be able to dynamically create images to use as notification icons, such as compositing avatars or adding an unread count.&lt;br /&gt;
* (From the Google Docs team): need to be able to lay out and render text from a worker using CanvasRenderingContext2D and display those results on the main thread.&lt;br /&gt;
* (From the Google Slides team): want to lay out and render the slide thumbnails from a worker. During initial load and heavy collaboration these update frequently, and currently cause slowdowns on the main thread.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
&lt;br /&gt;
* [https://html.spec.whatwg.org/multipage/scripting.html#proxying-canvases-to-workers CanvasProxy] does not provide sufficient control to allow synchronization between workers&#039; rendering and DOM updates on the main thread. Keeping this rendering in sync is a requirement from Google&#039;s Maps team.&lt;br /&gt;
* [[CanvasInWorkers]] does not allow a worker to render directly into a canvas on the main thread without running code on the main thread. Allowing completely unsynchronized rendering is a requirement from Mozilla and users of Emscripten such as Epic Games and Unity, in which the desire is to execute all of the game&#039;s rendering on a worker thread.&lt;br /&gt;
* [[WorkerCanvas]] mostly addresses these two use cases, but some implementers objected to the mechanism for displaying the rendering results in image elements. The specific objection was that image elements already have complex internal state (for example, the management of the image&#039;s &amp;quot;loaded&amp;quot; state), and this would make it more complex. It also did not precisely address the use case of producing new frames both on the main thread and in workers.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
[https://blog.mozilla.org/research/2014/07/22/webgl-in-web-workers-today-and-faster-than-expected/ WebGL in Web Workers] details some work attempted in the Emscripten toolchain to address the lack of WebGL in workers. Due to the high volume of calls and large amount of data that is transferred to the graphics card in a typical high-end WebGL application, this approach is not sustainable. It&#039;s necessary for workers to be able to call the WebGL API directly, and present those results to the screen in a manner that does not introduce any copies of the rendering results.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Making canvas rendering contexts available to workers will increase parallelism in web applications, leading to increased performance on multi-core systems.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
See the aforementioned use cases:&lt;br /&gt;
&lt;br /&gt;
* Google&#039;s Maps team&lt;br /&gt;
* Emscripten users such as Epic Games and Unity&lt;br /&gt;
* Many others&lt;br /&gt;
&lt;br /&gt;
== Web IDL ==&lt;br /&gt;
&lt;br /&gt;
 typedef (OffscreenCanvasRenderingContext2D or&lt;br /&gt;
          WebGLRenderingContext or&lt;br /&gt;
          WebGL2RenderingContext) OffscreenRenderingContext;&lt;br /&gt;
 &lt;br /&gt;
 [Constructor(unsigned long width, unsigned long height),&lt;br /&gt;
  Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvas {&lt;br /&gt;
   attribute unsigned long width;&lt;br /&gt;
   attribute unsigned long height;&lt;br /&gt;
   OffscreenRenderingContext? getContext(DOMString contextId, any... arguments); &lt;br /&gt;
 &lt;br /&gt;
   // OffscreenCanvas, like HTMLCanvasElement, maintains an origin-clean flag.&lt;br /&gt;
   // ImageBitmaps created by calling this method also have an&lt;br /&gt;
   // origin-clean flag which is set to the value of the OffscreenCanvas&#039;s&lt;br /&gt;
   // flag at the time of their construction. Uses of the ImageBitmap&lt;br /&gt;
   // in other APIs, such as CanvasRenderingContext2D or&lt;br /&gt;
   // WebGLRenderingContext, propagate this flag like other&lt;br /&gt;
   // CanvasImageSource types do, such as HTMLImageElement.&lt;br /&gt;
   ImageBitmap transferToImageBitmap();&lt;br /&gt;
 &lt;br /&gt;
   // Throws a SecurityError if the OffscreenCanvas&#039;s origin-clean flag&lt;br /&gt;
   // is set to false.&lt;br /&gt;
   Promise&amp;lt;Blob&amp;gt; toBlob(optional DOMString type, any... arguments);   &lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 OffscreenCanvas implements Transferable;&lt;br /&gt;
 &lt;br /&gt;
 partial interface HTMLCanvasElement {&lt;br /&gt;
   OffscreenCanvas transferControlToOffscreen();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 typedef (HTMLOrSVGImageElement or&lt;br /&gt;
          HTMLVideoElement or&lt;br /&gt;
          HTMLCanvasElement or&lt;br /&gt;
          ImageBitmap or&lt;br /&gt;
          OffscreenCanvas) CanvasImageSource;&lt;br /&gt;
 &lt;br /&gt;
 typedef (CanvasImageSource or&lt;br /&gt;
          Blob or&lt;br /&gt;
          ImageData or&lt;br /&gt;
          OffscreenCanvas) ImageBitmapSource;&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvasRenderingContext2D {&lt;br /&gt;
   // commit() may only be called when an HTMLCanvasElement has transferred&lt;br /&gt;
   // control to the OffscreenCanvas; otherwise an exception is thrown.&lt;br /&gt;
   // commit() may be invoked on the main thread or a worker thread. When it is&lt;br /&gt;
   // invoked, the image drawn to the OffscreenCanvasRenderingContext2D is&lt;br /&gt;
   // expected to be displayed in the associated HTMLCanvasElement.&lt;br /&gt;
   void commit();&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute OffscreenCanvas canvas;&lt;br /&gt;
 };&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasState;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTransform;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasCompositing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageSmoothing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFillStrokeStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasShadowStyles;&lt;br /&gt;
 // Reference filters (e.g. &#039;url()&#039;) are not expected to work in Workers&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFilters;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasRect;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawPath;&lt;br /&gt;
 // Text support in workers poses very difficult technical challenges.&lt;br /&gt;
 // Open issue: should we forgo text support in OffscreenCanvas v1?&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasText;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawImage;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageData;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPathDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTextDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPath; &lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasPattern {&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasGradient {&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface WebGLRenderingContextBase {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;&lt;br /&gt;
 &lt;br /&gt;
   // If this context is associated with an OffscreenCanvas that was&lt;br /&gt;
   // created by HTMLCanvasElement&#039;s transferControlToOffscreen method,&lt;br /&gt;
   // causes this context&#039;s current rendering results to be pushed&lt;br /&gt;
   // to that canvas element. This has the same effect as returning&lt;br /&gt;
   // control to the main loop in a single-threaded application. Otherwise,&lt;br /&gt;
   // this call has no effect.&lt;br /&gt;
   void commit();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== This Solution ===&lt;br /&gt;
&lt;br /&gt;
This proposed API can be used in several ways to satisfy the use cases described above:&lt;br /&gt;
&lt;br /&gt;
* It supports zero-copy transfer of canvases&#039; rendering results between threads, for example from a worker to the main thread. In this model, the main thread controls when to display new frames produced by the worker, so synchronization with other DOM updates is achieved.&lt;br /&gt;
&lt;br /&gt;
* It supports fully asynchronous rendering by a worker into a canvas displayed on the main thread. This satisfies certain Emscripten developers&#039; full-screen use cases.&lt;br /&gt;
&lt;br /&gt;
* It supports using a single WebGLRenderingContext or Canvas2DRenderingContext to efficiently render into multiple regions on the web page.&lt;br /&gt;
&lt;br /&gt;
* It introduces ImageBitmapRenderingContext, a new canvas context type whose sole purpose is to efficiently display ImageBitmaps. This supersedes the [[WorkerCanvas]] proposal&#039;s use of HTMLImageElement for this purpose.&lt;br /&gt;
&lt;br /&gt;
* It supports asynchronous encoding of OffscreenCanvases&#039; rendering results into Blobs which can be consumed by various other web platform APIs.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
This proposal introduces two primary processing models. The first involves &#039;&#039;synchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The application generates new frames using the RenderingContext obtained from the OffscreenCanvas. When the application is finished rendering each new frame, it calls transferToImageBitmap to &amp;quot;tear off&amp;quot; the most recently rendered image from the OffscreenCanvas -- like a Post-It note. The resulting ImageBitmap can then be used in any API receiving that data type; notably, it can be displayed in a second canvas without introducing a copy. An ImageBitmapRenderingContext is obtained from the second canvas by calling &amp;lt;code&amp;gt;getContext(&#039;bitmaprenderer&#039;)&amp;lt;/code&amp;gt;. Each frame is displayed in the second canvas using the &amp;lt;code&amp;gt;transferFromImageBitmap&amp;lt;/code&amp;gt; method on this rendering context. Note that the threads producing and consuming the frames may be the same, or they may be different. Note also that a single OffscreenCanvas may transfer frames into an arbitrary number of other ImageBitmapRenderingContexts.&lt;br /&gt;
&lt;br /&gt;
The second processing model involves &#039;&#039;asynchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The main thread instantiates an HTMLCanvasElement and calls &amp;lt;code&amp;gt;transferControlToOffscreen&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;getContext&amp;lt;/code&amp;gt; is used to obtain a rendering context for that OffscreenCanvas, either on the main thread, or on a worker. The application calls &amp;lt;code&amp;gt;commit&amp;lt;/code&amp;gt; against that rendering context in order to push frames to the original HTMLCanvasElement. In this rendering model, it is not defined when those frames become visible in the original canvas element. However, if the following conditions apply:&lt;br /&gt;
&lt;br /&gt;
* It is a worker thread which is calling commit(), and&lt;br /&gt;
* The worker is calling commit() repeatedly against exactly one rendering context&lt;br /&gt;
&lt;br /&gt;
then it is required that the user agent synchronize the calls to commit() to the vsync interval. Calls to commit() conceptually enqueue frames for display, and after an implementation-defined number of frames have been enqueued, further calls to commit() will block until earlier frames have been presented to the screen. (This requirement allows porting of applications which drive their own main loop rather than using an event-driven loop.)&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
* A known good way to drive an animation loop from a worker is needed. requestAnimationFrame or a similar API needs to be defined on worker threads.&lt;br /&gt;
* Some parts of the CanvasRenderingContext2D interface shall not be supported due to OffscreenCanvas objects having no relation to the DOM or frame: HitRegions, scrollPathIntoView, drawFocusIfNeeded.&lt;br /&gt;
* Due to technical challenges, some implementors [https://bugzilla.mozilla.org/show_bug.cgi?id=801176#c29 (Google and Mozilla)] have expressed a desire to ship without initially supporting text rendering in 2D contexts. Open Issue: Should text support be formally excluded from the specification until implementors are prepared to ship it (or until a more feasible API is designed)?&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
&lt;br /&gt;
This proposal has been vetted by developers of Apple&#039;s Safari, Google&#039;s Chrome, Microsoft&#039;s Internet Explorer, and Mozilla&#039;s Firefox browsers. All vendors agreed upon the basic form of the API, so it is likely it will be implemented widely and compatibly.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
&lt;br /&gt;
Web page authors have demanded increased parallelism support from the web platform for multiple years. If support for multithreaded rendering is added, it is likely it will be rapidly adopted.&lt;br /&gt;
&lt;br /&gt;
==== Example code ====&lt;br /&gt;
&lt;br /&gt;
Jeff Gilbert from Mozilla has crafted some example code utilizing this API:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jdashg/snippets/tree/master/webgl-from-worker Rendering WebGL from a worker using the commit() API]&lt;br /&gt;
* [https://github.com/jdashg/snippets/blob/master/webgl-one-to-many/index.html Using one WebGL context to render to many Canvas elements]&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10087</id>
		<title>OffscreenCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10087"/>
		<updated>2016-08-18T20:35:43Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;Provides more control over how canvases are rendered. This is a follow-on to the [[WorkerCanvas]] proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
Feedback from web application authors using canvases has shown the need for the following controls:&lt;br /&gt;
&lt;br /&gt;
* (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.&lt;br /&gt;
* (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.&lt;br /&gt;
* (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.&lt;br /&gt;
* (From adopters of the Push API): need to be able to dynamically create images to use as notification icons, such as compositing avatars, or adding an unread count.&lt;br /&gt;
* (From the Google Docs team): need to be able to lay out and render text from a worker using CanvasRenderingContext2D and display those results on the main thread.&lt;br /&gt;
* (From the Google Slides team): want to lay out and render the slide thumbnails from a worker. During initial load and heavy collaboration these update frequently, and currently cause slowdowns on the main thread.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
&lt;br /&gt;
* [https://html.spec.whatwg.org/multipage/scripting.html#proxying-canvases-to-workers CanvasProxy] does not provide sufficient control to allow synchronization between workers&#039; rendering and DOM updates on the main thread. Keeping this rendering in sync is a requirement from Google&#039;s Maps team.&lt;br /&gt;
* [[CanvasInWorkers]] does not allow a worker to render directly into a canvas on the main thread without running code on the main thread. Allowing completely unsynchronized rendering is a requirement from Mozilla and users of Emscripten such as Epic Games and Unity, in which the desire is to execute all of the game&#039;s rendering on a worker thread.&lt;br /&gt;
* [[WorkerCanvas]] mostly addresses these two use cases, but some implementers objected to the mechanism for displaying the rendering results in image elements. The specific objection was that image elements already have complex internal state (for example, the management of the image&#039;s &amp;quot;loaded&amp;quot; state), and this would make it more complex. It also did not precisely address the use case of producing new frames both on the main thread and in workers.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
[https://blog.mozilla.org/research/2014/07/22/webgl-in-web-workers-today-and-faster-than-expected/ WebGL in Web Workers] details some work attempted in the Emscripten toolchain to address the lack of WebGL in workers. Due to the high volume of calls and large amount of data that is transferred to the graphics card in a typical high-end WebGL application, this approach is not sustainable. It&#039;s necessary for workers to be able to call the WebGL API directly, and present those results to the screen in a manner that does not introduce any copies of the rendering results.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Making canvas rendering contexts available to workers will increase parallelism in web applications, leading to increased performance on multi-core systems.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
See the above-mentioned use cases:&lt;br /&gt;
&lt;br /&gt;
* Google&#039;s Maps team&lt;br /&gt;
* Emscripten users such as Epic Games and Unity&lt;br /&gt;
* Many others&lt;br /&gt;
&lt;br /&gt;
== Web IDL ==&lt;br /&gt;
&lt;br /&gt;
 typedef (OffscreenCanvasRenderingContext2D or&lt;br /&gt;
          WebGLRenderingContext or&lt;br /&gt;
          WebGL2RenderingContext) OffscreenRenderingContext;&lt;br /&gt;
 &lt;br /&gt;
 [Constructor(unsigned long width, unsigned long height),&lt;br /&gt;
  Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvas {&lt;br /&gt;
   attribute unsigned long width;&lt;br /&gt;
   attribute unsigned long height;&lt;br /&gt;
   OffscreenRenderingContext? getContext(DOMString contextId, any... arguments); &lt;br /&gt;
 &lt;br /&gt;
   // OffscreenCanvas, like HTMLCanvasElement, maintains an origin-clean flag.&lt;br /&gt;
   // ImageBitmaps created by calling this method also have an&lt;br /&gt;
   // origin-clean flag which is set to the value of the OffscreenCanvas&#039;s&lt;br /&gt;
   // flag at the time of their construction. Uses of the ImageBitmap&lt;br /&gt;
   // in other APIs, such as CanvasRenderingContext2D or&lt;br /&gt;
   // WebGLRenderingContext, propagate this flag like other&lt;br /&gt;
   // CanvasImageSource types do, such as HTMLImageElement.&lt;br /&gt;
   ImageBitmap transferToImageBitmap();&lt;br /&gt;
 &lt;br /&gt;
   // Throws a SecurityError if the OffscreenCanvas&#039;s origin-clean flag&lt;br /&gt;
   // is set to false.&lt;br /&gt;
   Promise&amp;lt;Blob&amp;gt; toBlob(optional DOMString type, any... arguments);   &lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 OffscreenCanvas implements Transferable;&lt;br /&gt;
 ImageBitmap implements Transferable;&lt;br /&gt;
 &lt;br /&gt;
 // It&#039;s crucial that there be a way to explicitly dispose of ImageBitmaps&lt;br /&gt;
 // since they refer to potentially large graphics resources. Some uses&lt;br /&gt;
 // of this API proposal will result in repeated allocations of ImageBitmaps,&lt;br /&gt;
 // and garbage collection will not reliably reclaim them quickly enough. &lt;br /&gt;
 // Here we reuse close(), which also exists on another Transferable type,&lt;br /&gt;
 // MessagePort. Potentially, all Transferable types should inherit from a&lt;br /&gt;
 // new interface type &amp;quot;Closeable&amp;quot;. &lt;br /&gt;
 partial interface ImageBitmap {&lt;br /&gt;
   // Dispose of all graphical resources associated with this ImageBitmap.&lt;br /&gt;
   void close(); &lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface HTMLCanvasElement {&lt;br /&gt;
   OffscreenCanvas transferControlToOffscreen();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 // Note that CanvasRenderingContext2D already has a commit() method&lt;br /&gt;
 // from the CanvasProxy spec which this proposal obsoletes.&lt;br /&gt;
 partial interface CanvasRenderingContext2D {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute HTMLCanvasElement canvas;&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 typedef (HTMLOrSVGImageElement or&lt;br /&gt;
          HTMLVideoElement or&lt;br /&gt;
          HTMLCanvasElement or&lt;br /&gt;
          ImageBitmap or&lt;br /&gt;
          OffscreenCanvas) CanvasImageSource;&lt;br /&gt;
 &lt;br /&gt;
 typedef (CanvasImageSource or&lt;br /&gt;
          Blob or&lt;br /&gt;
          ImageData or&lt;br /&gt;
          OffscreenCanvas) ImageBitmapSource;&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvasRenderingContext2D {&lt;br /&gt;
   // commit() can only be used when an HTMLCanvasElement has transferred control&lt;br /&gt;
   // to the OffscreenCanvas; otherwise an exception is thrown.&lt;br /&gt;
   // commit() can be invoked on the main thread or a worker thread. When it is&lt;br /&gt;
   // invoked, the image drawn to the OffscreenCanvasRenderingContext2D is&lt;br /&gt;
   // displayed in the associated HTMLCanvasElement.&lt;br /&gt;
   void commit();&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute OffscreenCanvas canvas;&lt;br /&gt;
 };&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasState;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTransform;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasCompositing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageSmoothing;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFillStrokeStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasShadowStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasFilters;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasRect;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawPath;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasText;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasDrawImage;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasImageData;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPathDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasTextDrawingStyles;&lt;br /&gt;
 OffscreenCanvasRenderingContext2D implements CanvasPath; &lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasPattern {&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 partial interface CanvasGradient {&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface WebGLRenderingContextBase {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;&lt;br /&gt;
 &lt;br /&gt;
   // If this context is associated with an OffscreenCanvas that was&lt;br /&gt;
   // created by HTMLCanvasElement&#039;s transferControlToOffscreen method,&lt;br /&gt;
   // causes this context&#039;s current rendering results to be pushed&lt;br /&gt;
   // to that canvas element. This has the same effect as returning&lt;br /&gt;
   // control to the main loop in a single-threaded application. Otherwise,&lt;br /&gt;
   // this call has no effect.&lt;br /&gt;
   void commit();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 // The new ImageBitmapRenderingContext is a canvas rendering context&lt;br /&gt;
 // which only provides the functionality to replace the canvas&#039;s&lt;br /&gt;
 // contents with the given ImageBitmap. Its context id (the first argument&lt;br /&gt;
 // to getContext) is &amp;quot;bitmaprenderer&amp;quot;.&lt;br /&gt;
 //&lt;br /&gt;
 // Note: this interface has already been incorporated into the current WHATWG&lt;br /&gt;
 // specification at https://html.spec.whatwg.org/multipage/scripting.html#the-imagebitmap-rendering-context .&lt;br /&gt;
 interface ImageBitmapRenderingContext {&lt;br /&gt;
   // Displays the given ImageBitmap in the canvas associated with this&lt;br /&gt;
   // rendering context. Ownership of the ImageBitmap is transferred to&lt;br /&gt;
   // the canvas. The caller may not use its reference to the ImageBitmap&lt;br /&gt;
   // after making this call. (This semantic is crucial to enable prompt&lt;br /&gt;
   // reclamation of expensive graphics resources, rather than relying on&lt;br /&gt;
   // garbage collection to do so.)&lt;br /&gt;
   //&lt;br /&gt;
   // The ImageBitmap conceptually replaces the canvas&#039;s bitmap, but&lt;br /&gt;
   // it does not change the canvas&#039;s intrinsic width or height.&lt;br /&gt;
   //&lt;br /&gt;
   // The ImageBitmap, when displayed, is clipped to the rectangle&lt;br /&gt;
   // defined by the canvas&#039;s intrinsic width and height. Pixels that&lt;br /&gt;
   // would be covered by the canvas&#039;s bitmap which are not covered by&lt;br /&gt;
   // the supplied ImageBitmap are rendered transparent black. Any CSS&lt;br /&gt;
   // styles affecting the display of the canvas are applied as usual.&lt;br /&gt;
   void transferFromImageBitmap(ImageBitmap bitmap);&lt;br /&gt;
 };&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== This Solution ===&lt;br /&gt;
&lt;br /&gt;
This proposed API can be used in several ways to satisfy the use cases described above:&lt;br /&gt;
&lt;br /&gt;
* It supports zero-copy transfer of canvases&#039; rendering results between threads, for example from a worker to the main thread. In this model, the main thread controls when to display new frames produced by the worker, so synchronization with other DOM updates is achieved.&lt;br /&gt;
&lt;br /&gt;
* It supports fully asynchronous rendering by a worker into a canvas displayed on the main thread. This satisfies certain Emscripten developers&#039; full-screen use cases.&lt;br /&gt;
&lt;br /&gt;
* It supports using a single WebGLRenderingContext or Canvas2DRenderingContext to efficiently render into multiple regions on the web page.&lt;br /&gt;
&lt;br /&gt;
* It introduces ImageBitmapRenderingContext, a new canvas context type whose sole purpose is to efficiently display ImageBitmaps. This supersedes the [[WorkerCanvas]] proposal&#039;s use of HTMLImageElement for this purpose.&lt;br /&gt;
&lt;br /&gt;
* It supports asynchronous encoding of OffscreenCanvases&#039; rendering results into Blobs which can be consumed by various other web platform APIs.&lt;br /&gt;
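&lt;br /&gt;
For example, the asynchronous encoding path might be used as follows (a sketch based on the toBlob method proposed in the Web IDL above; what is done with the resulting Blob is illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    var offscreen = new OffscreenCanvas(256, 256);&lt;br /&gt;
    var ctx = offscreen.getContext(&#039;2d&#039;);&lt;br /&gt;
    ctx.fillRect(0, 0, 256, 256);&lt;br /&gt;
    // toBlob returns a Promise; the encoded image can then be&lt;br /&gt;
    // consumed by other web platform APIs.&lt;br /&gt;
    offscreen.toBlob(&#039;image/png&#039;).then(function(blob) {&lt;br /&gt;
      var url = URL.createObjectURL(blob);&lt;br /&gt;
    });&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;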
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
This proposal introduces two primary processing models. The first involves &#039;&#039;synchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The application generates new frames using the RenderingContext obtained from the OffscreenCanvas. When the application is finished rendering each new frame, it calls transferToImageBitmap to &amp;quot;tear off&amp;quot; the most recently rendered image from the OffscreenCanvas -- like a Post-It note. The resulting ImageBitmap can then be used in any API receiving that data type; notably, it can be displayed in a second canvas without introducing a copy. An ImageBitmapRenderingContext is obtained from the second canvas by calling &amp;lt;code&amp;gt;getContext(&#039;bitmaprenderer&#039;)&amp;lt;/code&amp;gt;. Each frame is displayed in the second canvas using the &amp;lt;code&amp;gt;transferFromImageBitmap&amp;lt;/code&amp;gt; method on this rendering context. Note that the threads producing and consuming the frames may be the same, or they may be different. Note also that a single OffscreenCanvas may transfer frames into an arbitrary number of other ImageBitmapRenderingContexts.&lt;br /&gt;
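&lt;br /&gt;
The synchronous model might look as follows (a sketch; here both production and display of the frame happen on the main thread, and the element id is illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    var offscreen = new OffscreenCanvas(640, 480);&lt;br /&gt;
    var gl = offscreen.getContext(&#039;webgl&#039;);&lt;br /&gt;
    // ... render a frame using gl ...&lt;br /&gt;
    // &amp;quot;Tear off&amp;quot; the rendered frame as an ImageBitmap.&lt;br /&gt;
    var frame = offscreen.transferToImageBitmap();&lt;br /&gt;
    // Display it in a visible canvas without introducing a copy.&lt;br /&gt;
    var bitmapCtx = document.getElementById(&#039;view&#039;).getContext(&#039;bitmaprenderer&#039;);&lt;br /&gt;
    bitmapCtx.transferFromImageBitmap(frame);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;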
&lt;br /&gt;
The second processing model involves &#039;&#039;asynchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The main thread instantiates an HTMLCanvasElement and calls &amp;lt;code&amp;gt;transferControlToOffscreen&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;getContext&amp;lt;/code&amp;gt; is used to obtain a rendering context for that OffscreenCanvas, either on the main thread, or on a worker. The application calls &amp;lt;code&amp;gt;commit&amp;lt;/code&amp;gt; against that rendering context in order to push frames to the original HTMLCanvasElement. In this rendering model, it is not defined when those frames become visible in the original canvas element. However, if the following conditions apply:&lt;br /&gt;
&lt;br /&gt;
* It is a worker thread which is calling commit(), and&lt;br /&gt;
* The worker is calling commit() repeatedly against exactly one rendering context&lt;br /&gt;
&lt;br /&gt;
then it is required that the user agent synchronize the calls to commit() to the vsync interval. Calls to commit() conceptually enqueue frames for display, and after an implementation-defined number of frames have been enqueued, further calls to commit() will block until earlier frames have been presented to the screen. (This requirement allows porting of applications which drive their own main loop rather than using an event-driven loop.)&lt;br /&gt;
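&lt;br /&gt;
The asynchronous model might be set up as follows (a sketch; the worker script name and element id are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    // Main thread: hand the canvas over to a worker.&lt;br /&gt;
    var offscreen = document.getElementById(&#039;view&#039;).transferControlToOffscreen();&lt;br /&gt;
    var worker = new Worker(&#039;render.js&#039;);&lt;br /&gt;
    worker.postMessage({canvas: offscreen}, [offscreen]);&lt;br /&gt;
    &lt;br /&gt;
    // render.js: an application-driven main loop.&lt;br /&gt;
    onmessage = function(e) {&lt;br /&gt;
      var gl = e.data.canvas.getContext(&#039;webgl&#039;);&lt;br /&gt;
      for (;;) {&lt;br /&gt;
        // ... render a frame using gl ...&lt;br /&gt;
        gl.commit(); // blocks once the implementation&#039;s frame queue is full&lt;br /&gt;
      }&lt;br /&gt;
    };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;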
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
* A known good way to drive an animation loop from a worker is needed. requestAnimationFrame or a similar API needs to be defined on worker threads.&lt;br /&gt;
* Some parts of the CanvasRenderingContext2D interface shall not be supported due to OffscreenCanvas objects having no relation to the DOM or frame: HitRegions, scrollPathIntoView, drawFocusIfNeeded.&lt;br /&gt;
* Due to technical challenges, some implementors [https://bugzilla.mozilla.org/show_bug.cgi?id=801176#c29 (Google and Mozilla)] have expressed a desire to ship without initially supporting text rendering in 2D contexts. Open Issue: Should text support be formally excluded from the specification until implementors are prepared to ship it (or until a more feasible API is designed)?&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
&lt;br /&gt;
This proposal has been vetted by developers of Apple&#039;s Safari, Google&#039;s Chrome, Microsoft&#039;s Internet Explorer, and Mozilla&#039;s Firefox browsers. All vendors agreed upon the basic form of the API, so it is likely it will be implemented widely and compatibly.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
&lt;br /&gt;
Web page authors have demanded increased parallelism support from the web platform for multiple years. If support for multithreaded rendering is added, it is likely it will be rapidly adopted.&lt;br /&gt;
&lt;br /&gt;
==== Example code ====&lt;br /&gt;
&lt;br /&gt;
Jeff Gilbert from Mozilla has crafted some example code utilizing this API:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jdashg/snippets/tree/master/webgl-from-worker Rendering WebGL from a worker using the commit() API]&lt;br /&gt;
* [https://github.com/jdashg/snippets/blob/master/webgl-one-to-many/index.html Using one WebGL context to render to many Canvas elements]&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=WorkerCanvas&amp;diff=10086</id>
		<title>WorkerCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=WorkerCanvas&amp;diff=10086"/>
		<updated>2016-08-18T20:35:11Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;This proposal has been abandoned. Please refer to the [[OffscreenCanvas]] proposal.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Provides more control over how canvases are rendered.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
Feedback from web application authors using canvases has shown the need for the following controls:&lt;br /&gt;
&lt;br /&gt;
* (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.&lt;br /&gt;
* (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.&lt;br /&gt;
* (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.&lt;br /&gt;
&lt;br /&gt;
== WebIDL ==&lt;br /&gt;
&lt;br /&gt;
 [Constructor(unsigned long width, unsigned long height)]&lt;br /&gt;
 interface WorkerCanvas {&lt;br /&gt;
   attribute unsigned long width;&lt;br /&gt;
   attribute unsigned long height;&lt;br /&gt;
   RenderingContext? getContext(DOMString contextId, any... arguments); &lt;br /&gt;
   void toBlob(FileCallback? _callback, optional DOMString type, any... arguments);&lt;br /&gt;
   ImageBitmap transferToImageBitmap();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 WorkerCanvas implements Transferable;&lt;br /&gt;
 ImageBitmap implements Transferable;&lt;br /&gt;
 &lt;br /&gt;
 partial interface HTMLCanvasElement {&lt;br /&gt;
   WorkerCanvas transferControlToWorker();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface ImageBitmap {&lt;br /&gt;
   void transferToImage(HTMLImageElement image);&lt;br /&gt;
 };&lt;br /&gt;
&lt;br /&gt;
== Spec changes ==&lt;br /&gt;
&lt;br /&gt;
Transferring of ImageBitmaps has to be defined. It should neuter the ImageBitmap in the sending thread. Neutering sets the ImageBitmap&#039;s width and height to 0.&lt;br /&gt;
&lt;br /&gt;
HTMLCanvasElement.transferControlToWorker behaves like transferControlToProxy in the current WHATWG spec. WorkerCanvas is Transferable, but transfer fails if transferred other than from the main thread to a worker. All its methods throw if not called on a worker, or if it&#039;s neutered.&lt;br /&gt;
&lt;br /&gt;
ImageBitmap.transferToImage removes the image element&#039;s &amp;quot;src&amp;quot; attribute and makes the image display the contents of the ImageBitmap (until the next transferToImage to that image, or until the image&#039;s &amp;quot;src&amp;quot; attribute is set). The ImageBitmap is neutered.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasInWorkers&amp;diff=10085</id>
		<title>CanvasInWorkers</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasInWorkers&amp;diff=10085"/>
		<updated>2016-08-18T20:34:34Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;This proposal has been abandoned.  Please refer to the [[OffscreenCanvas]] proposal.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;This proposal is trying to solve 2 issues: (1) being able to render to a canvas from a worker and (2) being able to render to multiple canvases using a single rendering context&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
There are 2 common use cases.&lt;br /&gt;
&lt;br /&gt;
Use case #1: Building a multiple-view 3D editor (like Blender or Maya). WebGL, being based on OpenGL, has the limitation that resources belong to a single WebGLRenderingContext. That means if you have 2 or more canvases (left view, top view, perspective view, etc.), you currently need a separate context for each one, which means you need to load hundreds of megabytes of resources multiple times. Allowing a single WebGLRenderingContext to be used with more than 1 canvas would solve this problem.&lt;br /&gt;
&lt;br /&gt;
Use case #2: You&#039;d like to be able to make better use of multiple cores and avoid jank when drawing to a canvas. Many canvas apps (WebGL or Canvas2D) need to make thousands of API calls per frame at 60 frames a second. Being able to move those calls to a worker would potentially free up the main thread to do other things. It would also potentially reduce jank by keeping work off the main thread so it does not block the UI.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
Games in HTML are becoming more common and we&#039;d like to support developers making even more complex games. A few 3D editors are starting to appear and we&#039;d like to help them be as good as their native app counterparts.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
&lt;br /&gt;
# Allow rendering using the Canvas2D API in a worker.&lt;br /&gt;
# Allow rendering using the WebGL API in a worker.&lt;br /&gt;
# Allow synchronization of canvas rendering with DOM manipulation&lt;br /&gt;
# Allow using 1 Canvas2DRenderingContext with multiple destinations without losing state&lt;br /&gt;
# Allow using 1 WebGLRenderingContext with multiple destinations without losing state&lt;br /&gt;
# Don&#039;t waste memory.&lt;br /&gt;
# Do not break existing content (existing APIs still work as is)&lt;br /&gt;
&lt;br /&gt;
== Non Goals ==&lt;br /&gt;
&lt;br /&gt;
# Sharing WebGL Resources between contexts. That is an orthogonal issue&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
One proposed solution involves &amp;lt;tt&amp;gt;CanvasProxy&amp;lt;/tt&amp;gt; and a &amp;lt;tt&amp;gt;commit&amp;lt;/tt&amp;gt; method. This solution does not meet the goals above. Specifically, it does not handle synchronization issues and may waste memory.&lt;br /&gt;
&lt;br /&gt;
=== Suggested Solution ===&lt;br /&gt;
Allow rendering contexts to be created by constructor&lt;br /&gt;
&lt;br /&gt;
    var ctx = new Canvas2DRenderingContext();&lt;br /&gt;
    var gl = new WebGLRenderingContext();&lt;br /&gt;
&lt;br /&gt;
Define `DrawingBuffer`. A DrawingBuffer can be considered a ‘handle’ to a single texture (or bucket of pixels). A DrawingBuffer can be passed anywhere a Canvas can be passed, in particular to drawImage, texImage2D and texSubImage2D. A DrawingBuffer also has a toDataURL method that is similar to the Canvas’s toDataURL method. A DrawingBuffer can be transferred to and from a worker using the transfer-of-ownership concept, similar to an ArrayBuffer.&lt;br /&gt;
&lt;br /&gt;
A DrawingBuffer is created by constructor as in&lt;br /&gt;
&lt;br /&gt;
    var db = new DrawingBuffer(context, {...creation-parameters...});&lt;br /&gt;
&lt;br /&gt;
The context associated with a DrawingBuffer at creation is the only context that may render to that DrawingBuffer.&lt;br /&gt;
&lt;br /&gt;
Canvas becomes a ‘shell’ whose sole purpose is to display DrawingBuffers.&lt;br /&gt;
&lt;br /&gt;
Add 2 functions to `Canvas`: Canvas.transferDrawingBufferToCanvas() and Canvas.copyDrawingBuffer().&lt;br /&gt;
&lt;br /&gt;
Canvas.transferDrawingBufferToCanvas effectively transfers ownership of the DrawingBuffer. The user&#039;s DrawingBuffer is neutered. This is similar to how transferring a DrawingBuffer from the main thread to a worker makes the main thread no longer able to use it.&lt;br /&gt;
&lt;br /&gt;
A single threaded app that wanted to emulate the existing workflow using DrawingBuffers would do something like this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    var gl = new WebGLRenderingContext();&lt;br /&gt;
    function render() {&lt;br /&gt;
      var db = new DrawingBuffer(...);&lt;br /&gt;
      gl.setDrawingBuffer(db);&lt;br /&gt;
      gl.drawXXX();&lt;br /&gt;
      canvas.transferDrawingBufferToCanvas(db);&lt;br /&gt;
      requestAnimationFrame(render);&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Canvas.copyDrawingBuffer() on the other hand copies the DrawingBuffer’s texture/backingstore to the canvas. This is a slower path but emulates the standard Canvas2D persistent backingstore style. The canvas will have to allocate a texture or bucket of pixels to hold the copy if it does not already have one.&lt;br /&gt;
&lt;br /&gt;
Disallow ‘multi-sampled’ / “anti-aliased” DrawingBuffers and instead expose GL_ANGLE_framebuffer_blit and&lt;br /&gt;
GL_ANGLE_framebuffer_multisample. (webgl specific)&lt;br /&gt;
&lt;br /&gt;
Define ‘DepthStencilBuffer’. Add a function, WebGLRenderingContext.setDepthStencilBuffer (webgl specific)&lt;br /&gt;
&lt;br /&gt;
== Suggested IDL ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
interface HTMLCanvas {&lt;br /&gt;
   ...&lt;br /&gt;
   void transferDrawingBufferToCanvas(DrawingBuffer b);&lt;br /&gt;
   void copyDrawingBuffer(DrawingBuffer b);&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
interface Canvas2DRenderingContext {&lt;br /&gt;
  readonly attribute DrawingBuffer drawingBuffer;&lt;br /&gt;
  void setDrawingBuffer(DrawingBuffer buffer);&lt;br /&gt;
  CanvasPattern createPattern(DrawingBuffer buffer, ...);&lt;br /&gt;
  void drawImage(DrawingBuffer buffer,&lt;br /&gt;
                 unrestricted double dx,&lt;br /&gt;
                 unrestricted double dy);&lt;br /&gt;
  void drawImage(DrawingBuffer buffer,&lt;br /&gt;
                 unrestricted double dx, unrestricted double dy,&lt;br /&gt;
                 unrestricted double dw, unrestricted double dh);&lt;br /&gt;
  void drawImage(DrawingBuffer buffer,&lt;br /&gt;
                 unrestricted double sx, unrestricted double sy,&lt;br /&gt;
                 unrestricted double sw, unrestricted double sh,&lt;br /&gt;
                 unrestricted double dx, unrestricted double dy,&lt;br /&gt;
                 unrestricted double dw, unrestricted double dh);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
interface WebGLRenderingContext {&lt;br /&gt;
  ...&lt;br /&gt;
  readonly attribute DrawingBuffer drawingBuffer;&lt;br /&gt;
  readonly attribute DepthStencilBuffer depthStencilBuffer;&lt;br /&gt;
  void setDrawingBuffer(DrawingBuffer buffer);&lt;br /&gt;
  void setDepthStencilBuffer(DepthStencilBuffer buffer);&lt;br /&gt;
  void texImage2D(GLenum target, GLint level, GLenum internalformat,&lt;br /&gt;
                  GLenum format, GLenum type, DrawingBuffer buffer);&lt;br /&gt;
  void texSubImage2D(GLenum target, GLint level,&lt;br /&gt;
                     GLint xoffset, GLint yoffset,&lt;br /&gt;
                     GLenum format, GLenum type,&lt;br /&gt;
                     DrawingBuffer buffer);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[ Constructor(RenderingContext c, any contextCreationParameters) ]&lt;br /&gt;
interface DrawingBuffer {&lt;br /&gt;
   readonly attribute long width;&lt;br /&gt;
   readonly attribute long height;&lt;br /&gt;
   void setSize(long width, long height);&lt;br /&gt;
   DOMString toDataURL(in DOMString type)&lt;br /&gt;
       raises(???Exception);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[ Constructor(RenderingContext c, any contextCreationParameters) ]&lt;br /&gt;
interface DepthStencilBuffer {&lt;br /&gt;
   readonly attribute long width;&lt;br /&gt;
   readonly attribute long height;&lt;br /&gt;
   void setSize(long width, long height);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Rationale: ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Why get rid of a commit method in workers to propagate changes from a context rendered in a worker to a canvas in the main page?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; Using commit, there is no way to synchronize updates in a worker with updates to the DOM in the main thread. This solution makes it possible to ensure that DOM objects positioned in the main thread stay in sync with images rendered by a worker: the worker transfers the DrawingBuffer to the main thread via postMessage, and the main thread calls canvas.transferDrawingBufferToCanvas. This solution also avoids unnecessary blits of the canvas’s contents, which is essential for performance.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Why disallow anti-aliased DrawingBuffers?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; In the existing model, when you create a WebGL context by calling canvas.getContext(), a single multi-sampled renderbuffer is created by the browser. When the browser implicitly does a ‘swapBuffers’ for you, it resolves or “blits” this multi-sampled renderbuffer into a texture.&lt;br /&gt;
&lt;br /&gt;
On a 30-inch display (or a HiDPI MacBook Pro) a fullscreen multi-sampled renderbuffer requires&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Bytes per pixel = 4 (rgba) + 4 (depth-stencil) = 8&lt;br /&gt;
&lt;br /&gt;
     2560(width)  *&lt;br /&gt;
     1600(height) *&lt;br /&gt;
     8(bytes per pixel) *&lt;br /&gt;
     4 (multi-samples)&lt;br /&gt;
  -----------------------------&lt;br /&gt;
    = 125meg&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the new model, a typical animating application will create a minimum of 2 DrawingBuffers (for double buffering so a worker can render to one while the other is passed back to the main thread for compositing) or possibly 3 (for Triple Buffering). If all the DrawingBuffers are antialiased that’s 375meg of VRAM used up immediately. &lt;br /&gt;
&lt;br /&gt;
On the other hand, if instead we disallow anti-aliased DrawingBuffers and expose GL_ANGLE_framebuffer_blit and GL_ANGLE_framebuffer_multisample as WebGL extensions, then a typical app that wants to support anti-aliasing will create a single multisampled renderbuffer and do its own blit to non-multi-sampled DrawingBuffers. For a triple buffered app that would be 218 meg of VRAM.&lt;br /&gt;
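The figures above can be checked with a few lines of arithmetic. The sketch below is purely illustrative (not part of the proposal) and reproduces the 125 meg, 375 meg, and 218 meg estimates from the stated assumptions: 2560×1600 pixels, 8 bytes per pixel (rgba plus depth-stencil), 4x multi-sampling, with the three plain DrawingBuffers in the blit case also assumed to carry 8 bytes per pixel.

```javascript
// Quick check of the VRAM figures quoted above (illustrative only).
const W = 2560, H = 1600;
const BYTES_PER_PIXEL = 4 + 4; // rgba + depth-stencil
const SAMPLES = 4;
const MEG = 1024 * 1024;

// One fullscreen multi-sampled renderbuffer.
const multisampled = W * H * BYTES_PER_PIXEL * SAMPLES;

// Three anti-aliased DrawingBuffers (the disallowed, memory-hungry case).
const tripleAA = 3 * multisampled;

// One multi-sampled renderbuffer plus three plain DrawingBuffers the app
// blits into (assuming each plain buffer also carries 8 bytes per pixel).
const tripleBlit = multisampled + 3 * W * H * BYTES_PER_PIXEL;

console.log(multisampled / MEG); // 125
console.log(tripleAA / MEG);     // 375
console.log(tripleBlit / MEG);   // 218.75
```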
&lt;br /&gt;
Another considered solution is to somehow magically share a single multi-sample buffer. In a double buffered app you&#039;d transfer a DrawingBuffer from the worker to the main thread. Since the DrawingBuffer is intended to be given to a canvas, and since you can&#039;t render to it because its context is back in the worker, you could, at transfer time, resolve the multi-sampled buffer and hand that multi-sample buffer to the next DrawingBuffer. Unfortunately that won&#039;t work under this design. Nothing prevents the user from transferring a DrawingBuffer from a worker to another thread and back to the worker; it should come back as it started, and if it was resolved on transfer it would not. Another issue is that there&#039;s nothing preventing the user from making 3 or 4 DrawingBuffers for a single canvas for triple or quadruple buffering. Since you have no idea what a user is going to use a DrawingBuffer for, or how, there&#039;s no easy way for them all to magically share the same multi-sample buffer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Should we allow anti-aliased DrawingBuffers?&lt;br /&gt;
&lt;br /&gt;
# Yes, developers who care about memory can create non anti-aliased buffers. Developers who don’t care can avoid the hassle of needing to make a multi-sampled renderbuffer and blitting&lt;br /&gt;
# No, all developers that want to use DrawingBuffers and get anti-aliasing must use GL_ANGLE_framebuffer_multisample and GL_ANGLE_framebuffer_blit&lt;br /&gt;
&lt;br /&gt;
Resolution: #2. Saving memory is especially important in situations like tablets, multiple tabs, and systems without virtual memory. Rather than let the bad behavior be the easy path, we chose to encourage the good behavior.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Why separate out DepthStencilBuffer?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; For similar reasons as disallowing anti-aliasing. (see above)&lt;br /&gt;
&lt;br /&gt;
Because DrawingBuffers are transferred by transferring ownership, in the common case of transferring a DrawingBuffer to the main thread to be composited there is no reason to also transfer the depth/stencil buffer. Doing so would mean multiple depth and stencil buffers would need to be allocated so one thread can render to them while the main thread is compositing.&lt;br /&gt;
&lt;br /&gt;
Apps that use GL_ANGLE_framebuffer_multisample and GL_ANGLE_framebuffer_blit to support anti-aliasing will never need to create a ‘DepthStencilBuffer’ as they will end up creating a gl.DEPTH_STENCIL texture or renderbuffer.&lt;br /&gt;
&lt;br /&gt;
Separating them out also makes more sense for Canvas2D which never needs a depth/stencil buffer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; For a worker based animated app what’s the expected code flow?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    // render.js: -- worker --&lt;br /&gt;
    var gl = new WebGLRenderingContext();&lt;br /&gt;
    var dsBuffer = new DepthStencilBuffer(gl, …);&lt;br /&gt;
    gl.setDepthStencilBuffer(dsBuffer);&lt;br /&gt;
&lt;br /&gt;
    var render = function() {&lt;br /&gt;
       // make a new DrawingBuffer&lt;br /&gt;
       var db = new DrawingBuffer(gl, ...);&lt;br /&gt;
&lt;br /&gt;
       // Render to drawing buffer.&lt;br /&gt;
       gl.setDrawingBuffer(db);&lt;br /&gt;
       gl.drawXXX(...);&lt;br /&gt;
&lt;br /&gt;
       // Pass the drawing buffer to the main thread for compositing&lt;br /&gt;
       self.postMessage(db, [db]);&lt;br /&gt;
&lt;br /&gt;
       // request the next frame.&lt;br /&gt;
       self.requestAnimationFrame(render);&lt;br /&gt;
    }&lt;br /&gt;
    render();&lt;br /&gt;
&lt;br /&gt;
    // Main thread:&lt;br /&gt;
    var canvas = document.getElementById(&#039;someCanvas&#039;);&lt;br /&gt;
    var worker = new Worker(&#039;render.js&#039;);&lt;br /&gt;
    worker.addEventListener(&#039;message&#039;, function(e) {&lt;br /&gt;
       canvas.transferDrawingBufferToCanvas(e.data);&lt;br /&gt;
    }, false);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The thing to notice is the worker is creating a new DrawingBuffer every requestAnimationFrame. It then transfers ownership to the main thread. The main thread transfers it to the canvas. The browser can, behind the scenes, keep a queue of DrawingBuffers so that allocation of new ones is fast.&lt;br /&gt;
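How such a browser-side recycling queue might behave can be sketched in a few lines. This is purely illustrative and not part of the proposal (BufferPool is a hypothetical name): once buffers cycle back from the compositor, creating a "new" DrawingBuffer per frame need not mean a real allocation per frame.

```javascript
// Illustrative sketch of the recycling queue described above; BufferPool
// is a hypothetical helper, not part of the proposed API.
class BufferPool {
  constructor(allocate) {
    this.allocate = allocate; // factory for a genuinely new buffer
    this.free = [];           // buffers returned after compositing
  }
  acquire() {
    // Reuse a recycled buffer when possible, otherwise really allocate.
    return this.free.length > 0 ? this.free.pop() : this.allocate();
  }
  release(buffer) {
    this.free.push(buffer);
  }
}

let allocations = 0;
const pool = new BufferPool(() => ({ id: ++allocations }));

const frame1 = pool.acquire();  // real allocation
pool.release(frame1);           // compositor is done with it
const frame2 = pool.acquire();  // recycled, no new allocation

console.log(allocations);       // 1
console.log(frame2 === frame1); // true
```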
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Why does a DrawingBuffer’s constructor take a context?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; DrawingBuffers can only be used with the context they are created with. Putting the context in the constructor spells out this relationship. The following is illegal:&lt;br /&gt;
&lt;br /&gt;
    var gl1 = new WebGLRenderingContext();&lt;br /&gt;
    var gl2 = new WebGLRenderingContext();&lt;br /&gt;
    var db = new DrawingBuffer(gl1);&lt;br /&gt;
    gl1.setDrawingBuffer(db);&lt;br /&gt;
    gl2.setDrawingBuffer(db);  // error. db belongs to gl1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you use a Canvas2DRenderingContext without a DrawingBuffer?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; Yes, but only to create patterns, gradients, etc. All methods that rasterize will throw an exception until the context is associated with a DrawingBuffer by calling Canvas2DRenderingContext.setDrawingBuffer().&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you use a WebGLRenderingContext without a DrawingBuffer?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; Yes, you can create WebGL resources (textures, buffers, programs, etc.). You can render to framebuffer objects and call readPixels on them. Rendering to the default framebuffer (the &amp;lt;tt&amp;gt;null&amp;lt;/tt&amp;gt; bind target) will generate gl.INVALID_FRAMEBUFFER_OPERATION if no valid DrawingBuffer is set. A neutered DrawingBuffer (one that has been transferred to another thread, or one which has been transferred to a canvas) is not a valid drawing buffer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Do you need to call setDrawingBuffer if there is only 1 buffer?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No, creating a DrawingBuffer implicitly calls setDrawingBuffer.&lt;br /&gt;
&lt;br /&gt;
    gl = new WebGLRenderingContext();&lt;br /&gt;
    db = new DrawingBuffer(gl, …);&lt;br /&gt;
    gl.clear(gl.COLOR_BUFFER_BIT); // renders to db.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Is any context state lost when setDrawingBuffer is called?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No. The context’s state is preserved across calls to setDrawingBuffer for both Canvas2DRenderingContext and WebGLRenderingContext.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you render to a DrawingBuffer that has been passed to another thread?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No, DrawingBuffers pass ownership. The DrawingBuffer in the thread that passed it is now neutered, just like a transferred ArrayBuffer is neutered.&lt;br /&gt;
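The ArrayBuffer analogy can be observed directly in today's platform. The sketch below uses a plain ArrayBuffer (no DrawingBuffer involved) to show the detach-on-transfer behavior the proposal borrows: after a transfer, the original handle is unusable and reports a byteLength of 0.

```javascript
// The "neutered on transfer" behavior DrawingBuffer borrows from
// ArrayBuffer, demonstrated with a plain ArrayBuffer. structuredClone
// with a transfer list models postMessage(db, [db]).
const buf = new ArrayBuffer(16);
console.log(buf.byteLength); // 16

const moved = structuredClone(buf, { transfer: [buf] });

console.log(moved.byteLength); // 16: ownership moved to the clone
console.log(buf.byteLength);   // 0: the original handle is neutered
```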
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you transfer a DrawingBuffer to 2 canvases?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No, Canvas.transferDrawingBufferToCanvas takes ownership of the DrawingBuffer. The DrawingBuffer left for the user has been neutered.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; What happens if I transferDrawingBufferToCanvas DrawingBuffers of different sizes?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; The canvas does not change its display size; it just displays the transferred DrawingBuffer at the size defined by CSS or, if no CSS is specified, at the canvas’s original size.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you call getContext on a canvas that has had transferDrawingBufferToCanvas called on it?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you call transferDrawingBufferToCanvas on a canvas that has had its getContext method called?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No. It might be possible to make this work but it’s probably not worth it?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you use these features in “shared workers”?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No (or at least not for now)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; What happens to the Canvas2DRenderingContext.canvas and WebGLRenderingContext.canvas properties?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; For contexts created by constructor they are set to undefined&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Should you be able to reference the current DrawingBuffer on a RenderingContext?&lt;br /&gt;
&lt;br /&gt;
In other words, should there be a getDrawingBuffer or a ‘drawingBuffer’ property?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; Yes, but it’s only set if you call setDrawingBuffer. In other words, if you call getContext to make your context, this property would be undefined (or, if it’s a function, it would return undefined).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Should you be able to change the size of a DrawingBuffer?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# Yes, set width and/or height and its size will change&amp;lt;br /&amp;gt; Issue: allocating DrawingBuffers is a slow operation, so implementations would like to avoid re-allocating once when width is set and again when height is set. Deferring that allocation is no fun to implement.&lt;br /&gt;
# Yes, use a setSize(width, height) method&amp;lt;br /&amp;gt; This avoids the complications of using the writable properties&lt;br /&gt;
# No, just allocate a new DrawingBuffer&amp;lt;br /&amp;gt; The only issue here is quick GCing.&lt;br /&gt;
&lt;br /&gt;
Resolution: #2&lt;br /&gt;
&lt;br /&gt;
== Issues: ==&lt;br /&gt;
&lt;br /&gt;
=== Workers can flood the graphics system with too much work. ===&lt;br /&gt;
&lt;br /&gt;
In the main thread you can write code like this&lt;br /&gt;
&lt;br /&gt;
    // render as fast as you can&lt;br /&gt;
    function render() {&lt;br /&gt;
       for (var i = 0; i &amp;lt; 1000; ++i) {&lt;br /&gt;
         gl.drawXXX(...);&lt;br /&gt;
       }&lt;br /&gt;
    }&lt;br /&gt;
    setInterval(render, 0);&lt;br /&gt;
&lt;br /&gt;
This will DoS many systems by saturating the GPU with draw calls. The solution on the main thread is the implicit ‘SwapBuffers’: every time JavaScript exits the interval event the system can pause or block. But this is not true in workers. As there is no implicit swap and workers can run in infinite loops, there is no way to prevent this situation. While preventing infinite loops is outside the scope of what we can deal with, consider a worker that generates frames at 90fps and a main thread that composites them at 60fps. There is nothing to stop the worker from generating too much work or a giant backlog of GPU work.&lt;br /&gt;
&lt;br /&gt;
Ideas&lt;br /&gt;
&lt;br /&gt;
# So what. The worker will run out of memory.&amp;lt;br /&amp;gt; Unfortunately, until that happens the entire system may be unresponsive (not just the browser)&lt;br /&gt;
# Allow rendering in workers only inside some callback.&amp;lt;br /&amp;gt; For example, if it is only possible to render inside a worker during a requestAnimationFrame event, the browser can throttle the worker by sending fewer events.&amp;lt;br /&amp;gt; The minor problem with this solution is that it makes non-animating apps slightly convoluted to write. Let’s say you want to make a Maya or Blender type app, so you only render on demand. You end up getting a mousemove event, posting a message to a worker, and the worker would issue a requestAnimationFrame so that its callback can do the rendering. Maybe that’s not too convoluted.&lt;br /&gt;
# Other?&lt;br /&gt;
&lt;br /&gt;
Note: Exposing DrawingBuffer in the main thread causes the same problem&lt;br /&gt;
&lt;br /&gt;
This suggests that perhaps even the main thread should not be allowed to render to contexts created by a constructor except in requestAnimationFrame.&lt;br /&gt;
&lt;br /&gt;
=== How should GL_ANGLE_framebuffer_multisample be specified w.r.t. number of samples? (webgl specific) ===&lt;br /&gt;
&lt;br /&gt;
We’d like apps not to fail based on the “samples” parameter to renderbufferStorageMultisample, but GL_ANGLE_framebuffer_multisample is specified such that it must allocate a renderbuffer with the user-specified number of “samples” or greater. That means if an app passes the wrong number in (say it hardcodes a 4) and the user’s GPU does not support 4 samples, or the user’s GPU multi-sample support is blacklisted, the app will fail.&lt;br /&gt;
&lt;br /&gt;
We’d prefer a more permissive API by letting the implementation choose the number of samples so that more apps will succeed.&lt;br /&gt;
&lt;br /&gt;
Ideas&lt;br /&gt;
&lt;br /&gt;
# Leave the API as is. Apps may suddenly fail on different hardware or the same hardware when multi-sampling is blacklisted&lt;br /&gt;
# Leave the API the same but let the implementation choose the actual number of samples. Apps that need to know how many samples were chosen can query how many they got with getRenderbufferParameter.&lt;br /&gt;
# Change the API slightly by providing an enum (high, medium, low, none) as a quality input to renderbufferStorageMultisample instead of specifying the specific number of samples. Implementation can choose their own interpretation of ‘low’, ‘medium’, ‘high’&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasInWorkers&amp;diff=10084</id>
		<title>CanvasInWorkers</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasInWorkers&amp;diff=10084"/>
		<updated>2016-08-18T20:34:21Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;This proposal has been abandoned.  Please refer to the more recent [[OffscreenCanvas]] proposal.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;This proposal is trying to solve 2 issues: (1) being able to render to a canvas from a worker and (2) being able to render to multiple canvases using a single rendering context&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
There are 2 common use cases.&lt;br /&gt;
&lt;br /&gt;
Use case #1: Building a multiple view 3D editor (like Blender or Maya). WebGL, being based on OpenGL, has the limitation that resources belong to a single WebGLRenderingContext. That means if you have 2 or more canvases (left view, top view, perspective view, etc.), you currently need a separate context for each one, which means you need to load 100s of megabytes of resources multiple times. Allowing a single WebGLRenderingContext to be used with more than 1 canvas would solve this problem.&lt;br /&gt;
&lt;br /&gt;
Use case #2: You&#039;d like to be able to make better use of multiple cores and avoid jank when drawing to a canvas. Many canvas apps (webgl or canvas2d) need to make thousands of API calls per frame at 60 frames a second. Being able to move those calls to a worker would potentially free up the main thread to do other things. It would also potentially reduce jank by keeping work off the main thread so it does not block the UI.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
Games in HTML are becoming more common and we&#039;d like to support developers making even more complex games. A few 3D editors are starting to appear and we&#039;d like to help them be as good as their native app counterparts.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
&lt;br /&gt;
# Allow rendering using the Canvas2D api in a worker.&lt;br /&gt;
# Allow rendering using the WebGL api in a worker.&lt;br /&gt;
# Allow synchronization of canvas rendering with DOM manipulation&lt;br /&gt;
# Allow using 1 Canvas2DRenderingContext with multiple destinations without losing state&lt;br /&gt;
# Allow using 1 WebGLRenderingContext with multiple destinations without losing state&lt;br /&gt;
# Don&#039;t waste memory.&lt;br /&gt;
# Do not break existing content (existing APIs still work as is)&lt;br /&gt;
&lt;br /&gt;
== Non Goals ==&lt;br /&gt;
&lt;br /&gt;
# Sharing WebGL Resources between contexts. That is an orthogonal issue&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
One proposed solution involves &amp;lt;tt&amp;gt;CanvasProxy&amp;lt;/tt&amp;gt; and a &amp;lt;tt&amp;gt;commit&amp;lt;/tt&amp;gt; method. This solution does not meet the goals above. Specifically it does not handle synchronization issues and may waste memory.&lt;br /&gt;
&lt;br /&gt;
=== Suggested Solution ===&lt;br /&gt;
Allow rendering contexts to be created by constructor&lt;br /&gt;
&lt;br /&gt;
    var ctx = new Canvas2DRenderingContext();&lt;br /&gt;
    var gl = new WebGLRenderingContext();&lt;br /&gt;
&lt;br /&gt;
Define `DrawingBuffer`. A DrawingBuffer can be considered a ‘handle’ to a single texture (or bucket of pixels). A DrawingBuffer can be passed anywhere a Canvas can be passed, in particular to drawImage, texImage2D and texSubImage2D. A DrawingBuffer also has a toDataURL method that is similar to the Canvas’s toDataURL method. A DrawingBuffer can be transferred to and from a worker using the transfer of ownership concept, similar to an ArrayBuffer.&lt;br /&gt;
&lt;br /&gt;
A DrawingBuffer is created by constructor as in&lt;br /&gt;
&lt;br /&gt;
    var db = new DrawingBuffer(context, {...creation-parameters...});&lt;br /&gt;
&lt;br /&gt;
The context associated with a DrawingBuffer at creation is the only context that may render to that DrawingBuffer.&lt;br /&gt;
&lt;br /&gt;
Canvas becomes a ‘shell’ whose sole purpose is to display DrawingBuffers.&lt;br /&gt;
&lt;br /&gt;
Add 2 functions to `Canvas`: Canvas.transferDrawingBufferToCanvas() and Canvas.copyDrawingBuffer().&lt;br /&gt;
&lt;br /&gt;
Canvas.transferDrawingBufferToCanvas effectively transfers ownership of the DrawingBuffer. The user&#039;s DrawingBuffer is neutered. This is similar to how transferring a DrawingBuffer from the main thread to a worker makes the main thread no longer able to use it.&lt;br /&gt;
&lt;br /&gt;
A single threaded app that wanted to emulate the existing workflow using DrawingBuffers would do something like this.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    var gl = new WebGLRenderingContext();&lt;br /&gt;
    function render() {&lt;br /&gt;
      var db = new DrawingBuffer(...);&lt;br /&gt;
      gl.setDrawingBuffer(db);&lt;br /&gt;
      gl.drawXXX();&lt;br /&gt;
      canvas.transferDrawingBufferToCanvas(db);&lt;br /&gt;
      requestAnimationFrame(render);&lt;br /&gt;
    }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Canvas.copyDrawingBuffer() on the other hand copies the DrawingBuffer’s texture/backingstore to the canvas. This is a slower path but emulates the standard Canvas2D persistent backingstore style. The canvas will have to allocate a texture or bucket of pixels to hold the copy if it does not already have one.&lt;br /&gt;
&lt;br /&gt;
Disallow ‘multi-sampled’ / “anti-aliased” DrawingBuffers and instead expose GL_ANGLE_framebuffer_blit and&lt;br /&gt;
GL_ANGLE_framebuffer_multisample. (webgl specific)&lt;br /&gt;
&lt;br /&gt;
Define ‘DepthStencilBuffer’. Add a function, WebGLRenderingContext.setDepthStencilBuffer (webgl specific)&lt;br /&gt;
&lt;br /&gt;
== Suggested IDL ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
interface HTMLCanvas {&lt;br /&gt;
   ...&lt;br /&gt;
   void transferDrawingBufferToCanvas(DrawingBuffer b);&lt;br /&gt;
   void copyDrawingBuffer(DrawingBuffer b);&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
interface Canvas2DRenderingContext {&lt;br /&gt;
  readonly attribute DrawingBuffer drawingBuffer;&lt;br /&gt;
  void setDrawingBuffer(DrawingBuffer buffer);&lt;br /&gt;
  CanvasPattern createPattern(DrawingBuffer buffer, ...);&lt;br /&gt;
  void drawImage(DrawingBuffer buffer,&lt;br /&gt;
                 unrestricted double dx,&lt;br /&gt;
                 unrestricted double dy);&lt;br /&gt;
  void drawImage(DrawingBuffer buffer,&lt;br /&gt;
                 unrestricted double dx, unrestricted double dy,&lt;br /&gt;
                 unrestricted double dw, unrestricted double dh);&lt;br /&gt;
  void drawImage(DrawingBuffer buffer,&lt;br /&gt;
                 unrestricted double sx, unrestricted double sy,&lt;br /&gt;
                 unrestricted double sw, unrestricted double sh,&lt;br /&gt;
                 unrestricted double dx, unrestricted double dy,&lt;br /&gt;
                 unrestricted double dw, unrestricted double dh);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
interface WebGLRenderingContext {&lt;br /&gt;
  ...&lt;br /&gt;
  readonly attribute DrawingBuffer drawingBuffer;&lt;br /&gt;
  readonly attribute DepthStencilBuffer depthStencilBuffer;&lt;br /&gt;
  void setDrawingBuffer(DrawingBuffer buffer);&lt;br /&gt;
  void setDepthStencilBuffer(DepthStencilBuffer buffer);&lt;br /&gt;
  void texImage2D(GLenum target, GLint level, GLenum internalformat,&lt;br /&gt;
                  GLenum format, GLenum type, DrawingBuffer buffer);&lt;br /&gt;
  void texSubImage2D(GLenum target, GLint level,&lt;br /&gt;
                     GLint xoffset, GLint yoffset,&lt;br /&gt;
                     GLenum format, GLenum type,&lt;br /&gt;
                     DrawingBuffer buffer);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[ Constructor(RenderingContext c, any contextCreationParameters) ]&lt;br /&gt;
interface DrawingBuffer {&lt;br /&gt;
   readonly attribute long width;&lt;br /&gt;
   readonly attribute long height;&lt;br /&gt;
   void setSize(long width, long height);&lt;br /&gt;
   DOMString toDataURL(in DOMString type)&lt;br /&gt;
       raises(???Exception);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[ Constructor(RenderingContext c, any contextCreationParameters) ]&lt;br /&gt;
interface DepthStencilBuffer {&lt;br /&gt;
   readonly attribute long width;&lt;br /&gt;
   readonly attribute long height;&lt;br /&gt;
   void setSize(long width, long height);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Rationale: ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Why get rid of a commit method in workers to propagate changes from a context rendered in a worker to a canvas in the main page?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; Using commit, there is no way to synchronize updates in a worker with updates to the DOM in the main thread. This solution makes it possible to ensure that DOM objects positioned in the main thread stay in sync with images rendered by a worker: the worker transfers the DrawingBuffer to the main thread via postMessage, and the main thread calls canvas.transferDrawingBufferToCanvas. This solution also avoids unnecessary blits of the canvas’s contents, which is essential for performance.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Why disallow anti-aliased DrawingBuffers?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; In the existing model, when you create a WebGL context by calling canvas.getContext(), a single multi-sampled renderbuffer is created by the browser. When the browser implicitly does a ‘swapBuffers’ for you, it resolves or “blits” this multi-sampled renderbuffer into a texture.&lt;br /&gt;
&lt;br /&gt;
On a 30-inch display (or a HiDPI MacBook Pro) a fullscreen multi-sampled renderbuffer requires:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    Bytes per pixel = 4 (rgba) + 4 (depth-stencil) = 8&lt;br /&gt;
&lt;br /&gt;
     2560(width)  *&lt;br /&gt;
     1600(height) *&lt;br /&gt;
     8(bytes per pixel) *&lt;br /&gt;
     4 (multi-samples)&lt;br /&gt;
  -----------------------------&lt;br /&gt;
    = 125meg&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the new model, a typical animating application will create a minimum of 2 DrawingBuffers (for double buffering so a worker can render to one while the other is passed back to the main thread for compositing) or possibly 3 (for Triple Buffering). If all the DrawingBuffers are antialiased that’s 375meg of VRAM used up immediately. &lt;br /&gt;
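The arithmetic above can be double-checked with a quick computation (a sketch using the figures assumed in this document: rgba plus depth-stencil at 4 bytes each, 4x multi-sampling, 2560x1600):

```javascript
// VRAM cost of a fullscreen multi-sampled renderbuffer at 2560x1600.
const width = 2560;
const height = 1600;
const bytesPerPixel = 4 + 4;   // 4 bytes rgba + 4 bytes depth-stencil
const samples = 4;             // 4x multi-sampling

const bytesPerBuffer = width * height * bytesPerPixel * samples;
const mibPerBuffer = bytesPerBuffer / (1024 * 1024);  // 125 MiB

// Triple buffering with anti-aliased DrawingBuffers triples the cost.
const tripleBufferedMib = 3 * mibPerBuffer;           // 375 MiB

console.log(mibPerBuffer, tripleBufferedMib);
```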
&lt;br /&gt;
On the other hand, if instead we disallow anti-aliased DrawingBuffers and expose GL_ANGLE_framebuffer_blit and GL_ANGLE_framebuffer_multisample as WebGL extensions, then a typical app that wants to support anti-aliasing will create a single multisampled renderbuffer and do its own blit to non-multi-sampled DrawingBuffers. For a triple buffered app that would be 218 meg of VRAM.&lt;br /&gt;
&lt;br /&gt;
Another considered solution is to somehow magically share a single multi-sample buffer. In a double buffered app you&#039;d transfer a DrawingBuffer from the worker to the main thread. Since the DrawingBuffer is intended to be given to a canvas, and since you can&#039;t render to it because its context is back in the worker, you could resolve the multi-sampled buffer at transfer time and give that multi-sampled buffer to the next DrawingBuffer. Unfortunately that won&#039;t work under this design. Nothing prevents the user from transferring the DrawingBuffer from a worker to another thread and back to the worker, and it should come back as it started; if it was resolved on transfer, it would not. Another issue is that nothing prevents the user from making 3 or 4 DrawingBuffers for a single canvas for triple or quadruple buffering. Since you have no idea how many DrawingBuffers a user is going to create or how they are going to use them, there&#039;s no easy way for them all to magically share the same multi-sample buffer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Should we allow anti-aliased DrawingBuffers?&lt;br /&gt;
&lt;br /&gt;
# Yes, developers who care about memory can create non anti-aliased buffers. Developers who don’t care can avoid the hassle of needing to make a multi-sampled renderbuffer and blitting&lt;br /&gt;
# No, all developers that want to use DrawingBuffers and get anti-aliasing must use GL_ANGLE_framebuffer_multisample and GL_ANGLE_framebuffer_blit&lt;br /&gt;
&lt;br /&gt;
Resolution: #2. Saving memory is especially important in situations like tablets, multiple tabs, and systems without virtual memory. Rather than let the bad behavior be the easy path, we chose to encourage the good behavior.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Why separate out DepthStencilBuffer?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; For similar reasons as disallowing anti-aliasing. (see above)&lt;br /&gt;
&lt;br /&gt;
Given that DrawingBuffers are transferred by transferring ownership, in the common case of transferring a DrawingBuffer to the main thread to be composited there is no reason to also transfer the depth/stencil buffer. Doing so would mean multiple depth and stencil buffers would need to be allocated so the worker can render to one while the main thread is compositing.&lt;br /&gt;
&lt;br /&gt;
Apps that use GL_ANGLE_framebuffer_multisample and GL_ANGLE_framebuffer_blit to support anti-aliasing will never need to create a ‘DepthStencilBuffer’ as they will end up creating a gl.DEPTH_STENCIL texture or renderbuffer.&lt;br /&gt;
&lt;br /&gt;
Separating them out also makes more sense for Canvas2D which never needs a depth/stencil buffer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; For a worker based animated app what’s the expected code flow?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    // render.js: -- worker --&lt;br /&gt;
    var gl = new WebGLRenderingContext();&lt;br /&gt;
    var dsBuffer = new DepthStencilBuffer(gl, ...);&lt;br /&gt;
    gl.setDepthStencilBuffer(dsBuffer);&lt;br /&gt;
&lt;br /&gt;
    var render = function() {&lt;br /&gt;
       // make a new DrawingBuffer&lt;br /&gt;
       var db = new DrawingBuffer(gl, ...);&lt;br /&gt;
&lt;br /&gt;
       // Render to drawing buffer.&lt;br /&gt;
       gl.setDrawingBuffer(db);&lt;br /&gt;
       gl.drawXXX(...);&lt;br /&gt;
&lt;br /&gt;
       // Pass the drawing buffer to the main thread for compositing&lt;br /&gt;
       self.postMessage(db, [db]);&lt;br /&gt;
&lt;br /&gt;
       // request the next frame.&lt;br /&gt;
       self.requestAnimationFrame(render);&lt;br /&gt;
    }&lt;br /&gt;
    render();&lt;br /&gt;
&lt;br /&gt;
    // Main thread:&lt;br /&gt;
    var canvas = document.getElementById(&#039;someCanvas&#039;);&lt;br /&gt;
    var worker = new Worker(&#039;render.js&#039;);&lt;br /&gt;
    worker.addEventListener(&#039;message&#039;, function(e) {&lt;br /&gt;
       canvas.transferDrawingBufferToCanvas(e.data);&lt;br /&gt;
    }, false);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The thing to notice is that the worker creates a new DrawingBuffer every requestAnimationFrame and transfers ownership to the main thread, which transfers it to the canvas. The browser can, behind the scenes, keep a queue of DrawingBuffers so that allocation of new ones is fast.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Why does a DrawingBuffer’s constructor take a context?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; DrawingBuffers can only be used with the context they are created with. Putting the context in the constructor spells out this relationship. The following is illegal:&lt;br /&gt;
&lt;br /&gt;
    var gl1 = new WebGLRenderingContext();&lt;br /&gt;
    var gl2 = new WebGLRenderingContext();&lt;br /&gt;
    var db = new DrawingBuffer(gl1);&lt;br /&gt;
    gl1.setDrawingBuffer(db);&lt;br /&gt;
    gl2.setDrawingBuffer(db);  // error. db belongs to gl1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you use a Canvas2DRenderingContext without a DrawingBuffer?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; Yes, but only to create patterns, gradients, etc. All methods that rasterize will throw an exception until the context is associated with a DrawingBuffer by calling Canvas2DRenderingContext.setDrawingBuffer().&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you use a WebGLRenderingContext without a DrawingBuffer?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; Yes, you can create WebGL resources (textures, buffers, programs, etc..) You can render to framebuffer objects and call readPixels on them. Rendering to the default framebuffer &amp;lt;tt&amp;gt;null&amp;lt;/tt&amp;gt; bind target will generate gl.INVALID_FRAMEBUFFER_OPERATION if no valid DrawingBuffer is set. A neutered DrawingBuffer, one that has been transferred to another thread, or one which has been transferred to a canvas, is not a valid drawing buffer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Do you need to call setDrawingBuffer if there is only 1 buffer?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No, creating a DrawingBuffer implicitly calls setDrawingBuffer.&lt;br /&gt;
&lt;br /&gt;
    gl = new WebGLRenderingContext();&lt;br /&gt;
    db = new DrawingBuffer(gl, ...);&lt;br /&gt;
    gl.clear(gl.COLOR_BUFFER_BIT); // renders to db.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Is any context state lost when setDrawingBuffer is called?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No. The context’s state is preserved across calls to setDrawingBuffer for both Canvas2DRenderingContext and WebGLRenderingContext.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you render to a DrawingBuffer that has been passed to another thread?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No, DrawingBuffers pass ownership. The DrawingBuffer in the thread that passed it is now neutered, just like a transferred ArrayBuffer is neutered.&lt;br /&gt;
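The ArrayBuffer analogy is observable today: in any runtime with structuredClone (modern browsers, Node 17+), transferring an ArrayBuffer detaches it in the sending context, which is the same neutering behavior proposed here for DrawingBuffers.

```javascript
// Transferring an ArrayBuffer neuters (detaches) it at the source.
const buf = new ArrayBuffer(16);
console.log(buf.byteLength);   // 16

// structuredClone with a transfer list moves the underlying storage.
const moved = structuredClone(buf, { transfer: [buf] });

console.log(buf.byteLength);   // 0 — the original is neutered
console.log(moved.byteLength); // 16 — the clone owns the data
```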
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you transfer a DrawingBuffer to 2 canvases?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No, Canvas.transferDrawingBufferToCanvas takes ownership of the DrawingBuffer. The DrawingBuffer left for the user has been neutered.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; What happens if I transferDrawingBufferToCanvas DrawingBuffers of different sizes?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; The canvas does not change its display size; it displays the transferred DrawingBuffer at the size defined by CSS or, if no CSS size is specified, at the canvas’s original size.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you call getContext on a canvas that has had transferDrawingBufferToCanvas called on it?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you call transferDrawingBufferToCanvas on a canvas that has had its getContext method called?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No. It might be possible to make this work, but it’s probably not worth it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Can you use these features in “shared workers”?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; No (or at least not for now)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; What happens to the Canvas2DRenderingContext.canvas and WebGLRenderingContext.canvas properties?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; For contexts created by a constructor, they are set to undefined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Should you be able to reference the current DrawingBuffer on a RenderingContext?&lt;br /&gt;
&lt;br /&gt;
In other words, should there be a getDrawingBuffer or a ‘drawingBuffer’ property?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039; Yes, but it’s only set if you call setDrawingBuffer. In other words, if you created your context by calling getContext, this property would be undefined (or, if it’s a function, it would return undefined).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Q:&#039;&#039;&#039; Should you be able to change the size of a DrawingBuffer?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;A:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# Yes, set width and/or height and its size will change&amp;lt;br /&amp;gt; Issue: Allocating DrawingBuffers is a slow operation, so implementations would like to avoid re-allocating once when width is set and again when height is set. Deferring that allocation is no fun to implement.&lt;br /&gt;
# Yes, use a setSize(width, height) method&amp;lt;br /&amp;gt; This avoids the complications of using the writable properties&lt;br /&gt;
# No, just allocate a new DrawingBuffer&amp;lt;br /&amp;gt; The only issue here is quick GCing.&lt;br /&gt;
&lt;br /&gt;
Resolution: #2&lt;br /&gt;
&lt;br /&gt;
== Issues: ==&lt;br /&gt;
&lt;br /&gt;
=== Workers can flood the graphics system with too much work. ===&lt;br /&gt;
&lt;br /&gt;
In the main thread you can write code like this&lt;br /&gt;
&lt;br /&gt;
    // render as fast as you can&lt;br /&gt;
    function render() {&lt;br /&gt;
       for (var i = 0; i &amp;lt; 1000; ++i) {&lt;br /&gt;
         gl.drawXXX(...);&lt;br /&gt;
       }&lt;br /&gt;
    }&lt;br /&gt;
    setInterval(render, 0);&lt;br /&gt;
&lt;br /&gt;
This will DoS many systems by saturating the GPU with draw calls. The solution on the main thread is the implicit ‘SwapBuffers’: every time JavaScript exits the interval event, the system can pause or block. But this is not true in workers. As there is no implicit swap and workers can run in infinite loops, there is no way to prevent this situation. While preventing infinite loops is outside the scope of what we can deal with, consider a worker that generates frames at 90fps and a main thread that composites them at 60fps. There is nothing to stop the worker from generating too much work or a giant backlog of GPU work.&lt;br /&gt;
&lt;br /&gt;
Ideas&lt;br /&gt;
&lt;br /&gt;
# So what. The worker will run out of memory.&amp;lt;br /&amp;gt; Unfortunately, until that happens the entire system may be unresponsive (not just the browser).&lt;br /&gt;
# Allow rendering in workers only inside some callback.&amp;lt;br /&amp;gt; For example, if it is only possible to render inside a worker during a requestAnimationFrame event, the browser can throttle the worker by sending fewer events.&amp;lt;br /&amp;gt; The minor problem with this solution is it makes non-animating apps slightly convoluted to write. Let’s say you want to make a Maya or Blender type app, so you only render on demand. You end up getting a mousemove event and posting a message to a worker, and the worker then issues a requestAnimationFrame so the rAF callback can do the rendering. Maybe that’s not too convoluted.&lt;br /&gt;
# Other?&lt;br /&gt;
&lt;br /&gt;
Note: Exposing DrawingBuffer in the main thread causes the same problem&lt;br /&gt;
&lt;br /&gt;
This suggests that perhaps even the main thread should not be allowed to render to contexts created by a constructor except inside requestAnimationFrame.&lt;br /&gt;
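Whatever the resolution, an application can also apply backpressure itself by capping frames in flight and dropping frames until the main thread acknowledges compositing. A minimal sketch of that pattern (illustrative names, not part of the proposal):

```javascript
// Application-level backpressure: the worker caps frames in flight and
// skips rendering until the main thread acks. makeThrottle, maxInFlight
// and framesInFlight are illustrative names, not proposed API.
function makeThrottle(maxInFlight) {
  let framesInFlight = 0;
  return {
    // Called in the worker before rendering a frame.
    tryBeginFrame() {
      if (framesInFlight >= maxInFlight) return false; // drop this frame
      framesInFlight++;
      return true;
    },
    // Called when the main thread posts back an "I composited it" message.
    ackFrame() {
      framesInFlight = Math.max(0, framesInFlight - 1);
    },
  };
}

// Simulate a worker producing 5 frames while the main thread acks once.
const throttle = makeThrottle(2);
const accepted = [];
for (let i = 0; i < 5; i++) {
  accepted.push(throttle.tryBeginFrame());
  if (i === 2) throttle.ackFrame(); // main thread catches up once
}
console.log(accepted); // [true, true, false, true, false]
```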
&lt;br /&gt;
=== How should GL_ANGLE_framebuffer_multisample be specified w.r.t. number of samples? (webgl specific) ===&lt;br /&gt;
&lt;br /&gt;
We’d like apps not to fail based on the “samples” parameter to renderbufferStorageMultisample, but GL_ANGLE_framebuffer_multisample specifies that the implementation must allocate a renderbuffer with the user-specified number of “samples” or greater. That means if an app passes the wrong number in (say it hardcodes a 4) and the user’s GPU does not support 4 samples, or the user’s GPU multi-sample support is blacklisted, the app will fail.&lt;br /&gt;
&lt;br /&gt;
We’d prefer a more permissive API by letting the implementation choose the number of samples so that more apps will succeed.&lt;br /&gt;
&lt;br /&gt;
Ideas&lt;br /&gt;
&lt;br /&gt;
# Leave the API as is. Apps may suddenly fail on different hardware or the same hardware when multi-sampling is blacklisted&lt;br /&gt;
# Leave the API the same but let the implementation choose the actual number of samples. Apps that need to know how many samples were chosen can query how many they got with getRenderbufferParameter.&lt;br /&gt;
# Change the API slightly by providing an enum (high, medium, low, none) as a quality input to renderbufferStorageMultisample instead of specifying the specific number of samples. Implementations can choose their own interpretation of ‘low’, ‘medium’, and ‘high’.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=WorkerCanvas&amp;diff=10083</id>
		<title>WorkerCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=WorkerCanvas&amp;diff=10083"/>
		<updated>2016-08-18T20:33:02Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;This proposal is obsolete.  Please refer to the more up-to-date [[OffscreenCanvas]] proposal.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;Provides more control over how canvases are rendered.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
Feedback from web application authors using canvases has shown the need for the following controls:&lt;br /&gt;
&lt;br /&gt;
* (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.&lt;br /&gt;
* (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.&lt;br /&gt;
* (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.&lt;br /&gt;
&lt;br /&gt;
== WebIDL ==&lt;br /&gt;
&lt;br /&gt;
 [Constructor(unsigned long width, unsigned long height)]&lt;br /&gt;
 interface WorkerCanvas {&lt;br /&gt;
   attribute unsigned long width;&lt;br /&gt;
   attribute unsigned long height;&lt;br /&gt;
   RenderingContext? getContext(DOMString contextId, any... arguments); &lt;br /&gt;
   void toBlob(FileCallback? _callback, optional DOMString type, any... arguments);&lt;br /&gt;
   ImageBitmap transferToImageBitmap();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 WorkerCanvas implements Transferable;&lt;br /&gt;
 ImageBitmap implements Transferable;&lt;br /&gt;
 &lt;br /&gt;
 partial interface HTMLCanvasElement {&lt;br /&gt;
   WorkerCanvas transferControlToWorker();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface ImageBitmap {&lt;br /&gt;
   void transferToImage(HTMLImageElement image);&lt;br /&gt;
 };&lt;br /&gt;
&lt;br /&gt;
== Spec changes ==&lt;br /&gt;
&lt;br /&gt;
Transferring of ImageBitmaps has to be defined. It should neuter the ImageBitmap in the sending thread. Neutering sets the ImageBitmap&#039;s width and height to 0.&lt;br /&gt;
&lt;br /&gt;
HTMLCanvasElement.transferControlToWorker behaves like transferControlToProxy in the current WHATWG spec. WorkerCanvas is Transferable, but transfer fails if transferred other than from the main thread to a worker. All its methods throw if not called on a worker, or if it&#039;s neutered.&lt;br /&gt;
&lt;br /&gt;
ImageBitmap.transferToImage removes the image element&#039;s &amp;quot;src&amp;quot; attribute and makes the image display the contents of the ImageBitmap (until the next transferToImage to that image, or until the image&#039;s &amp;quot;src&amp;quot; attribute is set). The ImageBitmap is neutered.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10071</id>
		<title>CanvasColorSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10071"/>
		<updated>2016-05-14T03:26:24Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Latest version of the proposal has moved to [https://github.com/junov/CanvasColorSpace/blob/master/CanvasColorSpaceProposal.md here] ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
* Contents displayed through a canvas element should be color managed in order to minimize differences in appearance across browsers and display devices. Improving color fidelity matters a lot for artistic uses (e.g. photo and paint apps) and for e-commerce (product presentation).&lt;br /&gt;
* Canvases should be able to take advantage of the full color gamut of the display device.&lt;br /&gt;
* Creative apps that do image manipulation generally prefer compositing, filtering and interpolation calculations to be performed in a linear color space.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
* The color space of canvases is undefined in the current specification.&lt;br /&gt;
* The bit-depth of canvases is currently fixed to 8 bits per component, which is below the capabilities of some monitors. Monitors with higher contrast ratios require more bits per component to avoid banding.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
The lack of color space interoperability is hard to work around. With some browser implementations that color correct images drawn to canvases by applying the display profile, apps that want to use canvases for color corrected image processing are stuck doing convoluted workarounds, such as:&lt;br /&gt;
* reverse-engineer the display profile by drawing test pattern images to the canvas and inspecting the color corrected result via getImageData&lt;br /&gt;
* bypass CanvasRenderingContext2D.drawImage() and use image decoders implemented in JavaScript to extract raw image data that was not tainted by the browser&#039;s color correction behavior.&lt;br /&gt;
&lt;br /&gt;
An aspect of current implementations that is interoperable is that colors match between CSS/HTML and canvases:&lt;br /&gt;
* A color value used as a canvas drawing style will have the same appearance as if the same color value were used as a CSS style&lt;br /&gt;
* An image resource drawn to a canvas element will have the same appearance as if it were displayed as the replaced content of an HTML element or used as a CSS style value.&lt;br /&gt;
&lt;br /&gt;
This color matching behavior needs to be preserved to avoid breaking pre-existing content.&lt;br /&gt;
&lt;br /&gt;
Some implementations convert images drawn to canvases to the sRGB color space. This has the advantage of making the color correction behavior device independent, but it clamps the gamuts of the rendered content to the sRGB gamut, which is significantly narrower than the gamuts of some current consumer devices.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://github.com/whatwg/html/issues/299]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Allow 2dcontexts to use deeper color buffers&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://bugs.chromium.org/p/chromium/issues/detail?id=425935]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Wrong color profile with 2D canvas&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* Engineers from the Google Photos, Maps and Sheets teams have expressed a desire for canvases to become color managed, particularly for the use case of resizing an image using a canvas prior to uploading it to the server, to save bandwidth. The problem is that the images retrieved from a canvas are in an undefined color space and no color space information is encoded by toDataURL or toBlob.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== Proposed solution: CanvasColorSpace ===&lt;br /&gt;
:Add a canvas color space creation parameter that allows user code to choose between backwards compatible behavior and color managed behaviors. The same color space option would exist in the ImageData and ImageBitmap interfaces.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
===== The color-space canvas creation parameter =====&lt;br /&gt;
&lt;br /&gt;
IDL:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
enum CanvasColorSpace {&lt;br /&gt;
  &amp;quot;legacy-srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;rec-2020&amp;quot;,&lt;br /&gt;
  &amp;quot;linear-rec-2020&amp;quot;,&lt;br /&gt;
  &amp;quot;optimal&amp;quot;&lt;br /&gt;
&lt;br /&gt;
dictionary CanvasRenderingContext2DSettings {&lt;br /&gt;
  boolean alpha = true;&lt;br /&gt;
  CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
canvas.getContext(&#039;2d&#039;, { colorSpace: &amp;quot;srgb&amp;quot; })&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== The legacy-srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* Assures backwards compatible behavior&lt;br /&gt;
* Guarantees color matching with CSS and HTML content&lt;br /&gt;
* Color management behavior is implementation specific, may not use strict sRGB space, but is expected to be near sRGB. For example, could be display referred color space.&lt;br /&gt;
* toDataURL/toBlob produce resources with no color profile (backwards compat)&lt;br /&gt;
* Image resources with no color profile are never color corrected (backwards compat). This rule and the previous one allow for lossless toDataURL/drawImage round trips, which is a significant use case.&lt;br /&gt;
&lt;br /&gt;
===== The srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* May break color matching with CSS on implementations that do not color-manage CSS.&lt;br /&gt;
* 8 bit unsigned integers per color component.&lt;br /&gt;
* All content drawn into the canvas must be color corrected to sRGB&lt;br /&gt;
* displayed canvases must be color corrected for the display if a display color profile is available. This color correction happens downstream at the compositing stage, and has no script-visible side-effects.&lt;br /&gt;
* Compositing, filtering and interpolation operations must perform all arithmetic in &#039;&#039;&#039;linear&#039;&#039;&#039; sRGB space.&lt;br /&gt;
* toDataURL/toBlob produce resources tagged as being in the sRGB colorspace&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to already be in the sRGB color space.&lt;br /&gt;
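The requirement above to do compositing arithmetic in linear space matters because averaging gamma-encoded values darkens the result. A sketch using the standard sRGB transfer function (IEC 61966-2-1); the helper names are illustrative:

```javascript
// Standard sRGB <-> linear transfer functions (IEC 61966-2-1).
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}
function linearToSrgb(l) {
  return l <= 0.0031308 ? l * 12.92 : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

// 50/50 blend of black (0.0) and white (1.0), per channel:
const naive = (0.0 + 1.0) / 2;                       // gamma-space average
const correct = linearToSrgb((srgbToLinear(0.0) + srgbToLinear(1.0)) / 2);

console.log(naive.toFixed(3));   // 0.500
console.log(correct.toFixed(3)); // 0.735 — the perceptually lighter, correct result
```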
&lt;br /&gt;
===== The rec-2020 color space =====&lt;br /&gt;
&lt;br /&gt;
* Color space provided for wide gamut support without increasing memory cost.&lt;br /&gt;
* 8 bit unsigned integers per color component.&lt;br /&gt;
* All content drawn into the canvas must be color corrected to the rec-2020 color space.&lt;br /&gt;
* Displayed canvases must be color corrected for the display if a display color profile is available.  This color correction happens downstream at the compositing stage, and has no script-visible side-effects. If there is no display color profile, the user agent should assume the display uses sRGB.&lt;br /&gt;
* Compositing, filtering and interpolation operations must perform all arithmetic in &#039;&#039;&#039;linear&#039;&#039;&#039; rec-2020 space.&lt;br /&gt;
* toDataURL/toBlob produce image resources in the rec-2020 colorspace.&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to be in the sRGB color space, and must therefore be converted from sRGB to rec-2020.&lt;br /&gt;
&lt;br /&gt;
===== The linear-rec-2020 color space =====&lt;br /&gt;
* Color space provided for wide gamut and high dynamic range rendering.&lt;br /&gt;
* User agents may decide not to support the mode, based on host machine capabilities&lt;br /&gt;
* Uses 16-bit floating point representation.&lt;br /&gt;
* The color space corresponds to ITU-R Recommendation BT.2020, &#039;&#039;&#039;without gamma compression&#039;&#039;&#039;.&lt;br /&gt;
* toDataURL/toBlob convert image data to the rec-2020 color space (with gamma), and produce image resources with at least 12 bits per color component, if the format supports it. Thus, in the case of the png format, which supports 8 or 16 bits per component, 16bpc would be used.&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to be in the sRGB color space, and are converted to linear-rec-2020 for the purpose of the draw.&lt;br /&gt;
&lt;br /&gt;
===== The optimal color space =====&lt;br /&gt;
The &amp;quot;optimal&amp;quot; option lets the user agent decide which space is optimal for the current display device based on the device&#039;s capabilities and color profile characteristics. &lt;br /&gt;
* This option will never select &amp;quot;legacy-srgb&amp;quot;.&lt;br /&gt;
* Graphics devices with color gamuts that extend significantly beyond the sRGB color space should cause the UA to favor rec-2020; linear-rec-2020 should be used for displays with high dynamic range.&lt;br /&gt;
* Graphics devices that would not produce noticeably higher quality visual results in rec-2020 or linear-rec-2020 should cause the UA to favor &amp;quot;srgb&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
===== Feature detection =====&lt;br /&gt;
&lt;br /&gt;
Rendering context objects are to expose a new &amp;quot;settings&amp;quot; attribute, which represents the settings that were successfully applied at context creation time.&lt;br /&gt;
&lt;br /&gt;
Note: An alternative approach that was considered was to augment the probablySupportsContext() API by making it check the second argument. That approach is difficult to reconcile with how dictionary arguments are meant to work, where unsupported entries are just ignored.&lt;br /&gt;
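The proposed detection flow — request a color space, then read back the settings attribute to see what was applied — can be sketched with a stub. Only the "settings" attribute is from the proposal; getContextStub and its fallback list are illustrative:

```javascript
// Stub of the proposed flow: unsupported dictionary entries are ignored,
// so the context reports the settings that were actually applied.
function getContextStub(requested) {
  const supported = ['legacy-srgb', 'srgb']; // hypothetical UA capabilities
  const applied = supported.includes(requested.colorSpace)
    ? requested.colorSpace
    : 'legacy-srgb'; // unsupported request falls back
  return { settings: { colorSpace: applied } };
}

const ctx = getContextStub({ colorSpace: 'linear-rec-2020' });
if (ctx.settings.colorSpace !== 'linear-rec-2020') {
  // The UA fell back; the app can adapt (e.g. skip HDR rendering).
  console.log('fell back to', ctx.settings.colorSpace);
}
```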
&lt;br /&gt;
===== ImageBitmap =====&lt;br /&gt;
&lt;br /&gt;
ImageBitmap objects are augmented to have an internal color space attribute of type CanvasColorSpace. The colorSpaceConversion creation attribute is to be augmented with new enum values for coercing conversions to a specific CanvasColorSpace at creation time.&lt;br /&gt;
&lt;br /&gt;
===== ImageData =====&lt;br /&gt;
&lt;br /&gt;
IDL&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
typedef (Uint8ClampedArray or Float32Array) ImageDataArray;&lt;br /&gt;
&lt;br /&gt;
[Constructor(unsigned long sw, unsigned long sh, optional CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;),&lt;br /&gt;
 Constructor(ImageDataArray data, unsigned long sw, optional unsigned long sh, optional CanvasColorSpace colorSpace),&lt;br /&gt;
 Exposed=(Window,Worker)]&lt;br /&gt;
interface ImageData {&lt;br /&gt;
  readonly attribute unsigned long width;&lt;br /&gt;
  readonly attribute unsigned long height;&lt;br /&gt;
  readonly attribute ImageDataArray data;&lt;br /&gt;
  readonly attribute CanvasColorSpace colorSpace;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Uint8ClampedArray if colorSpace is &amp;quot;srgb&amp;quot; or &amp;quot;legacy-srgb&amp;quot;&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Float32Array if colorSpace is &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
* getImageData() produces an ImageData object in the same color space as the source canvas&lt;br /&gt;
* putImageData() performs a color space conversion to the color space of the destination canvas.&lt;br /&gt;
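The data-type dispatch implied by the ImageDataArray union above can be sketched as a factory; makeImageData is an illustrative helper, not a proposed API (the real constructor would enforce this internally):

```javascript
// Maps a CanvasColorSpace to the backing array type per the bullets above:
// 8-bit spaces get Uint8ClampedArray, linear-rec-2020 gets Float32Array.
function makeImageData(sw, sh, colorSpace = 'legacy-srgb') {
  const ArrayType =
    colorSpace === 'linear-rec-2020' ? Float32Array : Uint8ClampedArray;
  return {
    width: sw,
    height: sh,
    colorSpace,
    data: new ArrayType(sw * sh * 4), // 4 components (rgba) per pixel
  };
}

const legacy = makeImageData(2, 2);
const linear = makeImageData(2, 2, 'linear-rec-2020');

console.log(legacy.data instanceof Uint8ClampedArray); // true
console.log(linear.data instanceof Float32Array);      // true
console.log(linear.data.length);                       // 16
```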
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
* No support for arbitrary color spaces and bit depths. The current proposal attempts to solve the problem with a minimal API surface and keeps the implementation scope reasonable; its extensible design allows these capabilities to be added in the future if necessary. The rec-2020 space was chosen for its very wide gamut and its non-virtual primary colors, which strikes a practical balance.&lt;br /&gt;
* toDataURL is lossy when used on a canvas that is in the linear-rec-2020 space. Possible future improvements could solve or mitigate this issue by adding more file formats or adding options to specify the resource color space.&lt;br /&gt;
* ImageData uses float32, which is inefficient due to memory consumption and necessary conversion operations. Float32 was chosen because it is convenient for manipulation (e.g. image processing) due to its native support in JavaScript (and current CPUs). A possible extension would be to add an option for rec-2020 content to be encoded as float16s packed into Uint16 values.&lt;br /&gt;
&lt;br /&gt;
==== Security and privacy issues ====&lt;br /&gt;
Some current implementations of CanvasRenderingContext2D color correct image resources for the display as they are drawn to the canvas. In other words, the canvas is in an output-referred color space. This is a known fingerprinting vulnerability, since it exposes the user&#039;s display color profile to scripts via getImageData. The current proposal does not solve the fingerprinting issue because it will still exist in legacy-srgb. To solve the problem, implementations must color-correct CSS colors; by extension, legacy-srgb mode will then be in the true sRGB color space by virtue of the color matching rules outlined above. When that becomes the case, images drawn to canvases will be color corrected to sRGB, which solves the problem. There is resistance to adopting this model because going through an sRGB intermediate is lossy compared to directly color correcting images for the display in a single pass (it may cause banding and gamut clipping). This feature proposal mitigates the lossiness concern thanks to the linear-rec-2020 option.&lt;br /&gt;
&lt;br /&gt;
==== Implementation notes ==== &lt;br /&gt;
* Because float16 arithmetic is supported by many GPUs, but not by CPUs, implementations should probably opt not to support linear-rec-2020 on hardware that does not provide native float16 support.&lt;br /&gt;
* When available, the srgb color space should use GPU API extensions for sRGB support. This reduces the conversion overhead of performing filtering and compositing in linear space.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
Lack of color management and color interoperability is a longstanding complaint about the canvas API.&lt;br /&gt;
Authors of games and imaging apps are expected to be enthusiastic adopters.&lt;br /&gt;
&lt;br /&gt;
==== History ====&lt;br /&gt;
This proposal was originally incubated in the Khronos 3D Web group, with the participation of engineers from Google, Microsoft, Apple, Nvidia, and others.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10069</id>
		<title>CanvasColorSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10069"/>
		<updated>2016-05-10T21:20:52Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Color managing canvas contents&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
* Contents displayed through a canvas element should be color managed in order to minimize differences in appearance across browsers and display devices. Improving color fidelity matters a lot for artistic uses (e.g. photo and paint apps) and for e-commerce (product presentation).&lt;br /&gt;
* Canvases should be able to take advantage of the full color gamut of the display device.&lt;br /&gt;
* Creative apps that do image manipulation generally prefer compositing, filtering and interpolation calculations to be performed in a linear color space.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
* The color space of canvases is undefined in the current specification.&lt;br /&gt;
* The bit-depth of canvases is currently fixed to 8 bits per component, which is below the capabilities of some monitors. Monitors with higher contrast ratios require more bits per component to avoid banding.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
The lack of color space interoperability is hard to work around. With some browser implementations that color correct images drawn to canvases by applying the display profile, apps that want to use canvases for color corrected image processing are stuck doing convoluted workarounds, such as:&lt;br /&gt;
* reverse-engineer the display profile by drawing test pattern images to the canvas and inspecting the color corrected result via getImageData&lt;br /&gt;
* bypass CanvasRenderingContext2D.drawImage() and use image decoders implemented in JavaScript to extract raw image data that was not tainted by the browser&#039;s color correction behavior.&lt;br /&gt;
&lt;br /&gt;
An aspect of current implementations that is interoperable is that colors match between CSS/HTML and canvases:&lt;br /&gt;
* A color value used as a canvas drawing style will have the same appearance as if the same color value were used as a CSS style&lt;br /&gt;
* An image resource drawn to a canvas element will have the same appearance as if it were displayed as the replaced content of an HTML element or used as a CSS style value.&lt;br /&gt;
&lt;br /&gt;
This color matching behavior needs to be preserved to avoid breaking pre-existing content.&lt;br /&gt;
&lt;br /&gt;
Some implementations convert images drawn to canvases to the sRGB color space. This has the advantage of making the color correction behavior device independent, but it clamps the gamuts of the rendered content to the sRGB gamut, which is significantly narrower than the gamuts of some current consumer devices.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://github.com/whatwg/html/issues/299]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Allow 2dcontexts to use deeper color buffers&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://bugs.chromium.org/p/chromium/issues/detail?id=425935]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Wrong color profile with 2D canvas&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* Engineers from the Google Photos, Maps and Sheets teams have expressed a desire for canvases to become color managed, particularly for the use case of resizing an image with a canvas prior to uploading it to the server, to save bandwidth. The problem is that the images retrieved from a canvas are in an undefined color space and no color space information is encoded by toDataURL or toBlob.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== Proposed solution: CanvasColorSpace ===&lt;br /&gt;
:Add a canvas color space creation parameter that allows user code to choose between backwards-compatible behavior and color-managed behaviors. The same color space option would exist in the ImageData and ImageBitmap interfaces.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
===== The color-space canvas creation parameter =====&lt;br /&gt;
&lt;br /&gt;
IDL:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
enum CanvasColorSpace {&lt;br /&gt;
  &amp;quot;legacy-srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;linear-rec-2020&amp;quot;,&lt;br /&gt;
  &amp;quot;optimal&amp;quot;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
dictionary CanvasRenderingContext2DSettings {&lt;br /&gt;
  boolean alpha = true;&lt;br /&gt;
  CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
canvas.getContext(&#039;2d&#039;, { colorSpace: &amp;quot;srgb&amp;quot; })&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== The legacy-srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* Assures backwards compatible behavior&lt;br /&gt;
* Guarantees color matching with CSS and HTML content&lt;br /&gt;
* Color management behavior is implementation specific; it may not use the strict sRGB space, but is expected to be near sRGB. For example, it could be a display-referred color space.&lt;br /&gt;
* toDataURL/toBlob produce resources with no color profile (backwards compat)&lt;br /&gt;
* Image resources with no color profile are never color corrected (backwards compat). This rule and the previous one allow for lossless toDataURL/drawImage round trips, which is a significant use case.&lt;br /&gt;
&lt;br /&gt;
===== The srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* May break color matching with CSS on implementations that do not color-manage CSS.&lt;br /&gt;
* 8 bit unsigned integers per color component.&lt;br /&gt;
* All content drawn into the canvas must be color corrected to sRGB&lt;br /&gt;
* Displayed canvases must be color corrected for the display if a display color profile is available. This color correction happens downstream at the compositing stage, and has no script-visible side-effects.&lt;br /&gt;
* Compositing, filtering and interpolation operations must perform all arithmetic in &#039;&#039;&#039;linear&#039;&#039;&#039; sRGB space.&lt;br /&gt;
* toDataURL/toBlob produce resources tagged as being in the sRGB colorspace&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to already be in the sRGB color space.&lt;br /&gt;
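The &#039;&#039;&#039;linear&#039;&#039;&#039;-arithmetic requirement above can be sketched in JavaScript using the standard sRGB transfer function (IEC 61966-2-1); the helper names here are hypothetical, for illustration only:&lt;br /&gt;

```javascript
// Decode an sRGB-encoded component (normalized to [0, 1]) to linear light.
function srgbToLinear(c) {
  return c > 0.04045 ? Math.pow((c + 0.055) / 1.055, 2.4) : c / 12.92;
}

// Re-encode a linear component back to sRGB gamma.
function linearToSrgb(c) {
  return c > 0.0031308 ? 1.055 * Math.pow(c, 1 / 2.4) - 0.055 : 12.92 * c;
}

// A 50/50 blend done as the proposal requires: decode, average in linear
// space, then re-encode. Averaging the encoded values directly would be
// noticeably darker, which is exactly the artifact this rule avoids.
function blendLinear(a, b) {
  return linearToSrgb(0.5 * (srgbToLinear(a) + srgbToLinear(b)));
}
```

For example, blendLinear(0, 1) is roughly 0.735 rather than the naive 0.5, because equal parts of black and white are perceptually brighter than mid-gray gamma-encoded values suggest.&lt;br /&gt;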
&lt;br /&gt;
===== The rec-2020 color space =====&lt;br /&gt;
&lt;br /&gt;
* Color space provided for wide gamut support without increasing memory cost.&lt;br /&gt;
* 8 bit unsigned integers per color component.&lt;br /&gt;
* All content drawn into the canvas must be color corrected to the rec-2020 color space.&lt;br /&gt;
* Displayed canvases must be color corrected for the display if a display color profile is available.  This color correction happens downstream at the compositing stage, and has no script-visible side-effects. If there is no display color profile, the user agent should assume the display uses sRGB.&lt;br /&gt;
* Compositing, filtering and interpolation operations must perform all arithmetic in &#039;&#039;&#039;linear&#039;&#039;&#039; rec-2020 space.&lt;br /&gt;
* toDataURL/toBlob produce image resources in the rec-2020 colorspace.&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to be in the sRGB color space, and must therefore be converted from sRGB to rec-2020.&lt;br /&gt;
&lt;br /&gt;
===== The linear-rec-2020 color space =====&lt;br /&gt;
* Color space provided for wide gamut and high dynamic range rendering.&lt;br /&gt;
* User agents may decide not to support the mode, based on host machine capabilities&lt;br /&gt;
* Uses 16-bit floating point representation.&lt;br /&gt;
* The color space corresponds to ITU-R Recommendation BT.2020, &#039;&#039;&#039;without gamma compression&#039;&#039;&#039;.&lt;br /&gt;
* toDataURL/toBlob convert image data to the rec-2020 color space (with gamma), and produce image resources with at least 12 bits per color component, if the format supports it. Thus, in the case of the png format, which supports 8 or 16 bits per component, 16bpc would be used.&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to be in the sRGB color space, and are converted to linear-rec-2020 for the purpose of the draw.&lt;br /&gt;
&lt;br /&gt;
===== The optimal color space =====&lt;br /&gt;
The &amp;quot;optimal&amp;quot; option lets the user agent decide which space is optimal for the current display device based on the device&#039;s capabilities and color profile characteristics. &lt;br /&gt;
* This option never selects &amp;quot;legacy-srgb&amp;quot;&lt;br /&gt;
* Graphics devices with color gamuts that extend significantly beyond the sRGB color space should cause the UA to favor rec-2020, and linear-rec-2020 should be used for displays with high dynamic range.&lt;br /&gt;
* Graphics devices that would not produce noticeably higher quality visual results in rec-2020 or linear-rec-2020 should cause the UA to favor &amp;quot;srgb&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
===== Feature detection =====&lt;br /&gt;
&lt;br /&gt;
Rendering context objects are to expose a new &amp;quot;settings&amp;quot; attribute, which represents the settings that were successfully applied at context creation time.&lt;br /&gt;
&lt;br /&gt;
Note: An alternative approach that was considered was to augment the probablySupportsContext() API by making it check the second argument. That approach is difficult to reconcile with how dictionary arguments are meant to work, where unsupported entries are simply ignored.&lt;br /&gt;
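A minimal feature-detection sketch under this proposal (the settings attribute and the colorSpace member are proposal-stage names, not a shipped API):&lt;br /&gt;

```javascript
// Request a 2D context in a given color space and verify, via the proposed
// settings attribute, that the request was actually honored. Dictionary
// entries an implementation does not understand are silently ignored, which
// is why reading back the applied settings is the reliable check.
function getColorManagedContext(canvas, requestedSpace) {
  var ctx = canvas.getContext('2d', { colorSpace: requestedSpace });
  if (ctx !== null) {
    if (ctx.settings.colorSpace === requestedSpace) {
      return ctx;
    }
  }
  return null; // requested color space not supported; caller can fall back
}
```

A caller would typically try &amp;quot;linear-rec-2020&amp;quot; first and fall back to &amp;quot;srgb&amp;quot; or &amp;quot;legacy-srgb&amp;quot; when null is returned.&lt;br /&gt;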
&lt;br /&gt;
===== ImageBitmap =====&lt;br /&gt;
&lt;br /&gt;
ImageBitmap objects are augmented to have an internal color space attribute of type CanvasColorSpace. The colorSpaceConversion creation attribute is to be augmented with new enum values for coercing conversions to a specific CanvasColorSpace at creation time.&lt;br /&gt;
&lt;br /&gt;
===== ImageData =====&lt;br /&gt;
&lt;br /&gt;
IDL&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
typedef (Uint8ClampedArray or Float32Array) ImageDataArray;&lt;br /&gt;
&lt;br /&gt;
[Constructor(unsigned long sw, unsigned long sh, optional CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;),&lt;br /&gt;
 Constructor(ImageDataArray data, unsigned long sw, optional unsigned long sh, optional CanvasColorSpace colorSpace),&lt;br /&gt;
 Exposed=(Window,Worker)]&lt;br /&gt;
interface ImageData {&lt;br /&gt;
  readonly attribute unsigned long width;&lt;br /&gt;
  readonly attribute unsigned long height;&lt;br /&gt;
  readonly attribute ImageDataArray data;&lt;br /&gt;
  readonly attribute CanvasColorSpace colorSpace;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Uint8ClampedArray if colorSpace is &amp;quot;srgb&amp;quot; or &amp;quot;legacy-srgb&amp;quot;&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Float32Array if colorSpace is &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
* getImageData() produces an ImageData object in the same color space as the source canvas&lt;br /&gt;
* putImageData() performs a color space conversion to the color space of the destination canvas.&lt;br /&gt;
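The backing-store selection rule above can be sketched as a small helper (a hypothetical name, illustrating the proposed ImageDataArray behavior rather than a shipped API):&lt;br /&gt;

```javascript
// Allocate the proposed ImageDataArray for a given color space:
// 8-bit spaces use Uint8ClampedArray, linear-rec-2020 uses Float32Array.
// Each pixel has four components (RGBA), matching ImageData's layout.
function makeImageDataArray(colorSpace, width, height) {
  var length = width * height * 4;
  if (colorSpace === 'srgb' || colorSpace === 'legacy-srgb') {
    return new Uint8ClampedArray(length);
  }
  if (colorSpace === 'linear-rec-2020') {
    return new Float32Array(length);
  }
  throw new Error('unsupported color space: ' + colorSpace);
}
```

Scripts that branch on ImageData.colorSpace can use the same rule to decide whether component values are 0-255 integers or linear floating-point values.&lt;br /&gt;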
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
* No support for arbitrary color spaces and bit depths. The current proposal attempts to solve the problem with a minimal API surface and keeps the implementation scope reasonable; its extensible design allows these capabilities to be added in the future if necessary. The rec-2020 space was chosen for its very wide gamut and its non-virtual primary colors, which strikes a practical balance.&lt;br /&gt;
* toDataURL is lossy when used on a canvas that is in the linear-rec-2020 space. Possible future improvements could solve or mitigate this issue by adding more file formats or adding options to specify the resource color space.&lt;br /&gt;
* ImageData uses float32, which is inefficient due to memory consumption and necessary conversion operations. Float32 was chosen because it is convenient for manipulation (e.g. image processing) due to its native support in JavaScript (and current CPUs). A possible extension would be to add an option for rec-2020 content to be encoded as float16s packed into Uint16 values.&lt;br /&gt;
&lt;br /&gt;
==== Security and privacy issues ====&lt;br /&gt;
Some current implementations of CanvasRenderingContext2D color correct image resources for the display as they are drawn to the canvas. In other words, the canvas is in an output-referred color space. This is a known fingerprinting vulnerability, since it exposes the user&#039;s display color profile to scripts via getImageData. The current proposal does not solve the fingerprinting issue because it will still exist in legacy-srgb. To solve the problem, implementations must color-correct CSS colors; by extension, legacy-srgb mode will then be in the true sRGB color space by virtue of the color matching rules outlined above. When that becomes the case, images drawn to canvases will be color corrected to sRGB, which solves the problem. There is resistance to adopting this model because going through an sRGB intermediate is lossy compared to directly color correcting images for the display in a single pass (it may cause banding and gamut clipping). This feature proposal mitigates the lossiness concern thanks to the linear-rec-2020 option.&lt;br /&gt;
&lt;br /&gt;
==== Implementation notes ==== &lt;br /&gt;
* Because float16 arithmetic is supported by many GPUs, but not by CPUs, implementations should probably opt not to support linear-rec-2020 on hardware that does not provide native float16 support.&lt;br /&gt;
* When available, the srgb color space should use GPU API extensions for sRGB support. This reduces the conversion overhead of performing filtering and compositing in linear space.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
Lack of color management and color interoperability is a longstanding complaint about the canvas API.&lt;br /&gt;
Authors of games and imaging apps are expected to be enthusiastic adopters.&lt;br /&gt;
&lt;br /&gt;
==== History ====&lt;br /&gt;
This proposal was originally incubated in the Khronos 3D Web group, with the participation of engineers from Google, Microsoft, Apple, Nvidia, and others.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10067</id>
		<title>CanvasColorSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10067"/>
		<updated>2016-05-09T15:05:35Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Color managing canvas contents&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
* Contents displayed through a canvas element should be color managed in order to minimize differences in appearance across browsers and display devices. Improving color fidelity matters a lot for artistic uses (e.g. photo and paint apps) and for e-commerce (product presentation).&lt;br /&gt;
* Canvases should be able to take advantage of the full color gamut of the display device.&lt;br /&gt;
* Creative apps that do image manipulation generally prefer compositing, filtering and interpolation calculations to be performed in a linear color space.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
* The color space of canvases is undefined in the current specification.&lt;br /&gt;
* The bit-depth of canvases is currently fixed to 8 bits per component, which is below the capabilities of some monitors. Monitors with higher contrast ratios require more bits per component to avoid banding.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
The lack of color space interoperability is hard to work around. With some browser implementations that color correct images drawn to canvases by applying the display profile, apps that want to use canvases for color corrected image processing are stuck doing convoluted workarounds, such as:&lt;br /&gt;
* reverse-engineer the display profile by drawing test pattern images to the canvas and inspecting the color corrected result via getImageData&lt;br /&gt;
* bypass CanvasRenderingContext2D.drawImage() and use image decoders implemented in JavaScript to extract raw image data that was not tainted by the browser&#039;s color correction behavior.&lt;br /&gt;
&lt;br /&gt;
An aspect of current implementations that is interoperable is that colors match between CSS/HTML and canvases:&lt;br /&gt;
* A color value used as a canvas drawing style will have the same appearance as if the same color value were used as a CSS style&lt;br /&gt;
* An image resource drawn to a canvas element will have the same appearance as if it were displayed as the replaced content of an HTML element or used as a CSS style value.&lt;br /&gt;
&lt;br /&gt;
This color matching behavior needs to be preserved to avoid breaking pre-existing content.&lt;br /&gt;
&lt;br /&gt;
Some implementations convert images drawn to canvases to the sRGB color space. This has the advantage of making the color correction behavior device independent, but it clamps the gamuts of the rendered content to the sRGB gamut, which is significantly narrower than the gamuts of some current consumer devices.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://github.com/whatwg/html/issues/299]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Allow 2dcontexts to use deeper color buffers&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://bugs.chromium.org/p/chromium/issues/detail?id=425935]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Wrong color profile with 2D canvas&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* Engineers from the Google Photos, Maps and Sheets teams have expressed a desire for canvases to become color managed, particularly for the use case of resizing an image with a canvas prior to uploading it to the server, to save bandwidth. The problem is that the images retrieved from a canvas are in an undefined color space and no color space information is encoded by toDataURL or toBlob.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== Proposed solution: CanvasColorSpace ===&lt;br /&gt;
:Add a canvas color space creation parameter that allows user code to choose between backwards-compatible behavior and color-managed behaviors. The same color space option would exist in the ImageData and ImageBitmap interfaces.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
===== The color-space canvas creation parameter =====&lt;br /&gt;
&lt;br /&gt;
IDL:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
enum CanvasColorSpace {&lt;br /&gt;
  &amp;quot;legacy-srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;linear-rec-2020&amp;quot;,&lt;br /&gt;
  &amp;quot;optimal&amp;quot;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
dictionary CanvasRenderingContext2DSettings {&lt;br /&gt;
  boolean alpha = true;&lt;br /&gt;
  CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
canvas.getContext(&#039;2d&#039;, { colorSpace: &amp;quot;srgb&amp;quot; })&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== The legacy-srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* Assures backwards compatible behavior&lt;br /&gt;
* Guarantees color matching with CSS and HTML content&lt;br /&gt;
* Color management behavior is implementation specific; it may not use the strict sRGB space, but is expected to be near sRGB. For example, it could be a display-referred color space.&lt;br /&gt;
* toDataURL/toBlob produce resources with no color profile (backwards compat)&lt;br /&gt;
* Image resources with no color profile are never color corrected (backwards compat). This rule and the previous one allow for lossless toDataURL/drawImage round trips, which is a significant use case.&lt;br /&gt;
&lt;br /&gt;
===== The srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* May break color matching with CSS on implementations that do not color-manage CSS.&lt;br /&gt;
* 8 bit unsigned integers per color component.&lt;br /&gt;
* All content drawn into the canvas must be color corrected to sRGB&lt;br /&gt;
* Displayed canvases must be color corrected for the display if a display color profile is available. This color correction happens downstream at the compositing stage, and has no script-visible side-effects.&lt;br /&gt;
* Compositing, filtering and interpolation operations must perform all arithmetic in &#039;&#039;&#039;linear&#039;&#039;&#039; sRGB space.&lt;br /&gt;
* toDataURL/toBlob produce resources tagged as being in the sRGB colorspace&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to already be in the sRGB color space.&lt;br /&gt;
&lt;br /&gt;
===== The linear-rec-2020 color space =====&lt;br /&gt;
* Color space provided for wide gamut and high dynamic range rendering.&lt;br /&gt;
* User agents may decide not to support the mode, based on host machine capabilities&lt;br /&gt;
* Uses 16-bit floating point representation.&lt;br /&gt;
* The color space corresponds to ITU-R Recommendation BT.2020, &#039;&#039;&#039;without gamma compression&#039;&#039;&#039;.&lt;br /&gt;
* toDataURL/toBlob convert image data to the rec-2020 color space (with gamma), and produce image resources with at least 12 bits per color component, if the format supports it. Thus, in the case of the png format, which supports 8 or 16 bits per component, 16bpc would be used.&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to be in the sRGB color space, and are converted to linear-rec-2020 for the purpose of the draw.&lt;br /&gt;
&lt;br /&gt;
===== The optimal color space =====&lt;br /&gt;
The &amp;quot;optimal&amp;quot; option lets the user agent decide which space is optimal for the current display device based on the device&#039;s capabilities and color profile characteristics. &lt;br /&gt;
* This option never selects &amp;quot;legacy-srgb&amp;quot;&lt;br /&gt;
* Graphics devices with color gamuts and/or contrast ratios that extend significantly beyond the sRGB color space should cause the UA to favor linear-rec-2020, to avoid undue gamut clipping and/or banding.&lt;br /&gt;
* Graphics devices that would not produce noticeably higher quality visual results in linear-rec-2020 should cause the UA to favor &amp;quot;srgb&amp;quot; to save on memory consumption.&lt;br /&gt;
&lt;br /&gt;
===== Feature detection =====&lt;br /&gt;
&lt;br /&gt;
Rendering context objects are to expose a new &amp;quot;settings&amp;quot; attribute, which represents the settings that were successfully applied at context creation time.&lt;br /&gt;
&lt;br /&gt;
Note: An alternative approach that was considered was to augment the probablySupportsContext() API by making it check the second argument. That approach is difficult to reconcile with how dictionary arguments are meant to work, where unsupported entries are simply ignored.&lt;br /&gt;
&lt;br /&gt;
===== ImageBitmap =====&lt;br /&gt;
&lt;br /&gt;
ImageBitmap objects are augmented to have an internal color space attribute of type CanvasColorSpace. The colorSpaceConversion creation attribute is to be augmented with new enum values for coercing conversions to a specific CanvasColorSpace at creation time.&lt;br /&gt;
&lt;br /&gt;
===== ImageData =====&lt;br /&gt;
&lt;br /&gt;
IDL&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
typedef (Uint8ClampedArray or Float32Array) ImageDataArray;&lt;br /&gt;
&lt;br /&gt;
[Constructor(unsigned long sw, unsigned long sh, optional CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;),&lt;br /&gt;
 Constructor(ImageDataArray data, unsigned long sw, optional unsigned long sh, optional CanvasColorSpace colorSpace),&lt;br /&gt;
 Exposed=(Window,Worker)]&lt;br /&gt;
interface ImageData {&lt;br /&gt;
  readonly attribute unsigned long width;&lt;br /&gt;
  readonly attribute unsigned long height;&lt;br /&gt;
  readonly attribute ImageDataArray data;&lt;br /&gt;
  readonly attribute CanvasColorSpace colorSpace;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Uint8ClampedArray if colorSpace is &amp;quot;srgb&amp;quot; or &amp;quot;legacy-srgb&amp;quot;&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Float32Array if colorSpace is &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
* getImageData() produces an ImageData object in the same color space as the source canvas&lt;br /&gt;
* putImageData() performs a color space conversion to the color space of the destination canvas.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
* No support for arbitrary color spaces and bit depths. The current proposal attempts to solve the problem with a minimal API surface and keeps the implementation scope reasonable; its extensible design allows these capabilities to be added in the future if necessary. The rec-2020 space was chosen for its very wide gamut and its non-virtual primary colors, which strikes a practical balance.&lt;br /&gt;
* toDataURL is lossy when used on a canvas that is in the linear-rec-2020 space. Possible future improvements could solve or mitigate this issue by adding more file formats or adding options to specify the resource color space.&lt;br /&gt;
* ImageData uses float32, which is inefficient due to memory consumption and necessary conversion operations. Float32 was chosen because it is convenient for manipulation (e.g. image processing) due to its native support in JavaScript (and current CPUs). A possible extension would be to add an option for rec-2020 content to be encoded as float16s packed into Uint16 values.&lt;br /&gt;
&lt;br /&gt;
==== Security and privacy issues ====&lt;br /&gt;
Some current implementations of CanvasRenderingContext2D color correct image resources for the display as they are drawn to the canvas. In other words, the canvas is in an output-referred color space. This is a known fingerprinting vulnerability, since it exposes the user&#039;s display color profile to scripts via getImageData. The current proposal does not solve the fingerprinting issue because it will still exist in legacy-srgb. To solve the problem, implementations must color-correct CSS colors; by extension, legacy-srgb mode will then be in the true sRGB color space by virtue of the color matching rules outlined above. When that becomes the case, images drawn to canvases will be color corrected to sRGB, which solves the problem. There is resistance to adopting this model because going through an sRGB intermediate is lossy compared to directly color correcting images for the display in a single pass (it may cause banding and gamut clipping). This feature proposal mitigates the lossiness concern thanks to the linear-rec-2020 option.&lt;br /&gt;
&lt;br /&gt;
==== Implementation notes ==== &lt;br /&gt;
* Because float16 arithmetic is supported by many GPUs, but not by CPUs, implementations should probably opt not to support linear-rec-2020 on hardware that does not provide native float16 support.&lt;br /&gt;
* When available, the srgb color space should use GPU API extensions for sRGB support. This reduces the conversion overhead of performing filtering and compositing in linear space.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
Lack of color management and color interoperability is a longstanding complaint about the canvas API.&lt;br /&gt;
Authors of games and imaging apps are expected to be enthusiastic adopters.&lt;br /&gt;
&lt;br /&gt;
==== History ====&lt;br /&gt;
This proposal was originally incubated in the Khronos 3D Web group, with the participation of engineers from Google, Microsoft, Apple, Nvidia, and others.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10066</id>
		<title>CanvasColorSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10066"/>
		<updated>2016-05-07T15:32:27Z</updated>

		<summary type="html">&lt;p&gt;Junov: /* The linear-rec-2020 color space */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Color managing canvas contents&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
* Contents displayed through a canvas element should be color managed in order to minimize differences in appearance across browsers and display devices. Improving color fidelity matters a lot for artistic uses (e.g. photo and paint apps) and for e-commerce (product presentation).&lt;br /&gt;
* Canvases should be able to take advantage of the full color gamut of the display device.&lt;br /&gt;
* Creative apps that do image manipulation generally prefer compositing, filtering and interpolation calculations to be performed in a linear color space.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
* The color space of canvases is undefined in the current specification.&lt;br /&gt;
* The bit-depth of canvases is currently fixed to 8 bits per component, which is below the capabilities of some monitors. Monitors with higher contrast ratios require more bits per component to avoid banding.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
The lack of color space interoperability is hard to work around. On browser implementations that color correct images drawn to canvases by applying the display profile, apps that want to use canvases for color corrected image processing are stuck with convoluted workarounds, such as:&lt;br /&gt;
* reverse-engineer the display profile by drawing test pattern images to the canvas and inspecting the color corrected result via getImageData&lt;br /&gt;
* bypass CanvasRenderingContext2D.drawImage() and use image decoders implemented in JavaScript to extract raw image data that was not tainted by the browser&#039;s color correction behavior.&lt;br /&gt;
&lt;br /&gt;
An aspect of current implementations that is interoperable is that colors match between CSS/HTML and canvases:&lt;br /&gt;
* A color value used as a canvas drawing style will have the same appearance as if the same color value were used as a CSS style&lt;br /&gt;
* An image resource drawn to a canvas element will have the same appearance as if it were displayed as the replaced content of an HTML element or used as a CSS style value.&lt;br /&gt;
&lt;br /&gt;
This color matching behavior needs to be preserved to avoid breaking pre-existing content.&lt;br /&gt;
&lt;br /&gt;
Some implementations convert images drawn to canvases to the sRGB color space. This has the advantage of making the color correction behavior device independent, but it clamps the gamuts of the rendered content to the sRGB gamut, which is significantly narrower than the gamuts of some current consumer devices.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://github.com/whatwg/html/issues/299]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Allow 2dcontexts to use deeper color buffers&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://bugs.chromium.org/p/chromium/issues/detail?id=425935]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Wrong color profile with 2D canvas&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* Engineers from the Google Photos, Maps and Sheets teams have expressed a desire for canvases to become color managed, particularly for the use case of resizing an image with a canvas prior to uploading it to the server, to save bandwidth. The problem is that images retrieved from a canvas are in an undefined color space, and no color space information is encoded by toDataURL or toBlob.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== Proposed solution: CanvasColorSpace ===&lt;br /&gt;
:Add a canvas color space creation parameter that allows user code to choose between backwards compatible behavior and color managed behaviors. The same color space option would exist in the ImageData and ImageBitmap interfaces.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
===== The color-space canvas creation parameter =====&lt;br /&gt;
&lt;br /&gt;
IDL:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
enum CanvasColorSpace {&lt;br /&gt;
  &amp;quot;legacy-srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;linear-rec-2020&amp;quot;,&lt;br /&gt;
  &amp;quot;optimal&amp;quot;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
dictionary CanvasRenderingContext2DSettings {&lt;br /&gt;
  boolean alpha = true;&lt;br /&gt;
  CanvasColorSpace color-space = &amp;quot;legacy-srgb&amp;quot;;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
canvas.getContext(&#039;2d&#039;, { &#039;color-space&#039;: &amp;quot;srgb&amp;quot; })&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== The legacy-srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* Assures backwards compatible behavior&lt;br /&gt;
* Guarantees color matching with CSS and HTML content&lt;br /&gt;
* Color management behavior is implementation specific; it may not use the strict sRGB space, but is expected to be near sRGB (for example, a display referred color space).&lt;br /&gt;
* toDataURL/toBlob produce resources with no color profile (backwards compat)&lt;br /&gt;
* Image resources with no color profile are never color corrected (backwards compat). This rule and the previous one allow for lossless toDataURL/drawImage round trips, which is a significant use case.&lt;br /&gt;
&lt;br /&gt;
===== The srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* May break color matching with CSS on implementations that do not color-manage CSS.&lt;br /&gt;
* 8 bit unsigned integers per color component.&lt;br /&gt;
* All content drawn into the canvas must be color corrected to sRGB.&lt;br /&gt;
* Displayed canvases must be color corrected for the display if a display color profile is available. This color correction happens downstream at the compositing stage, and has no script-visible side-effects.&lt;br /&gt;
* Compositing, filtering and interpolation operations must perform all arithmetic in &#039;&#039;&#039;linear&#039;&#039;&#039; sRGB space.&lt;br /&gt;
* toDataURL/toBlob produce resources tagged as being in the sRGB colorspace&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to already be in the sRGB color space.&lt;br /&gt;
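The requirement that compositing, filtering and interpolation happen in linear space can be sketched with the standard sRGB transfer functions (helper names are illustrative, not part of the proposal):

```javascript
// Decode an 8-bit sRGB component (0-255) to linear light (0.0-1.0).
function srgbToLinear(c8) {
  const c = c8 / 255;
  return c > 0.04045 ? Math.pow((c + 0.055) / 1.055, 2.4) : c / 12.92;
}

// Encode linear light back to an 8-bit sRGB component.
function linearToSrgb(lin) {
  const c = lin > 0.0031308 ? 1.055 * Math.pow(lin, 1 / 2.4) - 0.055 : lin * 12.92;
  return Math.round(Math.min(1, Math.max(0, c)) * 255);
}

// Averaging two pixels (e.g. for a filter) must happen in linear space:
function averageLinear(a8, b8) {
  return linearToSrgb((srgbToLinear(a8) + srgbToLinear(b8)) / 2);
}
```

For example, averaging the 8-bit values 0 and 255 in linear space yields 188, not the 128 a naive byte average would give; that difference is what the linear-arithmetic requirement captures.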
&lt;br /&gt;
===== The linear-rec-2020 color space =====&lt;br /&gt;
* Color space provided for wide gamut and high dynamic range rendering.&lt;br /&gt;
* User agents may decide not to support the mode, based on host machine capabilities&lt;br /&gt;
* Uses 16-bit floating point representation.&lt;br /&gt;
* The color space corresponds to ITU-R Recommendation BT.2020, &#039;&#039;&#039;without gamma compression&#039;&#039;&#039;.&lt;br /&gt;
* toDataURL/toBlob convert image data to sRGB and produce image resources tagged as being in the sRGB color space.&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to be in the sRGB color space, and are converted to linear-rec-2020 for the purpose of the draw.&lt;br /&gt;
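The conversion described in the last bullet can be sketched as follows. The matrix is the commonly cited linear BT.709-to-BT.2020 primary conversion, rounded to four decimals; function names are illustrative:

```javascript
// Commonly cited linear BT.709 (sRGB primaries) to linear BT.2020 matrix.
const SRGB_TO_REC2020 = [
  [0.6274, 0.3293, 0.0433],
  [0.0691, 0.9195, 0.0114],
  [0.0164, 0.0880, 0.8956],
];

// Decode an sRGB component in 0-1 to linear light.
function srgbDecode(c) {
  return c > 0.04045 ? Math.pow((c + 0.055) / 1.055, 2.4) : c / 12.92;
}

// [r, g, b] in 0-1 sRGB to [r, g, b] in linear-rec-2020.
function srgbToLinearRec2020(rgb) {
  const lin = rgb.map(srgbDecode);
  return SRGB_TO_REC2020.map(
    (row) => row[0] * lin[0] + row[1] * lin[1] + row[2] * lin[2]
  );
}
```

Since both spaces share the D65 white point, white maps to white, while saturated sRGB primaries land strictly inside the Rec. 2020 gamut.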
&lt;br /&gt;
===== The optimal color space =====&lt;br /&gt;
The &amp;quot;optimal&amp;quot; option lets the user agent decide which space is optimal for the current display device based on the device&#039;s capabilities and color profile characteristics. &lt;br /&gt;
* This option must never select &amp;quot;legacy-srgb&amp;quot;.&lt;br /&gt;
* Graphics devices with color gamuts and/or contrast ratios that extend significantly beyond the sRGB color space should cause the UA to favor linear-rec-2020, to avoid undue gamut clipping and/or banding.&lt;br /&gt;
* Graphics devices that would not produce noticeably higher quality visual results in the linear-rec-2020 space should cause the UA to favor &amp;quot;srgb&amp;quot;, to save on memory consumption.&lt;br /&gt;
&lt;br /&gt;
===== Feature detection =====&lt;br /&gt;
&lt;br /&gt;
Rendering context objects are to expose a new &amp;quot;settings&amp;quot; attribute, which represents the settings that were successfully applied at context creation time.&lt;br /&gt;
&lt;br /&gt;
Note: An alternative approach that was considered was to augment the probablySupportsContext() API by making it check the second argument. That approach is difficult to reconcile with how dictionary arguments are meant to work, where unsupported entries are just ignored.&lt;br /&gt;
&lt;br /&gt;
===== ImageBitmap =====&lt;br /&gt;
&lt;br /&gt;
ImageBitmap objects are augmented to have an internal color space attribute of type CanvasColorSpace. The colorSpaceConversion creation attribute is to be augmented with new enum values for coercing conversions to a specific CanvasColorSpace at creation time.&lt;br /&gt;
&lt;br /&gt;
===== ImageData =====&lt;br /&gt;
&lt;br /&gt;
IDL&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
typedef (Uint8ClampedArray or Float32Array) ImageDataArray;&lt;br /&gt;
&lt;br /&gt;
[Constructor(unsigned long sw, unsigned long sh, optional CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;),&lt;br /&gt;
 Constructor(ImageDataArray data, unsigned long sw, optional unsigned long sh, optional CanvasColorSpace colorSpace),&lt;br /&gt;
 Exposed=(Window,Worker)]&lt;br /&gt;
interface ImageData {&lt;br /&gt;
  readonly attribute unsigned long width;&lt;br /&gt;
  readonly attribute unsigned long height;&lt;br /&gt;
  readonly attribute ImageDataArray data;&lt;br /&gt;
  readonly attribute CanvasColorSpace colorSpace;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Uint8ClampedArray if colorSpace is &amp;quot;srgb&amp;quot; or &amp;quot;legacy-srgb&amp;quot;&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Float32Array if colorSpace is &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
* getImageData() produces an ImageData object in the same color space as the source canvas&lt;br /&gt;
* putImageData() performs a color space conversion to the color space of the destination canvas.&lt;br /&gt;
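The data-type rules above can be modeled with a small sketch (a stand-in for the real constructor behavior, not an actual API):

```javascript
// Model of how the ImageData "data" array type follows the color space,
// per the bullets above (name is illustrative, not the real constructor).
function makeImageDataArray(width, height, colorSpace) {
  const len = width * height * 4; // RGBA, four components per pixel
  switch (colorSpace) {
    case 'legacy-srgb':
    case 'srgb':
      return new Uint8ClampedArray(len); // 8 bits per component
    case 'linear-rec-2020':
      return new Float32Array(len);      // one float per component
    default:
      throw new TypeError('unknown CanvasColorSpace: ' + colorSpace);
  }
}
```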
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
* No support for arbitrary color spaces and bit depths. The current proposal attempts to solve the problem with a minimal API surface and keeps the implementation scope reasonable; the extensible design allows the capabilities to be extended in the future if necessary. The rec-2020 space was chosen for its very wide gamut and its non-virtual primary colors, which strikes a balance that is deemed practical.&lt;br /&gt;
* toDataURL is lossy when used on a canvas that is in the linear-rec-2020 space. Possible future improvements could solve or mitigate this issue by adding more file formats or adding options to specify the resource color space.&lt;br /&gt;
* ImageData uses float32, which is inefficient due to memory consumption and necessary conversion operations. Float32 was chosen because it is convenient for manipulation (e.g. image processing) thanks to its native support in JavaScript (and current CPUs). A possible extension would be to add an option for rec-2020 content to be encoded as float16s packed into Uint16 values.&lt;br /&gt;
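The float16-packing extension mentioned above could look roughly like this (an arithmetic sketch of IEEE 754 half-precision packing; it truncates rather than rounds, clamps out-of-range values, and does not handle NaN or infinities):

```javascript
// Sketch: encode a finite number as IEEE 754 half-float bits in a Uint16.
function toFloat16(value) {
  if (value === 0) return 0;
  const sign = value >= 0 ? 0 : 0x8000;
  const mag = Math.abs(value);
  // Unbiased exponent, clamped to the normal half-float range [-14, 15].
  const exp = Math.max(-14, Math.min(15, Math.floor(Math.log2(mag))));
  // 10-bit mantissa, truncated toward zero and clamped.
  const mant = Math.min(
    1023,
    Math.max(0, Math.floor((mag / Math.pow(2, exp) - 1) * 1024))
  );
  return sign + (exp + 15) * 1024 + mant;
}

// Decode half-float bits back to a JS number.
function fromFloat16(bits) {
  const sign = bits >= 0x8000 ? -1 : 1;
  const rest = bits % 0x8000;
  const exp = Math.floor(rest / 1024) - 15;
  const mant = rest % 1024;
  if (exp === -15) return sign * mant * Math.pow(2, -24); // subnormal
  return sign * (1 + mant / 1024) * Math.pow(2, exp);
}
```

Values exactly representable in half precision (powers of two, small integers) round-trip exactly; everything else loses mantissa bits, which is the trade-off the extension would accept in exchange for halving memory use.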
&lt;br /&gt;
==== Security and privacy issues ====&lt;br /&gt;
Some current implementations of CanvasRenderingContext2D color correct image resources for the display as they are drawn to the canvas. In other words, the canvas is in an output referred color space. This is a known fingerprinting vulnerability, since it exposes the user&#039;s display&#039;s color profile to scripts via getImageData. The current proposal does not solve the fingerprinting issue, because the issue will still exist in legacy-srgb. To solve the problem, implementations must color-correct CSS colors; by extension, legacy-srgb mode will then be in the true sRGB color space by virtue of the color matching rules outlined above. Once that is the case, images drawn to canvases will be color corrected to sRGB, which solves the problem. There is resistance to adopting this model because going through an sRGB intermediate is lossy compared to directly color correcting images for the display in a single pass (it may cause banding and gamut clipping). This feature proposal mitigates the lossiness argument thanks to the linear-rec-2020 option.&lt;br /&gt;
&lt;br /&gt;
==== Implementation notes ==== &lt;br /&gt;
* Because float16 arithmetic is supported by many GPUs, but not natively by CPUs, implementations should probably opt not to support linear-rec-2020 on hardware that does not provide any native float16 support.&lt;br /&gt;
* When available, the srgb color space should use GPU API extensions for sRGB support. This reduces the conversion overhead of performing filtering and compositing in linear space.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
Lack of color management and color interoperability is a longstanding complaint about the canvas API.&lt;br /&gt;
Authors of games and imaging apps are expected to be enthusiastic adopters.&lt;br /&gt;
&lt;br /&gt;
==== History ====&lt;br /&gt;
This proposal was originally incubated in the Khronos 3D Web group, with the participation of engineers from Google, Microsoft, Apple, Nvidia, and others.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10065</id>
		<title>CanvasColorSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10065"/>
		<updated>2016-05-07T14:12:57Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Color managing canvas contents&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
* Contents displayed through a canvas element should be color managed in order to minimize differences in appearance across browsers and display devices. Improving color fidelity matters a lot for artistic uses (e.g. photo and paint apps) and for e-commerce (product presentation).&lt;br /&gt;
* Canvases should be able to take advantage of the full color gamut of the display device.&lt;br /&gt;
* Creative apps that do image manipulation generally prefer compositing, filtering and interpolation calculations to be performed in a linear color space.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
* The color space of canvases is undefined in the current specification.&lt;br /&gt;
* The bit-depth of canvases is currently fixed to 8 bits per component, which is below the capabilities of some monitors. Monitors with higher contrast ratios require more bits per component to avoid banding.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
The lack of color space interoperability is hard to work around. On browser implementations that color correct images drawn to canvases by applying the display profile, apps that want to use canvases for color corrected image processing are stuck with convoluted workarounds, such as:&lt;br /&gt;
* reverse-engineer the display profile by drawing test pattern images to the canvas and inspecting the color corrected result via getImageData&lt;br /&gt;
* bypass CanvasRenderingContext2D.drawImage() and use image decoders implemented in JavaScript to extract raw image data that was not tainted by the browser&#039;s color correction behavior.&lt;br /&gt;
&lt;br /&gt;
An aspect of current implementations that is interoperable is that colors match between CSS/HTML and canvases:&lt;br /&gt;
* A color value used as a canvas drawing style will have the same appearance as if the same color value were used as a CSS style&lt;br /&gt;
* An image resource drawn to a canvas element will have the same appearance as if it were displayed as the replaced content of an HTML element or used as a CSS style value.&lt;br /&gt;
&lt;br /&gt;
This color matching behavior needs to be preserved to avoid breaking pre-existing content.&lt;br /&gt;
&lt;br /&gt;
Some implementations convert images drawn to canvases to the sRGB color space. This has the advantage of making the color correction behavior device independent, but it clamps the gamuts of the rendered content to the sRGB gamut, which is significantly narrower than the gamuts of some current consumer devices.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://github.com/whatwg/html/issues/299]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Allow 2dcontexts to use deeper color buffers&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://bugs.chromium.org/p/chromium/issues/detail?id=425935]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Wrong color profile with 2D canvas&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* Engineers from the Google Photos, Maps and Sheets teams have expressed a desire for canvases to become color managed, particularly for the use case of resizing an image with a canvas prior to uploading it to the server, to save bandwidth. The problem is that images retrieved from a canvas are in an undefined color space, and no color space information is encoded by toDataURL or toBlob.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== Proposed solution: CanvasColorSpace ===&lt;br /&gt;
:Add a canvas color space creation parameter that allows user code to choose between backwards compatible behavior and color managed behaviors. The same color space option would exist in the ImageData and ImageBitmap interfaces.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
===== The color-space canvas creation parameter =====&lt;br /&gt;
&lt;br /&gt;
IDL:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
enum CanvasColorSpace {&lt;br /&gt;
  &amp;quot;legacy-srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;linear-rec-2020&amp;quot;,&lt;br /&gt;
  &amp;quot;optimal&amp;quot;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
dictionary CanvasRenderingContext2DSettings {&lt;br /&gt;
  boolean alpha = true;&lt;br /&gt;
  CanvasColorSpace color-space = &amp;quot;legacy-srgb&amp;quot;;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
canvas.getContext(&#039;2d&#039;, { &#039;color-space&#039;: &amp;quot;srgb&amp;quot; })&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== The legacy-srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* Assures backwards compatible behavior&lt;br /&gt;
* Guarantees color matching with CSS and HTML content&lt;br /&gt;
* Color management behavior is implementation specific; it may not use the strict sRGB space, but is expected to be near sRGB (for example, a display referred color space).&lt;br /&gt;
* toDataURL/toBlob produce resources with no color profile (backwards compat)&lt;br /&gt;
* Image resources with no color profile are never color corrected (backwards compat). This rule and the previous one allow for lossless toDataURL/drawImage round trips, which is a significant use case.&lt;br /&gt;
&lt;br /&gt;
===== The srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* May break color matching with CSS on implementations that do not color-manage CSS.&lt;br /&gt;
* 8 bit unsigned integers per color component.&lt;br /&gt;
* All content drawn into the canvas must be color corrected to sRGB.&lt;br /&gt;
* Displayed canvases must be color corrected for the display if a display color profile is available. This color correction happens downstream at the compositing stage, and has no script-visible side-effects.&lt;br /&gt;
* Compositing, filtering and interpolation operations must perform all arithmetic in &#039;&#039;&#039;linear&#039;&#039;&#039; sRGB space.&lt;br /&gt;
* toDataURL/toBlob produce resources tagged as being in the sRGB colorspace&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to already be in the sRGB color space.&lt;br /&gt;
&lt;br /&gt;
===== The linear-rec-2020 color space =====&lt;br /&gt;
* Color space provided for wide gamut and high dynamic range rendering.&lt;br /&gt;
* User agents may decide not to support the mode, based on host machine capabilities&lt;br /&gt;
* Uses 16-bit floating point representation.&lt;br /&gt;
* The color space corresponds to ITU-R Recommendation BT.2020, &#039;&#039;&#039;without gamma compression&#039;&#039;&#039;.&lt;br /&gt;
* toDataURL/toBlob convert image data to sRGB and produce image resources tagged as being in the sRGB color space.&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to be in the sRGB color space, and are converted to linear-rec-2020 for the purpose of the draw.&lt;br /&gt;
&lt;br /&gt;
===== The optimal color space =====&lt;br /&gt;
The &amp;quot;optimal&amp;quot; option lets the user agent decide which space is optimal for the current display device based on the device&#039;s capabilities and color profile characteristics. &lt;br /&gt;
* This option must never select &amp;quot;legacy-srgb&amp;quot;.&lt;br /&gt;
* Graphics devices with color gamuts and/or contrast ratios that extend significantly beyond the sRGB color space should cause the UA to favor linear-rec-2020, to avoid undue gamut clipping and/or banding.&lt;br /&gt;
* Graphics devices that would not produce noticeably higher quality visual results in the linear-rec-2020 space should cause the UA to favor &amp;quot;srgb&amp;quot;, to save on memory consumption.&lt;br /&gt;
&lt;br /&gt;
===== Feature detection =====&lt;br /&gt;
&lt;br /&gt;
Rendering context objects are to expose a new &amp;quot;settings&amp;quot; attribute, which represents the settings that were successfully applied at context creation time.&lt;br /&gt;
&lt;br /&gt;
Note: An alternative approach that was considered was to augment the probablySupportsContext() API by making it check the second argument. That approach is difficult to reconcile with how dictionary arguments are meant to work, where unsupported entries are just ignored.&lt;br /&gt;
&lt;br /&gt;
===== ImageBitmap =====&lt;br /&gt;
&lt;br /&gt;
ImageBitmap objects are augmented to have an internal color space attribute of type CanvasColorSpace. The colorSpaceConversion creation attribute is to be augmented with new enum values for coercing conversions to a specific CanvasColorSpace at creation time.&lt;br /&gt;
&lt;br /&gt;
===== ImageData =====&lt;br /&gt;
&lt;br /&gt;
IDL&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
typedef (Uint8ClampedArray or Float32Array) ImageDataArray;&lt;br /&gt;
&lt;br /&gt;
[Constructor(unsigned long sw, unsigned long sh, optional CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;),&lt;br /&gt;
 Constructor(ImageDataArray data, unsigned long sw, optional unsigned long sh, optional CanvasColorSpace colorSpace),&lt;br /&gt;
 Exposed=(Window,Worker)]&lt;br /&gt;
interface ImageData {&lt;br /&gt;
  readonly attribute unsigned long width;&lt;br /&gt;
  readonly attribute unsigned long height;&lt;br /&gt;
  readonly attribute ImageDataArray data;&lt;br /&gt;
  readonly attribute CanvasColorSpace colorSpace;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Uint8ClampedArray if colorSpace is &amp;quot;srgb&amp;quot; or &amp;quot;legacy-srgb&amp;quot;&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Float32Array if colorSpace is &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
* getImageData() produces an ImageData object in the same color space as the source canvas&lt;br /&gt;
* putImageData() performs a color space conversion to the color space of the destination canvas.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
* No support for arbitrary color spaces and bit depths. The current proposal attempts to solve the problem with a minimal API surface and keeps the implementation scope reasonable; the extensible design allows the capabilities to be extended in the future if necessary. The rec-2020 space was chosen for its very wide gamut and its non-virtual primary colors, which strikes a balance that is deemed practical.&lt;br /&gt;
* toDataURL is lossy when used on a canvas that is in the linear-rec-2020 space. Possible future improvements could solve or mitigate this issue by adding more file formats or adding options to specify the resource color space.&lt;br /&gt;
* ImageData uses float32, which is inefficient due to memory consumption and necessary conversion operations. Float32 was chosen because it is convenient for manipulation (e.g. image processing) thanks to its native support in JavaScript (and current CPUs). A possible extension would be to add an option for rec-2020 content to be encoded as float16s packed into Uint16 values.&lt;br /&gt;
&lt;br /&gt;
==== Security and privacy issues ====&lt;br /&gt;
Some current implementations of CanvasRenderingContext2D color correct image resources for the display as they are drawn to the canvas. In other words, the canvas is in an output referred color space. This is a known fingerprinting vulnerability, since it exposes the user&#039;s display&#039;s color profile to scripts via getImageData. The current proposal does not solve the fingerprinting issue, because the issue will still exist in legacy-srgb. To solve the problem, implementations must color-correct CSS colors; by extension, legacy-srgb mode will then be in the true sRGB color space by virtue of the color matching rules outlined above. Once that is the case, images drawn to canvases will be color corrected to sRGB, which solves the problem. There is resistance to adopting this model because going through an sRGB intermediate is lossy compared to directly color correcting images for the display in a single pass (it may cause banding and gamut clipping). This feature proposal mitigates the lossiness argument thanks to the linear-rec-2020 option.&lt;br /&gt;
&lt;br /&gt;
==== Implementation notes ==== &lt;br /&gt;
* Because float16 arithmetic is supported by many GPUs, but not natively by CPUs, implementations should probably opt not to support linear-rec-2020 on hardware that does not provide any native float16 support.&lt;br /&gt;
* When available, the srgb color space should use GPU API extensions for sRGB support. This reduces the conversion overhead of performing filtering and compositing in linear space.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
Lack of color management and color interoperability is a longstanding complaint about the canvas API.&lt;br /&gt;
Authors of games and imaging apps are expected to be enthusiastic adopters.&lt;br /&gt;
&lt;br /&gt;
==== History ====&lt;br /&gt;
This proposal was originally incubated in the Khronos 3D Web group, with the participation of engineers from Google, Microsoft, Apple, Nvidia, and others.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10064</id>
		<title>CanvasColorSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10064"/>
		<updated>2016-05-07T13:27:36Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Color managing canvas contents&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
* Contents displayed through a canvas element should be color managed in order to minimize differences in appearance across browsers and display devices. Improving color fidelity matters a lot for artistic uses (e.g. photo and paint apps) and for e-commerce (product presentation).&lt;br /&gt;
* Canvases should be able to take advantage of the full color gamut of the display device.&lt;br /&gt;
* Creative apps that do image manipulation generally prefer compositing, filtering and interpolation calculations to be performed in a linear color space.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
* The color space of canvases is undefined in the current specification.&lt;br /&gt;
* The bit-depth of canvases is currently fixed to 8 bits per component, which is below the capabilities of some monitors. Monitors with higher contrast ratios require more bits per component to avoid banding.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
The lack of color space interoperability is hard to work around. On browser implementations that color correct images drawn to canvases by applying the display profile, apps that want to use canvases for color corrected image processing are stuck with convoluted workarounds, such as:&lt;br /&gt;
* reverse-engineer the display profile by drawing test pattern images to the canvas and inspecting the color corrected result via getImageData&lt;br /&gt;
* bypass CanvasRenderingContext2D.drawImage() and use image decoders implemented in JavaScript to extract raw image data that was not tainted by the browser&#039;s color correction behavior.&lt;br /&gt;
&lt;br /&gt;
An aspect of current implementations that is interoperable is that colors match between CSS/HTML and canvases:&lt;br /&gt;
* A color value used as a canvas drawing style will have the same appearance as if the same color value were used as a CSS style&lt;br /&gt;
* An image resource drawn to a canvas element will have the same appearance as if it were displayed as the replaced content of an HTML element or used as a CSS style value.&lt;br /&gt;
&lt;br /&gt;
This color matching behavior needs to be preserved to avoid breaking pre-existing content.&lt;br /&gt;
&lt;br /&gt;
Some implementations convert images drawn to canvases to the sRGB color space. This has the advantage of making the color correction behavior device independent, but it clamps the gamuts of the rendered content to the sRGB gamut, which is significantly narrower than the gamuts of some current consumer devices.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://github.com/whatwg/html/issues/299]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Allow 2dcontexts to use deeper color buffers&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://bugs.chromium.org/p/chromium/issues/detail?id=425935]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Wrong color profile with 2D canvas&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* Engineers from the Google Photos, Maps and Sheets teams have expressed a desire for canvases to become color managed, particularly for the use case of resizing an image with a canvas prior to uploading it to the server, to save bandwidth. The problem is that images retrieved from a canvas are in an undefined color space, and no color space information is encoded by toDataURL or toBlob.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== Proposed solution: CanvasColorSpace ===&lt;br /&gt;
:Add a canvas color space creation parameter that allows user code to choose between backwards compatible behavior and color managed behaviors. The same color space option would exist in the ImageData and ImageBitmap interfaces.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
===== The color-space canvas creation parameter =====&lt;br /&gt;
&lt;br /&gt;
IDL:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
enum CanvasColorSpace {&lt;br /&gt;
  &amp;quot;legacy-srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
dictionary CanvasRenderingContext2DSettings {&lt;br /&gt;
  boolean alpha = true;&lt;br /&gt;
  CanvasColorSpace color-space = &amp;quot;legacy-srgb&amp;quot;;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
canvas.getContext(&#039;2d&#039;, { &#039;color-space&#039;: &amp;quot;srgb&amp;quot; })&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== The legacy-srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* Assures backwards compatible behavior&lt;br /&gt;
* Guarantees color matching with CSS and HTML content&lt;br /&gt;
* Color management behavior is implementation specific; it may not use the strict sRGB space, but is expected to be near sRGB (for example, a display referred color space).&lt;br /&gt;
* toDataURL/toBlob produce resources with no color profile (backwards compat)&lt;br /&gt;
* Image resources with no color profile are never color corrected (backwards compat). This rule and the previous one allow for lossless toDataURL/drawImage round trips, which is a significant use case.&lt;br /&gt;
&lt;br /&gt;
===== The srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* May break color matching with CSS on implementations that do not color-manage CSS.&lt;br /&gt;
* 8 bit unsigned integers per color component.&lt;br /&gt;
* All content drawn into the canvas must be color corrected to sRGB&lt;br /&gt;
* Displayed canvases must be color corrected for the display if a display color profile is available. This color correction happens downstream at the compositing stage, and has no script-visible side-effects.&lt;br /&gt;
* Compositing, filtering and interpolation operations must perform all arithmetic in &#039;&#039;&#039;linear&#039;&#039;&#039; sRGB space.&lt;br /&gt;
* toDataURL/toBlob produce resources tagged as being in the sRGB colorspace&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to already be in the sRGB color space.&lt;br /&gt;
&lt;br /&gt;
===== The linear-rec-2020 color space =====&lt;br /&gt;
&lt;br /&gt;
* Provides a color space for wide gamut and high dynamic range rendering&lt;br /&gt;
* User agents may decide not to support the mode, based on host machine capabilities&lt;br /&gt;
* Uses 16-bit floating point representation.&lt;br /&gt;
* The color space corresponds to ITU-R Recommendation BT.2020, &#039;&#039;&#039;without gamma compression&#039;&#039;&#039;.&lt;br /&gt;
* toDataURL/toBlob convert image data to sRGB and produce image resources tagged as being in the sRGB color space.&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to be in the sRGB color space, and are converted to linear-rec-2020 for the purpose of the draw.&lt;br /&gt;
&lt;br /&gt;
===== Feature detection =====&lt;br /&gt;
&lt;br /&gt;
Rendering context objects are to expose a new &amp;quot;settings&amp;quot; attribute, which represents the settings that were successfully applied at context creation time.&lt;br /&gt;
&lt;br /&gt;
Note: An alternative approach that was considered was to augment the probablySupportsContext() API by making it check the second argument. That approach is difficult to reconcile with how dictionary arguments are meant to work, where unsupported entries are simply ignored.&lt;br /&gt;
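&lt;br /&gt;
For illustration, feature detection under this proposal might look as follows. This is a sketch of the proposed API only; the &amp;quot;settings&amp;quot; attribute does not exist in current implementations:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
var ctx = canvas.getContext(&#039;2d&#039;, { &#039;color-space&#039;: &amp;quot;linear-rec-2020&amp;quot; });&lt;br /&gt;
// An unsupported dictionary entry is simply ignored, so inspect the&lt;br /&gt;
// applied settings to learn which color space was actually used.&lt;br /&gt;
if (ctx.settings[&#039;color-space&#039;] !== &amp;quot;linear-rec-2020&amp;quot;) {&lt;br /&gt;
  // Fall back to an sRGB rendering path.&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;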
&lt;br /&gt;
===== ImageBitmap =====&lt;br /&gt;
&lt;br /&gt;
ImageBitmap objects are augmented to have an internal color space attribute of type CanvasColorSpace. The colorSpaceConversion creation attribute is to be augmented with new enum values for coercing conversions to a specific CanvasColorSpace at creation time.&lt;br /&gt;
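&lt;br /&gt;
For example, a decoded image could be coerced to a specific color space at creation time. The enum value used below is hypothetical, since the proposal does not name the new colorSpaceConversion values:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
createImageBitmap(blob, { colorSpaceConversion: &amp;quot;srgb&amp;quot; })&lt;br /&gt;
  .then(function (bitmap) { ctx.drawImage(bitmap, 0, 0); });&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;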
&lt;br /&gt;
===== ImageData =====&lt;br /&gt;
&lt;br /&gt;
IDL&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
typedef (Uint8ClampedArray or Float32Array) ImageDataArray;&lt;br /&gt;
&lt;br /&gt;
[Constructor(unsigned long sw, unsigned long sh, optional CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;),&lt;br /&gt;
 Constructor(ImageDataArray data, unsigned long sw, optional unsigned long sh, optional CanvasColorSpace colorSpace),&lt;br /&gt;
 Exposed=(Window,Worker)]&lt;br /&gt;
interface ImageData {&lt;br /&gt;
  readonly attribute unsigned long width;&lt;br /&gt;
  readonly attribute unsigned long height;&lt;br /&gt;
  readonly attribute ImageDataArray data;&lt;br /&gt;
  readonly attribute CanvasColorSpace colorSpace;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Uint8ClampedArray if colorSpace is &amp;quot;srgb&amp;quot; or &amp;quot;legacy-srgb&amp;quot;&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Float32Array if colorSpace is &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
* getImageData() produces an ImageData object in the same color space as the source canvas&lt;br /&gt;
* putImageData() performs a color space conversion to the color space of the destination canvas.&lt;br /&gt;
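&lt;br /&gt;
Under these rules, pixel manipulation code would branch on the colorSpace attribute. A sketch of the proposed behavior (the Float32Array-backed ImageData does not exist in current implementations):&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
var pixels = ctx.getImageData(0, 0, w, h); // same color space as the canvas&lt;br /&gt;
if (pixels.colorSpace === &amp;quot;linear-rec-2020&amp;quot;) {&lt;br /&gt;
  // pixels.data is a Float32Array; components are linear, not gamma-compressed.&lt;br /&gt;
} else {&lt;br /&gt;
  // pixels.data is a Uint8ClampedArray with 8-bit components.&lt;br /&gt;
}&lt;br /&gt;
ctx.putImageData(pixels, 0, 0); // converted to the destination canvas&#039;s space&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;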
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
* No support for arbitrary color spaces and bit depths. The current proposal attempts to solve the problem with a minimal API surface and keeps the implementation scope reasonable; the extensible design allows more capabilities to be added in the future if necessary. The rec-2020 space was chosen for its very wide gamut and its non-virtual primary colors, which strikes a balance that is deemed practical.&lt;br /&gt;
* toDataURL is lossy when used on a canvas that is in the linear-rec-2020 space. Possible future improvements could solve or mitigate this issue by adding more file formats or adding options to specify the resource color space.&lt;br /&gt;
* ImageData uses float32, which is inefficient due to memory consumption and the necessary conversion operations. Float32 was chosen because it is convenient for manipulation (e.g. image processing) thanks to its native support in JavaScript (and current CPUs). A possible extension would be to add an option for rec-2020 content to be encoded as float16s packed into Uint16 values.&lt;br /&gt;
&lt;br /&gt;
==== Security and privacy issues ====&lt;br /&gt;
Some current implementations of CanvasRenderingContext2D color correct image resources for the display as they are drawn to the canvas; in other words, the canvas is in an output referred color space. This is a known fingerprinting vulnerability, since it exposes the color profile of the user&#039;s display to scripts via getImageData. The current proposal does not solve the fingerprinting issue, because it will still exist in legacy-srgb. To solve the problem, implementations must color-correct CSS colors; then, by extension, the legacy-srgb mode will be the true sRGB color space by virtue of the color matching rules outlined above. Once that is the case, images drawn to canvases will be color corrected to sRGB, which solves the problem. There is resistance to adopting this model because going through an sRGB intermediate is lossy compared to directly color correcting images for the display in a single pass (it may cause banding and gamut clipping). This proposal mitigates the lossiness argument thanks to the linear-rec-2020 option.&lt;br /&gt;
&lt;br /&gt;
==== Implementation notes ==== &lt;br /&gt;
* Because float16 arithmetic is supported by many GPUs but not by CPUs, implementations should probably opt not to support linear-rec-2020 on hardware that does not provide native float16 support.&lt;br /&gt;
* When available, the srgb color space should use GPU API extensions for sRGB support. This reduces the conversion overhead of performing filtering and compositing in linear space.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
Lack of color management and color interoperability is a longstanding complaint about the canvas API.&lt;br /&gt;
Authors of games and imaging apps are expected to be enthusiastic adopters.&lt;br /&gt;
&lt;br /&gt;
==== History ====&lt;br /&gt;
This proposal was originally incubated in the Khronos 3D Web group, with the participation of engineers from Google, Microsoft, Apple, Nvidia, and others.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10063</id>
		<title>CanvasColorSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10063"/>
		<updated>2016-05-07T02:34:16Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Color managing canvas contents&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
* Contents displayed through a canvas element should be color managed in order to minimize differences in appearance across browsers and display devices. Improving color fidelity matters a lot for artistic uses (e.g. photo and paint apps) and for e-commerce (product presentation).&lt;br /&gt;
* Canvases should be able to take advantage of the full color gamut of the display device.&lt;br /&gt;
* Creative apps that do image manipulation generally prefer compositing, filtering and interpolation calculations to be performed in a linear color space.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
* The color space of canvases is undefined in the current specification.&lt;br /&gt;
* The bit-depth of canvases is currently fixed to 8 bits per component, which is below the capabilities of some monitors. Monitors with higher contrast ratios require more bits per component to avoid banding.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
The lack of color space interoperability is hard to work around. With some browser implementations that color correct images drawn to canvases by applying the display profile, apps that want to use canvases for color corrected image processing are stuck doing convoluted workarounds, such as:&lt;br /&gt;
* reverse-engineer the display profile by drawing test pattern images to the canvas and inspecting the color corrected result via getImageData&lt;br /&gt;
* bypass CanvasRenderingContext2D.drawImage() and use image decoders implemented in JavaScript to extract raw image data that was not tainted by the browser&#039;s color correction behavior.&lt;br /&gt;
&lt;br /&gt;
An aspect of current implementations that is interoperable is that colors match between CSS/HTML and canvases:&lt;br /&gt;
* A color value used as a canvas drawing style will have the same appearance as if the same color value were used as a CSS style&lt;br /&gt;
* An image resource drawn to a canvas element will have the same appearance as if it were displayed as the replaced content of an HTML element or used as a CSS style value.&lt;br /&gt;
&lt;br /&gt;
This color matching behavior needs to be preserved to avoid breaking pre-existing content.&lt;br /&gt;
&lt;br /&gt;
Some implementations convert images drawn to canvases to the sRGB color space. This has the advantage of making the color correction behavior device independent, but it clamps the gamuts of the rendered content to the sRGB gamut, which is significantly narrower than the gamuts of some current consumer devices.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://github.com/whatwg/html/issues/299]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Allow 2dcontexts to use deeper color buffers&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://bugs.chromium.org/p/chromium/issues/detail?id=425935]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Wrong color profile with 2D canvas&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* Engineers from the Google Photos, Maps and Sheets teams have expressed a desire for canvases to become color managed, particularly for the use case of resizing an image using a canvas before uploading it to the server, to save bandwidth. The problem is that images retrieved from a canvas are in an undefined color space, and no color space information is encoded by toDataURL or toBlob.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== Proposed solution: CanvasColorSpace ===&lt;br /&gt;
:Add a canvas color space creation parameter that allows user code to choose between backwards compatible and color managed behaviors. The same color space option would exist in the ImageData and ImageBitmap interfaces.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
===== The color-space canvas creation parameter =====&lt;br /&gt;
&lt;br /&gt;
IDL:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
enum CanvasColorSpace {&lt;br /&gt;
  &amp;quot;legacy-srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
dictionary CanvasRenderingContext2DSettings {&lt;br /&gt;
  boolean alpha = true;&lt;br /&gt;
  CanvasColorSpace color-space = &amp;quot;legacy-srgb&amp;quot;;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
canvas.getContext(&#039;2d&#039;, { &#039;color-space&#039;: &amp;quot;srgb&amp;quot; })&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== The legacy-srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* Assures backwards compatible behavior&lt;br /&gt;
* Guarantees color matching with CSS and HTML content&lt;br /&gt;
* Color management behavior is implementation specific; it may not use the strict sRGB space, but is expected to be near sRGB (for example, a display referred color space).&lt;br /&gt;
* toDataURL/toBlob produce resources with no color profile (backwards compat)&lt;br /&gt;
* Image resources with no color profile are never color corrected (backwards compat). This rule and the previous one allow for lossless toDataURL/drawImage round trips, which is a significant use case.&lt;br /&gt;
&lt;br /&gt;
===== The srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* May break color matching with CSS on implementations that do not color-manage CSS.&lt;br /&gt;
* 8 bit unsigned integers per color component.&lt;br /&gt;
* All content drawn into the canvas must be color corrected to sRGB&lt;br /&gt;
* Displayed canvases must be color corrected for the display if a display color profile is available. This color correction happens downstream at the compositing stage, and has no script-visible side-effects.&lt;br /&gt;
* Compositing, filtering and interpolation operations must perform all arithmetic in &#039;&#039;&#039;linear&#039;&#039;&#039; sRGB space.&lt;br /&gt;
* toDataURL/toBlob produce resources tagged as being in the sRGB colorspace&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to already be in the sRGB color space.&lt;br /&gt;
&lt;br /&gt;
===== The linear-rec-2020 color space =====&lt;br /&gt;
&lt;br /&gt;
* Provides a color space for wide gamut and high dynamic range rendering&lt;br /&gt;
* User agents may decide not to support the mode, based on host machine capabilities&lt;br /&gt;
* Uses 16-bit floating point representation.&lt;br /&gt;
* The color space corresponds to ITU-R Recommendation BT.2020, &#039;&#039;&#039;without gamma compression&#039;&#039;&#039;.&lt;br /&gt;
* toDataURL/toBlob convert image data to sRGB and produce image resources tagged as being in the sRGB color space.&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to be in the sRGB color space, and are converted to linear-rec-2020 for the purpose of the draw.&lt;br /&gt;
&lt;br /&gt;
===== Feature detection =====&lt;br /&gt;
&lt;br /&gt;
Rendering context objects are to expose a new &amp;quot;settings&amp;quot; attribute, which represents the settings that were successfully applied at context creation time.&lt;br /&gt;
&lt;br /&gt;
Note: An alternative approach that was considered was to augment the probablySupportsContext() API by making it check the second argument. That approach is difficult to reconcile with how dictionary arguments are meant to work, where unsupported entries are simply ignored.&lt;br /&gt;
&lt;br /&gt;
===== ImageBitmap =====&lt;br /&gt;
&lt;br /&gt;
ImageBitmap objects are augmented to have an internal color space attribute of type CanvasColorSpace. The colorSpaceConversion creation attribute is to be augmented with new enum values for coercing conversions to a specific CanvasColorSpace at creation time.&lt;br /&gt;
&lt;br /&gt;
===== ImageData =====&lt;br /&gt;
&lt;br /&gt;
IDL&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
typedef (Uint8ClampedArray or Float32Array) ImageDataArray;&lt;br /&gt;
&lt;br /&gt;
[Constructor(unsigned long sw, unsigned long sh, optional CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;),&lt;br /&gt;
 Constructor(ImageDataArray data, unsigned long sw, optional unsigned long sh, optional CanvasColorSpace colorSpace),&lt;br /&gt;
 Exposed=(Window,Worker)]&lt;br /&gt;
interface ImageData {&lt;br /&gt;
  readonly attribute unsigned long width;&lt;br /&gt;
  readonly attribute unsigned long height;&lt;br /&gt;
  readonly attribute ImageDataArray data;&lt;br /&gt;
  readonly attribute CanvasColorSpace colorSpace;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Uint8ClampedArray if colorSpace is &amp;quot;srgb&amp;quot; or &amp;quot;legacy-srgb&amp;quot;&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Float32Array if colorSpace is &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
* getImageData() produces an ImageData object in the same color space as the source canvas&lt;br /&gt;
* putImageData() performs a color space conversion to the color space of the destination canvas.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
* No support for arbitrary color spaces and bit depths. The current proposal attempts to solve the problem with a minimal API surface and keeps the implementation scope reasonable; the extensible design allows more capabilities to be added in the future if necessary. The rec-2020 space was chosen for its very wide gamut and its non-virtual primary colors, which strikes a balance that is deemed practical.&lt;br /&gt;
* toDataURL is lossy in the linear-rec-2020 space. Possible future improvements could solve or mitigate this issue by adding more file formats or adding options to specify the resource color space.&lt;br /&gt;
* ImageData uses float32, which is inefficient due to memory consumption and the necessary conversion operations. Float32 was chosen because it is convenient for manipulation (e.g. image processing) thanks to its native support in JavaScript (and current CPUs). A possible extension would be to add an option for rec-2020 content to be encoded as float16s packed into Uint16 values.&lt;br /&gt;
&lt;br /&gt;
==== Security and privacy issues ====&lt;br /&gt;
Some current implementations of CanvasRenderingContext2D color correct image resources for the display as they are drawn to the canvas; in other words, the canvas is in an output referred color space. This is a known fingerprinting vulnerability, since it exposes the color profile of the user&#039;s display to scripts via getImageData. The current proposal does not solve the fingerprinting issue, because it will still exist in legacy-srgb. To solve the problem, implementations must color-correct CSS colors; then, by extension, the legacy-srgb mode will be the true sRGB color space by virtue of the color matching rules outlined above. Once that is the case, images drawn to canvases will be color corrected to sRGB, which solves the problem. There is resistance to adopting this model because going through an sRGB intermediate is lossy compared to directly color correcting images for the display in a single pass (it may cause banding and gamut clipping). This proposal mitigates the lossiness argument thanks to the linear-rec-2020 option.&lt;br /&gt;
&lt;br /&gt;
==== Implementation notes ==== &lt;br /&gt;
* Because float16 arithmetic is supported by many GPUs but not by CPUs, implementations should probably opt not to support linear-rec-2020 on hardware that does not provide native float16 support.&lt;br /&gt;
* When available, the srgb color space should use GPU API extensions for sRGB support. This reduces the conversion overhead of performing filtering and compositing in linear space.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
Lack of color management and color interoperability is a longstanding complaint about the canvas API.&lt;br /&gt;
Authors of games and imaging apps are expected to be enthusiastic adopters.&lt;br /&gt;
&lt;br /&gt;
==== History ====&lt;br /&gt;
This proposal was originally incubated in the Khronos 3D Web group, with the participation of engineers from Google, Microsoft, Apple, Nvidia, and others.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10062</id>
		<title>CanvasColorSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10062"/>
		<updated>2016-05-06T18:14:53Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Color managing canvas contents&lt;br /&gt;
&lt;br /&gt;
Note: This proposal was originally incubated in the Khronos 3D Web group, with the participation of engineers from Google, Microsoft, Apple, Nvidia, and others&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
* Contents displayed through a canvas element should be color managed in order to minimize differences in appearance across browsers and display devices. Improving color fidelity matters a lot for artistic uses (e.g. photo and paint apps) and for e-commerce (product presentation).&lt;br /&gt;
* Canvases should be able to take advantage of the full color gamut of the display device.&lt;br /&gt;
* Creative apps that do image manipulation generally prefer compositing, filtering and interpolation calculations to be performed in a linear color space.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
* The color space of canvases is undefined in the current specification.&lt;br /&gt;
* The bit-depth of canvases is currently fixed to 8 bits per component, which is below the capabilities of some monitors. Monitors with higher contrast ratios require more bits per component to avoid banding.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
The lack of color space interoperability is hard to work around. With some browser implementations that color correct images drawn to canvases by applying the display profile, apps that want to use canvases for color corrected image processing are stuck doing convoluted workarounds, such as:&lt;br /&gt;
* reverse-engineer the display profile by drawing test pattern images to the canvas and inspecting the color corrected result via getImageData&lt;br /&gt;
* bypass CanvasRenderingContext2D.drawImage() and use image decoders implemented in JavaScript to extract raw image data that was not tainted by the browser&#039;s color correction behavior.&lt;br /&gt;
&lt;br /&gt;
An aspect of current implementations that is interoperable is that colors match between CSS/HTML and canvases:&lt;br /&gt;
* A color value used as a canvas drawing style will have the same appearance as if the same color value were used as a CSS style&lt;br /&gt;
* An image resource drawn to a canvas element will have the same appearance as if it were displayed as the replaced content of an HTML element or used as a CSS style value.&lt;br /&gt;
&lt;br /&gt;
This color matching behavior needs to be preserved to avoid breaking pre-existing content.&lt;br /&gt;
&lt;br /&gt;
Some implementations convert images drawn to canvases to the sRGB color space. This has the advantage of making the color correction behavior device independent, but it clamps the gamuts of the rendered content to the sRGB gamut, which is significantly narrower than the gamuts of some current consumer devices.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://github.com/whatwg/html/issues/299]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Allow 2dcontexts to use deeper color buffers&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://bugs.chromium.org/p/chromium/issues/detail?id=425935]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Wrong color profile with 2D canvas&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* Engineers from the Google Photos, Maps and Sheets teams have expressed a desire for canvases to become color managed, particularly for the use case of resizing an image using a canvas before uploading it to the server, to save bandwidth. The problem is that images retrieved from a canvas are in an undefined color space, and no color space information is encoded by toDataURL or toBlob.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== Proposed solution: CanvasColorSpace ===&lt;br /&gt;
:Add a canvas color space creation parameter that allows user code to choose between backwards compatible and color managed behaviors. The same color space option would exist in the ImageData and ImageBitmap interfaces.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
===== The color-space canvas creation parameter =====&lt;br /&gt;
&lt;br /&gt;
IDL:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
enum CanvasColorSpace {&lt;br /&gt;
  &amp;quot;legacy-srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
dictionary CanvasRenderingContext2DSettings {&lt;br /&gt;
  boolean alpha = true;&lt;br /&gt;
  CanvasColorSpace color-space = &amp;quot;legacy-srgb&amp;quot;;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
canvas.getContext(&#039;2d&#039;, { &#039;color-space&#039;: &amp;quot;srgb&amp;quot; })&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== The legacy-srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* Assures backwards compatible behavior&lt;br /&gt;
* Guarantees color matching with CSS and HTML content&lt;br /&gt;
* Color management behavior is implementation specific; it may not use the strict sRGB space, but is expected to be near sRGB (for example, a display referred color space).&lt;br /&gt;
* toDataURL/toBlob produce resources with no color profile (backwards compat)&lt;br /&gt;
* Image resources with no color profile are never color corrected (backwards compat). This rule and the previous one allow for lossless toDataURL/drawImage round trips, which is a significant use case.&lt;br /&gt;
&lt;br /&gt;
===== The srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* May break color matching with CSS on implementations that do not color-manage CSS.&lt;br /&gt;
* 8 bit unsigned integers per color component.&lt;br /&gt;
* All content drawn into the canvas must be color corrected to sRGB&lt;br /&gt;
* Displayed canvases must be color corrected for the display if a display color profile is available. This color correction happens downstream at the compositing stage, and has no script-visible side-effects.&lt;br /&gt;
* Compositing, filtering and interpolation operations must perform all arithmetic in &#039;&#039;&#039;linear&#039;&#039;&#039; sRGB space.&lt;br /&gt;
* toDataURL/toBlob produce resources tagged as being in the sRGB colorspace&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to already be in the sRGB color space.&lt;br /&gt;
&lt;br /&gt;
===== The linear-rec-2020 color space =====&lt;br /&gt;
&lt;br /&gt;
* Provides a color space for wide gamut and high dynamic range rendering&lt;br /&gt;
* User agents may decide not to support the mode, based on host machine capabilities&lt;br /&gt;
* Uses 16-bit floating point representation.&lt;br /&gt;
* The color space corresponds to ITU-R Recommendation BT.2020, &#039;&#039;&#039;without gamma compression&#039;&#039;&#039;.&lt;br /&gt;
* toDataURL/toBlob convert image data to sRGB and produce image resources tagged as being in the sRGB color space.&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to be in the sRGB color space, and are converted to linear-rec-2020 for the purpose of the draw.&lt;br /&gt;
&lt;br /&gt;
===== Feature detection =====&lt;br /&gt;
&lt;br /&gt;
Rendering context objects are to expose a new &amp;quot;settings&amp;quot; attribute, which represents the settings that were successfully applied at context creation time.&lt;br /&gt;
&lt;br /&gt;
Note: An alternative approach that was considered was to augment the probablySupportsContext() API by making it check the second argument. That approach is difficult to reconcile with how dictionary arguments are meant to work, where unsupported entries are simply ignored.&lt;br /&gt;
&lt;br /&gt;
===== ImageBitmap =====&lt;br /&gt;
&lt;br /&gt;
ImageBitmap objects are augmented to have an internal color space attribute of type CanvasColorSpace. The colorSpaceConversion creation attribute is to be augmented with new enum values for coercing conversions to a specific CanvasColorSpace at creation time.&lt;br /&gt;
&lt;br /&gt;
===== ImageData =====&lt;br /&gt;
&lt;br /&gt;
IDL&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
typedef (Uint8ClampedArray or Float32Array) ImageDataArray;&lt;br /&gt;
&lt;br /&gt;
[Constructor(unsigned long sw, unsigned long sh, optional CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;),&lt;br /&gt;
 Constructor(ImageDataArray data, unsigned long sw, optional unsigned long sh, optional CanvasColorSpace colorSpace),&lt;br /&gt;
 Exposed=(Window,Worker)]&lt;br /&gt;
interface ImageData {&lt;br /&gt;
  readonly attribute unsigned long width;&lt;br /&gt;
  readonly attribute unsigned long height;&lt;br /&gt;
  readonly attribute ImageDataArray data;&lt;br /&gt;
  readonly attribute CanvasColorSpace colorSpace;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Uint8ClampedArray if colorSpace is &amp;quot;srgb&amp;quot; or &amp;quot;legacy-srgb&amp;quot;&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Float32Array if colorSpace is &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
* getImageData() produces an ImageData object in the same color space as the source canvas&lt;br /&gt;
* putImageData() performs a color space conversion to the color space of the destination canvas.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
* No support for arbitrary color spaces and bit depths. The current proposal attempts to solve the problem with a minimal API surface and keeps the implementation scope reasonable; the extensible design allows more capabilities to be added in the future if necessary. The rec-2020 space was chosen for its very wide gamut and its non-virtual primary colors, which strikes a balance that is deemed practical.&lt;br /&gt;
* toDataURL is lossy in the linear-rec-2020 space. Possible future improvements could solve or mitigate this issue by adding more file formats or adding options to specify the resource color space.&lt;br /&gt;
* ImageData uses float32, which is inefficient due to memory consumption and necessary conversion operations. Float32 was chosen because it is convenient for manipulation (e.g. image processing) due to its native support in JavaScript (and current CPUs). A possible extension would be to add an option for rec-2020 content to be encoded as float16s packed into Uint16 values.&lt;br /&gt;
&lt;br /&gt;
==== Implementation notes ==== &lt;br /&gt;
* Because float16 arithmetic is supported by many GPUs but not by CPUs, implementations should probably opt not to support rec-2020 on hardware that provides no native float16 support.&lt;br /&gt;
* When available, the srgb color space should use GPU API extensions for sRGB support. This reduces the conversion overhead of performing filtering and compositing in linear space.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
Lack of color management and color interoperability is a longstanding complaint about the canvas API.&lt;br /&gt;
Authors of games and imaging apps are expected to be enthusiastic adopters.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10061</id>
		<title>CanvasColorSpace</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasColorSpace&amp;diff=10061"/>
		<updated>2016-05-06T17:13:17Z</updated>

		<summary type="html">&lt;p&gt;Junov: Created page with &amp;quot;:Color managing canvas contents  Note: This proposal was originally incubated in the Khronos 3D Web group, with the participation of engineers from Google, Microsoft, Apple, N...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Color managing canvas contents&lt;br /&gt;
&lt;br /&gt;
Note: This proposal was originally incubated in the Khronos 3D Web group, with the participation of engineers from Google, Microsoft, Apple, Nvidia, and others&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
* Contents displayed through a canvas element should be color managed in order to minimize differences in appearance across browsers and display devices. Improving color fidelity matters a lot for artistic uses (e.g. photo and paint apps) and for e-commerce (product presentation).&lt;br /&gt;
* Canvases should be able to take advantage of the full color gamut of the display device.&lt;br /&gt;
* Creative apps that do image manipulation generally prefer compositing, filtering and interpolation calculations to be performed in a linear color space.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
* The color space of canvases is undefined in the current specification.&lt;br /&gt;
* The bit-depth of canvases is currently fixed to 8 bits per component, which is below the capabilities of some monitors. Monitors with higher contrast ratios require more bits per component to avoid banding.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
The lack of color space interoperability is hard to work around. On browser implementations that color correct images drawn to canvases by applying the display profile, apps that want to use canvases for color corrected image processing are stuck with convoluted workarounds, such as:&lt;br /&gt;
* reverse-engineering the display profile by drawing test pattern images to the canvas and inspecting the color corrected result via getImageData&lt;br /&gt;
* bypassing CanvasRenderingContext2D.drawImage() and using image decoders implemented in JavaScript to extract raw image data that was not tainted by the browser&#039;s color correction behavior.&lt;br /&gt;
&lt;br /&gt;
An aspect of current implementations that is interoperable is that colors match between CSS/HTML and canvases:&lt;br /&gt;
* A color value used as a canvas drawing style will have the same appearance as if the same color value were used as a CSS style&lt;br /&gt;
* An image resource drawn to a canvas element will have the same appearance as if it were displayed as the replaced content of an HTML element or used as a CSS style value.&lt;br /&gt;
&lt;br /&gt;
This color matching behavior needs to be preserved to avoid breaking pre-existing content.&lt;br /&gt;
&lt;br /&gt;
Some implementations convert images drawn to canvases to the sRGB color space. This has the advantage of making the color correction behavior device independent, but it clamps the gamuts of the rendered content to the sRGB gamut, which is significantly narrower than the gamuts of some current consumer devices.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://github.com/whatwg/html/issues/299]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Allow 2dcontexts to use deeper color buffers&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://bugs.chromium.org/p/chromium/issues/detail?id=425935]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;Wrong color profile with 2D canvas&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
* Engineers from the Google Photos, Maps and Sheets teams have expressed a desire for canvases to become color managed, particularly for the use case of resizing an image using a canvas prior to uploading it to the server, to save bandwidth. The problem is that images retrieved from a canvas are in an undefined color space, and no color space information is encoded by toDataURL or toBlob.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== Proposed solution: CanvasColorSpace ===&lt;br /&gt;
:Add a canvas color space creation parameter that allows user code to choose between backwards compatible behavior and color managed behaviors. The same color space option would exist in the ImageData and ImageBitmap interfaces.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
===== The color-space canvas creation parameter =====&lt;br /&gt;
&lt;br /&gt;
IDL:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
enum CanvasColorSpace {&lt;br /&gt;
  &amp;quot;legacy-srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;srgb&amp;quot;,&lt;br /&gt;
  &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
dictionary CanvasRenderingContext2DSettings {&lt;br /&gt;
  boolean alpha = true;&lt;br /&gt;
  CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
canvas.getContext(&#039;2d&#039;, { colorSpace: &amp;quot;srgb&amp;quot; })&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== The legacy-srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* Assures backwards compatible behavior&lt;br /&gt;
* Guarantees color matching with CSS and HTML content&lt;br /&gt;
* Color management behavior is implementation specific; it may not use the strict sRGB space, but is expected to be near sRGB (for example, a display referred color space).&lt;br /&gt;
* toDataURL/toBlob produce resources with no color profile (backwards compat)&lt;br /&gt;
* Image resources with no color profile are never color corrected (backwards compat). This rule and the previous one allow for lossless toDataURL/drawImage round trips, which is a significant use case.&lt;br /&gt;
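&lt;br /&gt;
The lossless round trip preserved by these two rules can be sketched as follows (an illustrative sketch; &amp;lt;code&amp;gt;legacyCtx&amp;lt;/code&amp;gt; is a hypothetical 2D context created with the default color space):&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
// legacyCtx was created with the default &amp;quot;legacy-srgb&amp;quot; color space&lt;br /&gt;
var url = legacyCtx.canvas.toDataURL(); // no color profile attached&lt;br /&gt;
var img = new Image();&lt;br /&gt;
img.onload = function () {&lt;br /&gt;
  // No color correction is applied to the untagged image, so the pixels&lt;br /&gt;
  // drawn back are identical to the original canvas contents.&lt;br /&gt;
  legacyCtx.drawImage(img, 0, 0);&lt;br /&gt;
};&lt;br /&gt;
img.src = url;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;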
&lt;br /&gt;
===== The srgb color space =====&lt;br /&gt;
&lt;br /&gt;
* May break color matching with CSS on implementations that do not color-manage CSS.&lt;br /&gt;
* 8 bit unsigned integers per color component.&lt;br /&gt;
* All content drawn into the canvas must be color corrected to sRGB&lt;br /&gt;
* Displayed canvases must be color corrected for the display if a display color profile is available. This color correction happens downstream at the compositing stage, and has no script-visible side-effects.&lt;br /&gt;
* Compositing, filtering and interpolation operations must perform all arithmetic in &#039;&#039;&#039;linear&#039;&#039;&#039; sRGB space.&lt;br /&gt;
* toDataURL/toBlob produce resources tagged as being in the sRGB colorspace&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to already be in the sRGB color space.&lt;br /&gt;
&lt;br /&gt;
===== The linear-rec-2020 color space =====&lt;br /&gt;
&lt;br /&gt;
* Provides a color space for wide gamut and high dynamic range rendering&lt;br /&gt;
* User agents may decide not to support the mode, based on host machine capabilities&lt;br /&gt;
* Uses 16-bit floating point representation.&lt;br /&gt;
* The color space corresponds to ITU-R Recommendation BT.2020, &#039;&#039;&#039;without gamma compression&#039;&#039;&#039;.&lt;br /&gt;
* toDataURL/toBlob convert image data to sRGB and produce image resources tagged as being in the sRGB color space.&lt;br /&gt;
* Images with no color profile, when drawn to the canvas, are assumed to be in the sRGB color space, and are converted to linear-rec-2020 for the purpose of the draw.&lt;br /&gt;
&lt;br /&gt;
===== Feature detection =====&lt;br /&gt;
&lt;br /&gt;
Rendering context objects are to expose a new &amp;quot;settings&amp;quot; attribute, which represents the settings that were successfully applied at context creation time.&lt;br /&gt;
&lt;br /&gt;
Note: An alternative approach that was considered was to augment the probablySupportsContext() API by making it check the second argument. That approach is difficult to reconcile with how dictionary arguments are meant to work, where unsupported entries are simply ignored.&lt;br /&gt;
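&lt;br /&gt;
Feature detection could then look like the following (an illustrative sketch; a camelCase &amp;lt;code&amp;gt;colorSpace&amp;lt;/code&amp;gt; member name is assumed here, since hyphens are not valid in JavaScript identifiers):&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
var ctx = canvas.getContext(&#039;2d&#039;, { colorSpace: &amp;quot;linear-rec-2020&amp;quot; });&lt;br /&gt;
if (ctx.settings.colorSpace !== &amp;quot;linear-rec-2020&amp;quot;) {&lt;br /&gt;
  // The user agent ignored the unsupported setting;&lt;br /&gt;
  // fall back to a narrow-gamut rendering path.&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;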
&lt;br /&gt;
===== ImageBitmap =====&lt;br /&gt;
&lt;br /&gt;
ImageBitmap objects are augmented to have an internal color space attribute of type CanvasColorSpace. The colorSpaceConversion creation attribute is to be augmented with new enum values for coercing conversions to a specific CanvasColorSpace at creation time.&lt;br /&gt;
&lt;br /&gt;
===== ImageData =====&lt;br /&gt;
&lt;br /&gt;
IDL:&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
typedef (Uint8ClampedArray or Float32Array) ImageDataArray;&lt;br /&gt;
&lt;br /&gt;
[Constructor(unsigned long sw, unsigned long sh, optional CanvasColorSpace colorSpace = &amp;quot;legacy-srgb&amp;quot;),&lt;br /&gt;
 Constructor(ImageDataArray data, unsigned long sw, optional unsigned long sh, optional CanvasColorSpace colorSpace),&lt;br /&gt;
 Exposed=(Window,Worker)]&lt;br /&gt;
interface ImageData {&lt;br /&gt;
  readonly attribute unsigned long width;&lt;br /&gt;
  readonly attribute unsigned long height;&lt;br /&gt;
  readonly attribute ImageDataArray data;&lt;br /&gt;
  readonly attribute CanvasColorSpace colorSpace;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Uint8ClampedArray if colorSpace is &amp;quot;srgb&amp;quot; or &amp;quot;legacy-srgb&amp;quot;&lt;br /&gt;
* &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; is a Float32Array if colorSpace is &amp;quot;linear-rec-2020&amp;quot;&lt;br /&gt;
* getImageData() produces an ImageData object in the same color space as the source canvas&lt;br /&gt;
* putImageData() performs a color space conversion to the color space of the destination canvas.&lt;br /&gt;
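&lt;br /&gt;
The intended behavior can be sketched as follows (an illustrative sketch; &amp;lt;code&amp;gt;srcCtx&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dstCtx&amp;lt;/code&amp;gt; are hypothetical contexts created with the indicated color spaces):&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
// srcCtx was created with the &amp;quot;linear-rec-2020&amp;quot; color space&lt;br /&gt;
var imageData = srcCtx.getImageData(0, 0, 16, 16);&lt;br /&gt;
// imageData.colorSpace is &amp;quot;linear-rec-2020&amp;quot; and imageData.data is a Float32Array&lt;br /&gt;
&lt;br /&gt;
// dstCtx was created with the &amp;quot;srgb&amp;quot; color space; the pixel values&lt;br /&gt;
// are converted to sRGB as part of the put operation&lt;br /&gt;
dstCtx.putImageData(imageData, 0, 0);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;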
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
* No support for arbitrary color spaces and bit depths. The current proposal attempts to solve the problem with a minimal API surface and a reasonable implementation scope; the extensible design allows these capabilities to be added in the future if necessary. The rec-2020 space was chosen for its very wide gamut and its non-virtual primary colors, a balance that is deemed practical.&lt;br /&gt;
* toDataURL is lossy in the linear-rec-2020 space. Possible future improvements could solve or mitigate this issue by adding more file formats or adding options to specify the resource color space.&lt;br /&gt;
* ImageData uses float32, which is inefficient due to memory consumption and necessary conversion operations. Float32 was chosen because it is convenient for manipulation (e.g. image processing) due to its native support in JavaScript (and current CPUs). A possible extension would be to add an option for rec-2020 content to be encoded as float16s packed into Uint16 values.&lt;br /&gt;
&lt;br /&gt;
==== Implementation notes ==== &lt;br /&gt;
* Because float16 arithmetic is supported by many GPUs but not by CPUs, implementations should probably opt not to support rec-2020 on hardware that provides no native float16 support.&lt;br /&gt;
* When available, the srgb color space should use GPU API extensions for sRGB support. This reduces the conversion overhead of performing filtering and compositing in linear space.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
Lack of color management and color interoperability is a longstanding complaint about the canvas API.&lt;br /&gt;
Authors of games and imaging apps are expected to be enthusiastic adopters.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10053</id>
		<title>OffscreenCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=10053"/>
		<updated>2016-04-08T17:18:21Z</updated>

		<summary type="html">&lt;p&gt;Junov: /* Web IDL */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;Provides more control over how canvases are rendered. This is a follow-on to the [[WorkerCanvas]] proposal and will be merged once agreement is reached.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
Feedback from web application authors using canvases has shown the need for the following controls:&lt;br /&gt;
&lt;br /&gt;
* (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.&lt;br /&gt;
* (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.&lt;br /&gt;
* (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.&lt;br /&gt;
* (From adopters of the Push API): need to be able to dynamically create images to use as notification icons, such as compositing avatars, or adding an unread count.&lt;br /&gt;
* (From the Google Docs team): need to be able to layout and render text from a worker using CanvasRenderingContext2D and display those results on the main thread.&lt;br /&gt;
* (From the Google Slides team): want to layout and render the slide thumbnails from a worker. During initial load and heavy collaboration these update frequently, and currently cause slowdowns on the main thread.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
&lt;br /&gt;
* [https://html.spec.whatwg.org/multipage/scripting.html#proxying-canvases-to-workers CanvasProxy] does not provide sufficient control to allow synchronization between workers&#039; rendering and DOM updates on the main thread. Keeping this rendering in sync is a requirement from Google&#039;s Maps team.&lt;br /&gt;
* [[CanvasInWorkers]] does not allow a worker to render directly into a canvas on the main thread without running code on the main thread. Allowing completely unsynchronized rendering is a requirement from Mozilla and users of Emscripten such as Epic Games and Unity, in which the desire is to execute all of the game&#039;s rendering on a worker thread.&lt;br /&gt;
* [[WorkerCanvas]] mostly addresses these two use cases, but some implementers objected to the mechanism for displaying the rendering results in image elements. The specific objection was that image elements already have complex internal state (for example, the management of the image&#039;s &amp;quot;loaded&amp;quot; state), and this would make it more complex. It also did not precisely address the use case of producing new frames both on the main thread and in workers.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
[https://blog.mozilla.org/research/2014/07/22/webgl-in-web-workers-today-and-faster-than-expected/ WebGL in Web Workers] details some work attempted in the Emscripten toolchain to address the lack of WebGL in workers. Due to the high volume of calls and large amount of data that is transferred to the graphics card in a typical high-end WebGL application, this approach is not sustainable. It&#039;s necessary for workers to be able to call the WebGL API directly, and present those results to the screen in a manner that does not introduce any copies of the rendering results.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Making canvas rendering contexts available to workers will increase parallelism in web applications, leading to increased performance on multi-core systems.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
See the abovementioned use cases:&lt;br /&gt;
&lt;br /&gt;
* Google&#039;s Maps team&lt;br /&gt;
* Emscripten users such as Epic Games and Unity&lt;br /&gt;
* Many others&lt;br /&gt;
&lt;br /&gt;
== Web IDL ==&lt;br /&gt;
&lt;br /&gt;
 [Constructor(unsigned long width, unsigned long height),&lt;br /&gt;
  Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvas {&lt;br /&gt;
   attribute unsigned long width;&lt;br /&gt;
   attribute unsigned long height;&lt;br /&gt;
   RenderingContext? getContext(DOMString contextId, any... arguments); &lt;br /&gt;
 &lt;br /&gt;
   // OffscreenCanvas, like HTMLCanvasElement, maintains an origin-clean flag.&lt;br /&gt;
   // ImageBitmaps created by calling this method also have an&lt;br /&gt;
   // origin-clean flag which is set to the value of the OffscreenCanvas&#039;s&lt;br /&gt;
   // flag at the time of their construction. Uses of the ImageBitmap&lt;br /&gt;
   // in other APIs, such as CanvasRenderingContext2D or&lt;br /&gt;
   // WebGLRenderingContext, propagate this flag like other&lt;br /&gt;
   // CanvasImageSource types do, such as HTMLImageElement.&lt;br /&gt;
   ImageBitmap transferToImageBitmap();&lt;br /&gt;
 &lt;br /&gt;
   // Throws a SecurityError if the OffscreenCanvas&#039;s origin-clean flag&lt;br /&gt;
   // is set to false.&lt;br /&gt;
   Promise&amp;lt;Blob&amp;gt; toBlob(optional DOMString type, any... arguments);   &lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 OffscreenCanvas implements Transferable;&lt;br /&gt;
 ImageBitmap implements Transferable;&lt;br /&gt;
 &lt;br /&gt;
 // It&#039;s crucial that there be a way to explicitly dispose of ImageBitmaps&lt;br /&gt;
 // since they refer to potentially large graphics resources. Some uses&lt;br /&gt;
 // of this API proposal will result in repeated allocations of ImageBitmaps,&lt;br /&gt;
 // and garbage collection will not reliably reclaim them quickly enough. &lt;br /&gt;
 // Here we reuse close(), which also exists on another Transferable type,&lt;br /&gt;
 // MessagePort. Potentially, all Transferable types should inherit from a&lt;br /&gt;
 // new interface type &amp;quot;Closeable&amp;quot;. &lt;br /&gt;
 partial interface ImageBitmap {&lt;br /&gt;
   // Dispose of all graphical resources associated with this ImageBitmap.&lt;br /&gt;
   void close(); &lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface HTMLCanvasElement {&lt;br /&gt;
   OffscreenCanvas transferControlToOffscreen();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 // Note that CanvasRenderingContext2D already has a commit() method&lt;br /&gt;
 // from the CanvasProxy spec which this proposal obsoletes.&lt;br /&gt;
 partial interface CanvasRenderingContext2D {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface WebGLRenderingContextBase {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;&lt;br /&gt;
 &lt;br /&gt;
   // If this context is associated with an OffscreenCanvas that was&lt;br /&gt;
   // created by HTMLCanvasElement&#039;s transferControlToOffscreen method,&lt;br /&gt;
   // causes this context&#039;s current rendering results to be pushed&lt;br /&gt;
   // to that canvas element. This has the same effect as returning&lt;br /&gt;
   // control to the main loop in a single-threaded application. Otherwise,&lt;br /&gt;
   // this call has no effect.&lt;br /&gt;
   void commit();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 // The new ImageBitmapRenderingContext is a canvas rendering context&lt;br /&gt;
 // which only provides the functionality to replace the canvas&#039;s&lt;br /&gt;
 // contents with the given ImageBitmap. Its context id (the first argument&lt;br /&gt;
 // to getContext) is &amp;quot;bitmaprenderer&amp;quot;.&lt;br /&gt;
 interface ImageBitmapRenderingContext {&lt;br /&gt;
   // Displays the given ImageBitmap in the canvas associated with this&lt;br /&gt;
   // rendering context. Ownership of the ImageBitmap is transferred to&lt;br /&gt;
   // the canvas. The caller may not use its reference to the ImageBitmap&lt;br /&gt;
   // after making this call. (This semantic is crucial to enable prompt&lt;br /&gt;
   // reclamation of expensive graphics resources, rather than relying on&lt;br /&gt;
   // garbage collection to do so.)&lt;br /&gt;
   //&lt;br /&gt;
   // The ImageBitmap conceptually replaces the canvas&#039;s bitmap, but&lt;br /&gt;
   // it does not change the canvas&#039;s intrinsic width or height.&lt;br /&gt;
   //&lt;br /&gt;
   // The ImageBitmap, when displayed, is clipped to the rectangle&lt;br /&gt;
   // defined by the canvas&#039;s intrinsic width and height. Pixels that&lt;br /&gt;
   // would be covered by the canvas&#039;s bitmap which are not covered by&lt;br /&gt;
   // the supplied ImageBitmap are rendered transparent black. Any CSS&lt;br /&gt;
   // styles affecting the display of the canvas are applied as usual.&lt;br /&gt;
   void transferImageBitmap(ImageBitmap bitmap);&lt;br /&gt;
 };&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== This Solution ===&lt;br /&gt;
&lt;br /&gt;
This proposed API can be used in several ways to satisfy the use cases described above:&lt;br /&gt;
&lt;br /&gt;
* It supports zero-copy transfer of canvases&#039; rendering results between threads, for example from a worker to the main thread. In this model, the main thread controls when to display new frames produced by the worker, so synchronization with other DOM updates is achieved.&lt;br /&gt;
&lt;br /&gt;
* It supports fully asynchronous rendering by a worker into a canvas displayed on the main thread. This satisfies certain Emscripten developers&#039; full-screen use cases.&lt;br /&gt;
&lt;br /&gt;
* It supports using a single WebGLRenderingContext or Canvas2DRenderingContext to efficiently render into multiple regions on the web page.&lt;br /&gt;
&lt;br /&gt;
* It introduces ImageBitmapRenderingContext, a new canvas context type whose sole purpose is to efficiently display ImageBitmaps. This supersedes the [[WorkerCanvas]] proposal&#039;s use of HTMLImageElement for this purpose.&lt;br /&gt;
&lt;br /&gt;
* It supports asynchronous encoding of OffscreenCanvases&#039; rendering results into Blobs which can be consumed by various other web platform APIs.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
This proposal introduces two primary processing models. The first involves &#039;&#039;synchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The application generates new frames using the RenderingContext obtained from the OffscreenCanvas. When the application is finished rendering each new frame, it calls transferToImageBitmap to &amp;quot;tear off&amp;quot; the most recently rendered image from the OffscreenCanvas -- like a Post-It note. The resulting ImageBitmap can then be used in any API receiving that data type; notably, it can be displayed in a second canvas without introducing a copy. An ImageBitmapRenderingContext is obtained from the second canvas by calling &amp;lt;code&amp;gt;getContext(&#039;bitmaprenderer&#039;)&amp;lt;/code&amp;gt;. Each frame is displayed in the second canvas using the &amp;lt;code&amp;gt;transferImageBitmap&amp;lt;/code&amp;gt; method on this rendering context. Note that the threads producing and consuming the frames may be the same, or they may be different. Note also that a single OffscreenCanvas may transfer frames into an arbitrary number of other ImageBitmapRenderingContexts.&lt;br /&gt;
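&lt;br /&gt;
The synchronous model can be sketched as follows (an illustrative sketch; it assumes an OffscreenCanvas constructed in a worker and an existing Worker object on the main thread):&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
// worker.js&lt;br /&gt;
var offscreen = new OffscreenCanvas(256, 256);&lt;br /&gt;
var gl = offscreen.getContext(&#039;webgl&#039;);&lt;br /&gt;
// ... render a frame with gl ...&lt;br /&gt;
var bitmap = offscreen.transferToImageBitmap();&lt;br /&gt;
postMessage({ frame: bitmap }, [bitmap]); // transferred, not copied&lt;br /&gt;
&lt;br /&gt;
// main.js&lt;br /&gt;
var displayCtx = document.querySelector(&#039;canvas&#039;).getContext(&#039;bitmaprenderer&#039;);&lt;br /&gt;
worker.onmessage = function (e) {&lt;br /&gt;
  displayCtx.transferImageBitmap(e.data.frame); // the canvas takes ownership&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;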
&lt;br /&gt;
The second processing model involves &#039;&#039;asynchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The main thread instantiates an HTMLCanvasElement and calls &amp;lt;code&amp;gt;transferControlToOffscreen&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;getContext&amp;lt;/code&amp;gt; is used to obtain a rendering context for that OffscreenCanvas, either on the main thread, or on a worker. The application calls &amp;lt;code&amp;gt;commit&amp;lt;/code&amp;gt; against that rendering context in order to push frames to the original HTMLCanvasElement. In this rendering model, it is not defined when those frames become visible in the original canvas element. However, if both of the following conditions apply:&lt;br /&gt;
&lt;br /&gt;
* It is a worker thread which is calling commit(), and&lt;br /&gt;
* The worker is calling commit() repeatedly against exactly one rendering context&lt;br /&gt;
&lt;br /&gt;
then it is required that the user agent synchronize the calls to commit() to the vsync interval. Calls to commit() conceptually enqueue frames for display, and after an implementation-defined number of frames have been enqueued, further calls to commit() will block until earlier frames have been presented to the screen. (This requirement allows porting of applications which drive their own main loop rather than using an event-driven loop.)&lt;br /&gt;
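&lt;br /&gt;
The asynchronous model can be sketched as follows (an illustrative sketch; the self-driven loop mirrors the ported-application style described above):&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;&lt;br /&gt;
// main.js&lt;br /&gt;
var offscreen = document.querySelector(&#039;canvas&#039;).transferControlToOffscreen();&lt;br /&gt;
worker.postMessage({ canvas: offscreen }, [offscreen]);&lt;br /&gt;
&lt;br /&gt;
// worker.js: the worker drives its own main loop&lt;br /&gt;
onmessage = function (e) {&lt;br /&gt;
  var gl = e.data.canvas.getContext(&#039;webgl&#039;);&lt;br /&gt;
  for (;;) {&lt;br /&gt;
    // ... render a frame with gl ...&lt;br /&gt;
    gl.commit(); // blocks once the frame queue is full, throttling to vsync&lt;br /&gt;
  }&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;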
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
* A known good way to drive an animation loop from a worker is needed. requestAnimationFrame or a similar API needs to be defined on worker threads.&lt;br /&gt;
* Some parts of the CanvasRenderingContext2D interface shall not be supported due to OffscreenCanvas objects having no relation to the DOM or Frame: HitRegions, scrollPathIntoView, drawFocusIfNeeded.&lt;br /&gt;
* Due to technical challenges, some implementors [https://bugzilla.mozilla.org/show_bug.cgi?id=801176#c29 (Google and Mozilla)] have expressed a desire to ship without initially supporting text rendering in 2D contexts. Open Issue: Should text support be formally excluded from the specification until implementors are prepared to ship it (or until a more feasible API is designed)?&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
&lt;br /&gt;
This proposal has been vetted by developers of Apple&#039;s Safari, Google&#039;s Chrome, Microsoft&#039;s Internet Explorer, and Mozilla&#039;s Firefox browsers. All vendors agreed upon the basic form of the API, so it is likely it will be implemented widely and compatibly.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
&lt;br /&gt;
Web page authors have demanded increased parallelism support from the web platform for multiple years. If support for multithreaded rendering is added, it is likely it will be rapidly adopted.&lt;br /&gt;
&lt;br /&gt;
==== Example code ====&lt;br /&gt;
&lt;br /&gt;
Jeff Gilbert from Mozilla has crafted some example code utilizing this API:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jdashg/snippets/tree/master/webgl-from-worker Rendering WebGL from a worker using the commit() API]&lt;br /&gt;
* [https://github.com/jdashg/snippets/blob/master/webgl-one-to-many/index.html Using one WebGL context to render to many Canvas elements]&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=9998</id>
		<title>OffscreenCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=OffscreenCanvas&amp;diff=9998"/>
		<updated>2015-10-06T01:49:52Z</updated>

		<summary type="html">&lt;p&gt;Junov: Documented some limitations regarding 2D contexts.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;Provides more control over how canvases are rendered. This is a follow-on to the [[WorkerCanvas]] proposal and will be merged once agreement is reached.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
&lt;br /&gt;
Feedback from web application authors using canvases has shown the need for the following controls:&lt;br /&gt;
&lt;br /&gt;
* (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.&lt;br /&gt;
* (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.&lt;br /&gt;
* (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.&lt;br /&gt;
* (From adopters of the Push API): need to be able to dynamically create images to use as notification icons, such as compositing avatars, or adding an unread count.&lt;br /&gt;
* (From the Google Docs team): need to be able to layout and render text from a worker using CanvasRenderingContext2D and display those results on the main thread.&lt;br /&gt;
* (From the Google Slides team): want to layout and render the slide thumbnails from a worker. During initial load and heavy collaboration these update frequently, and currently cause slowdowns on the main thread.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
&lt;br /&gt;
* [https://html.spec.whatwg.org/multipage/scripting.html#proxying-canvases-to-workers CanvasProxy] does not provide sufficient control to allow synchronization between workers&#039; rendering and DOM updates on the main thread. Keeping this rendering in sync is a requirement from Google&#039;s Maps team.&lt;br /&gt;
* [[CanvasInWorkers]] does not allow a worker to render directly into a canvas on the main thread without running code on the main thread. Allowing completely unsynchronized rendering is a requirement from Mozilla and users of Emscripten such as Epic Games and Unity, in which the desire is to execute all of the game&#039;s rendering on a worker thread.&lt;br /&gt;
* [[WorkerCanvas]] mostly addresses these two use cases, but some implementers objected to the mechanism for displaying the rendering results in image elements. The specific objection was that image elements already have complex internal state (for example, the management of the image&#039;s &amp;quot;loaded&amp;quot; state), and this would make it more complex. It also did not precisely address the use case of producing new frames both on the main thread and in workers.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
[https://blog.mozilla.org/research/2014/07/22/webgl-in-web-workers-today-and-faster-than-expected/ WebGL in Web Workers] details some work attempted in the Emscripten toolchain to address the lack of WebGL in workers. Due to the high volume of calls and large amount of data that is transferred to the graphics card in a typical high-end WebGL application, this approach is not sustainable. It&#039;s necessary for workers to be able to call the WebGL API directly, and present those results to the screen in a manner that does not introduce any copies of the rendering results.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Making canvas rendering contexts available to workers will increase parallelism in web applications, leading to increased performance on multi-core systems.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
See the above-mentioned use cases:&lt;br /&gt;
&lt;br /&gt;
* Google&#039;s Maps team&lt;br /&gt;
* Emscripten users such as Epic Games and Unity&lt;br /&gt;
* Many others&lt;br /&gt;
&lt;br /&gt;
== Web IDL ==&lt;br /&gt;
&lt;br /&gt;
 [Constructor(unsigned long width, unsigned long height),&lt;br /&gt;
  Exposed=(Window,Worker)]&lt;br /&gt;
 interface OffscreenCanvas {&lt;br /&gt;
   attribute unsigned long width;&lt;br /&gt;
   attribute unsigned long height;&lt;br /&gt;
   RenderingContext? getContext(DOMString contextId, any... arguments); &lt;br /&gt;
 &lt;br /&gt;
   // OffscreenCanvas, like HTMLCanvasElement, maintains an origin-clean flag.&lt;br /&gt;
   // ImageBitmaps created by calling this method also have an&lt;br /&gt;
   // origin-clean flag which is set to the value of the OffscreenCanvas&#039;s&lt;br /&gt;
   // flag at the time of their construction. Uses of the ImageBitmap&lt;br /&gt;
   // in other APIs, such as CanvasRenderingContext2D or&lt;br /&gt;
   // WebGLRenderingContext, propagate this flag like other&lt;br /&gt;
   // CanvasImageSource types do, such as HTMLImageElement.&lt;br /&gt;
   ImageBitmap transferToImageBitmap();&lt;br /&gt;
 &lt;br /&gt;
   // Throws a SecurityError if the OffscreenCanvas&#039;s origin-clean flag&lt;br /&gt;
   // is set to false.&lt;br /&gt;
   Promise&amp;lt;Blob&amp;gt; toBlob(optional DOMString type, any... arguments);   &lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 OffscreenCanvas implements Transferable;&lt;br /&gt;
 ImageBitmap implements Transferable;&lt;br /&gt;
 &lt;br /&gt;
 // It&#039;s crucial that there be a way to explicitly dispose of ImageBitmaps&lt;br /&gt;
 // since they refer to potentially large graphics resources. Some uses&lt;br /&gt;
 // of this API proposal will result in repeated allocations of ImageBitmaps,&lt;br /&gt;
 // and garbage collection will not reliably reclaim them quickly enough. &lt;br /&gt;
 // Here we reuse close(), which also exists on another Transferable type,&lt;br /&gt;
 // MessagePort. Potentially, all Transferable types should inherit from a&lt;br /&gt;
 // new interface type &amp;quot;Closeable&amp;quot;. &lt;br /&gt;
 partial interface ImageBitmap {&lt;br /&gt;
   // Dispose of all graphical resources associated with this ImageBitmap.&lt;br /&gt;
   void close(); &lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface HTMLCanvasElement {&lt;br /&gt;
   OffscreenCanvas transferControlToOffscreen();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 // Note that CanvasRenderingContext2D already has a commit() method&lt;br /&gt;
 // from the CanvasProxy spec which this proposal obsoletes.&lt;br /&gt;
 partial interface CanvasRenderingContext2D {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 partial interface WebGLRenderingContextBase {&lt;br /&gt;
   // back-reference to the canvas&lt;br /&gt;
   readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;&lt;br /&gt;
 &lt;br /&gt;
   // If this context is associated with an OffscreenCanvas that was&lt;br /&gt;
   // created by HTMLCanvasElement&#039;s transferControlToOffscreen method,&lt;br /&gt;
   // causes this context&#039;s current rendering results to be pushed&lt;br /&gt;
   // to that canvas element. This has the same effect as returning&lt;br /&gt;
   // control to the main loop in a single-threaded application. Otherwise,&lt;br /&gt;
   // this call has no effect.&lt;br /&gt;
   void commit();&lt;br /&gt;
 };&lt;br /&gt;
 &lt;br /&gt;
 // The new ImageBitmapRenderingContext is a canvas rendering context&lt;br /&gt;
 // which only provides the functionality to replace the canvas&#039;s&lt;br /&gt;
 // contents with the given ImageBitmap. Its context id (the first argument&lt;br /&gt;
 // to getContext) is &amp;quot;bitmaprenderer&amp;quot;.&lt;br /&gt;
 [Exposed=(Window,Worker)]&lt;br /&gt;
 interface ImageBitmapRenderingContext {&lt;br /&gt;
   // Displays the given ImageBitmap in the canvas associated with this&lt;br /&gt;
   // rendering context. Ownership of the ImageBitmap is transferred to&lt;br /&gt;
   // the canvas. The caller may not use its reference to the ImageBitmap&lt;br /&gt;
   // after making this call. (This semantic is crucial to enable prompt&lt;br /&gt;
   // reclamation of expensive graphics resources, rather than relying on&lt;br /&gt;
   // garbage collection to do so.)&lt;br /&gt;
   //&lt;br /&gt;
   // The ImageBitmap conceptually replaces the canvas&#039;s bitmap, but&lt;br /&gt;
   // it does not change the canvas&#039;s intrinsic width or height.&lt;br /&gt;
   //&lt;br /&gt;
   // The ImageBitmap, when displayed, is clipped to the rectangle&lt;br /&gt;
    // defined by the canvas&#039;s intrinsic width and height. Pixels that&lt;br /&gt;
   // would be covered by the canvas&#039;s bitmap which are not covered by&lt;br /&gt;
   // the supplied ImageBitmap are rendered transparent black. Any CSS&lt;br /&gt;
   // styles affecting the display of the canvas are applied as usual.&lt;br /&gt;
   void transferImageBitmap(ImageBitmap bitmap);&lt;br /&gt;
 };&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
=== This Solution ===&lt;br /&gt;
&lt;br /&gt;
This proposed API can be used in several ways to satisfy the use cases described above:&lt;br /&gt;
&lt;br /&gt;
* It supports zero-copy transfer of canvases&#039; rendering results between threads, for example from a worker to the main thread. In this model, the main thread controls when to display new frames produced by the worker, so synchronization with other DOM updates is achieved.&lt;br /&gt;
&lt;br /&gt;
* It supports fully asynchronous rendering by a worker into a canvas displayed on the main thread. This satisfies certain Emscripten developers&#039; full-screen use cases.&lt;br /&gt;
&lt;br /&gt;
* It supports using a single WebGLRenderingContext or Canvas2DRenderingContext to efficiently render into multiple regions on the web page.&lt;br /&gt;
&lt;br /&gt;
* It introduces ImageBitmapRenderingContext, a new canvas context type whose sole purpose is to efficiently display ImageBitmaps. This supersedes the [[WorkerCanvas]] proposal&#039;s use of HTMLImageElement for this purpose.&lt;br /&gt;
&lt;br /&gt;
* It supports asynchronous encoding of OffscreenCanvases&#039; rendering results into Blobs which can be consumed by various other web platform APIs.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
&lt;br /&gt;
This proposal introduces two primary processing models. The first involves &#039;&#039;synchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The application generates new frames using the RenderingContext obtained from the OffscreenCanvas. When the application is finished rendering each new frame, it calls transferToImageBitmap to &amp;quot;tear off&amp;quot; the most recently rendered image from the OffscreenCanvas -- like a Post-It note. The resulting ImageBitmap can then be used in any API receiving that data type; notably, it can be displayed in a second canvas without introducing a copy. An ImageBitmapRenderingContext is obtained from the second canvas by calling &amp;lt;code&amp;gt;getContext(&#039;bitmaprenderer&#039;)&amp;lt;/code&amp;gt;. Each frame is displayed in the second canvas using the &amp;lt;code&amp;gt;transferImageBitmap&amp;lt;/code&amp;gt; method on this rendering context. Note that the threads producing and consuming the frames may be the same, or they may be different. Note also that a single OffscreenCanvas may transfer frames into an arbitrary number of other ImageBitmapRenderingContexts.&lt;br /&gt;
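&lt;br /&gt;
The synchronous model described above can be sketched as follows. This is an illustrative sketch only, assuming a browser that implements this proposal; the function names are hypothetical.&lt;br /&gt;

```javascript
// Sketch only: assumes a browser implementing this proposal.
// Producer: render one frame, then "tear off" the result as an ImageBitmap.
function produceFrame(offscreen, time) {
  const ctx = offscreen.getContext('2d');
  ctx.clearRect(0, 0, offscreen.width, offscreen.height);
  ctx.fillRect(40 + 30 * Math.sin(time / 500), 40, 20, 20);
  return offscreen.transferToImageBitmap(); // detaches the frame, no copy
}

// Consumer: display the frame; the canvas takes ownership of the bitmap.
function displayFrame(visibleCanvas, bitmap) {
  visibleCanvas.getContext('bitmaprenderer').transferImageBitmap(bitmap);
}

// Wiring (browser only): drive the producer/consumer pair from rAF.
if (typeof OffscreenCanvas !== 'undefined') {
  const offscreen = new OffscreenCanvas(100, 100);
  const visible = document.querySelector('canvas');
  requestAnimationFrame(function tick(time) {
    displayFrame(visible, produceFrame(offscreen, time));
    requestAnimationFrame(tick);
  });
}
```

Because the producer and consumer only exchange an ImageBitmap, the producer could equally run in a worker and transfer each bitmap to the main thread via postMessage, since ImageBitmap is Transferable under this proposal.&lt;br /&gt;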
&lt;br /&gt;
The second processing model involves &#039;&#039;asynchronous&#039;&#039; display of new frames produced by the OffscreenCanvas. The main thread instantiates an HTMLCanvasElement and calls &amp;lt;code&amp;gt;transferControlToOffscreen&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;getContext&amp;lt;/code&amp;gt; is used to obtain a rendering context for that OffscreenCanvas, either on the main thread or on a worker. The application calls &amp;lt;code&amp;gt;commit&amp;lt;/code&amp;gt; against that rendering context in order to push frames to the original HTMLCanvasElement. In this rendering model, it is not defined when those frames become visible in the original canvas element. However, if the following conditions apply:&lt;br /&gt;
&lt;br /&gt;
* It is a worker thread which is calling commit(), and&lt;br /&gt;
* The worker is calling commit() repeatedly against exactly one rendering context&lt;br /&gt;
&lt;br /&gt;
then it is required that the user agent synchronize the calls to commit() to the vsync interval. Calls to commit() conceptually enqueue frames for display, and after an implementation-defined number of frames have been enqueued, further calls to commit() will block until earlier frames have been presented to the screen. (This requirement allows porting of applications which drive their own main loop rather than using an event-driven loop.)&lt;br /&gt;
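&lt;br /&gt;
A minimal sketch of this asynchronous model follows. The &amp;lt;code&amp;gt;commit&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;transferControlToOffscreen&amp;lt;/code&amp;gt; calls are the APIs proposed here; the worker URL and drawScene function are hypothetical.&lt;br /&gt;

```javascript
// Sketch only: main-thread side. Transfers control of a canvas to a worker.
function startWorkerRendering(canvas, workerUrl) {
  const offscreen = canvas.transferControlToOffscreen();
  const worker = new Worker(workerUrl); // workerUrl is hypothetical
  worker.postMessage({ canvas: offscreen }, [offscreen]); // transfer, not copy
  return worker;
}

// Sketch only: worker side. An application-driven loop; once the
// implementation-defined frame queue is full, commit() blocks, which
// paces this loop to the display refresh (vsync) interval.
function workerRenderLoop(offscreen, drawScene) {
  const gl = offscreen.getContext('webgl');
  for (;;) {
    drawScene(gl); // hypothetical application rendering code
    gl.commit();   // push the finished frame to the placeholder canvas
  }
}
```

Note that the blocking behavior of commit() is what allows Emscripten-style applications, which own their main loop, to run unmodified in a worker.&lt;br /&gt;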
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
* A known good way to drive an animation loop from a worker is needed. requestAnimationFrame or a similar API needs to be defined on worker threads.&lt;br /&gt;
* Some parts of the CanvasRenderingContext2D interface shall not be supported, due to OffscreenCanvas objects having no relation to the DOM or frame: hit regions, scrollPathIntoView, and drawFocusIfNeeded.&lt;br /&gt;
* Due to technical challenges, some implementors [https://bugzilla.mozilla.org/show_bug.cgi?id=801176#c29 (Google and Mozilla)] have expressed a desire to ship without initially supporting text rendering in 2D contexts. Open Issue: Should text support be formally excluded from the specification until implementors are prepared to ship it (or until a more feasible API is designed)?&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
&lt;br /&gt;
This proposal has been vetted by developers of Apple&#039;s Safari, Google&#039;s Chrome, Microsoft&#039;s Internet Explorer, and Mozilla&#039;s Firefox browsers. All vendors agreed upon the basic form of the API, so it is likely it will be implemented widely and compatibly.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
&lt;br /&gt;
Web page authors have demanded increased parallelism support from the web platform for multiple years. If support for multithreaded rendering is added, it is likely it will be rapidly adopted.&lt;br /&gt;
&lt;br /&gt;
==== Example code ====&lt;br /&gt;
&lt;br /&gt;
Jeff Gilbert from Mozilla has crafted some example code utilizing this API:&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jdashg/snippets/tree/master/webgl-from-worker Rendering WebGL from a worker using the commit() API]&lt;br /&gt;
* [https://github.com/jdashg/snippets/blob/master/webgl-one-to-many/index.html Using one WebGL context to render to many Canvas elements]&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Changes_to_ImageBitmap_for_OffscreenCanvas&amp;diff=9945</id>
		<title>Changes to ImageBitmap for OffscreenCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Changes_to_ImageBitmap_for_OffscreenCanvas&amp;diff=9945"/>
		<updated>2015-04-21T15:09:55Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;&#039;WORK IN PROGRESS&#039;&#039;&#039; &#039;&#039;Amendments to ImageBitmap to provide zero-copy paths for moving pixel data and for working with the OffscreenCanvas proposal&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
A common complaint from web developers writing applications that manipulate large images is that the browser tends to consume large amounts of RAM and GPU memory. Part of this is due to intermediate copies of image data that are made by the browser, and transient copies that are kept around awaiting garbage collection. Another complaint is that manipulating large images on the browser&#039;s main thread often makes applications janky.&lt;br /&gt;
Specific use cases:&lt;br /&gt;
* Generating or processing pixel data in JavaScript, possibly in a Worker thread, and bringing it to screen.&lt;br /&gt;
* Taking a snapshot of canvas-rendered content and uploading it as an image file to a remote server, possibly using XHR2 with progress updates.&lt;br /&gt;
* Using 2D canvas to produce images that are subsequently used as textures in WebGL.&lt;br /&gt;
* Transferring a canvas-rendered image to anywhere an image URL can be used (e.g. CSS properties).&lt;br /&gt;
* Saving locally rendered canvas content to a local disk.&lt;br /&gt;
* Streaming snapshots of a local canvas to a remote server, e.g. broadcasting a live presentation rendered in WebGL.&lt;br /&gt;
* Appending a locally rendered canvas image to form data as an image file.&lt;br /&gt;
* Saving a canvas image locally with the anchor &amp;quot;download&amp;quot; attribute.&lt;br /&gt;
* Capturing a snapshot from a video stream (possibly WebRTC) and uploading the snapshot to a remote server.&lt;br /&gt;
&lt;br /&gt;
Note: Many of these use cases were reported by web developers in support of implementing the HTMLCanvasElement.toBlob interface. Below, we counter-propose that putting toBlob on ImageBitmap will serve those use cases at least as well as having toBlob on &amp;lt;canvas&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
The current API often forces the web developer to go through a canvas to move pixel data. Because canvases are mutable, intermediate copies are often unavoidable, and may result in unnecessary multiplies and divides by alpha, which are expensive and lossy. ImageBitmap, being an opaque, immutable object type, would provide a low-friction means of moving pixel data, but it is missing some functionality that prevents it from being a universal pixel vehicle. In particular, it is missing transfer methods (required for zero-copy behavior).&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&#039;&#039;Some evidence that this feature is desperately needed on the web.  You may provide a separate examples page for listing these.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For sending image data to a remote server, a compressed image file in binary form is the preferred vehicle. This can currently be achieved by using HTMLCanvasElement.toBlob (or an equivalent interface) in web browsers that provide that API, or with this approach in other browsers: http://stackoverflow.com/questions/4998908/convert-data-uri-to-file-then-append-to-formdata. This method has some disadvantages: because it lives on a DOM interface, it can only be invoked from the main thread; because the canvas backing store is editable, the implementation is required to make a read-only snapshot for itself; and in many use cases, the canvas itself represents an additional copy of the image data. Preventing the proliferation of copies of image data in RAM is very important for applications that manipulate large images (e.g. full-resolution photos, maps), which are vulnerable to malfunctions caused by out-of-memory errors, particularly on mobile.&lt;br /&gt;
&lt;br /&gt;
Experience has shown that garbage collection is often a performance liability when dealing with large temporary objects, particularly those that consume GPU memory. The ability to explicitly discard image buffers when they are no longer needed has proven to be widely useful with 2D canvas, which offers this possibility indirectly by setting the intrinsic size to 0. This practice allows resources (RAM, GPU memory) to be freed as early as possible, avoiding process bloat, and it helps reduce the frequency of garbage collections, which can cause jank. Using ImageBitmaps as currently specified implies losing this advantage.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
The proposed additions to the ImageBitmap interface aim to reduce browser memory consumption and to provide faster, smoother performance for the above-mentioned use cases.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[http://example.com Source]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;I would like this feature ...&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solution ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Exposed=(Window,Worker)]&lt;br /&gt;
interface ImageBitmap implements Transferable {&lt;br /&gt;
  readonly attribute unsigned long width;&lt;br /&gt;
  readonly attribute unsigned long height;&lt;br /&gt;
  void close();&lt;br /&gt;
  // Throws a SecurityError if the OffscreenCanvas&#039;s origin-clean flag&lt;br /&gt;
  // is set to false.&lt;br /&gt;
  Promise&amp;lt;Blob&amp;gt; toBlob(optional DOMString type, any... arguments);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Making ImageBitmap Transferable allows it to be used as a zero-copy vehicle for passing pixel data between threads, hence reducing peak RAM consumption and avoiding the CPU cost of performing a copy.&lt;br /&gt;
&lt;br /&gt;
Offering toBlob on ImageBitmap, as opposed to HTMLCanvasElement:&lt;br /&gt;
* Allows toBlob to be invoked from a worker, which helps reduce jank on the main thread (even if the encode happens on a separate thread, making a copy and interacting with the blob store can jank the main thread).&lt;br /&gt;
* Allows the data flow to bypass the canvas (for cases where the source of the image data is not a canvas), thereby avoiding an alpha multiply+divide in some cases, and avoiding an unnecessary intermediate (the canvas itself).&lt;br /&gt;
* Because ImageBitmap is immutable, no safety copy is required for toBlob to operate asynchronously. A counter-argument to this is that an implementation of HTMLCanvasElement.toBlob could perform the copy lazily, only when the canvas backing is about to be mutated while a toBlob is in progress, thus performing a copy only in use cases that require it. Though this is true, it results in performance characteristics that may seem idiosyncratic to developers (i.e. the performance of toBlob degrades if something is done to the canvas after calling toBlob), as opposed to a flow that guarantees zero copies: OffscreenCanvas.transferToImageBitmap, followed by ImageBitmap.toBlob.&lt;br /&gt;
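&lt;br /&gt;
The zero-copy flow at the end of that list can be sketched as follows, assuming the transferToImageBitmap, ImageBitmap.toBlob, and close APIs proposed on this page; the function name is illustrative.&lt;br /&gt;

```javascript
// Sketch only: a worker-friendly, zero-copy path from an OffscreenCanvas
// to an encoded Blob, using the APIs proposed on this page.
async function snapshotAsBlob(offscreen) {
  const bitmap = offscreen.transferToImageBitmap(); // detaches pixels, no copy
  try {
    // Rejects with a SecurityError if the origin-clean flag is false.
    return await bitmap.toBlob('image/png');
  } finally {
    bitmap.close(); // release pixel memory promptly, without waiting for GC
  }
}
```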
&lt;br /&gt;
The close() method effectively neuters the ImageBitmap. It is a means of explicitly de-allocating large resources, which avoids waiting for garbage collection, and therefore reduces the frequency of GCs and brings down peak RAM consumption.&lt;br /&gt;
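&lt;br /&gt;
For example, a small helper (illustrative only, not part of the proposal) can make the dispose-after-use discipline explicit:&lt;br /&gt;

```javascript
// Illustrative helper, not part of the proposal: run a callback with a
// bitmap, then close() it unconditionally so its resources are freed
// deterministically instead of waiting for garbage collection.
function withBitmap(bitmap, use) {
  try {
    return use(bitmap);
  } finally {
    bitmap.close(); // after this, the bitmap is neutered and unusable
  }
}
```

This guarantees disposal even if the callback throws.&lt;br /&gt;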
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:&#039;&#039;Explanation of the changes introduced by this solution. It explains how the document is processed, and how errors are handled. This should be very clear, including things such as event timing if the solution involves events, how to create graphs representing the data in the case of semantic proposals, etc.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:&#039;&#039;Cases not covered by this solution in relation to the problem description; other problems with this solution, if any.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
:&#039;&#039;Description of how and why browser vendors would take advantage of this feature.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:&#039;&#039;Reasons why page authors would use this solution.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Changes_to_ImageBitmap_for_OffscreenCanvas&amp;diff=9944</id>
		<title>Changes to ImageBitmap for OffscreenCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Changes_to_ImageBitmap_for_OffscreenCanvas&amp;diff=9944"/>
		<updated>2015-04-17T15:40:16Z</updated>

		<summary type="html">&lt;p&gt;Junov: Created page with &amp;quot;:&amp;#039;&amp;#039;&amp;#039;WORK IN PROGRESS&amp;#039;&amp;#039;&amp;#039; &amp;#039;&amp;#039;Amendments to ImageBitmap to provide zero-copy paths for moving pixel data and for working with the OffscreenCanvas proposal&amp;#039;&amp;#039;  == Use Case Descripti...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;&#039;WORK IN PROGRESS&#039;&#039;&#039; &#039;&#039;Amendments to ImageBitmap to provide zero-copy paths for moving pixel data and for working with the OffscreenCanvas proposal&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
A common complaint from web developers writing applications that manipulate large images is that the browser tends to consume large amounts of RAM and GPU memory. Part of this is due to intermediate copies of image data that are made by the browser, and transient copies that are kept around awaiting garbage collection. Another complaint is that manipulating large images on the browser&#039;s main thread often makes applications janky.&lt;br /&gt;
Specific use cases:&lt;br /&gt;
* Generating pixel data in JavaScript and bringing it to screen.&lt;br /&gt;
* Taking a snapshot of canvas-rendered content and uploading it as an image file to a remote server, possibly using XHR2 with progress updates.&lt;br /&gt;
* Using 2D canvas to produce images that are subsequently used as textures in WebGL.&lt;br /&gt;
* Transferring a canvas-rendered image to anywhere an image URL can be used (e.g. CSS properties).&lt;br /&gt;
* Saving locally rendered canvas content to a local disk.&lt;br /&gt;
* Streaming snapshots of a local canvas to a remote server, e.g. broadcasting a live presentation rendered in WebGL.&lt;br /&gt;
* Appending a locally rendered canvas image to form data as an image file.&lt;br /&gt;
* Saving a canvas image locally with the anchor &amp;quot;download&amp;quot; attribute.&lt;br /&gt;
* Capturing a snapshot from a video stream (possibly WebRTC) and uploading the snapshot to a remote server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: Many of these use cases were reported by web developers in support of the HTMLCanvasElement.toBlob interface.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
The current API often forces the web developer to go through a canvas to move pixel data. Because canvases are mutable, intermediate copies are often unavoidable, and may result in unnecessary multiplies and divides by alpha, which are expensive and lossy. ImageBitmap, being an opaque, immutable object type, would provide a low-friction means of moving pixel data, but it is missing some functionality that prevents it from being a universal pixel vehicle. In particular, it is missing transfer methods (required for zero-copy behavior).&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&#039;&#039;Some evidence that this feature is desperately needed on the web.  You may provide a separate examples page for listing these.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For sending image data to a remote server, a compressed image file in binary form is the preferred vehicle. This can currently be achieved by using HTMLCanvasElement.toBlob (or an equivalent interface) in web browsers that provide that API, or with this approach in other browsers: http://stackoverflow.com/questions/4998908/convert-data-uri-to-file-then-append-to-formdata. This method has some disadvantages: because it lives on a DOM interface, it can only be invoked from the main thread; because the canvas backing store is editable, the implementation is required to make a read-only snapshot for itself; and in many use cases, the canvas itself represents an additional copy of the image data. Preventing the proliferation of copies of image data in RAM is very important for applications that manipulate large images (e.g. full-resolution photos, maps), which are vulnerable to malfunctions caused by out-of-memory errors, particularly on mobile.&lt;br /&gt;
&lt;br /&gt;
Experience has shown that garbage collection is often a performance liability when dealing with large temporary objects, particularly those that consume GPU memory. The ability to explicitly discard image buffers when they are no longer needed has proven to be widely useful with 2D canvas, which offers this possibility indirectly by setting the intrinsic size to 0. This practice allows resources (RAM, GPU memory) to be freed as early as possible, avoiding process bloat, and it helps reduce the frequency of garbage collections, which can cause jank. Using ImageBitmaps as currently specified implies losing this advantage.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
The proposed additions to the ImageBitmap interface aim to reduce browser memory consumption and to provide faster, smoother performance for the above-mentioned use cases.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[http://example.com Source]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;I would like this feature ...&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solution ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Exposed=(Window,Worker)]&lt;br /&gt;
interface ImageBitmap implements Transferable {&lt;br /&gt;
  readonly attribute unsigned long width;&lt;br /&gt;
  readonly attribute unsigned long height;&lt;br /&gt;
  void close();&lt;br /&gt;
  // Throws a SecurityError if the OffscreenCanvas&#039;s origin-clean flag&lt;br /&gt;
  // is set to false.&lt;br /&gt;
  Promise&amp;lt;Blob&amp;gt; toBlob(optional DOMString type, any... arguments);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Making ImageBitmap Transferable allows it to be used as a zero-copy vehicle for passing pixel data between threads, hence reducing peak RAM consumption and avoiding the CPU cost of performing a copy.&lt;br /&gt;
&lt;br /&gt;
Offering toBlob on ImageBitmap, as opposed to HTMLCanvasElement:&lt;br /&gt;
* Allows toBlob to be invoked from a worker, which helps reduce jank on the main thread (even if the encode happens on a separate thread, making a copy and interacting with the blob store can jank the main thread).&lt;br /&gt;
* Allows the data flow to bypass the canvas (for cases where the source of the image data is not a canvas), thereby avoiding an alpha multiply+divide in some cases, and avoiding an unnecessary intermediate (the canvas itself).&lt;br /&gt;
* Because ImageBitmap is immutable, no safety copy is required for toBlob to operate asynchronously. A counter-argument to this is that an implementation of HTMLCanvasElement.toBlob could perform the copy lazily, only when the canvas backing is about to be mutated while a toBlob is in progress, thus performing a copy only in use cases that require it. Though this is true, it results in performance characteristics that may seem idiosyncratic to developers (i.e. the performance of toBlob degrades if something is done to the canvas after calling toBlob), as opposed to a flow that guarantees zero copies: OffscreenCanvas.transferToImageBitmap, followed by ImageBitmap.toBlob.&lt;br /&gt;
&lt;br /&gt;
The close() method effectively neuters the ImageBitmap. It is a means of explicitly de-allocating large resources, which avoids waiting for garbage collection, and therefore reduces the frequency of GCs and brings down peak RAM consumption.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:&#039;&#039;Explanation of the changes introduced by this solution. It explains how the document is processed, and how errors are handled. This should be very clear, including things such as event timing if the solution involves events, how to create graphs representing the data in the case of semantic proposals, etc.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:&#039;&#039;Cases not covered by this solution in relation to the problem description; other problems with this solution, if any.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
:&#039;&#039;Description of how and why browser vendors would take advantage of this feature.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:&#039;&#039;Reasons why page authors would use this solution.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasRenderedPixelSize&amp;diff=9942</id>
		<title>CanvasRenderedPixelSize</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasRenderedPixelSize&amp;diff=9942"/>
		<updated>2015-04-15T18:05:53Z</updated>

		<summary type="html">&lt;p&gt;Junov: /* Issues: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;This proposal is to allow Web developers to match canvas backing store resolution to device resolution.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Descriptions ==&lt;br /&gt;
&lt;br /&gt;
Use case #1: a Web application needs to render content to a canvas (using 2D or WebGL contexts) being displayed on a high-DPI screen. For best quality output, the pixels of the canvas backing store need to match 1:1 with screen pixels.&lt;br /&gt;
&lt;br /&gt;
Use case #2: As #1, and the user moves the page from screen to screen resulting in dynamic DPI changes.&lt;br /&gt;
&lt;br /&gt;
Use case #3: As #1, and the canvas is inside a CSS scale transform which may change dynamically.&lt;br /&gt;
&lt;br /&gt;
Use case #4: As #1, and the canvas&#039;s CSS content box has arbitrary fractional CSS pixel size and offset from the viewport.&lt;br /&gt;
&lt;br /&gt;
Use case #5: As #3, but the canvas is inside an &amp;lt;code&amp;gt;&amp;amp;lt;iframe&amp;amp;gt;&amp;lt;/code&amp;gt; in a page with a different origin which contains the CSS transform.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
Safari shipped a canvas implementation that automatically resized the backing store but this caused various problems, such as incompatibility with some existing content and a need for new &amp;quot;HD&amp;quot; APIs that authors were not motivated to use. This approach does not work for WebGL in any case.&lt;br /&gt;
&lt;br /&gt;
The current best workaround is to set the canvas backing store size by computing the canvas content box in CSS pixels relative to the viewport, as accurately as possible (e.g. using &amp;lt;code&amp;gt;getBoxQuads&amp;lt;/code&amp;gt;), multiplying by &amp;lt;code&amp;gt;window.devicePixelRatio&amp;lt;/code&amp;gt;, and using that to set the canvas width and height. This approach does not easily adapt to dynamic changes and does not handle use cases #4 or #5. In use-case #4 the computed device pixel size may be non-integer, in which case the UA is likely to snap the content-box to device pixel edges and the Web developer cannot predict how that snapping will be done. In use-case #5 the developer cannot know about the CSS transform in the foreign-origin content.&lt;br /&gt;
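The workaround described above can be sketched as follows. This is an illustrative sketch only, not part of this proposal; the helper name devicePixelSize is hypothetical, and the rounding step is a guess, since the UA may snap the content box to device pixels differently.

```javascript
// Sketch of the current workaround: compute the content-box size in CSS
// pixels, multiply by devicePixelRatio, and use the rounded result as the
// canvas backing store size. (devicePixelSize is a hypothetical helper.)
function devicePixelSize(cssWidth, cssHeight, dpr) {
  // Rounding is a guess; the UA may snap the content box differently,
  // which is exactly the uncertainty described in use case 4.
  return {
    width: Math.round(cssWidth * dpr),
    height: Math.round(cssHeight * dpr)
  };
}

// In a page this would be driven by layout, e.g.:
//   var rect = canvas.getBoundingClientRect();
//   var size = devicePixelSize(rect.width, rect.height, window.devicePixelRatio);
//   canvas.width = size.width; canvas.height = size.height;
```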
&lt;br /&gt;
== Goals ==&lt;br /&gt;
&lt;br /&gt;
# Leave the application in control of setting the pixel size of the backing store.&lt;br /&gt;
# Retain the invariant that 1 unit in canvas coordinate space is 1 backing store pixel.&lt;br /&gt;
# Make it easy for applications to set the pixel size of the backing store so that backing store pixels are 1:1 with device pixels, when that&#039;s possible.&lt;br /&gt;
&lt;br /&gt;
== Non Goals ==&lt;br /&gt;
&lt;br /&gt;
# Enable high-resolution backing store for existing content.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
Hixie proposed a canvas attribute to opt into automatic sizing of the backing store, but this violates goals #1 and #2. Violating goal #1 is a problem because applications often use temporary canvases and other assets which are dependent on the pixel size of the backing store. Goal #1 is also essential for WebGL.&lt;br /&gt;
&lt;br /&gt;
=== Suggested Solution ===&lt;br /&gt;
&lt;br /&gt;
Expose a preferred size for the canvas backing store directly to Web authors. Web authors who want to match screen resolution set the canvas width and height to that size. Expose an event on the canvas that fires when the preferred size changes.&lt;br /&gt;
&lt;br /&gt;
== Suggested IDL ==&lt;br /&gt;
&lt;br /&gt;
Add two new DOM attributes to &amp;lt;code&amp;gt;HTMLCanvasElement&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;partial interface HTMLCanvasElement {&lt;br /&gt;
  readonly attribute long renderedPixelWidth;&lt;br /&gt;
  readonly attribute long renderedPixelHeight;&lt;br /&gt;
};&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The values of these attributes are not normatively defined. For example, a UA might reduce these values to conserve system resources.&lt;br /&gt;
However, under normal steady-state conditions a UA is expected to choose values such that, if the canvas element has a CSS background color exactly filling its content-box, and the UA renders that CSS background color as a rectangle whose edges are aligned to device pixel edges, then &amp;lt;code&amp;gt;renderedPixelWidth&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;renderedPixelHeight&amp;lt;/code&amp;gt; are the device pixel size of that rectangle.&lt;br /&gt;
&lt;br /&gt;
Add a new event &amp;lt;code&amp;gt;renderedsizechange&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;HTMLCanvasElement&amp;lt;/code&amp;gt;. This event does not bubble and is not cancelable. Whenever the value that would be returned by &amp;lt;code&amp;gt;renderedPixelWidth&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;renderedPixelHeight&amp;lt;/code&amp;gt; changes, queue a task to fire &amp;lt;code&amp;gt;renderedsizechange&amp;lt;/code&amp;gt; at the &amp;lt;code&amp;gt;HTMLCanvasElement&amp;lt;/code&amp;gt; if there is not already a task pending to fire such an event at that element.&lt;br /&gt;
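The coalescing rule above (queue a task only if no task is already pending) is the usual at-most-one-pending-task pattern, sketched below. This is purely illustrative, not UA internals; makeCoalescedNotifier is a hypothetical name and setTimeout stands in for "queue a task".

```javascript
// Illustrative sketch of the coalescing rule: many synchronous size changes
// produce at most one queued notification.
function makeCoalescedNotifier(fire) {
  var pending = false;
  return function notifyChange() {
    if (pending) return;        // a task is already queued; coalesce
    pending = true;
    setTimeout(function () {    // stand-in for "queue a task"
      pending = false;
      fire();                   // fire a single renderedsizechange event
    }, 0);
  };
}
```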
&lt;br /&gt;
== Rationale: ==&lt;br /&gt;
&lt;br /&gt;
This API appears to be the minimal API needed to achieve the goals, is opt-in, and can be used in several different ways depending on how ambitious the app developer is:&lt;br /&gt;
* Simplest possible usage: set the canvas size once when loading and don&#039;t handle dynamic changes.&lt;br /&gt;
&amp;lt;pre&amp;gt;  &amp;amp;lt;script&amp;amp;gt;&lt;br /&gt;
  canvas.width = canvas.renderedPixelWidth; canvas.height = canvas.renderedPixelHeight;&lt;br /&gt;
  &amp;amp;lt;/script&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Update canvas size for every frame of a game:&lt;br /&gt;
&amp;lt;pre&amp;gt;  &amp;amp;lt;script&amp;amp;gt;&lt;br /&gt;
  function renderFrame() {&lt;br /&gt;
    canvas.width = canvas.renderedPixelWidth; canvas.height = canvas.renderedPixelHeight;&lt;br /&gt;
    ...&lt;br /&gt;
    window.requestAnimationFrame(renderFrame);&lt;br /&gt;
  }&lt;br /&gt;
  &amp;amp;lt;/script&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Update canvas size and re-render only when DPI changes:&lt;br /&gt;
&amp;lt;pre&amp;gt;  &amp;amp;lt;script&amp;amp;gt;&lt;br /&gt;
  function render() {&lt;br /&gt;
    canvas.width = canvas.renderedPixelWidth; canvas.height = canvas.renderedPixelHeight;&lt;br /&gt;
    ...&lt;br /&gt;
  }&lt;br /&gt;
  canvas.addEventListener(&amp;quot;renderedsizechange&amp;quot;, render, false);&lt;br /&gt;
  &amp;amp;lt;/script&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Issues: ==&lt;br /&gt;
&lt;br /&gt;
* Hard to determine a rendered size when the canvas is not attached to the DOM. Perhaps in that case the current intrinsic size should be returned? (junov@chromium.org)&lt;br /&gt;
* After a layout change that affects rendered pixel size, there is no guarantee that the size change event will be handled before the layout change is propagated to screen, so the content may be temporarily displayed in an inconsistent state. Note: the same issue exists with existing methods that may be based on mutation observers or window.onresize, for example. Though it is not the stated objective of this proposal to solve this problem, there may be an opportunity to do so. (junov)&lt;br /&gt;
* Accessing rendered pixel size is layout-inducing. To avoid layout thrashing, we should consider making this an asynchronous getter (e.g. asyncGetBoundingClientRect). This would also prevent renderedsizechange events from firing from within the evaluation of rendered pixel size, which is weird. (junov, credit: nduca)&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasRenderedPixelSize&amp;diff=9941</id>
		<title>CanvasRenderedPixelSize</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasRenderedPixelSize&amp;diff=9941"/>
		<updated>2015-04-15T16:54:31Z</updated>

		<summary type="html">&lt;p&gt;Junov: /* Issues: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;This proposal is to allow Web developers to match canvas backing store resolution to device resolution.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Descriptions ==&lt;br /&gt;
&lt;br /&gt;
Use case #1: a Web application needs to render content to a canvas (using 2D or WebGL contexts) being displayed on a high-DPI screen. For best quality output, the pixels of the canvas backing store need to match 1:1 with screen pixels.&lt;br /&gt;
&lt;br /&gt;
Use case #2: As #1, and the user moves the page from screen to screen resulting in dynamic DPI changes.&lt;br /&gt;
&lt;br /&gt;
Use case #3: As #1, and the canvas is inside a CSS scale transform which may change dynamically.&lt;br /&gt;
&lt;br /&gt;
Use case #4: As #1, and the canvas&#039;s CSS content box has arbitrary fractional CSS pixel size and offset from the viewport.&lt;br /&gt;
&lt;br /&gt;
Use case #5: As #3, but the canvas is inside an &amp;lt;code&amp;gt;&amp;amp;lt;iframe&amp;amp;gt;&amp;lt;/code&amp;gt; in a page with a different origin which contains the CSS transform.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
Safari shipped a canvas implementation that automatically resized the backing store but this caused various problems, such as incompatibility with some existing content and a need for new &amp;quot;HD&amp;quot; APIs that authors were not motivated to use. This approach does not work for WebGL in any case.&lt;br /&gt;
&lt;br /&gt;
The current best workaround is to set the canvas backing store size by computing the canvas content box in CSS pixels relative to the viewport, as accurately as possible (e.g. using &amp;lt;code&amp;gt;getBoxQuads&amp;lt;/code&amp;gt;), multiplying by &amp;lt;code&amp;gt;window.devicePixelRatio&amp;lt;/code&amp;gt;, and using that to set the canvas width and height. This approach does not easily adapt to dynamic changes and does not handle use cases #4 or #5. In use-case #4 the computed device pixel size may be non-integer, in which case the UA is likely to snap the content-box to device pixel edges and the Web developer cannot predict how that snapping will be done. In use-case #5 the developer cannot know about the CSS transform in the foreign-origin content.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
&lt;br /&gt;
# Leave the application in control of setting the pixel size of the backing store.&lt;br /&gt;
# Retain the invariant that 1 unit in canvas coordinate space is 1 backing store pixel.&lt;br /&gt;
# Make it easy for applications to set the pixel size of the backing store so that backing store pixels are 1:1 with device pixels, when that&#039;s possible.&lt;br /&gt;
&lt;br /&gt;
== Non Goals ==&lt;br /&gt;
&lt;br /&gt;
# Enable high-resolution backing store for existing content.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
Hixie proposed a canvas attribute to opt into automatic sizing of the backing store, but this violates goals #1 and #2. Violating goal #1 is a problem because applications often use temporary canvases and other assets which are dependent on the pixel size of the backing store. Goal #1 is also essential for WebGL.&lt;br /&gt;
&lt;br /&gt;
=== Suggested Solution ===&lt;br /&gt;
&lt;br /&gt;
Expose a preferred size for the canvas backing store directly to Web authors. Web authors who want to match screen resolution set the canvas width and height to that size. Expose an event on the canvas that fires when the preferred size changes.&lt;br /&gt;
&lt;br /&gt;
== Suggested IDL ==&lt;br /&gt;
&lt;br /&gt;
Add two new DOM attributes to &amp;lt;code&amp;gt;HTMLCanvasElement&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;partial interface HTMLCanvasElement {&lt;br /&gt;
  readonly attribute long renderedPixelWidth;&lt;br /&gt;
  readonly attribute long renderedPixelHeight;&lt;br /&gt;
};&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The values of these attributes are not normatively defined. For example, a UA might reduce these values to conserve system resources.&lt;br /&gt;
However, under normal steady-state conditions a UA is expected to choose values such that, if the canvas element has a CSS background color exactly filling its content-box, and the UA renders that CSS background color as a rectangle whose edges are aligned to device pixel edges, then &amp;lt;code&amp;gt;renderedPixelWidth&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;renderedPixelHeight&amp;lt;/code&amp;gt; are the device pixel size of that rectangle.&lt;br /&gt;
&lt;br /&gt;
Add a new event &amp;lt;code&amp;gt;renderedsizechange&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;HTMLCanvasElement&amp;lt;/code&amp;gt;. This event does not bubble and is not cancelable. Whenever the value that would be returned by &amp;lt;code&amp;gt;renderedPixelWidth&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;renderedPixelHeight&amp;lt;/code&amp;gt; changes, queue a task to fire &amp;lt;code&amp;gt;renderedsizechange&amp;lt;/code&amp;gt; at the &amp;lt;code&amp;gt;HTMLCanvasElement&amp;lt;/code&amp;gt; if there is not already a task pending to fire such an event at that element.&lt;br /&gt;
&lt;br /&gt;
== Rationale: ==&lt;br /&gt;
&lt;br /&gt;
This API appears to be the minimal API needed to achieve the goals, is opt-in, and can be used in several different ways depending on how ambitious the app developer is:&lt;br /&gt;
* Simplest possible usage: set the canvas size once when loading and don&#039;t handle dynamic changes.&lt;br /&gt;
&amp;lt;pre&amp;gt;  &amp;amp;lt;script&amp;amp;gt;&lt;br /&gt;
  canvas.width = canvas.renderedPixelWidth; canvas.height = canvas.renderedPixelHeight;&lt;br /&gt;
  &amp;amp;lt;/script&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Update canvas size for every frame of a game:&lt;br /&gt;
&amp;lt;pre&amp;gt;  &amp;amp;lt;script&amp;amp;gt;&lt;br /&gt;
  function renderFrame() {&lt;br /&gt;
    canvas.width = canvas.renderedPixelWidth; canvas.height = canvas.renderedPixelHeight;&lt;br /&gt;
    ...&lt;br /&gt;
    window.requestAnimationFrame(renderFrame);&lt;br /&gt;
  }&lt;br /&gt;
  &amp;amp;lt;/script&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Update canvas size and re-render only when DPI changes:&lt;br /&gt;
&amp;lt;pre&amp;gt;  &amp;amp;lt;script&amp;amp;gt;&lt;br /&gt;
  function render() {&lt;br /&gt;
    canvas.width = canvas.renderedPixelWidth; canvas.height = canvas.renderedPixelHeight;&lt;br /&gt;
    ...&lt;br /&gt;
  }&lt;br /&gt;
  canvas.addEventListener(&amp;quot;renderedsizechange&amp;quot;, render, false);&lt;br /&gt;
  &amp;amp;lt;/script&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Issues: ==&lt;br /&gt;
&lt;br /&gt;
* Hard to determine a rendered size when the CSS size is &#039;auto&#039;, or when the canvas is not attached to the DOM. Perhaps in those cases the current intrinsic size should be returned? (junov@chromium.org)&lt;br /&gt;
* Presumably the layout must be updated when the rendered size attributes are evaluated (maybe that should be stated explicitly in the spec?). Assuming that is the case, the layout update may result in a &amp;quot;renderedsizechange&amp;quot; event being fired. It seems weird that this event would be fired as a result of reading the rendered pixel size. Maybe that is fine... (junov@chromium.org)&lt;br /&gt;
* After a layout change that affects rendered pixel size, there is no guarantee that the size change event will be handled before the layout change is propagated to screen, so the content may be temporarily displayed in an inconsistent state. Note: the same issue exists with existing methods that may be based on mutation observers or window.onresize, for example. Though it is not the stated objective of this proposal to solve this problem, there may be an opportunity to do so. (junov)&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasRenderedPixelSize&amp;diff=9935</id>
		<title>CanvasRenderedPixelSize</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasRenderedPixelSize&amp;diff=9935"/>
		<updated>2015-04-10T21:33:20Z</updated>

		<summary type="html">&lt;p&gt;Junov: /* Issues: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;This proposal is to allow Web developers to match canvas backing store resolution to device resolution.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Descriptions ==&lt;br /&gt;
&lt;br /&gt;
Use case #1: a Web application needs to render content to a canvas (using 2D or WebGL contexts) being displayed on a high-DPI screen. For best quality output, the pixels of the canvas backing store need to match 1:1 with screen pixels.&lt;br /&gt;
&lt;br /&gt;
Use case #2: As #1, and the user moves the page from screen to screen resulting in dynamic DPI changes.&lt;br /&gt;
&lt;br /&gt;
Use case #3: As #1, and the canvas is inside a CSS scale transform which may change dynamically.&lt;br /&gt;
&lt;br /&gt;
Use case #4: As #1, and the canvas&#039;s CSS content box has arbitrary fractional CSS pixel size and offset from the viewport.&lt;br /&gt;
&lt;br /&gt;
Use case #5: As #3, but the canvas is inside an &amp;lt;code&amp;gt;&amp;amp;lt;iframe&amp;amp;gt;&amp;lt;/code&amp;gt; in a page with a different origin which contains the CSS transform.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
Safari shipped a canvas implementation that automatically resized the backing store but this caused various problems, such as incompatibility with some existing content and a need for new &amp;quot;HD&amp;quot; APIs that authors were not motivated to use. This approach does not work for WebGL in any case.&lt;br /&gt;
&lt;br /&gt;
The current best workaround is to set the canvas backing store size by computing the canvas content box in CSS pixels relative to the viewport, as accurately as possible (e.g. using &amp;lt;code&amp;gt;getBoxQuads&amp;lt;/code&amp;gt;), multiplying by &amp;lt;code&amp;gt;window.devicePixelRatio&amp;lt;/code&amp;gt;, and using that to set the canvas width and height. This approach does not easily adapt to dynamic changes and does not handle use cases #4 or #5. In use-case #4 the computed device pixel size may be non-integer, in which case the UA is likely to snap the content-box to device pixel edges and the Web developer cannot predict how that snapping will be done. In use-case #5 the developer cannot know about the CSS transform in the foreign-origin content.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
&lt;br /&gt;
# Leave the application in control of setting the pixel size of the backing store.&lt;br /&gt;
# Retain the invariant that 1 unit in canvas coordinate space is 1 backing store pixel.&lt;br /&gt;
# Make it easy for applications to set the pixel size of the backing store so that backing store pixels are 1:1 with device pixels, when that&#039;s possible.&lt;br /&gt;
&lt;br /&gt;
== Non Goals ==&lt;br /&gt;
&lt;br /&gt;
# Enable high-resolution backing store for existing content.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
Hixie proposed a canvas attribute to opt into automatic sizing of the backing store, but this violates goals #1 and #2. Violating goal #1 is a problem because applications often use temporary canvases and other assets which are dependent on the pixel size of the backing store. Goal #1 is also essential for WebGL.&lt;br /&gt;
&lt;br /&gt;
=== Suggested Solution ===&lt;br /&gt;
&lt;br /&gt;
Expose a preferred size for the canvas backing store directly to Web authors. Web authors who want to match screen resolution set the canvas width and height to that size. Expose an event on the canvas that fires when the preferred size changes.&lt;br /&gt;
&lt;br /&gt;
== Suggested IDL ==&lt;br /&gt;
&lt;br /&gt;
Add two new DOM attributes to &amp;lt;code&amp;gt;HTMLCanvasElement&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;partial interface HTMLCanvasElement {&lt;br /&gt;
  readonly attribute long renderedPixelWidth;&lt;br /&gt;
  readonly attribute long renderedPixelHeight;&lt;br /&gt;
};&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The values of these attributes are not normatively defined. For example, a UA might reduce these values to conserve system resources.&lt;br /&gt;
However, under normal steady-state conditions a UA is expected to choose values such that, if the canvas element has a CSS background color exactly filling its content-box, and the UA renders that CSS background color as a rectangle whose edges are aligned to device pixel edges, then &amp;lt;code&amp;gt;renderedPixelWidth&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;renderedPixelHeight&amp;lt;/code&amp;gt; are the device pixel size of that rectangle.&lt;br /&gt;
&lt;br /&gt;
Add a new event &amp;lt;code&amp;gt;renderedsizechange&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;HTMLCanvasElement&amp;lt;/code&amp;gt;. This event does not bubble and is not cancelable. Whenever the value that would be returned by &amp;lt;code&amp;gt;renderedPixelWidth&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;renderedPixelHeight&amp;lt;/code&amp;gt; changes, queue a task to fire &amp;lt;code&amp;gt;renderedsizechange&amp;lt;/code&amp;gt; at the &amp;lt;code&amp;gt;HTMLCanvasElement&amp;lt;/code&amp;gt; if there is not already a task pending to fire such an event at that element.&lt;br /&gt;
&lt;br /&gt;
== Rationale: ==&lt;br /&gt;
&lt;br /&gt;
This API appears to be the minimal API needed to achieve the goals, is opt-in, and can be used in several different ways depending on how ambitious the app developer is:&lt;br /&gt;
* Simplest possible usage: set the canvas size once when loading and don&#039;t handle dynamic changes.&lt;br /&gt;
&amp;lt;pre&amp;gt;  &amp;amp;lt;script&amp;amp;gt;&lt;br /&gt;
  canvas.width = canvas.renderedPixelWidth; canvas.height = canvas.renderedPixelHeight;&lt;br /&gt;
  &amp;amp;lt;/script&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Update canvas size for every frame of a game:&lt;br /&gt;
&amp;lt;pre&amp;gt;  &amp;amp;lt;script&amp;amp;gt;&lt;br /&gt;
  function renderFrame() {&lt;br /&gt;
    canvas.width = canvas.renderedPixelWidth; canvas.height = canvas.renderedPixelHeight;&lt;br /&gt;
    ...&lt;br /&gt;
    window.requestAnimationFrame(renderFrame);&lt;br /&gt;
  }&lt;br /&gt;
  &amp;amp;lt;/script&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Update canvas size and re-render only when DPI changes:&lt;br /&gt;
&amp;lt;pre&amp;gt;  &amp;amp;lt;script&amp;amp;gt;&lt;br /&gt;
  function render() {&lt;br /&gt;
    canvas.width = canvas.renderedPixelWidth; canvas.height = canvas.renderedPixelHeight;&lt;br /&gt;
    ...&lt;br /&gt;
  }&lt;br /&gt;
  canvas.addEventListener(&amp;quot;renderedsizechange&amp;quot;, render, false);&lt;br /&gt;
  &amp;amp;lt;/script&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Issues: ==&lt;br /&gt;
&lt;br /&gt;
* Hard to determine a rendered size when the CSS size is &#039;auto&#039;, or when the canvas is not attached to the DOM. Perhaps in those cases the current intrinsic size should be returned? (junov@chromium.org)&lt;br /&gt;
* Presumably the layout must be updated when the rendered size attributes are evaluated (maybe that should be stated explicitly in the spec?). Assuming that is the case, the layout update may result in a &amp;quot;renderedsizechange&amp;quot; event being fired. It seems weird that this event would be fired as a result of reading the rendered pixel size. Maybe that is fine... (junov@chromium.org)&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=CanvasRenderedPixelSize&amp;diff=9934</id>
		<title>CanvasRenderedPixelSize</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=CanvasRenderedPixelSize&amp;diff=9934"/>
		<updated>2015-04-10T20:48:48Z</updated>

		<summary type="html">&lt;p&gt;Junov: /* Issues: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:&#039;&#039;This proposal is to allow Web developers to match canvas backing store resolution to device resolution.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Use Case Descriptions ==&lt;br /&gt;
&lt;br /&gt;
Use case #1: a Web application needs to render content to a canvas (using 2D or WebGL contexts) being displayed on a high-DPI screen. For best quality output, the pixels of the canvas backing store need to match 1:1 with screen pixels.&lt;br /&gt;
&lt;br /&gt;
Use case #2: As #1, and the user moves the page from screen to screen resulting in dynamic DPI changes.&lt;br /&gt;
&lt;br /&gt;
Use case #3: As #1, and the canvas is inside a CSS scale transform which may change dynamically.&lt;br /&gt;
&lt;br /&gt;
Use case #4: As #1, and the canvas&#039;s CSS content box has arbitrary fractional CSS pixel size and offset from the viewport.&lt;br /&gt;
&lt;br /&gt;
Use case #5: As #3, but the canvas is inside an &amp;lt;code&amp;gt;&amp;amp;lt;iframe&amp;amp;gt;&amp;lt;/code&amp;gt; in a page with a different origin which contains the CSS transform.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
&lt;br /&gt;
Safari shipped a canvas implementation that automatically resized the backing store but this caused various problems, such as incompatibility with some existing content and a need for new &amp;quot;HD&amp;quot; APIs that authors were not motivated to use. This approach does not work for WebGL in any case.&lt;br /&gt;
&lt;br /&gt;
The current best workaround is to set the canvas backing store size by computing the canvas content box in CSS pixels relative to the viewport, as accurately as possible (e.g. using &amp;lt;code&amp;gt;getBoxQuads&amp;lt;/code&amp;gt;), multiplying by &amp;lt;code&amp;gt;window.devicePixelRatio&amp;lt;/code&amp;gt;, and using that to set the canvas width and height. This approach does not easily adapt to dynamic changes and does not handle use cases #4 or #5. In use-case #4 the computed device pixel size may be non-integer, in which case the UA is likely to snap the content-box to device pixel edges and the Web developer cannot predict how that snapping will be done. In use-case #5 the developer cannot know about the CSS transform in the foreign-origin content.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
&lt;br /&gt;
# Leave the application in control of setting the pixel size of the backing store.&lt;br /&gt;
# Retain the invariant that 1 unit in canvas coordinate space is 1 backing store pixel.&lt;br /&gt;
# Make it easy for applications to set the pixel size of the backing store so that backing store pixels are 1:1 with device pixels, when that&#039;s possible.&lt;br /&gt;
&lt;br /&gt;
== Non Goals ==&lt;br /&gt;
&lt;br /&gt;
# Enable high-resolution backing store for existing content.&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
Hixie proposed a canvas attribute to opt into automatic sizing of the backing store, but this violates goals #1 and #2. Violating goal #1 is a problem because applications often use temporary canvases and other assets which are dependent on the pixel size of the backing store. Goal #1 is also essential for WebGL.&lt;br /&gt;
&lt;br /&gt;
=== Suggested Solution ===&lt;br /&gt;
&lt;br /&gt;
Expose a preferred size for the canvas backing store directly to Web authors. Web authors who want to match screen resolution set the canvas width and height to that size. Expose an event on the canvas that fires when the preferred size changes.&lt;br /&gt;
&lt;br /&gt;
== Suggested IDL ==&lt;br /&gt;
&lt;br /&gt;
Add two new DOM attributes to &amp;lt;code&amp;gt;HTMLCanvasElement&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;partial interface HTMLCanvasElement {&lt;br /&gt;
  readonly attribute long renderedPixelWidth;&lt;br /&gt;
  readonly attribute long renderedPixelHeight;&lt;br /&gt;
};&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The values of these attributes are not normatively defined. For example, a UA might reduce these values to conserve system resources.&lt;br /&gt;
However, under normal steady-state conditions a UA is expected to choose values such that, if the canvas element has a CSS background color exactly filling its content-box, and the UA renders that CSS background color as a rectangle whose edges are aligned to device pixel edges, then &amp;lt;code&amp;gt;renderedPixelWidth&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;renderedPixelHeight&amp;lt;/code&amp;gt; are the device pixel size of that rectangle.&lt;br /&gt;
&lt;br /&gt;
Add a new event &amp;lt;code&amp;gt;renderedsizechange&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;HTMLCanvasElement&amp;lt;/code&amp;gt;. This event does not bubble and is not cancelable. Whenever the value that would be returned by &amp;lt;code&amp;gt;renderedPixelWidth&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;renderedPixelHeight&amp;lt;/code&amp;gt; changes, queue a task to fire &amp;lt;code&amp;gt;renderedsizechange&amp;lt;/code&amp;gt; at the &amp;lt;code&amp;gt;HTMLCanvasElement&amp;lt;/code&amp;gt; if there is not already a task pending to fire such an event at that element.&lt;br /&gt;
&lt;br /&gt;
== Rationale: ==&lt;br /&gt;
&lt;br /&gt;
This API appears to be the minimal API needed to achieve the goals, is opt-in, and can be used in several different ways depending on how ambitious the app developer is:&lt;br /&gt;
* Simplest possible usage: set the canvas size once when loading and don&#039;t handle dynamic changes.&lt;br /&gt;
&amp;lt;pre&amp;gt;  &amp;amp;lt;script&amp;amp;gt;&lt;br /&gt;
  canvas.width = canvas.renderedPixelWidth; canvas.height = canvas.renderedPixelHeight;&lt;br /&gt;
  &amp;amp;lt;/script&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Update canvas size for every frame of a game:&lt;br /&gt;
&amp;lt;pre&amp;gt;  &amp;amp;lt;script&amp;amp;gt;&lt;br /&gt;
  function renderFrame() {&lt;br /&gt;
    canvas.width = canvas.renderedPixelWidth; canvas.height = canvas.renderedPixelHeight;&lt;br /&gt;
    ...&lt;br /&gt;
    window.requestAnimationFrame(renderFrame);&lt;br /&gt;
  }&lt;br /&gt;
  &amp;amp;lt;/script&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Update canvas size and re-render only when DPI changes:&lt;br /&gt;
&amp;lt;pre&amp;gt;  &amp;amp;lt;script&amp;amp;gt;&lt;br /&gt;
  function render() {&lt;br /&gt;
    canvas.width = canvas.renderedPixelWidth; canvas.height = canvas.renderedPixelHeight;&lt;br /&gt;
    ...&lt;br /&gt;
  }&lt;br /&gt;
  canvas.addEventListener(&amp;quot;renderedsizechange&amp;quot;, render, false);&lt;br /&gt;
  &amp;amp;lt;/script&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Issues: ==&lt;br /&gt;
&lt;br /&gt;
* Hard to determine a rendered size when the CSS size is &#039;auto&#039;, or when the canvas is not attached to the DOM. Perhaps in those cases the current intrinsic size should be returned? (junov@chromium.org)&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Talk:WorkerCanvas&amp;diff=9868</id>
		<title>Talk:WorkerCanvas</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Talk:WorkerCanvas&amp;diff=9868"/>
		<updated>2015-03-20T15:36:13Z</updated>

		<summary type="html">&lt;p&gt;Junov: Created page with &amp;quot;Why do ImageBitmaps need to be transferable?  The object are immutable, so I think it is Okay for threads to reference the same object without cloning.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Why do ImageBitmaps need to be transferable? The objects are immutable, so I think it is okay for threads to reference the same object without cloning.&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9656</id>
		<title>Canvas Batch drawImage</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9656"/>
		<updated>2014-08-04T17:19:42Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Batching drawImage calls to achieve near-native performance for sprite/image based animations and games.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
Many web applications and games use 2D Canvases and the drawImage method to bring sprites to screen. Often, a large number of calls to drawImage occur for each frame presented to screen.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
When calling drawImage hundreds or thousands of times per animation frame, API bindings overhead and internal bookkeeping costs associated with individual draw calls become a significant performance bottleneck, which prevents web applications from achieving near-native performance levels.&lt;br /&gt;
&lt;br /&gt;
=== Current Workaround ===&lt;br /&gt;
With WebGL, batching sprite draws is possible thanks to vertex buffers. WebGL is not as broadly supported as 2D canvas, and is a considerably more complex solution to a problem that could otherwise be solved with a 2D canvas.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Rendering Performance.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/RzHob5VS8Dg]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;On Nexus 7 device, it can reach 60fps for 1000 sprites drawing at the same time [referring to a native canvas implementation]. However, the fps is very poor (about 9fps) if the demo is running in Chromium 36.&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solution ==&lt;br /&gt;
&lt;br /&gt;
=== Batch versions of drawImage ===&lt;br /&gt;
:New batch variants of drawImage that accept array arguments.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:We would have new variants of the existing drawImage method where the numerical arguments are packed into a Float32Array. The image argument may or may not be an array. If it is not an array, the same source image is used for each draw. When rendering from a single sprite sheet, it would be preferable not to specify the image argument as an array, in order to minimize bindings overhead and redundant parameter validation.&lt;br /&gt;
&lt;br /&gt;
==== Suggested IDL ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enum CanvasDrawImageParameterFormat { "position", "destination-rectangle", "source-and-destination-rectangles", "source-rectangle-and-transform" };&lt;br /&gt;
&lt;br /&gt;
interface CanvasRenderingContext2D {&lt;br /&gt;
    ...&lt;br /&gt;
    void drawImageBatch(sequence&amp;lt;CanvasImageSource&amp;gt; image, CanvasDrawImageParameterFormat parameterFormat, Float32Array drawParameters);&lt;br /&gt;
    void drawImageBatch(CanvasImageSource image, CanvasDrawImageParameterFormat parameterFormat, Float32Array drawParameters);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:The drawParameters argument is to be interpreted as a table in row-major order, where each row represents a single draw and each column represents an individual parameter. The mapping of the columns to draw parameters depends on the parameterFormat argument.&lt;br /&gt;
:* position: dx, dy&lt;br /&gt;
:* destination-rectangle: dx, dy, dw, dh&lt;br /&gt;
:* source-and-destination-rectangles: sx, sy, sw, sh, dx, dy, dw, dh&lt;br /&gt;
:* source-rectangle-and-transform: sx, sy, sw, sh, a, b, c, d, e, f&lt;br /&gt;
&lt;br /&gt;
:The parameters sx, sy, sw, sh, dx, dy, dw, and dh have the same meaning as with drawImage()&lt;br /&gt;
:The parameters a, b, c, d, e, f have the same meaning as with transform()&lt;br /&gt;
:With &#039;source-rectangle-and-transform&#039;, the destination rectangle is a unit square defined by the vertices (0, 0), (1, 0), (1, 1), and (0, 1). The destination rectangle is transformed by the transform defined by a, b, c, d, e and f, and then by the canvas&#039;s current transform.&lt;br /&gt;
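The row-major packing described above can be sketched as follows. This is a usage sketch against the proposed (unimplemented) API; the sprite coordinates, the `spriteSheet` image, and the `ctx` context in the final comment are illustrative assumptions.

```javascript
// Sketch: packing rows for the 'destination-rectangle' format
// (dx, dy, dw, dh per draw) into a single Float32Array.
const PARAMS_PER_DRAW = 4; // columns for 'destination-rectangle'
const sprites = [
  { dx: 0,  dy: 0,  dw: 32, dh: 32 },
  { dx: 40, dy: 0,  dw: 32, dh: 32 },
  { dx: 80, dy: 16, dw: 64, dh: 64 },
];

// Row-major layout: row i occupies indices [i * 4, i * 4 + 4).
const drawParameters = new Float32Array(sprites.length * PARAMS_PER_DRAW);
sprites.forEach((s, i) => {
  drawParameters.set([s.dx, s.dy, s.dw, s.dh], i * PARAMS_PER_DRAW);
});

// With a single sprite-sheet image, the non-sequence overload applies:
// ctx.drawImageBatch(spriteSheet, 'destination-rectangle', drawParameters);
```

Reusing one typed array across frames (and overwriting it in place) would keep per-frame allocation out of the animation loop.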
&lt;br /&gt;
Feedback Anne: Perhaps rather than overloading we should introduce new methods? IDL overloading is somewhat costly and not loved much by the JS community. Also, please float this by public-script-coord@w3.org at some point.&lt;br /&gt;
&lt;br /&gt;
==== Exceptions ==== &lt;br /&gt;
&lt;br /&gt;
:An INDEX_SIZE_ERR DOM exception is thrown if the size of drawParameters is not a multiple of the number of numeric parameters corresponding to &#039;parameterFormat&#039;&lt;br /&gt;
:An INDEX_SIZE_ERR DOM exception is thrown if &#039;image&#039; is a sequence and the size of drawParameters is not equal to the number of elements in &#039;image&#039; multiplied by the number of parameters corresponding to &#039;parameterFormat&#039;.&lt;br /&gt;
:All the same exceptions that apply for calls to drawImage also apply to drawImageBatch. If any individual draw results in an exception being thrown, the entire call to drawImageBatch must abort without drawing anything to the canvas.&lt;br /&gt;
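The two length checks can be sketched as follows; `checkBatchSizes` and `FORMAT_COLUMNS` are hypothetical names for illustration, since in practice these checks would run inside the drawImageBatch implementation.

```javascript
// Columns per row for each CanvasDrawImageParameterFormat value.
const FORMAT_COLUMNS = {
  "position": 2,
  "destination-rectangle": 4,
  "source-and-destination-rectangles": 8,
  "source-rectangle-and-transform": 10,
};

// Hypothetical helper mirroring the two INDEX_SIZE_ERR rules above.
function checkBatchSizes(image, parameterFormat, drawParameters) {
  const columns = FORMAT_COLUMNS[parameterFormat];
  // Rule 1: the total length must be a whole number of rows.
  if (drawParameters.length % columns !== 0) {
    throw new DOMException(
      "drawParameters length is not a multiple of " + columns,
      "IndexSizeError");
  }
  // Rule 2: with a sequence of images, exactly one row per image.
  if (Array.isArray(image) &&
      drawParameters.length !== image.length * columns) {
    throw new DOMException(
      "row count does not match image count", "IndexSizeError");
  }
}
```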
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
:The use of Float32Array is not very programmer friendly. This compromise on usability is justified by performance considerations, and is necessary to achieve near-native or WebGL-like performance.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
* The most naive implementation, which would consist of expanding the batch drawImage call into multiple internal drawImage calls, would already increase performance by reducing API bindings overhead.&lt;br /&gt;
* The use of typed arrays will dramatically reduce the argument type checking burden in the bindings.&lt;br /&gt;
* More advanced implementations would carry the batching down to a lower level in the graphics stack. For example, a GPU-accelerated implementation of 2D canvas could use OpenGL vertex buffer objects, or the DirectX sprite interface.&lt;br /&gt;
* Some existing implementations of drawImage already detect batching opportunities and will batch consecutive drawImage calls at a lower level of the graphics stack (for example, Skia&#039;s drawBitmap used in Blink). This auto-detection does improve rasterization performance, but it does not eliminate the bindings overhead, and it involves additional overhead to determine whether consecutive calls to drawImage can be grouped together to form a batch. With explicitly batched calls to drawImage, that overhead can be eliminated because the individual sprite draws would be known to use identical rendering context state, and the calls down to the graphics platform implementation layer would only occur once for the entire batch.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Game/app devs looking for high frame rates for sprite blitting are likely to adopt enthusiastically.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9655</id>
		<title>Canvas Batch drawImage</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9655"/>
		<updated>2014-08-04T17:14:46Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Batching drawImage calls to achieve near-native performance for sprite/image based animations and games.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
Many web applications and games use 2D Canvases and the drawImage method to bring sprites to screen. Often, a large number of calls to drawImage occur for each frame presented to screen.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
When calling drawImage hundreds or thousands of times per animation frame, API bindings overhead and internal bookkeeping costs associated with individual draw calls become a significant performance bottleneck, which prevents web applications from achieving near-native performance levels.&lt;br /&gt;
&lt;br /&gt;
=== Current Workaround ===&lt;br /&gt;
With WebGL, batching sprite draws is possible thanks to vertex buffers. WebGL is not as broadly supported as 2D canvas, and is a considerably more complex solution to a problem that could otherwise be solved with a 2D canvas.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Rendering Performance.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/RzHob5VS8Dg]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;On Nexus 7 device, it can reach 60fps for 1000 sprites drawing at the same time [referring to a native canvas implementation]. However, the fps is very poor (about 9fps) if the demo is running in Chromium 36.&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solution ==&lt;br /&gt;
&lt;br /&gt;
=== Batch versions of drawImage ===&lt;br /&gt;
:New batch variants of drawImage that accept array arguments.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:We would have new variants of the existing drawImage method where the numerical arguments are packed into a Float32Array. The image argument may or may not be an array. If it is not an array, the same source image is used for each draw. When rendering from a single sprite sheet, it would be preferable not to specify the image argument as an array, in order to minimize bindings overhead and redundant parameter validation.&lt;br /&gt;
&lt;br /&gt;
==== Suggested IDL ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enum CanvasDrawImageParameterFormat { "position", "destination-rectangle", "source-and-destination-rectangles", "source-rectangle-and-transform" };&lt;br /&gt;
&lt;br /&gt;
interface CanvasRenderingContext2D {&lt;br /&gt;
    ...&lt;br /&gt;
    void drawImageBatch(sequence&amp;lt;CanvasImageSource&amp;gt; image, CanvasDrawImageParameterFormat parameterFormat, Float32Array drawParameters);&lt;br /&gt;
    void drawImageBatch(CanvasImageSource image, CanvasDrawImageParameterFormat parameterFormat, Float32Array drawParameters);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:The drawParameters argument is to be interpreted as a table in row-major order, where each row represents a single draw and each column represents an individual parameter. The mapping of the columns to draw parameters depends on the parameterFormat argument.&lt;br /&gt;
:* position: dx, dy&lt;br /&gt;
:* destination-rectangle: dx, dy, dw, dh&lt;br /&gt;
:* source-and-destination-rectangles: sx, sy, sw, sh, dx, dy, dw, dh&lt;br /&gt;
:* source-rectangle-and-transform: sx, sy, sw, sh, a, b, c, d, e, f&lt;br /&gt;
&lt;br /&gt;
:The parameters sx, sy, sw, sh, dx, dy, dw, and dh have the same meaning as with drawImage()&lt;br /&gt;
:The parameters a, b, c, d, e, f have the same meaning as with transform()&lt;br /&gt;
:With &#039;source-rectangle-and-transform&#039;, the destination rectangle is defined by the vertices (0, 0), (sw, 0), (sw, sh), and (0, sh). The destination rectangle is transformed by the transform defined by a, b, c, d, e and f, and then by the canvas&#039;s current transform.&lt;br /&gt;
&lt;br /&gt;
Feedback Anne: Perhaps rather than overloading we should introduce new methods? IDL overloading is somewhat costly and not loved much by the JS community. Also, please float this by public-script-coord@w3.org at some point.&lt;br /&gt;
&lt;br /&gt;
==== Exceptions ==== &lt;br /&gt;
&lt;br /&gt;
:An INDEX_SIZE_ERR DOM exception is thrown if the size of drawParameters is not a multiple of the number of numeric parameters corresponding to &#039;parameterFormat&#039;&lt;br /&gt;
:An INDEX_SIZE_ERR DOM exception is thrown if &#039;image&#039; is a sequence and the size of drawParameters is not equal to the number of elements in &#039;image&#039; multiplied by the number of parameters corresponding to &#039;parameterFormat&#039;.&lt;br /&gt;
:All the same exceptions that apply for calls to drawImage also apply to drawImageBatch. If any individual draw results in an exception being thrown, the entire call to drawImageBatch must abort without drawing anything to the canvas.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
:The use of Float32Array is not very programmer friendly. This compromise on usability is justified by performance considerations, and is necessary to achieve near-native or WebGL-like performance.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
* The most naive implementation, which would consist of expanding the batch drawImage call into multiple internal drawImage calls, would already increase performance by reducing API bindings overhead.&lt;br /&gt;
* The use of typed arrays will dramatically reduce the argument type checking burden in the bindings.&lt;br /&gt;
* More advanced implementations would carry the batching down to a lower level in the graphics stack. For example, a GPU-accelerated implementation of 2D canvas could use OpenGL vertex buffer objects, or the DirectX sprite interface.&lt;br /&gt;
* Some existing implementations of drawImage already detect batching opportunities and will batch consecutive drawImage calls at a lower level of the graphics stack (for example, Skia&#039;s drawBitmap used in Blink). This auto-detection does improve rasterization performance, but it does not eliminate the bindings overhead, and it involves additional overhead to determine whether consecutive calls to drawImage can be grouped together to form a batch. With explicitly batched calls to drawImage, that overhead can be eliminated because the individual sprite draws would be known to use identical rendering context state, and the calls down to the graphics platform implementation layer would only occur once for the entire batch.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Game/app devs looking for high frame rates for sprite blitting are likely to adopt enthusiastically.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9654</id>
		<title>Canvas Batch drawImage</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9654"/>
		<updated>2014-08-04T17:14:00Z</updated>

		<summary type="html">&lt;p&gt;Junov: /* Proposed Solution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Batching drawImage calls to achieve near-native performance for sprite/image based animations and games.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
Many web applications and games use 2D Canvases and the drawImage method to bring sprites to screen. Often, a large number of calls to drawImage occur for each frame presented to screen.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
When calling drawImage hundreds or thousands of times per animation frame, API bindings overhead and internal bookkeeping costs associated with individual draw calls become a significant performance bottleneck, which prevents web applications from achieving near-native performance levels.&lt;br /&gt;
&lt;br /&gt;
=== Current Workaround ===&lt;br /&gt;
With WebGL, batching sprite draws is possible thanks to vertex buffers. WebGL is not as broadly supported as 2D canvas, and is a considerably more complex solution to a problem that could otherwise be solved with a 2D canvas.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Rendering Performance.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/RzHob5VS8Dg]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;On Nexus 7 device, it can reach 60fps for 1000 sprites drawing at the same time [referring to a native canvas implementation]. However, the fps is very poor (about 9fps) if the demo is running in Chromium 36.&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solution ==&lt;br /&gt;
&lt;br /&gt;
=== Batch versions of drawImage ===&lt;br /&gt;
:New batch variants of drawImage that accept array arguments.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:We would have new variants of the existing drawImage method where the numerical arguments are packed into a Float32Array. The image argument may or may not be an array. If it is not an array, the same source image is used for each draw. When rendering from a single sprite sheet, it would be preferable not to specify the image argument as an array, in order to minimize bindings overhead and redundant parameter validation.&lt;br /&gt;
&lt;br /&gt;
==== Suggested IDL ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enum CanvasDrawImageParameterFormat { "position", "destination-rectangle", "source-and-destination-rectangles", "source-rectangle-and-transform" };&lt;br /&gt;
&lt;br /&gt;
interface CanvasRenderingContext2D {&lt;br /&gt;
    ...&lt;br /&gt;
    void drawImageBatch(sequence&amp;lt;CanvasImageSource&amp;gt; image, CanvasDrawImageParameterFormat parameterFormat, Float32Array drawParameters);&lt;br /&gt;
    void drawImageBatch(CanvasImageSource image, CanvasDrawImageParameterFormat parameterFormat, Float32Array drawParameters);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:The drawParameters argument is to be interpreted as a table in row-major order, where each row represents a single draw and each column represents an individual parameter. The mapping of the columns to draw parameters depends on the parameterFormat argument.&lt;br /&gt;
:* position: dx, dy&lt;br /&gt;
:* destination-rectangle: dx, dy, dw, dh&lt;br /&gt;
:* source-and-destination-rectangles: sx, sy, sw, sh, dx, dy, dw, dh&lt;br /&gt;
:* source-rectangle-and-transform: sx, sy, sw, sh, a, b, c, d, e, f&lt;br /&gt;
&lt;br /&gt;
:The parameters sx, sy, sw, sh, dx, dy, dw, and dh have the same meaning as with drawImage()&lt;br /&gt;
:The parameters a, b, c, d, e, f have the same meaning as with transform()&lt;br /&gt;
:With &#039;source-rectangle-and-transform&#039;, the destination rectangle is defined by the vertices (0, 0), (sw, 0), (sw, sh), and (0, sh). The destination rectangle is transformed by the transform defined by a, b, c, d, e and f, and then by the canvas&#039;s current transform.&lt;br /&gt;
&lt;br /&gt;
Feedback Anne: Perhaps rather than overloading we should introduce new methods? IDL overloading is somewhat costly and not loved much by the JS community. Also, please float this by public-script-coord@w3.org at some point.&lt;br /&gt;
&lt;br /&gt;
==== Exceptions ==== &lt;br /&gt;
&lt;br /&gt;
:An INDEX_SIZE_ERR DOM exception is thrown if the size of drawParameters is not a multiple of the number of numeric parameters corresponding to &#039;parameterFormat&#039;&lt;br /&gt;
:An INDEX_SIZE_ERR DOM exception is thrown if &#039;image&#039; is a sequence and the size of drawParameters is not equal to the number of elements in &#039;image&#039; multiplied by the number of parameters corresponding to &#039;parameterFormat&#039;.&lt;br /&gt;
:All the same exceptions that apply for calls to drawImage also apply to drawImageBatch. If any individual draw results in an exception being thrown, the entire call to drawImageBatch must abort without drawing anything to the canvas.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
&lt;br /&gt;
:The use of Float32Array is not very programmer friendly. This compromise on usability is justified by performance considerations, and is necessary to achieve near-native or WebGL-like performance.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
* The most naive implementation, which would consist of expanding the batch drawImage call into multiple internal drawImage calls, would already increase performance by reducing API bindings overhead.&lt;br /&gt;
* The use of typed arrays will dramatically reduce the argument type checking burden in the bindings.&lt;br /&gt;
* More advanced implementations would carry the batching down to a lower level in the graphics stack. For example, a GPU-accelerated implementation of 2D canvas could use OpenGL vertex buffer objects, or the DirectX sprite interface.&lt;br /&gt;
* Some existing implementations of drawImage already detect batching opportunities and will batch consecutive drawImage calls at a lower level of the graphics stack (for example, Skia&#039;s drawBitmap used in Blink). This auto-detection does improve rasterization performance, but it does not eliminate the bindings overhead, and it involves additional overhead to determine whether consecutive calls to drawImage can be grouped together to form a batch. With explicitly batched calls to drawImage, that overhead can be eliminated because the individual sprite draws would be known to use identical rendering context state, and the calls down to the graphics platform implementation layer would only occur once for the entire batch.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Game/app devs looking for high frame rates for sprite blitting are likely to adopt enthusiastically.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9652</id>
		<title>Canvas Batch drawImage</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9652"/>
		<updated>2014-08-04T17:05:09Z</updated>

		<summary type="html">&lt;p&gt;Junov: /* Use Case Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Batching drawImage calls to achieve near-native performance for sprite/image based animations and games.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
Many web applications and games use 2D Canvases and the drawImage method to bring sprites to screen. Often, a large number of calls to drawImage occur for each frame presented to screen.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
When calling drawImage hundreds or thousands of times per animation frame, API bindings overhead and internal bookkeeping costs associated with individual draw calls become a significant performance bottleneck, which prevents web applications from achieving near-native performance levels.&lt;br /&gt;
&lt;br /&gt;
=== Current Workaround ===&lt;br /&gt;
With WebGL, batching sprite draws is possible thanks to vertex buffers. WebGL is not as broadly supported as 2D canvas, and is a considerably more complex solution to a problem that could otherwise be solved with a 2D canvas.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Rendering Performance.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/RzHob5VS8Dg]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;On Nexus 7 device, it can reach 60fps for 1000 sprites drawing at the same time [referring to a native canvas implementation]. However, the fps is very poor (about 9fps) if the demo is running in Chromium 36.&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solution ==&lt;br /&gt;
&lt;br /&gt;
=== Batch versions of drawImage ===&lt;br /&gt;
:Additional overloads of drawImage that accept array arguments.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:We would have new variants of the existing drawImage overloads where the numerical arguments are packed into a Float32Array. The image argument may or may not be an array. If it is not an array, the same source image is used for each draw. When rendering from a single sprite sheet, it would be preferable not to specify the image argument as an array, in order to minimize bindings overhead.&lt;br /&gt;
&lt;br /&gt;
:An INDEX_SIZE_ERR DOM exception is thrown if the Float32Array size is not a multiple of the number of numeric parameters, or if it does not match the size of the image argument when image is an array.&lt;br /&gt;
&lt;br /&gt;
==== Suggested IDL ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enum CanvasDrawImageParameterFormat { "position", "destination-rectangle", "source-and-destination-rectangles", "source-rectangle-and-transform" };&lt;br /&gt;
&lt;br /&gt;
interface CanvasRenderingContext2D {&lt;br /&gt;
    ...&lt;br /&gt;
    void drawImageBatch(sequence&amp;lt;CanvasImageSource&amp;gt; image, CanvasDrawImageParameterFormat parameterFormat, Float32Array drawParameters);&lt;br /&gt;
    void drawImageBatch(CanvasImageSource image, CanvasDrawImageParameterFormat parameterFormat, Float32Array drawParameters);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:The drawParameters argument is to be interpreted as a table in row-major order, where each row represents a single draw and each column represents an individual parameter. The mapping of the columns to draw parameters depends on the parameterFormat argument.&lt;br /&gt;
:* position: dx, dy&lt;br /&gt;
:* destination-rectangle: dx, dy, dw, dh&lt;br /&gt;
:* source-and-destination-rectangles: sx, sy, sw, sh, dx, dy, dw, dh&lt;br /&gt;
:* source-rectangle-and-transform: sx, sy, sw, sh, a, b, c, d, e, f&lt;br /&gt;
&lt;br /&gt;
:The parameters sx, sy, sw, sh, dx, dy, dw, and dh have the same meaning as with drawImage()&lt;br /&gt;
:The parameters a, b, c, d, e, f have the same meaning as with transform()&lt;br /&gt;
:With &#039;source-rectangle-and-transform&#039;, the destination rectangle is defined by the vertices (0, 0), (sw, 0), (sw, sh), and (0, sh). The destination rectangle is transformed by the transform defined by a, b, c, d, e and f, and then by the canvas&#039;s current transform.&lt;br /&gt;
&lt;br /&gt;
Feedback Anne: Perhaps rather than overloading we should introduce new methods? IDL overloading is somewhat costly and not loved much by the JS community. Also, please float this by public-script-coord@w3.org at some point.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:The use of Float32Array is not very programmer friendly. This compromise on usability is justified by performance considerations, and is necessary to achieve near-native or WebGL-like performance.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
* The most naive implementation, which would consist of expanding the batch drawImage call into multiple internal drawImage calls, would already increase performance by reducing API bindings overhead.&lt;br /&gt;
* The use of typed arrays will dramatically reduce the argument type checking burden in the bindings.&lt;br /&gt;
* More advanced implementations would carry the batching down to a lower level in the graphics stack. For example, a GPU-accelerated implementation of 2D canvas could use OpenGL vertex buffer objects, or the DirectX sprite interface.&lt;br /&gt;
* Some existing implementations of drawImage already detect batching opportunities and will batch consecutive drawImage calls at a lower level of the graphics stack (for example, Skia&#039;s drawBitmap used in Blink). This auto-detection does improve rasterization performance, but it does not eliminate the bindings overhead, and it involves additional overhead to determine whether consecutive calls to drawImage can be grouped together to form a batch. With explicitly batched calls to drawImage, that overhead can be eliminated because the individual sprite draws would be known to use identical rendering context state, and the calls down to the graphics platform implementation layer would only occur once for the entire batch.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Game/app devs looking for high frame rates for sprite blitting are likely to adopt enthusiastically.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9651</id>
		<title>Canvas Batch drawImage</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9651"/>
		<updated>2014-08-04T17:04:10Z</updated>

		<summary type="html">&lt;p&gt;Junov: /* Batch versions of drawImage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Batching drawImage calls to achieve near-native performance for sprite/image based animations and games.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
Many web applications and games use 2D Canvases and the drawImage method to bring sprites to screen. Often, a large number of calls to drawImage occur for each frame presented to screen.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
When calling drawImage hundreds or thousands of times per animation frame, API bindings overhead and internal bookkeeping costs associated with individual draw calls become a significant performance bottleneck, which prevents web applications from achieving near-native performance levels.&lt;br /&gt;
&lt;br /&gt;
=== Current Workaround ===&lt;br /&gt;
With WebGL, batching sprite draws is possible thanks to vertex buffers. WebGL is not as broadly supported as 2D canvas, and is a considerably more complex solution to a problem that could otherwise be solved with a 2D canvas.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Rendering Performance.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/RzHob5VS8Dg]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;On Nexus 7 device, it can reach 60fps for 1000 sprites drawing at the same time [referring to a native canvas implementation]. However, the fps is very poor (about 9fps) if the demo is running in Chromium 36.&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solution ==&lt;br /&gt;
&lt;br /&gt;
=== Batch versions of drawImage ===&lt;br /&gt;
:Additional overloads of drawImage that accept array arguments.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:We would have new variants of the existing drawImage overloads where the numerical arguments are packed into a Float32Array. The image argument may or may not be an array. If it is not an array, the same source image is used for each draw. When rendering from a single sprite sheet, it would be preferable not to specify the image argument as an array, in order to minimize bindings overhead.&lt;br /&gt;
&lt;br /&gt;
:An INDEX_SIZE_ERR DOM exception is thrown if the Float32Array size is not a multiple of the number of numeric parameters, or if it does not match the size of the image argument when image is an array.&lt;br /&gt;
&lt;br /&gt;
==== Suggested IDL ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
enum CanvasDrawImageParameterFormat { "position", "destination-rectangle", "source-and-destination-rectangles", "source-rectangle-and-transform" };&lt;br /&gt;
&lt;br /&gt;
interface CanvasRenderingContext2D {&lt;br /&gt;
    ...&lt;br /&gt;
    void drawImageBatch(sequence&amp;lt;CanvasImageSource&amp;gt; image, CanvasDrawImageParameterFormat parameterFormat, Float32Array drawParameters);&lt;br /&gt;
    void drawImageBatch(CanvasImageSource image, CanvasDrawImageParameterFormat parameterFormat, Float32Array drawParameters);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:The drawParameters argument is to be interpreted as a table in row-major order, where each row represents a single draw and each column represents an individual parameter. The mapping of the columns to draw parameters depends on the parameterFormat argument.&lt;br /&gt;
:* position: dx, dy&lt;br /&gt;
:* destination-rectangle: dx, dy, dw, dh&lt;br /&gt;
:* source-and-destination-rectangles: sx, sy, sw, sh, dx, dy, dw, dh&lt;br /&gt;
:* source-rectangle-and-transform: sx, sy, sw, sh, a, b, c, d, e, f&lt;br /&gt;
&lt;br /&gt;
:The parameters sx, sy, sw, sh, dx, dy, dw, and dh have the same meaning as with drawImage()&lt;br /&gt;
:The parameters a, b, c, d, e, f have the same meaning as with transform()&lt;br /&gt;
:With &#039;source-rectangle-and-transform&#039;, the destination rectangle is defined by the vertices (0, 0), (sw, 0), (sw, sh), and (0, sh). The destination rectangle is transformed by the transform defined by a, b, c, d, e and f, and then by the canvas&#039;s current transform.&lt;br /&gt;
&lt;br /&gt;
Feedback from Anne: Perhaps rather than overloading we should introduce new methods? IDL overloading is somewhat costly and not much loved by the JS community. Also, please float this by public-script-coord@w3.org at some point.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:The use of Float32Array is not very programmer-friendly. This compromise on usability is justified by performance considerations: it is necessary to achieve near-native, WebGL-like performance.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
* The most naive implementation, which would consist of expanding the batch drawImage call into multiple internal drawImage calls, would already increase performance by reducing API bindings overhead.&lt;br /&gt;
* The use of typed arrays will dramatically reduce the argument type checking burden in the bindings.&lt;br /&gt;
* More advanced implementations would carry the batching down to a lower level in the graphics stack. For example, a GPU-accelerated implementation of 2D canvas could use OpenGL vertex buffer objects, or the DirectX sprite interface.&lt;br /&gt;
* Some existing implementations of drawImage already detect batching opportunities and will batch consecutive drawImage calls at a lower level of the graphics stack (for example, Skia&#039;s drawBitmap, used in Blink). This auto-detection does improve rasterization performance, but it does not eliminate the bindings overhead, and it involves additional overhead to determine whether consecutive calls to drawImage can be grouped together to form a batch. With explicitly batched calls to drawImage, that overhead can be eliminated because the individual sprite draws would be known to use identical rendering context state, and the calls down to the graphics platform implementation layer would only occur once for the entire batch.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Game/app devs looking for high frame rates for sprite blitting are likely to adopt enthusiastically.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9646</id>
		<title>Canvas Batch drawImage</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9646"/>
		<updated>2014-07-30T21:25:14Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Batching drawImage calls to achieve near-native performance for sprite/image based animations and games.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
Many web applications and games use 2D Canvases and the drawImage method to bring sprites to screen. Often, a large number of calls to drawImage occur for each frame presented to screen.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
When calling drawImage hundreds or thousands of times per animation frame, API bindings overhead and internal bookkeeping costs associated with individual draw calls become a significant performance bottleneck, which prevents web applications from achieving near-native performance levels.&lt;br /&gt;
&lt;br /&gt;
=== Current Workaround ===&lt;br /&gt;
With WebGL, batching sprite draws is possible thanks to vertex buffers. WebGL is not as broadly supported as 2D canvas, and is a considerably more complex solution to a problem that could otherwise be solved with a 2D canvas.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Rendering Performance.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/RzHob5VS8Dg]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;On Nexus 7 device, it can reach 60fps for 1000 sprites drawing at the same time [referring to a native canvas implementation]. However, the fps is very poor (about 9fps) if the demo is running in Chromium 36.&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solution ==&lt;br /&gt;
&lt;br /&gt;
=== Batch versions of drawImage ===&lt;br /&gt;
:Additional overloads of drawImage that accept array arguments.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:We would have new variants of the existing drawImage overloads where all the arguments are arrays (Float32Array for numeric arguments), and other variants where all the arguments except for the image are arrays. The case where the image is not an array would be equivalent to having an array where each element is the same image (convenient for sprite sheets, and more efficient than using an array).&lt;br /&gt;
&lt;br /&gt;
:An INDEX_SIZE_ERR DOM exception would be thrown if not all the array arguments have the same size.&lt;br /&gt;
&lt;br /&gt;
==== Suggested IDL ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
interface Canvas2DRenderingContext {&lt;br /&gt;
    ...&lt;br /&gt;
    // existing variants&lt;br /&gt;
    void drawImage(CanvasImageSource image, unrestricted double dx, unrestricted double dy);&lt;br /&gt;
    void drawImage(CanvasImageSource image, unrestricted double dx, unrestricted double dy, unrestricted double dw, unrestricted double dh);&lt;br /&gt;
    void drawImage(CanvasImageSource image, unrestricted double sx, unrestricted double sy, unrestricted double sw, unrestricted double sh, unrestricted double dx, unrestricted double dy, unrestricted double dw, unrestricted double dh);&lt;br /&gt;
    // batch variants&lt;br /&gt;
    void drawImage(CanvasImageSource[] image, Float32Array dx, Float32Array dy);&lt;br /&gt;
    void drawImage(CanvasImageSource[] image, Float32Array dx, Float32Array dy, Float32Array dw, Float32Array dh);&lt;br /&gt;
    void drawImage(CanvasImageSource[] image, Float32Array sx, Float32Array sy, Float32Array sw, Float32Array sh, Float32Array dx, Float32Array dy, Float32Array dw, Float32Array dh);&lt;br /&gt;
    // single image batch variants&lt;br /&gt;
    void drawImage(CanvasImageSource image, Float32Array dx, Float32Array dy);&lt;br /&gt;
    void drawImage(CanvasImageSource image, Float32Array dx, Float32Array dy, Float32Array dw, Float32Array dh);&lt;br /&gt;
    void drawImage(CanvasImageSource image, Float32Array sx, Float32Array sy, Float32Array sw, Float32Array sh, Float32Array dx, Float32Array dy, Float32Array dw, Float32Array dh);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
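As a sketch of how the batch variants might be used, the following packs destination positions into parallel Float32Arrays. checkBatchLengths is a hypothetical helper that mirrors the equal-size rule stated above; the proposal itself would throw an INDEX_SIZE_ERR DOMException, and the commented drawImage call is the proposed, unimplemented overload.

```javascript
// Mirror the proposed rule: all array arguments must have the same size.
// (The real API would throw INDEX_SIZE_ERR; we use RangeError here.)
function checkBatchLengths(...arrays) {
  const n = arrays[0].length;
  if (!arrays.every((a) => a.length === n)) {
    throw new RangeError("All batch arguments must have the same length");
  }
  return n;
}

// Two sprites drawn from one sprite sheet (single-image batch variant):
const dx = new Float32Array([10, 40]);
const dy = new Float32Array([20, 20]);
const count = checkBatchLengths(dx, dy); // → 2

// In a browser supporting the proposal:
// ctx.drawImage(spriteSheet, dx, dy);
```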
==== Limitations ==== &lt;br /&gt;
:Use cases where per-sprite transforms are *required* (for example, individually rotated sprites) are not covered. If appropriate, this could be resolved by adding more variants to the proposal (a matrix argument?).&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
* The most naive implementation, which would consist of expanding the batch drawImage call into multiple internal drawImage calls, would already increase performance by reducing API bindings overhead.&lt;br /&gt;
* The use of typed arrays will dramatically reduce the argument type checking burden in the bindings.&lt;br /&gt;
* More advanced implementations would carry the batching down to a lower level in the graphics stack. For example, a GPU-accelerated implementation of 2D canvas could use OpenGL vertex buffer objects, or the DirectX sprite interface.&lt;br /&gt;
* Some existing implementations of drawImage already detect batching opportunities and will batch consecutive drawImage calls at a lower level of the graphics stack (for example, Skia&#039;s drawBitmap, used in Blink). This auto-detection does improve rasterization performance, but it does not eliminate the bindings overhead, and it involves additional overhead to determine whether consecutive calls to drawImage can be grouped together to form a batch. With explicitly batched calls to drawImage, that overhead can be eliminated because the individual sprite draws would be known to use identical rendering context state, and the calls down to the graphics platform implementation layer would only occur once for the entire batch.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Game/app devs looking for high frame rates for sprite blitting are likely to adopt enthusiastically.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9645</id>
		<title>Canvas Batch drawImage</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Batch_drawImage&amp;diff=9645"/>
		<updated>2014-07-30T20:58:02Z</updated>

		<summary type="html">&lt;p&gt;Junov: Created page with &amp;quot;:Batching drawImage calls to achieve near-native performance for sprite/image based animations and games.  == Use Case Description == Many web applications and games use 2D Ca...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:Batching drawImage calls to achieve near-native performance for sprite/image based animations and games.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
Many web applications and games use 2D Canvases and the drawImage method to bring sprites to screen. Often, a large number of calls to drawImage occur for each frame presented to screen.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
When calling drawImage hundreds or thousands of times per animation frame, API bindings overhead and internal bookkeeping costs associated with individual draw calls become a significant performance bottleneck, which prevents web applications from achieving near-native performance levels.&lt;br /&gt;
&lt;br /&gt;
=== Current Workaround ===&lt;br /&gt;
With WebGL, batching sprite draws is possible thanks to vertex buffers. WebGL is not as broadly supported as 2D canvas, and is a considerably more complex solution to a problem that could otherwise be solved with a 2D canvas.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
Rendering Performance.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/RzHob5VS8Dg]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;On Nexus 7 device, it can reach 60fps for 1000 sprites drawing at the same time [referring to a native canvas implementation]. However, the fps is very poor (about 9fps) if the demo is running in Chromium 36.&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solution ==&lt;br /&gt;
&lt;br /&gt;
=== Batch versions of drawImage ===&lt;br /&gt;
:Additional overloads of drawImage that accept array arguments.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:We would have new variants of the existing drawImage overloads where all the arguments are arrays (Float32Array for numeric arguments), and other variants where all the arguments except for the image are arrays. The case where the image is not an array would be equivalent to having an array where each element is the same image (convenient for sprite sheets, and more efficient than using an array).&lt;br /&gt;
&lt;br /&gt;
:An INDEX_SIZE_ERR DOM exception would be thrown if not all the array arguments have the same size.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:Use cases where per-sprite transforms are *required* (for example, individually rotated sprites) are not covered. If appropriate, this could be resolved by adding more variants to the proposal (a matrix argument?).&lt;br /&gt;
&lt;br /&gt;
==== Implementation ==== &lt;br /&gt;
* The most naive implementation, which would consist of expanding the batch drawImage call into multiple internal drawImage calls, would already increase performance by reducing API bindings overhead.&lt;br /&gt;
* The use of typed arrays will dramatically reduce the argument type checking burden in the bindings.&lt;br /&gt;
* More advanced implementations would carry the batching down to a lower level in the graphics stack. For example, a GPU-accelerated implementation of 2D canvas could use OpenGL vertex buffer objects, or the DirectX sprite interface.&lt;br /&gt;
* Some existing implementations of drawImage already detect batching opportunities and will batch consecutive drawImage calls at a lower level of the graphics stack (for example, Skia&#039;s drawBitmap, used in Blink). This auto-detection does improve rasterization performance, but it does not eliminate the bindings overhead, and it involves additional overhead to determine whether consecutive calls to drawImage can be grouped together to form a batch. With explicitly batched calls to drawImage, that overhead can be eliminated because the individual sprite draws would be known to use identical rendering context state, and the calls down to the graphics platform implementation layer would only occur once for the entire batch.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Game/app devs looking for high frame rates for sprite blitting are likely to adopt enthusiastically.&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9489</id>
		<title>Canvas Context Loss and Restoration</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9489"/>
		<updated>2014-03-12T21:31:48Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:2D Canvas Rendering contexts are currently required to have persistent backing stores. This proposal aims to relax that requirement by introducing an API that allows canvases to be discarded by the browser and re-drawn by the web application on demand.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
:The 2d canvas backing store persistence requirement often leads to large amounts of RAM (or GPU memory) being consumed by canvas elements that are not actively used because they are off screen or in background tabs or occluded windows. Other types of RAM-greedy HTML elements can release resources in such cases.&lt;br /&gt;
&lt;br /&gt;
:The expectation of canvas content persistence also makes it very difficult--if not impossible--for many web apps to recover from a GPU context reset in web browsers that store 2D canvas contents in GPU memory.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
:In theory, web apps do have the capability of discarding canvas backing stores (set canvas size to zero) and regenerating canvas content. However, web apps are not and should not be expected to be responsible for resolving resource contention issues.  The browser is responsible for monitoring resource usage and availability and is expected to take all necessary and reasonable measures to avoid crashes, hangs, and catastrophic performance degradations that may be caused by resource contention.  Under the current specification, browsers have no options for evicting resources held by 2D canvases because there are no means of guaranteeing that the application will redraw the contents when needed.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
:Web apps can track events to detect when the page is no longer visible (http://www.w3.org/TR/page-visibility/) and deallocate backing stores at that time by setting the size of the canvas element to 0. Conversely, they can detect when the page is visible again and reinitialize at that time.&lt;br /&gt;
&lt;br /&gt;
:Web apps can track events that are often associated with GPU context losses (e.g. waking-up from hibernation), and conservatively reinitialize the 2D canvas by resetting the context (set canvas width/height) and redrawing, just in case.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
:* Empower the browser to monitor resources to decide whether to drop canvas backing stores, and in what order (LRU background tabs?), in order to achieve better performance and stability. If web apps must handle resource eviction themselves, they may often free resources when not necessary, which may lead to unnecessary tab switching lag.&lt;br /&gt;
:* Make recovery from GPU context losses more robust.&lt;br /&gt;
:* Allow GPU-accelerated 2D canvases on platforms that are known to drop graphics contexts often or unpredictably.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
:* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/CQJXpXxO6dk E-mail thread on Chromium graphics-dev mailing list]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;&amp;quot;Is there any reason why we don&#039;t add a similar optional callback to the 2D context? (in reference to WebGL context loss API)&amp;quot; --Rik Cabanier, Adobe&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
The following solutions were pondered in the discussion thread cited above:&lt;br /&gt;
:* Generalize the WebGL context lost / context recovered API, so that it applies to all types of canvases.&lt;br /&gt;
:* Add a redraw callback on the canvas element&lt;br /&gt;
&lt;br /&gt;
=== Retained Solution: Upstream the context lost/recovered API from the WebGL specification into the parent canvas specification ===&lt;br /&gt;
:General Concept:&lt;br /&gt;
:* a contextlost event is fired after the context is lost.&lt;br /&gt;
:* a contextrestored event is fired immediately after a previously lost canvas context is brought back to a usable state. The canvas context is returned to its initial state and the canvas&#039;s backing store is blank (transparent black) when restored.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:Rendering context losses may be intended by the user agent (to resolve resource contention), or may be forced by external factors (e.g. a graphics driver reset).&lt;br /&gt;
&lt;br /&gt;
:For convenience, the lost state should be accessible. To do so, the isContextLost method that is defined in the WebGLRenderingContext API should also exist in the CanvasRenderingContext2D API.&lt;br /&gt;
&lt;br /&gt;
:Losing contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* The UA is only allowed to lose contexts intentionally if the context has opted in to discardable storage (see below).&lt;br /&gt;
:* The UA is free to use any set of rules to decide which contexts are dropped when and in what order.&lt;br /&gt;
:* The return value of isContextLost() may transition from false to true before the contextlost event is dispatched (like WebGL).&lt;br /&gt;
:* All objects that depend on the content of the canvas (e.g. patterns, imageBitmaps) are neutered when the context is lost. The neutering propagates through creation dependency chains, so a Pattern created from an ImageBitmap created from an ImageBitmap created from a canvas will be neutered if the canvas&#039;s context is lost. This rule makes it safe for implementations to optimize their memory consumption by sharing pixel buffers between objects when possible.&lt;br /&gt;
:* The contextlost event does not bubble.&lt;br /&gt;
:* The contextlost event is cancellable. Cancelling the event has the effect of cancelling the future restoration of the context.&lt;br /&gt;
&lt;br /&gt;
:Restoring contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Between the time a context is restored and the invocation of the listener for the contextrestored event, no other user code can be executed. This prevents race conditions that could result from the rendering context validly executing draw commands before the contextrestored event is dispatched. To respect this constraint, a user agent that restores contexts asynchronously would have to behave as if the context were not yet restored between the time the context is restored internally and the time the contextrestored event listener is called.&lt;br /&gt;
:* The app is responsible for re-creating the objects that were neutered when the context was lost.&lt;br /&gt;
:* There can only be one contextrestored event pending at a time. When there are multiple canvases to be restored, the next canvas to be restored can only be restored after any pending contextrestored events--from previously restored canvases--have been handled.&lt;br /&gt;
:* Context restoration can only be initiated by the user agent (it cannot be triggered by a script action).&lt;br /&gt;
:* A lost context can only be restored after its context lost event has been dispatched. This avoids synchronization inconsistencies with isContextLost().&lt;br /&gt;
:* The return value of isContextLost() transitions from true to false at the time the contextrestored event is dispatched (like WebGL), so the contextrestored event listener is always the first task to be executed after the transition.&lt;br /&gt;
:* The contextrestored event does not bubble and is not cancellable.&lt;br /&gt;
&lt;br /&gt;
:Behavior when using a lost context (applies to 2d contexts, WebGL may behave differently):&lt;br /&gt;
:* All draw calls exit without drawing.&lt;br /&gt;
:* All rendering context API calls will throw the same exceptions as they would if called on a valid context.&lt;br /&gt;
:* All calls that read back canvas pixels from either the canvas element or the canvas rendering context (getImageData, toDataURL, toBlob, createPattern, createImageBitmap, drawImage with a lost canvas as source) will behave as they would if the canvas context were valid and blank (all transparent black pixels).&lt;br /&gt;
:* Using a Pattern or an ImageBitmap that was neutered because the context of its source canvas was lost will behave as if the Pattern or ImageBitmap were valid and blank (all transparent black pixels).&lt;br /&gt;
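The lost/restored life cycle described above can be sketched with a toy stand-in for the canvas context (no DOM involved). MockCanvas2DContext is purely illustrative; only the event names (contextlost, contextrestored) and isContextLost() come from the proposal.

```javascript
// Toy model of the proposed 2D context loss/restoration life cycle.
class MockCanvas2DContext {
  constructor() {
    this.lost = false;
    this.handlers = { contextlost: [], contextrestored: [] };
    this.drawCount = 0;
  }
  addEventListener(type, fn) { this.handlers[type].push(fn); }
  isContextLost() { return this.lost; }
  fillRect() {
    // On a lost context, draw calls exit without drawing.
    if (!this.lost) this.drawCount++;
  }
  // UA-initiated transition: may flip isContextLost() before dispatch.
  loseContext() {
    this.lost = true;
    this.handlers.contextlost.forEach((fn) => fn());
  }
  // isContextLost() flips to false when contextrestored is dispatched,
  // so the listener is the first code to run on the valid context.
  restoreContext() {
    this.lost = false;
    this.handlers.contextrestored.forEach((fn) => fn());
  }
}

const ctx = new MockCanvas2DContext();
ctx.addEventListener("contextrestored", () => ctx.fillRect()); // redraw on restore
ctx.fillRect();       // draws (drawCount = 1)
ctx.loseContext();
ctx.fillRect();       // no-op: context is lost
ctx.restoreContext(); // redraw handler runs (drawCount = 2)
```

The app's contextrestored handler is where neutered objects would be re-created and the canvas redrawn.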
&lt;br /&gt;
:Behaviors specific to GPU-accelerated implementation of 2d canvas (non-normative?)&lt;br /&gt;
:* If the context was lost due to a GPU-related failure, and the browser is actively restoring GPU functionality and expects to restore it in a timely manner, then context restoration should wait until the browser is ready to resume GPU functionality, and the restored canvas should continue to be GPU-accelerated. Conversely, if GPU functionality is permanently disabled, or if it is unknown whether or how long it may take to resume GPU operation, then the canvas should be restored immediately without GPU acceleration.&lt;br /&gt;
:* If an accelerated canvas is resized while the GPU is temporarily unavailable, the creation of the new canvas buffer should be postponed until the GPU functionality is restored.&lt;br /&gt;
:* When getContext() is called while GPU functionality is temporarily unavailable:&lt;br /&gt;
:: a) If the canvas does not have an associated context, create a new unaccelerated context.&lt;br /&gt;
:: b) If the canvas already has an associated context, return the existing context even if it is in a lost state.&lt;br /&gt;
&lt;br /&gt;
:Opting-in to discardable backing stores.&lt;br /&gt;
:* A new context creation parameter named &#039;storage&#039; is to be added. Possible values are &#039;persistent&#039; and &#039;discardable&#039;. Usage: canvas.getContext(&#039;2d&#039;, {storage: &#039;discardable&#039;});&lt;br /&gt;
:* In &#039;persistent&#039; mode, the UA is not allowed to evict the canvas&#039;s backing store.&lt;br /&gt;
:* The default is &#039;persistent&#039;, which provides backwards-compatible behavior for applications that expect canvas backing store content to never be lost.&lt;br /&gt;
:* Note: This parameter is a string rather than a boolean because it is anticipated that additional storage modes may be added in the future.&lt;br /&gt;
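A sketch of how a page might opt in, assuming the proposed 'storage' creation parameter. normalizeContextSettings is a hypothetical helper that applies the 'persistent' default described above; the parameter itself is not implemented in today's browsers.

```javascript
// Normalize the proposed 'storage' context creation parameter,
// defaulting to 'persistent' (the backwards-compatible behavior).
function normalizeContextSettings(settings = {}) {
  const storage = settings.storage ?? "persistent";
  if (storage !== "persistent" && storage !== "discardable") {
    throw new TypeError("Unknown storage mode: " + storage);
  }
  return { ...settings, storage };
}

// Opting in, as a page would (proposed API, not implemented):
// const ctx = canvas.getContext("2d", normalizeContextSettings({ storage: "discardable" }));
```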
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:Web browsers will not reap the stability and performance rewards of this API from web apps that fail to provide at least a handler for the contextrestored event. For this feature to improve the state of the web, apps will need to opt in to the new API, which unfortunately must remain optional for backwards compatibility reasons.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ====&lt;br /&gt;
:Browser vendors should be highly motivated to implement this API in order to improve platform resilience and performance, especially on mobile platforms where RAM contention is often a critical issue.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Web App developers should be motivated to adopt this API in order to improve the stability of their products:&lt;br /&gt;
:* Avoid triggering out-of-memory crashes in the browser.&lt;br /&gt;
:* Avoid browser performance issues associated with having a given app in a background tab&lt;br /&gt;
:* Robustly recover from GPU failures.&lt;br /&gt;
&lt;br /&gt;
:Top tier apps (particularly from developers that track usage metrics) would be expected to enthusiastically adopt this API.&lt;br /&gt;
&lt;br /&gt;
==== Specification ====&lt;br /&gt;
:WebGL would continue to behave as currently specified (http://www.khronos.org/webgl/wiki/HandlingContextLost), but the wording of the WebGL specification could be modified to refer to the parent canvas specification and would only specify differences in behavior with respect to 2d canvas. &lt;br /&gt;
&lt;br /&gt;
==== Open Issues ====&lt;br /&gt;
:* Should there be a base CanvasRenderingContext interface that defines isContextLost and is inherited by both WebGLRenderingContext and CanvasRenderingContext2D?&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9488</id>
		<title>Canvas Context Loss and Restoration</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9488"/>
		<updated>2014-03-12T15:30:03Z</updated>

		<summary type="html">&lt;p&gt;Junov: /* Processing Model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:2D Canvas Rendering contexts are currently required to have persistent backing stores. This proposal aims to relax that requirement by introducing an API that allows canvases to be discarded by the browser and re-drawn by the web application on demand.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
:The 2d canvas backing store persistence requirement often leads to large amounts of RAM (or GPU memory) being consumed by canvas elements that are not actively used because they are off screen or in background tabs or occluded windows. Other types of RAM-greedy HTML elements can release resources in such cases.&lt;br /&gt;
&lt;br /&gt;
:The expectation of canvas content persistence also makes it very difficult--if not impossible--for many web apps to recover from a GPU context reset in web browsers that store 2D canvas contents in GPU memory.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
:In theory, web apps do have the capability of discarding canvas backing stores (set canvas size to zero) and regenerating canvas content. However, web apps are not and should not be expected to be responsible for resolving resource contention issues.  The browser is responsible for monitoring resource usage and availability and is expected to take all necessary and reasonable measures to avoid crashes, hangs, and catastrophic performance degradations that may be caused by resource contention.  Under the current specification, browsers have no options for evicting resources held by 2D canvases because there are no means of guaranteeing that the application will redraw the contents when needed.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
:Web apps can track events to detect when the page is no longer visible (http://www.w3.org/TR/page-visibility/) and deallocate backing stores at that time by setting the size of the canvas element to 0. Conversely, they can detect when the page is visible again and reinitialize at that time.&lt;br /&gt;
&lt;br /&gt;
:Web apps can track events that are often associated with GPU context losses (e.g. waking-up from hibernation), and conservatively reinitialize the 2D canvas by resetting the context (set canvas width/height) and redrawing, just in case.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
:* Empower the browser to monitor resources to decide whether to drop canvas backing stores, and in what order (LRU background tabs?), in order to achieve better performance and stability. If web apps must handle resource eviction themselves, they may often free resources when not necessary, which may lead to unnecessary tab switching lag.&lt;br /&gt;
:* Make recovery from GPU context losses more robust.&lt;br /&gt;
:* Allow GPU-accelerated 2D canvases on platforms that are known to drop graphics contexts often or unpredictably.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
:* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/CQJXpXxO6dk E-mail thread on Chromium graphics-dev mailing list]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;&amp;quot;Is there any reason why we don&#039;t add a similar optional callback to the 2D context? (in reference to WebGL context loss API)&amp;quot; --Rik Cabanier, Adobe&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
The following solutions were pondered in the discussion thread cited above:&lt;br /&gt;
:* Generalize the WebGL context lost / context recovered API, so that it applies to all types of canvases.&lt;br /&gt;
:* Add a redraw callback on the canvas element&lt;br /&gt;
&lt;br /&gt;
=== Retained Solution: Upstream the context lost/recovered API from the WebGL specification into the parent canvas specification ===&lt;br /&gt;
:General Concept:&lt;br /&gt;
:* a renderingContextLost event is fired after the context is lost.&lt;br /&gt;
:* a renderingContextRestored event is fired immediately after a previously lost canvas context is brought back to a usable state. The canvas context is returned to its initial state and the canvas&#039;s backing store is blank (transparent black) when restored.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:Rendering context losses may be intended by the user agent (to resolve resource contention), or may be forced by external factors (e.g. a graphics driver reset).&lt;br /&gt;
&lt;br /&gt;
:For convenience, the lost state should be accessible. To do so, the isContextLost method that is defined in the WebGLRenderingContext API should also exist in the CanvasRenderingContext2D API.&lt;br /&gt;
&lt;br /&gt;
:Losing contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* The UA is only allowed to lose contexts intentionally if the context has opted in to discardable storage (see below).&lt;br /&gt;
:* The UA is free to use any set of rules to decide which contexts are dropped when and in what order.&lt;br /&gt;
:* The return value of isContextLost() may transition from false to true before the contextLost event is dispatched (like WebGL).&lt;br /&gt;
:* All objects that depend on the content of the canvas (e.g. patterns, imageBitmaps) are neutered when the context is lost. The neutering propagates through creation dependency chains, so a Pattern created from an ImageBitmap created from an ImageBitmap created from a canvas will be neutered if the canvas&#039;s context is lost. This rule makes it safe for implementations to optimize their memory consumption by sharing pixel buffers between objects when possible.&lt;br /&gt;
&lt;br /&gt;
:Restoring contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Between the time a context is recovered and invoking the listener for the contextRestored event, no other user code can be executed. This is to prevent race conditions that could result from the rendering context validly executing draw commands before the contextRestored event is dispatched.  To respect this constraint, a user agent that restores contexts asynchronously would have to behave as if the context was not yet restored between the time the context is restored internally and the time the contextRestored event listener is called.&lt;br /&gt;
:* The app is responsible for re-creating the objects that were neutered when the context was lost.&lt;br /&gt;
:* There can only be one contextRestored event pending at a time. When there are multiple canvases to be restored, the next canvas to be restored can only be restored after any pending contextRestored events--from previously restored canvases--have been handled.&lt;br /&gt;
:* Context restoration can only be initiated by the user agent (it cannot be triggered by a script action).&lt;br /&gt;
:* A lost context can only be restored after its context lost event has been dispatched. This avoids synchronization inconsistencies with isContextLost().&lt;br /&gt;
:* The return value of isContextLost() transitions from true to false at the time the contextRestored event is dispatched (as in WebGL), so the contextRestored event listener is always the first task to execute after the transition.&lt;br /&gt;
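:The lost/restored sequence above can be sketched as a plain JavaScript simulation. The mock object below stands in for a real canvas context; the method and event names (isContextLost, contextlost, contextrestored) follow this proposal and are not a shipped API:&lt;br /&gt;

```javascript
// Hypothetical simulation of the proposed lifecycle, runnable without a
// browser. It illustrates the ordering rules: isContextLost() may already
// be true when the loss event fires, and flips to false exactly as the
// restoration event is dispatched.
class MockCanvasContext {
  constructor() {
    this.lost = false;
    this.handlers = { contextlost: [], contextrestored: [] };
  }
  addEventListener(type, fn) { this.handlers[type].push(fn); }
  isContextLost() { return this.lost; }
  // UA-initiated loss: the flag flips before the event is dispatched.
  loseContext() {
    this.lost = true;
    this.handlers.contextlost.forEach(fn => fn());
  }
  // UA-initiated restoration: the flag flips as the event is dispatched,
  // so the listener is the first code to observe the restored state.
  restoreContext() {
    this.lost = false;
    this.handlers.contextrestored.forEach(fn => fn());
  }
}

const ctx = new MockCanvasContext();
const log = [];
ctx.addEventListener('contextlost', () => log.push('lost:' + ctx.isContextLost()));
ctx.addEventListener('contextrestored', () => log.push('restored:' + ctx.isContextLost()));
ctx.loseContext();
ctx.restoreContext();
// log is now ['lost:true', 'restored:false']
```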
&lt;br /&gt;
:Behavior when using a lost context (applies to 2d contexts, WebGL may behave differently):&lt;br /&gt;
:* All draw calls exit without drawing.&lt;br /&gt;
:* All rendering context API calls will throw the same exceptions as they would if called on a valid context.&lt;br /&gt;
:* All calls that read back canvas pixels from either the canvas element or the canvas rendering context (getImageData, toDataURL, toBlob, createPattern, createImageBitmap, drawImage with a lost canvas as the source) will behave as they would if the canvas context were valid and blank (all transparent black pixels).&lt;br /&gt;
:* Using a Pattern or an ImageBitmap that was neutered because the context of its source canvas was lost will behave as if the Pattern or ImageBitmap were valid and blank (all transparent black pixels).&lt;br /&gt;
&lt;br /&gt;
:Behaviors specific to GPU-accelerated implementations of 2d canvas (non-normative?):&lt;br /&gt;
:* If the context was lost due to a GPU-related failure, and the browser is actively restoring GPU functionality and expects to restore it in a timely manner, then context restoration should wait until the browser is ready to resume GPU functionality, and the restored canvas should continue to be GPU-accelerated. Conversely, if GPU functionality is permanently disabled, or if it is unknown whether or how long it may take to resume GPU operation, then the canvas should be restored immediately without GPU acceleration.&lt;br /&gt;
:* If an accelerated canvas is resized while the GPU is temporarily unavailable, the creation of the new canvas buffer should be postponed until the GPU functionality is restored.&lt;br /&gt;
:* When getContext() is called while GPU functionality is temporarily unavailable:&lt;br /&gt;
:: a) If the canvas does not have an associated context, create a new unaccelerated context.&lt;br /&gt;
:: b) If the canvas already has an associated context, return the existing context even if it is in a lost state.&lt;br /&gt;
&lt;br /&gt;
:Opting in to discardable backing stores:&lt;br /&gt;
:* A new context creation parameter named &#039;storage&#039; is to be added. Possible values are &#039;persistent&#039; and &#039;discardable&#039;. Usage: canvas.getContext(&#039;2d&#039;, {storage: &#039;discardable&#039;});&lt;br /&gt;
:* In &#039;persistent&#039; mode the UA is not permitted to evict the canvas&#039;s backing store.&lt;br /&gt;
:* The default is &#039;persistent&#039;, which provides backwards-compatible behavior for applications that expect canvas backing store content to never be lost.&lt;br /&gt;
:* Note: This parameter is a string rather than a boolean because it is anticipated that additional storage modes may be added in the future.&lt;br /&gt;
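:A minimal usage sketch of the opt-in, assuming the &#039;storage&#039; creation parameter and a contextrestored event as proposed here (hypothetical, not a shipped API):&lt;br /&gt;

```javascript
// Hypothetical opt-in pattern per this proposal: request a discardable
// backing store, and re-render from scratch whenever the UA restores a
// lost context. 'storage' and 'contextrestored' are this proposal's
// names, not a shipped API; drawScene is an app-supplied repaint routine.
function setUpDiscardableCanvas(canvas, drawScene) {
  // Opt in to a discardable backing store so the UA may evict it.
  const ctx = canvas.getContext('2d', { storage: 'discardable' });
  // The restored backing store is blank (transparent black), so the
  // app must repaint everything when the context comes back.
  canvas.addEventListener('contextrestored', function () { drawScene(ctx); });
  drawScene(ctx); // initial paint
  return ctx;
}
```

Because the restored context is reset to its initial state, drawScene should not rely on any context state (transforms, styles) set before the loss.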
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:Web browsers will not reap the stability and performance rewards associated with this API from web apps that do not provide at least a handler for the context restored event. For this feature to improve the state of the web, apps will need to opt in to this new API, which unfortunately must remain optional for backwards compatibility reasons.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ====&lt;br /&gt;
:Browser vendors should be highly motivated to implement this API in order to improve platform resilience and performance, especially on mobile platforms where RAM contention is often a critical issue.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Web App developers should be motivated to adopt this API in order to improve the stability of their products:&lt;br /&gt;
:* Avoid triggering out-of-memory crashes in the browser.&lt;br /&gt;
:* Avoid browser performance issues associated with having a given app in a background tab.&lt;br /&gt;
:* Robustly recover from GPU failures.&lt;br /&gt;
&lt;br /&gt;
:Top tier apps (particularly from developers that track usage metrics) would be expected to enthusiastically adopt this API.&lt;br /&gt;
&lt;br /&gt;
==== Specification ====&lt;br /&gt;
:WebGL would continue to behave as currently specified (http://www.khronos.org/webgl/wiki/HandlingContextLost), but the wording of the WebGL specification could be modified to refer to the parent canvas specification and would only specify differences in behavior with respect to 2d canvas. &lt;br /&gt;
&lt;br /&gt;
==== Open Issues ====&lt;br /&gt;
:* Should there be a base CanvasRenderingContext interface that defines isContextLost and is inherited by both WebGLRenderingContext and CanvasRenderingContext2D?&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9487</id>
		<title>Canvas Context Loss and Restoration</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9487"/>
		<updated>2014-03-12T15:24:17Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:2D Canvas Rendering contexts are currently required to have persistent backing stores. This proposal aims to relax that requirement by introducing an API that allows canvases to be discarded by the browser and re-drawn by the web application on demand.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
:The 2d canvas backing store persistence requirement often leads to large amounts of RAM (or GPU memory) being consumed by canvas elements that are not actively used because they are off screen or in background tabs or occluded windows. Other types of RAM-greedy HTML elements can release resources in such cases.&lt;br /&gt;
&lt;br /&gt;
:The expectation of canvas content persistence also makes it very difficult--if not impossible--for many web apps to recover from a GPU context reset in web browsers that store 2D canvas contents in GPU memory.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
:In theory, web apps do have the capability of discarding canvas backing stores (set canvas size to zero) and regenerating canvas content. However, web apps are not and should not be expected to be responsible for resolving resource contention issues.  The browser is responsible for monitoring resource usage and availability and is expected to take all necessary and reasonable measures to avoid crashes, hangs, and catastrophic performance degradations that may be caused by resource contention.  Under the current specification, browsers have no options for evicting resources held by 2D canvases because there are no means of guaranteeing that the application will redraw the contents when needed.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
:Web apps can track events to detect when the page is no longer visible (http://www.w3.org/TR/page-visibility/) and deallocate backing stores at that time by setting the size of the canvas element to 0. Conversely, they can detect when the page is visible again and reinitialize at that time.&lt;br /&gt;
&lt;br /&gt;
:Web apps can track events that are often associated with GPU context losses (e.g. waking-up from hibernation), and conservatively reinitialize the 2D canvas by resetting the context (set canvas width/height) and redrawing, just in case.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
:* Empower the browser to monitor resources to decide whether to drop canvas backing stores and in what order (LRU background tabs?) in order to achieve better performance and stability. If web apps must handle resource eviction themselves, they may often free resources when not necessary, which may lead to unnecessary tab switching lag.&lt;br /&gt;
:* Make recovery from GPU context losses more robust.&lt;br /&gt;
:* Allow GPU-accelerated 2D canvases on platforms that are known to drop graphics contexts often or unpredictably.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
:* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/CQJXpXxO6dk E-mail thread on Chromium graphics-dev mailing list]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;&amp;quot;Is there any reason why we don&#039;t add a similar optional callback to the 2D context? (in reference to WebGL context loss API)&amp;quot; --Rik Cabanier, Adobe&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
The following solutions were pondered in the discussion thread cited above:&lt;br /&gt;
:* Generalize the WebGL context lost / context recovered API, so that it applies to all types of canvases.&lt;br /&gt;
:* Add a redraw callback on the canvas element&lt;br /&gt;
&lt;br /&gt;
=== Retained Solution: Upstream the context lost/recovered API from the WebGL specification into the parent canvas specification ===&lt;br /&gt;
:General Concept:&lt;br /&gt;
:* A renderingContextLost event is fired after the context is lost.&lt;br /&gt;
:* A renderingContextRestored event is fired immediately after a previously lost canvas context is brought back to a usable state. The canvas context is returned to its initial state and the canvas&#039;s backing store is blank (transparent black) when restored.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:Rendering context losses may be intended by the user agent (to resolve resource contention), or may be forced by external factors (e.g. a graphics driver reset).&lt;br /&gt;
&lt;br /&gt;
:For convenience, the lost state should be accessible. To do so, the isContextLost method that is defined in the WebGLRenderingContext API should also exist in the CanvasRenderingContext2D API.&lt;br /&gt;
&lt;br /&gt;
:Losing contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* The UA is only allowed to lose contexts intentionally if the context has opted in to discardable storage (see below).&lt;br /&gt;
:* The UA is free to use any set of rules to decide which contexts are dropped when and in what order.&lt;br /&gt;
:* The return value of isContextLost() may transition from false to true before the contextLost event is dispatched (as in WebGL).&lt;br /&gt;
:* All objects that depend on the content of the canvas (e.g. patterns, imageBitmaps) are neutered when the context is lost. The neutering propagates through creation dependency chains, so a Pattern created from an ImageBitmap created from an ImageBitmap created from a canvas will be neutered if the canvas&#039;s context is lost. This rule makes it safe for implementations to optimize their memory consumption by sharing pixel buffers between objects when possible.&lt;br /&gt;
&lt;br /&gt;
:Restoring contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Between the time a context is recovered and invoking the listener for the contextRestored event, no other user code can be executed. This is to prevent race conditions that could result from the rendering context validly executing draw commands before the contextRestored event is dispatched.  To respect this constraint, a user agent that restores contexts asynchronously would have to behave as if the context was not yet restored between the time the context is restored internally and the time the contextRestored event listener is called.&lt;br /&gt;
:* The app is responsible for re-creating the objects that were neutered when the context was lost.&lt;br /&gt;
:* There can only be one contextRestored event pending at a time. When there are multiple canvases to be restored, the next canvas to be restored can only be restored after any pending contextRestored events--from previously restored canvases--have been handled.&lt;br /&gt;
:* Context restoration can only be initiated by the user agent (it cannot be triggered by a script action).&lt;br /&gt;
:* A lost context can only be restored after its context lost event has been dispatched. This avoids synchronization inconsistencies with isContextLost().&lt;br /&gt;
:* The return value of isContextLost() transitions from true to false at the time the contextRestored event is dispatched (as in WebGL), so the contextRestored event listener is always the first task to execute after the transition.&lt;br /&gt;
&lt;br /&gt;
:Behavior when using a lost context (applies to 2d contexts, WebGL may behave differently):&lt;br /&gt;
:* All draw calls exit without drawing.&lt;br /&gt;
:* All rendering context API calls will throw the same exceptions as they would if called on a valid context.&lt;br /&gt;
:* All calls that read back canvas pixels from either the canvas element or the canvas rendering context (getImageData, toDataURL, toBlob, createPattern, createImageBitmap, drawImage with a lost canvas as the source) will behave as they would if the canvas context were valid and blank (all transparent black pixels).&lt;br /&gt;
:* Using a Pattern or an ImageBitmap that was neutered because the context of its source canvas was lost will behave as if the Pattern or ImageBitmap were valid and blank (all transparent black pixels).&lt;br /&gt;
&lt;br /&gt;
:Behaviors specific to GPU-accelerated implementations of 2d canvas (non-normative?):&lt;br /&gt;
:* If the context was lost due to a GPU-related failure, and the browser is actively restoring GPU functionality and expects to restore it in a timely manner, then context restoration should wait until the browser is ready to resume GPU functionality, and the restored canvas should continue to be GPU-accelerated. Conversely, if GPU functionality is permanently disabled, or if it is unknown whether or how long it may take to resume GPU operation, then the canvas should be restored immediately without GPU acceleration.&lt;br /&gt;
:* If an accelerated canvas is resized while the GPU is temporarily unavailable, the creation of the new canvas buffer should be postponed until the GPU functionality is restored.&lt;br /&gt;
:* When getContext() is called while GPU functionality is temporarily unavailable:&lt;br /&gt;
:: a) If the canvas does not have an associated context, create a new unaccelerated context.&lt;br /&gt;
:: b) If the canvas already has an associated context, return the existing context even if it is in a lost state.&lt;br /&gt;
&lt;br /&gt;
:Opting in to discardable backing stores:&lt;br /&gt;
:* A new context creation parameter named &#039;storage&#039; is to be added. Possible values are &#039;persistent&#039; and &#039;discardable&#039;.&lt;br /&gt;
:* In &#039;persistent&#039; mode the UA is not permitted to evict the canvas&#039;s backing store.&lt;br /&gt;
:* The default is &#039;persistent&#039;, which provides backwards-compatible behavior for applications that expect canvas backing store content to never be lost.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:Web browsers will not reap the stability and performance rewards associated with this API from web apps that do not provide at least a handler for the context restored event. For this feature to improve the state of the web, apps will need to opt in to this new API, which unfortunately must remain optional for backwards compatibility reasons.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ====&lt;br /&gt;
:Browser vendors should be highly motivated to implement this API in order to improve platform resilience and performance, especially on mobile platforms where RAM contention is often a critical issue.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Web App developers should be motivated to adopt this API in order to improve the stability of their products:&lt;br /&gt;
:* Avoid triggering out-of-memory crashes in the browser.&lt;br /&gt;
:* Avoid browser performance issues associated with having a given app in a background tab.&lt;br /&gt;
:* Robustly recover from GPU failures.&lt;br /&gt;
&lt;br /&gt;
:Top tier apps (particularly from developers that track usage metrics) would be expected to enthusiastically adopt this API.&lt;br /&gt;
&lt;br /&gt;
==== Specification ====&lt;br /&gt;
:WebGL would continue to behave as currently specified (http://www.khronos.org/webgl/wiki/HandlingContextLost), but the wording of the WebGL specification could be modified to refer to the parent canvas specification and would only specify differences in behavior with respect to 2d canvas. &lt;br /&gt;
&lt;br /&gt;
==== Open Issues ====&lt;br /&gt;
:* Should there be a base CanvasRenderingContext interface that defines isContextLost and is inherited by both WebGLRenderingContext and CanvasRenderingContext2D?&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9460</id>
		<title>Canvas Context Loss and Restoration</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9460"/>
		<updated>2014-01-28T16:43:19Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:2D Canvas Rendering contexts are currently required to have persistent backing stores. This proposal aims to relax that requirement by introducing an API that allows canvases to be discarded by the browser and re-drawn by the web application on demand.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
:The 2d canvas backing store persistence requirement often leads to large amounts of RAM (or GPU memory) being consumed by canvas elements that are not actively used because they are off screen or in background tabs or occluded windows. Other types of RAM-greedy HTML elements can release resources in such cases.&lt;br /&gt;
&lt;br /&gt;
:The expectation of canvas content persistence also makes it very difficult--if not impossible--for many web apps to recover from a GPU context reset in web browsers that store 2D canvas contents in GPU memory.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
:In theory, web apps do have the capability of discarding canvas backing stores (set canvas size to zero) and regenerating canvas content. However, web apps are not and should not be expected to be responsible for resolving resource contention issues.  The browser is responsible for monitoring resource usage and availability and is expected to take all necessary and reasonable measures to avoid crashes, hangs, and catastrophic performance degradations that may be caused by resource contention.  Under the current specification, browsers have no options for evicting resources held by 2D canvases because there are no means of guaranteeing that the application will redraw the contents when needed.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
:Web apps can track events to detect when the page is no longer visible (http://www.w3.org/TR/page-visibility/) and deallocate backing stores at that time by setting the size of the canvas element to 0. Conversely, they can detect when the page is visible again and reinitialize at that time.&lt;br /&gt;
&lt;br /&gt;
:Web apps can track events that are often associated with GPU context losses (e.g. waking-up from hibernation), and conservatively reinitialize the 2D canvas by resetting the context (set canvas width/height) and redrawing, just in case.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
:* Empower the browser to monitor resources to decide whether to drop canvas backing stores and in what order (LRU background tabs?) in order to achieve better performance and stability. If web apps must handle resource eviction themselves, they may often free resources when not necessary, which may lead to unnecessary tab switching lag.&lt;br /&gt;
:* Make recovery from GPU context losses more robust.&lt;br /&gt;
:* Allow GPU-accelerated 2D canvases on platforms that are known to drop graphics contexts often or unpredictably.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
:* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/CQJXpXxO6dk E-mail thread on Chromium graphics-dev mailing list]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;&amp;quot;Is there any reason why we don&#039;t add a similar optional callback to the 2D context? (in reference to WebGL context loss API)&amp;quot; --Rik Cabanier, Adobe&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
The following solutions were pondered in the discussion thread cited above:&lt;br /&gt;
:* Generalize the WebGL context lost / context recovered API, so that it applies to all types of canvases.&lt;br /&gt;
:* Add a redraw callback on the canvas element&lt;br /&gt;
&lt;br /&gt;
=== Retained Solution: Upstream the context lost/recovered API from the WebGL specification into the parent canvas specification ===&lt;br /&gt;
:General Concept:&lt;br /&gt;
:* A renderingContextLost event is fired after the context is lost.&lt;br /&gt;
:* A renderingContextRestored event is fired immediately after a previously lost canvas context is brought back to a usable state. The canvas context is returned to its initial state and the canvas&#039;s backing store is blank (transparent black) when restored.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:Rendering context losses may be intended by the user agent (to resolve resource contention), or may be forced by external factors (e.g. a graphics driver reset).&lt;br /&gt;
&lt;br /&gt;
:For convenience, the lost state should be accessible. To do so, the isContextLost method that is defined in the WebGLRenderingContext API should also exist in the CanvasRenderingContext2D API.&lt;br /&gt;
&lt;br /&gt;
:Losing contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Intentional losses are only allowed if the contextRestored event is handled on the canvas element associated with the context.&lt;br /&gt;
:* The UA is free to use any set of rules to decide which contexts are dropped when and in what order.&lt;br /&gt;
:* The return value of isContextLost() may transition from false to true before the contextLost event is dispatched (as in WebGL).&lt;br /&gt;
:* All objects that depend on the content of the canvas (e.g. patterns, imageBitmaps) are neutered when the context is lost. The neutering propagates through creation dependency chains, so a Pattern created from an ImageBitmap created from an ImageBitmap created from a canvas will be neutered if the canvas&#039;s context is lost. This rule makes it safe for implementations to optimize their memory consumption by sharing pixel buffers between objects when possible.&lt;br /&gt;
&lt;br /&gt;
:Restoring contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Between the time a context is recovered and invoking the listener for the contextRestored event, no other user code can be executed. This is to prevent race conditions that could result from the rendering context validly executing draw commands before the contextRestored event is dispatched.  To respect this constraint, a user agent that restores contexts asynchronously would have to behave as if the context was not yet restored between the time the context is restored internally and the time the contextRestored event listener is called.&lt;br /&gt;
:* The app is responsible for re-creating the objects that were neutered when the context was lost.&lt;br /&gt;
:* There can only be one contextRestored event pending at a time. When there are multiple canvases to be restored, the next canvas to be restored can only be restored after any pending contextRestored events--from previously restored canvases--have been handled.&lt;br /&gt;
:* Context restoration can only be initiated by the user agent (it cannot be triggered by a script action).&lt;br /&gt;
:* A lost context can only be restored after its context lost event has been dispatched. This avoids synchronization inconsistencies with isContextLost().&lt;br /&gt;
:* The return value of isContextLost() transitions from true to false at the time the contextRestored event is dispatched (as in WebGL), so the contextRestored event listener is always the first task to execute after the transition.&lt;br /&gt;
&lt;br /&gt;
:Behavior when using a lost context (applies to 2d contexts, WebGL may behave differently):&lt;br /&gt;
:* All draw calls exit without drawing.&lt;br /&gt;
:* All rendering context API calls will throw the same exceptions as they would if called on a valid context.&lt;br /&gt;
:* All calls that read back canvas pixels from either the canvas element or the canvas rendering context (getImageData, toDataURL, toBlob, createPattern, createImageBitmap, drawImage with a lost canvas as the source) will behave as they would if the canvas context were valid and blank (all transparent black pixels).&lt;br /&gt;
:* Using a Pattern or an ImageBitmap that was neutered because the context of its source canvas was lost will behave as if the Pattern or ImageBitmap were valid and blank (all transparent black pixels).&lt;br /&gt;
&lt;br /&gt;
:Behaviors specific to GPU-accelerated implementations of 2d canvas (non-normative?):&lt;br /&gt;
:* If the context was lost due to a GPU-related failure, and the browser is actively restoring GPU functionality and expects to restore it in a timely manner, then context restoration should wait until the browser is ready to resume GPU functionality, and the restored canvas should continue to be GPU-accelerated. Conversely, if GPU functionality is permanently disabled, or if it is unknown whether or how long it may take to resume GPU operation, then the canvas should be restored immediately without GPU acceleration.&lt;br /&gt;
:* If an accelerated canvas is resized while the GPU is temporarily unavailable, the creation of the new canvas buffer should be postponed until the GPU functionality is restored.&lt;br /&gt;
:* When getContext() is called while GPU functionality is temporarily unavailable:&lt;br /&gt;
:: a) If the canvas does not have an associated context, create a new unaccelerated context.&lt;br /&gt;
:: b) If the canvas already has an associated context, return the existing context even if it is in a lost state.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:Web browsers will not reap the stability and performance rewards associated with this API from web apps that do not provide at least a handler for the context restored event. For this feature to improve the state of the web, apps will need to opt in to this new API, which unfortunately must remain optional for backwards compatibility reasons.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ====&lt;br /&gt;
:Browser vendors should be highly motivated to implement this API in order to improve platform resilience and performance, especially on mobile platforms where RAM contention is often a critical issue.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Web App developers should be motivated to adopt this API in order to improve the stability of their products:&lt;br /&gt;
:* Avoid triggering out-of-memory crashes in the browser.&lt;br /&gt;
:* Avoid browser performance issues associated with having a given app in a background tab.&lt;br /&gt;
:* Robustly recover from GPU failures.&lt;br /&gt;
&lt;br /&gt;
:Top tier apps (particularly from developers that track usage metrics) would be expected to enthusiastically adopt this API.&lt;br /&gt;
&lt;br /&gt;
==== Specification ====&lt;br /&gt;
:WebGL would continue to behave as currently specified (http://www.khronos.org/webgl/wiki/HandlingContextLost), but the wording of the WebGL specification could be modified to refer to the parent canvas specification and would only specify differences in behavior with respect to 2d canvas. &lt;br /&gt;
&lt;br /&gt;
==== Open Issues ====&lt;br /&gt;
:* Should there be a base CanvasRenderingContext interface that defines isContextLost and is inherited by both WebGLRenderingContext and CanvasRenderingContext2D?&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9412</id>
		<title>Canvas Context Loss and Restoration</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9412"/>
		<updated>2013-12-04T20:11:25Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:2D Canvas Rendering contexts are currently required to have persistent backing stores. This proposal aims to relax that requirement by introducing an API that allows canvases to be discarded by the browser and re-drawn by the web application on demand.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
:The 2d canvas backing store persistence requirement often leads to large amounts of RAM (or GPU memory) being consumed by canvas elements that are not actively used because they are off screen or in background tabs or occluded windows. Other types of RAM-greedy HTML elements can release resources in such cases.&lt;br /&gt;
&lt;br /&gt;
:The expectation of canvas content persistence also makes it very difficult--if not impossible--for many web apps to recover from a GPU context reset in web browsers that store 2D canvas contents in GPU memory.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
:In theory, web apps do have the capability of discarding canvas backing stores (set canvas size to zero) and regenerating canvas content. However, web apps are not and should not be expected to be responsible for resolving resource contention issues.  The browser is responsible for monitoring resource usage and availability and is expected to take all necessary and reasonable measures to avoid crashes, hangs, and catastrophic performance degradations that may be caused by resource contention.  Under the current specification, browsers have no options for evicting resources held by 2D canvases because there are no means of guaranteeing that the application will redraw the contents when needed.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
:Web apps can track events to detect when the page is no longer visible (http://www.w3.org/TR/page-visibility/) and deallocate backing stores at that time by setting the size of the canvas element to 0. Conversely, they can detect when the page is visible again and reinitialize at that time.&lt;br /&gt;
&lt;br /&gt;
:Web apps can track events that are often associated with GPU context losses (e.g. waking up from hibernation), and conservatively reinitialize the 2D canvas by resetting the context (set canvas width/height) and redrawing, just in case.&lt;br /&gt;
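:As a concrete illustration, the visibility workaround above could be sketched as follows (a hypothetical helper, not part of this proposal; makeVisibilityHandler and redraw are illustrative names, and redraw is an app-supplied repaint function):&lt;br /&gt;

```javascript
// Hypothetical sketch of the Page Visibility workaround described above;
// makeVisibilityHandler and redraw are illustrative names, not proposed API.
function makeVisibilityHandler(canvas, redraw) {
  let saved = null; // remembered size while the backing store is released
  return function onVisibilityChange(hidden) {
    if (hidden) {
      if (saved === null) {
        saved = { width: canvas.width, height: canvas.height };
        canvas.width = 0;  // setting the size to 0 deallocates the backing store
        canvas.height = 0;
      }
    } else if (saved !== null) {
      canvas.width = saved.width;   // resizing also resets the context state
      canvas.height = saved.height;
      saved = null;
      redraw(); // the app must repaint from its own scene data
    }
  };
}
```

:In a page, this handler would be invoked from a visibilitychange listener with document.hidden as its argument.&lt;br /&gt;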
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
:* Empower the browser to monitor resources to decide whether to drop canvas backing stores and in what order (LRU background tabs?) in order to achieve better performance and stability. If web apps must handle resource eviction themselves, they may often free resources when not necessary, which may lead to unnecessary tab switching lag.&lt;br /&gt;
:* Make recovery from GPU context losses more robust.&lt;br /&gt;
:* Allow GPU-accelerated 2D canvases on platforms that are known to drop graphics contexts often or unpredictably.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
:* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/CQJXpXxO6dk E-mail thread on Chromium graphics-dev mailing list]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;&amp;quot;Is there any reason why we don&#039;t add a similar optional callback to the 2D context? (in reference to WebGL context loss API)&amp;quot; --Rik Cabanier, Adobe&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
The following solutions were pondered in the discussion thread cited above:&lt;br /&gt;
:* Generalize the WebGL context lost / context recovered API, so that it applies to all types of canvases.&lt;br /&gt;
:* Add a redraw callback on the canvas element.&lt;br /&gt;
&lt;br /&gt;
=== Retained Solution: Upstream the context lost/recovered API from the WebGL specification into the parent canvas specification. ===&lt;br /&gt;
:General Concept:&lt;br /&gt;
:* a renderingContextLost event is fired after the context is lost.&lt;br /&gt;
:* a renderingContextRestored event is fired immediately after a previously lost canvas context is brought back to a usable state. The canvas context is returned to its initial state and the canvas&#039;s backing store is blank (transparent black) when restored.&lt;br /&gt;
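:A page using the proposed events might wire them up roughly as follows (a non-normative sketch; the stop/rebuild/repaint/start callbacks are hypothetical app functions):&lt;br /&gt;

```javascript
// Hypothetical wiring for the proposed events, using the event names from
// the General Concept above; the app object is illustrative, not proposed API.
function installContextLossHandlers(canvas, app) {
  canvas.addEventListener('renderingContextLost', function () {
    app.stop();    // further draw calls would exit without drawing
  });
  canvas.addEventListener('renderingContextRestored', function () {
    app.rebuild(); // re-create Patterns/ImageBitmaps neutered by the loss
    app.repaint(); // the restored backing store is blank (transparent black)
    app.start();
  });
}
```

:Per the processing model below, the restored listener runs before any other user code can draw on the restored context.&lt;br /&gt;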
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:Rendering context losses may be intended by the user agent (to resolve resource contention), or may be forced by external factors (e.g. a graphics driver reset).&lt;br /&gt;
&lt;br /&gt;
:For convenience, the lost state should be accessible. To do so, the isContextLost method that is defined in the WebGLRenderingContext API should also exist in the CanvasRenderingContext2D API.&lt;br /&gt;
&lt;br /&gt;
:Losing contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Intentional losses are only allowed if the contextRestored event is handled on the canvas element associated with the context.&lt;br /&gt;
:* The UA is free to use any set of rules to decide which contexts are dropped when and in what order.&lt;br /&gt;
:* The return value of isContextLost() may transition from false to true before the contextLost event is dispatched (like WebGL).&lt;br /&gt;
:* All objects that depend on the content of the canvas (e.g. patterns, imageBitmaps) are neutered when the context is lost. The neutering propagates through creation dependency chains, so a Pattern created from an ImageBitmap created from an ImageBitmap created from a canvas will be neutered if the canvas&#039;s context is lost.  This rule makes it safe for implementations to optimize their memory consumption by sharing pixel buffers between objects when possible.&lt;br /&gt;
&lt;br /&gt;
:Restoring contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Between the time a context is recovered and invoking the listener for the contextRestored event, no other user code can be executed. This is to prevent race conditions that could result from the rendering context validly executing draw commands before the contextRestored event is dispatched.  To respect this constraint, a user agent that restores contexts asynchronously would have to behave as if the context was not yet restored between the time the context is restored internally and the time the contextRestored event listener is called.&lt;br /&gt;
:* The app is responsible for re-creating the objects that were neutered when the context was lost.&lt;br /&gt;
:* There can only be one contextRestored event pending at a time. When there are multiple canvases to be restored, the next canvas to be restored can only be restored after any pending contextRestored events--from previously restored canvases--have been handled.&lt;br /&gt;
:* Context restoration can only be initiated by the user agent (cannot be triggered by a script action).&lt;br /&gt;
:* A lost context can only be restored after its context lost event has been dispatched. This avoids synchronization inconsistencies with isContextLost().&lt;br /&gt;
:* The return value of isContextLost() transitions from true to false at the time the contextRestored event is dispatched (like WebGL). So the contextRestored event listener is always the first task to be executed after the transition.&lt;br /&gt;
&lt;br /&gt;
:Behavior when using a lost context (applies to 2d contexts, WebGL may behave differently):&lt;br /&gt;
:* All draw calls exit without drawing.&lt;br /&gt;
:* All rendering context API calls will throw the same exceptions as they would if called on a valid context.&lt;br /&gt;
:* All calls that read back canvas pixels from either the canvas element or the canvas rendering context (getImageData, toDataURL, toBlob, createPattern, createImageBitmap, drawImage with lost canvas as source) will behave as they would if the canvas context were valid and blank (all transparent black pixels).&lt;br /&gt;
:* Using a Pattern or an ImageBitmap that was neutered because the context of its source canvas was lost will behave as if the Pattern or ImageBitmap were valid and blank (all transparent black pixels).&lt;br /&gt;
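:Since draw calls on a lost context are silently dropped, an animation loop could consult the proposed isContextLost() method to skip wasted work (a sketch; drawFrame and drawScene are hypothetical app functions):&lt;br /&gt;

```javascript
// Hypothetical render-loop guard; drawScene is an illustrative app function.
// isContextLost() is the method this proposal would add to
// CanvasRenderingContext2D, mirroring WebGLRenderingContext.
function drawFrame(ctx, drawScene) {
  if (ctx.isContextLost()) {
    return false; // draws would be dropped anyway; wait for contextRestored
  }
  drawScene(ctx); // normal 2d rendering
  return true;
}
```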
&lt;br /&gt;
:Behaviors specific to GPU-accelerated implementations of 2d canvas (non-normative?):&lt;br /&gt;
:* If the context was lost due to a GPU-related failure, and the browser is actively restoring GPU functionality, and expects to restore in a timely manner, then context restoration should wait until the browser is ready to resume GPU functionality and the restored canvas should continue to be GPU-accelerated.  Conversely, if GPU functionality is permanently disabled or if it is unknown whether or how long it may take to resume GPU operation, then the canvas should be restored immediately without GPU acceleration.&lt;br /&gt;
:* If an accelerated canvas is resized while the GPU is temporarily unavailable, the creation of the new canvas buffer should be postponed until the GPU functionality is restored.&lt;br /&gt;
:* When getContext() is called while GPU functionality is temporarily unavailable:&lt;br /&gt;
:: a) If the canvas does not have an associated context, create a new unaccelerated context.&lt;br /&gt;
:: b) If the canvas already has an associated context, return the existing context even if it is in a lost state.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:Web browsers will not reap the stability and performance rewards associated with this API from Web apps that do not provide at least a handler for the context restored event. In order for this feature to improve the state of the web, apps will need to opt in to this new API, which unfortunately needs to remain optional for backwards compatibility reasons.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ====&lt;br /&gt;
:Browser vendors should be highly motivated to implement this API in order to improve platform resilience and performance, especially on mobile platforms where RAM contention is often a critical issue.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Web App developers should be motivated to adopt this API in order to improve the stability of their products:&lt;br /&gt;
:* Avoid triggering out-of-memory crashes in the browser.&lt;br /&gt;
:* Avoid browser performance issues associated with having a given app in a background tab.&lt;br /&gt;
:* Robustly recover from GPU failures.&lt;br /&gt;
&lt;br /&gt;
:Top tier apps (particularly from developers that track usage metrics) would be expected to enthusiastically adopt this API.&lt;br /&gt;
&lt;br /&gt;
==== Specification ====&lt;br /&gt;
:WebGL would continue to behave as currently specified (http://www.khronos.org/webgl/wiki/HandlingContextLost), but the wording of the WebGL specification could be modified to refer to the parent canvas specification and would only specify differences in behavior with respect to 2d canvas. &lt;br /&gt;
&lt;br /&gt;
==== Open Issues ====&lt;br /&gt;
:* Should there be a base CanvasRenderingContext interface that defines isContextLost and is inherited by both WebGLRenderingContext and CanvasRenderingContext2D?&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9337</id>
		<title>Canvas Context Loss and Restoration</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9337"/>
		<updated>2013-10-22T15:07:52Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:2D Canvas Rendering contexts are currently required to have persistent backing stores. This proposal aims to relax that requirement by introducing an API that allows canvases to be discarded by the browser and re-drawn by the web application on demand.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
:The 2d canvas backing store persistence requirement often leads to large amounts of RAM (or GPU memory) being consumed by canvas elements that are not actively used because they are off screen or in background tabs or occluded windows. Other types of RAM-greedy HTML elements can release resources in such cases.&lt;br /&gt;
&lt;br /&gt;
:The expectation of canvas content persistence also makes it very difficult--if not impossible--for many web apps to recover from a GPU context reset in web browsers that store 2D canvas contents in GPU memory.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
:In theory, web apps do have the capability of discarding canvas backing stores (set canvas size to zero) and regenerating canvas content. However, web apps are not and should not be expected to be responsible for resolving resource contention issues.  The browser is responsible for monitoring resource usage and availability and is expected to take all necessary and reasonable measures to avoid crashes, hangs, and catastrophic performance degradations that may be caused by resource contention.  Under the current specification, browsers have no options for evicting resources held by 2D canvases because there are no means of guaranteeing that the application will redraw the contents when needed.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
:Web apps can track events to detect when the page is no longer visible (http://www.w3.org/TR/page-visibility/) and deallocate backing stores at that time by setting the size of the canvas element to 0. Conversely, they can detect when the page is visible again and reinitialize at that time.&lt;br /&gt;
&lt;br /&gt;
:Web apps can track events that are often associated with GPU context losses (e.g. waking up from hibernation), and conservatively reinitialize the 2D canvas by resetting the context (set canvas width/height) and redrawing, just in case.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
:* Empower the browser to monitor resources to decide whether to drop canvas backing stores and in what order (LRU background tabs?) in order to achieve better performance and stability. If web apps must handle resource eviction themselves, they may often free resources when not necessary, which may lead to unnecessary tab switching lag.&lt;br /&gt;
:* Make recovery from GPU context losses more robust.&lt;br /&gt;
:* Allow GPU-accelerated 2D canvases on platforms that are known to drop graphics contexts often or unpredictably.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
:* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/CQJXpXxO6dk E-mail thread on Chromium graphics-dev mailing list]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;&amp;quot;Is there any reason why we don&#039;t add a similar optional callback to the 2D context? (in reference to WebGL context loss API)&amp;quot; --Rik Cabanier, Adobe&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
The following solutions were pondered in the discussion thread cited above:&lt;br /&gt;
:* Generalize the WebGL context lost / context recovered API, so that it applies to all types of canvases.&lt;br /&gt;
:* Add a redraw callback on the canvas element.&lt;br /&gt;
&lt;br /&gt;
=== Retained Solution: Upstream the context lost/recovered API from the WebGL specification into the parent canvas specification. ===&lt;br /&gt;
:General Concept:&lt;br /&gt;
:* a renderingContextLost event is fired after the context is lost.&lt;br /&gt;
:* a renderingContextRestored event is fired immediately after a previously lost canvas context is brought back to a usable state. The canvas context is returned to its initial state and the canvas&#039;s backing store is blank (transparent black) when restored.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:Rendering context losses may be intended by the user agent (to resolve resource contention), or may be forced by external factors (e.g. a graphics driver reset).&lt;br /&gt;
&lt;br /&gt;
:For convenience, the lost state should be accessible. To do so, the isContextLost method that is defined in the WebGLRenderingContext API should also exist in the CanvasRenderingContext2D API.&lt;br /&gt;
&lt;br /&gt;
:Losing contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Intentional losses are only allowed if the contextRestored event is handled on the canvas element associated with the context.&lt;br /&gt;
:* The UA is free to use any set of rules to decide which contexts are dropped when and in what order.&lt;br /&gt;
:* The return value of isContextLost() may transition from false to true before the contextLost event is dispatched (like WebGL).&lt;br /&gt;
:* All objects that depend on the content of the canvas (e.g. patterns, imageBitmaps) are neutered when the context is lost. The neutering propagates through creation dependency chains, so a Pattern created from an ImageBitmap created from an ImageBitmap created from a canvas will be neutered if the canvas&#039;s context is lost.  This rule makes it safe for implementations to optimize their memory consumption by sharing pixel buffers between objects when possible.&lt;br /&gt;
&lt;br /&gt;
:Restoring contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Between the time a context is recovered and invoking the listener for the contextRestored event, no other user code can be executed.&lt;br /&gt;
:* The app is responsible for re-creating the objects that were neutered when the context was lost.&lt;br /&gt;
:* There can only be one contextRestored event pending at a time. When there are multiple canvases to be restored, the next canvas to be restored can only be restored after any pending contextRestored events--from previously restored canvases--have been handled.&lt;br /&gt;
:* Context restoration can only be initiated by the user agent (cannot be triggered by a script action).&lt;br /&gt;
:* A lost context can only be restored after its context lost event has been dispatched. This avoids synchronization inconsistencies with isContextLost().&lt;br /&gt;
:* The return value of isContextLost() transitions from true to false at the time the contextRestored event is dispatched (like WebGL). So the contextRestored event listener is always the first task to be executed after the transition.&lt;br /&gt;
&lt;br /&gt;
:Behavior when using a lost context (applies to 2d contexts, WebGL may behave differently):&lt;br /&gt;
:* All draw calls exit without drawing.&lt;br /&gt;
:* All rendering context API calls will throw the same exceptions as they would if called on a valid context.&lt;br /&gt;
:* All calls that read back canvas pixels from either the canvas element or the canvas rendering context (getImageData, toDataURL, toBlob, createPattern, createImageBitmap, drawImage with lost canvas as source) will behave as they would if the canvas context were valid and blank (all transparent black pixels).&lt;br /&gt;
:* Using a Pattern or an ImageBitmap that was neutered because the context of its source canvas was lost will behave as if the Pattern or ImageBitmap were valid and blank (all transparent black pixels).&lt;br /&gt;
&lt;br /&gt;
:Behaviors specific to GPU-accelerated implementations of 2d canvas (non-normative?):&lt;br /&gt;
:* If the context was lost due to a GPU-related failure, and the browser is actively restoring GPU functionality, and expects to restore in a timely manner, then context restoration should wait until the browser is ready to resume GPU functionality and the restored canvas should continue to be GPU-accelerated.  Conversely, if GPU functionality is permanently disabled or if it is unknown whether or how long it may take to resume GPU operation, then the canvas should be restored immediately without GPU acceleration.&lt;br /&gt;
:* If an accelerated canvas is resized while the GPU is temporarily unavailable, the creation of the new canvas buffer should be postponed until the GPU functionality is restored.&lt;br /&gt;
:* When getContext() is called while GPU functionality is temporarily unavailable:&lt;br /&gt;
:: a) If the canvas does not have an associated context, create a new unaccelerated context.&lt;br /&gt;
:: b) If the canvas already has an associated context, return the existing context even if it is in a lost state.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:Web browsers will not reap the stability and performance rewards associated with this API from Web apps that do not provide at least a handler for the context restored event. In order for this feature to improve the state of the web, apps will need to opt in to this new API, which unfortunately needs to remain optional for backwards compatibility reasons.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ====&lt;br /&gt;
:Browser vendors should be highly motivated to implement this API in order to improve platform resilience and performance, especially on mobile platforms where RAM contention is often a critical issue.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Web App developers should be motivated to adopt this API in order to improve the stability of their products:&lt;br /&gt;
:* Avoid triggering out-of-memory crashes in the browser.&lt;br /&gt;
:* Avoid browser performance issues associated with having a given app in a background tab.&lt;br /&gt;
:* Robustly recover from GPU failures.&lt;br /&gt;
&lt;br /&gt;
:Top tier apps (particularly from developers that track usage metrics) would be expected to enthusiastically adopt this API.&lt;br /&gt;
&lt;br /&gt;
==== Specification ====&lt;br /&gt;
:WebGL would continue to behave as currently specified (http://www.khronos.org/webgl/wiki/HandlingContextLost), but the wording of the WebGL specification could be modified to refer to the parent canvas specification and would only specify differences in behavior with respect to 2d canvas. &lt;br /&gt;
&lt;br /&gt;
==== Open Issues ====&lt;br /&gt;
:* Should there be a base CanvasRenderingContext interface that defines isContextLost and is inherited by both WebGLRenderingContext and CanvasRenderingContext2D?&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9336</id>
		<title>Canvas Context Loss and Restoration</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9336"/>
		<updated>2013-10-21T15:26:49Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:2D Canvas Rendering contexts are currently required to have persistent backing stores. This proposal aims to relax that requirement by introducing an API that allows canvases to be discarded by the browser and re-drawn by the web application on demand.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
:The 2d canvas backing store persistence requirement often leads to large amounts of RAM (or GPU memory) being consumed by canvas elements that are not actively used because they are off screen or in background tabs or occluded windows. Other types of RAM-greedy HTML elements can release resources in such cases.&lt;br /&gt;
&lt;br /&gt;
:The expectation of canvas content persistence also makes it very difficult--if not impossible--for many web apps to recover from a GPU context reset in web browsers that store 2D canvas contents in GPU memory.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
:In theory, web apps do have the capability of discarding canvas backing stores (set canvas size to zero) and regenerating canvas content. However, web apps are not and should not be expected to be responsible for resolving resource contention issues.  The browser is responsible for monitoring resource usage and availability and is expected to take all necessary and reasonable measures to avoid crashes, hangs, and catastrophic performance degradations that may be caused by resource contention.  Under the current specification, browsers have no options for evicting resources held by 2D canvases because there are no means of guaranteeing that the application will redraw the contents when needed.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
:Web apps can track events to detect when the page is no longer visible (http://www.w3.org/TR/page-visibility/) and deallocate backing stores at that time by setting the size of the canvas element to 0. Conversely, they can detect when the page is visible again and reinitialize at that time.&lt;br /&gt;
&lt;br /&gt;
:Web apps can track events that are often associated with GPU context losses (e.g. waking up from hibernation), and conservatively reinitialize the 2D canvas by resetting the context (set canvas width/height) and redrawing, just in case.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
:* Empower the browser to monitor resources to decide whether to drop canvas backing stores and in what order (LRU background tabs?) in order to achieve better performance and stability. If web apps must handle resource eviction themselves, they may often free resources when not necessary, which may lead to unnecessary tab switching lag.&lt;br /&gt;
:* Make recovery from GPU context losses more robust.&lt;br /&gt;
:* Allow GPU-accelerated 2D canvases on platforms that are known to drop graphics contexts often or unpredictably.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
:* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/CQJXpXxO6dk E-mail thread on Chromium graphics-dev mailing list]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;&amp;quot;Is there any reason why we don&#039;t add a similar optional callback to the 2D context? (in reference to WebGL context loss API)&amp;quot; --Rik Cabanier, Adobe&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
The following solutions were pondered in the discussion thread cited above:&lt;br /&gt;
:* Generalize the WebGL context lost / context recovered API, so that it applies to all types of canvases.&lt;br /&gt;
:* Add a redraw callback on the canvas element.&lt;br /&gt;
&lt;br /&gt;
=== Retained Solution: Upstream the context lost/recovered API from the WebGL specification into the parent canvas specification. ===&lt;br /&gt;
:General Concept:&lt;br /&gt;
:* a renderingContextLost event is fired after the context is lost.&lt;br /&gt;
:* a renderingContextRestored event is fired immediately after a previously lost canvas context is brought back to a usable state. The canvas context is returned to its initial state and the canvas&#039;s backing store is blank (transparent black) when restored.&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:Rendering context losses may be intended by the user agent (to resolve resource contention), or may be forced by external factors (e.g. a graphics driver reset).&lt;br /&gt;
&lt;br /&gt;
:For convenience, the lost state should be accessible. To do so, the isContextLost method that is defined in the WebGLRenderingContext API should also exist in the CanvasRenderingContext2D API.&lt;br /&gt;
&lt;br /&gt;
:Losing contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Intentional losses are only allowed if the contextRestored event is handled on the canvas element associated with the context.&lt;br /&gt;
:* The UA is free to use any set of rules to decide which contexts are dropped when and in what order.&lt;br /&gt;
:* The return value of isContextLost() may transition from false to true before the contextLost event is dispatched (like WebGL).&lt;br /&gt;
:* All objects that depend on the content of the canvas (e.g. patterns, imageBitmaps) are neutered when the context is lost. The neutering propagates through creation dependency chains, so a Pattern created from an ImageBitmap created from an ImageBitmap created from a canvas will be neutered if the canvas&#039;s context is lost.  This rule makes it safe for implementations to optimize their memory consumption by sharing pixel buffers between objects when possible.&lt;br /&gt;
&lt;br /&gt;
:Restoring contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Between the time a context is recovered and invoking the listener for the contextRestored event, no other user code can be executed.&lt;br /&gt;
:* The app is responsible for re-creating the objects that were neutered when the context was lost.&lt;br /&gt;
:* There can only be one contextRestored event pending at a time. When there are multiple canvases to be restored, the next canvas to be restored can only be restored after any pending contextRestored events--from previously restored canvases--have been handled.&lt;br /&gt;
:* Context restoration can only be initiated by the user agent (cannot be triggered by a script action).&lt;br /&gt;
:* A lost context can only be restored after its context lost event has been dispatched. This avoids synchronization inconsistencies with isContextLost().&lt;br /&gt;
:* The return value of isContextLost() transitions from true to false at the time the contextRestored event is dispatched (like WebGL). So the contextRestored event listener is always the first task to be executed after the transition.&lt;br /&gt;
&lt;br /&gt;
:Behavior when using a lost context (applies to 2d contexts, WebGL may behave differently):&lt;br /&gt;
:* All draw calls exit without drawing.&lt;br /&gt;
:* All rendering context API calls will throw the same exceptions as they would if called on a valid context.&lt;br /&gt;
:* All calls that read back canvas pixels from either the canvas element or the canvas rendering context (getImageData, toDataURL, toBlob, createPattern, createImageBitmap, drawImage with lost canvas as source) will behave as they would if the canvas context were valid and blank (all transparent black pixels).&lt;br /&gt;
:* Using a Pattern or an ImageBitmap that was neutered because the context of its source canvas was lost will behave as if the Pattern or ImageBitmap were valid and blank (all transparent black pixels).&lt;br /&gt;
&lt;br /&gt;
:Behaviors specific to GPU-accelerated implementations of 2d canvas (non-normative?):&lt;br /&gt;
:* If the context was lost due to a GPU-related failure, and the browser is actively restoring GPU functionality, and expects to restore in a timely manner, then context restoration should wait until the browser is ready to resume GPU functionality and the restored canvas should continue to be GPU-accelerated.  Conversely, if GPU functionality is permanently disabled or if it is unknown whether or how long it may take to resume GPU operation, then the canvas should be restored immediately without GPU acceleration.&lt;br /&gt;
:* If an accelerated canvas is resized while the GPU is temporarily unavailable, the creation of the new canvas buffer should be postponed until the GPU functionality is restored.&lt;br /&gt;
:* When getContext() is called while GPU functionality is temporarily unavailable:&lt;br /&gt;
:: a) If the canvas does not have an associated context, create a new unaccelerated context.&lt;br /&gt;
:: b) If the canvas already has an associated context, return the existing context even if it is in a lost state.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:Web browsers will not reap the stability and performance rewards associated with this API from Web apps that do not provide at least a handler for the context restored event. In order for this feature to improve the state of the web, apps will need to opt in to this new API, which unfortunately needs to remain optional for backwards compatibility reasons.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ====&lt;br /&gt;
:Browser vendors should be highly motivated to implement this API in order to improve platform resilience and performance, especially on mobile platforms where RAM contention is often a critical issue.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Web App developers should be motivated to adopt this API in order to improve the stability of their products:&lt;br /&gt;
:* Avoid triggering out-of-memory crashes in the browser.&lt;br /&gt;
:* Avoid browser performance issues associated with having a given app in a background tab.&lt;br /&gt;
:* Robustly recover from GPU failures.&lt;br /&gt;
&lt;br /&gt;
:Top tier apps (particularly from developers that track usage metrics) would be expected to enthusiastically adopt this API.&lt;br /&gt;
&lt;br /&gt;
==== Specification ====&lt;br /&gt;
:WebGL would continue to behave as currently specified (http://www.khronos.org/webgl/wiki/HandlingContextLost), but the wording of the WebGL specification could be modified to refer to the parent canvas specification and would only specify differences in behavior with respect to 2d canvas. &lt;br /&gt;
&lt;br /&gt;
==== Open Issues ====&lt;br /&gt;
:* Should there be a base CanvasRenderingContext interface that defines isContextLost and is inherited by both WebGLRenderingContext and CanvasRenderingContext2D?&lt;br /&gt;
:* Should there be loseContext/restoreContext methods? They could be very useful for testing purposes, or for app-initiated resource management.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9335</id>
		<title>Canvas Context Loss and Restoration</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9335"/>
		<updated>2013-10-21T15:23:20Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:2D Canvas Rendering contexts are currently required to have persistent backing stores. This proposal aims to relax that requirement by introducing an API that allows canvases to be discarded by the browser and re-drawn by the web application on demand.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
:The 2d canvas backing store persistence requirement often leads to large amounts of RAM (or GPU memory) being consumed by canvas elements that are not actively used because they are off screen or in background tabs or occluded windows. Other types of RAM-greedy HTML elements can release resources in such cases.&lt;br /&gt;
&lt;br /&gt;
:The expectation of canvas content persistence also makes it very difficult--if not impossible--for many web apps to recover from a GPU context reset in web browsers that store 2D canvas contents in GPU memory.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
:In theory, web apps do have the capability of discarding canvas backing stores (set canvas size to zero) and regenerating canvas content. However, web apps are not and should not be expected to be responsible for resolving resource contention issues.  The browser is responsible for monitoring resource usage and availability and is expected to take all necessary and reasonable measures to avoid crashes, hangs, and catastrophic performance degradations that may be caused by resource contention.  Under the current specification, browsers have no options for evicting resources held by 2D canvases because there are no means of guaranteeing that the application will redraw the contents when needed.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
:Web apps can track events to detect when the page is no longer visible (http://www.w3.org/TR/page-visibility/) and deallocate backing stores at that time by setting the size of the canvas element to 0. Conversely, they can detect when the page is visible again and reinitialize at that time.&lt;br /&gt;
&lt;br /&gt;
:Web apps can track events that are often associated with GPU context losses (e.g. waking up from hibernation), and conservatively reinitialize the 2D canvas by resetting the context (set canvas width/height) and redrawing, just in case.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
:* Empower the browser to monitor resources to decide whether to drop canvas backing stores and in what order (LRU background tabs?) in order to achieve better performance and stability. If web apps must handle resource eviction themselves, they may often free resources when not necessary, which may lead to unnecessary tab switching lag.&lt;br /&gt;
:* Make recovery from GPU context losses more robust.&lt;br /&gt;
:* Allow GPU-accelerated 2D canvases on platforms that are known to drop graphics contexts often or unpredictably.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
:* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/CQJXpXxO6dk E-mail thread on Chromium graphics-dev mailing list]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;&amp;quot;Is there any reason why we don&#039;t add a similar optional callback to the 2D context? (in reference to WebGL context loss API)&amp;quot; --Rik Cabanier, Adobe&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
The following solutions were pondered in the discussion thread cited above:&lt;br /&gt;
:* Generalize the WebGL context lost / context recovered API, so that it applies to all types of canvases.&lt;br /&gt;
:* Add a redraw callback on the canvas element&lt;br /&gt;
&lt;br /&gt;
=== Retained Solution: Upstream the context lost/recovered API from the WebGL specification into the parent canvas specification. ===&lt;br /&gt;
&lt;br /&gt;
:General Concept:&lt;br /&gt;
:* a renderingContextLost event is fired after the context is lost.&lt;br /&gt;
:* a renderingContextRestored event is fired immediately after a previously lost canvas context is brought back to a usable state. The canvas context is returned to its initial state and the canvas&#039;s backing store is blank (transparent black) when restored.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:Rendering context losses may be intended by the user agent (to resolve resource contention), or may be forced by external factors (e.g. a graphics driver reset).&lt;br /&gt;
&lt;br /&gt;
:For convenience, the lost state should be accessible. To do so, the isContextLost method that is defined in the WebGLRenderingContext API should also exist in the CanvasRenderingContext2D API.&lt;br /&gt;
&lt;br /&gt;
:Losing contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Intentional losses are only allowed if the contextRestored event is handled on the canvas element associated with the context.&lt;br /&gt;
:* The UA is free to use any set of rules to decide which contexts are dropped when and in what order.&lt;br /&gt;
:* The return value of isContextLost() may transition from false to true before the contextLost event is dispatched (like WebGL).&lt;br /&gt;
:* All objects that depend on the content of the canvas (e.g. patterns, imageBitmaps) are neutered when the context is lost. The neutering propagates through creation dependency chains, so a Pattern created from an ImageBitmap created from an ImageBitmap created from a canvas will be neutered if the canvas&#039;s context is lost.  This rule makes it safe for implementations to optimize their memory consumption by sharing pixel buffers between objects when possible.&lt;br /&gt;
&lt;br /&gt;
:Restoring contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Between the time a context is restored and the time the contextRestored event listener is invoked, no other user code can be executed.&lt;br /&gt;
:* The app is responsible for re-creating the objects that were neutered when the context was lost.&lt;br /&gt;
:* There can only be one contextRestored event pending at a time. When there are multiple canvases to be restored, the next canvas to be restored can only be restored after any pending contextRestored events--from previously restored canvases--have been handled.&lt;br /&gt;
:* Context restoration can only be initiated by the user agent (it cannot be triggered by a script action).&lt;br /&gt;
:* A lost context can only be restored after its context lost event has been dispatched. This avoids synchronization inconsistencies with isContextLost().&lt;br /&gt;
:* The return value of isContextLost() transitions from true to false at the time the contextRestored event is dispatched (like WebGL). So the contextRestored event listener is always the first task to be executed after the transition.&lt;br /&gt;
&lt;br /&gt;
:Behavior when using a lost context (applies to 2d contexts, WebGL may behave differently):&lt;br /&gt;
:* All draw calls exit without drawing.&lt;br /&gt;
:* All rendering context API calls will throw the same exceptions as they would if called on a valid context.&lt;br /&gt;
:* All calls that read back canvas pixels from either the canvas element or the canvas rendering context (getImageData, toDataURL, toBlob, createPattern, createImageBitmap, drawImage with a lost canvas as source) will behave as they would if the canvas context were valid and blank (all transparent black pixels).&lt;br /&gt;
:* Using a Pattern or an ImageBitmap that was neutered because the context of its source canvas was lost will behave as if the Pattern or ImageBitmap were valid and blank (all transparent black pixels).&lt;br /&gt;
&lt;br /&gt;
:Behaviors specific to GPU-accelerated implementations of 2d canvas (non-normative?)&lt;br /&gt;
:* If the context was lost due to a GPU-related failure, and the browser is actively restoring GPU functionality, and expects to restore in a timely manner, then context restoration should wait until the browser is ready to resume GPU functionality and the restored canvas should continue to be GPU-accelerated.  Conversely, if GPU functionality is permanently disabled or if it is unknown whether or how long it may take to resume GPU operation, then the canvas should be restored immediately without GPU acceleration.&lt;br /&gt;
:* If an accelerated canvas is resized while the GPU is temporarily unavailable, the creation of the new canvas buffer should be postponed until the GPU functionality is restored.&lt;br /&gt;
:* When getContext() is called while GPU functionality is temporarily unavailable:&lt;br /&gt;
:: a) If the canvas does not have an associated context, create a new unaccelerated context.&lt;br /&gt;
:: b) If the canvas already has an associated context, return the existing context even if it is in a lost state.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:Web browsers will not reap the stability and performance rewards associated with this API for Web apps that do not provide at least a handler for the context restored event. In order for this feature to improve the state of the web, apps will need to opt in to this new API, which unfortunately needs to be optional for backwards compatibility reasons.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ====&lt;br /&gt;
:Browser vendors should be highly motivated to implement this API in order to improve platform resilience and performance, especially on mobile platforms where RAM contention is often a critical issue.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Web App developers should be motivated to adopt this API in order to improve the stability of their products:&lt;br /&gt;
:* Avoid triggering out-of-memory crashes in the browser.&lt;br /&gt;
:* Avoid browser performance issues associated with having a given app in a background tab.&lt;br /&gt;
:* Robustly recover from GPU failures.&lt;br /&gt;
&lt;br /&gt;
:Top tier apps (particularly from developers that track usage metrics) would be expected to enthusiastically adopt this API.&lt;br /&gt;
&lt;br /&gt;
==== Specification ====&lt;br /&gt;
: WebGL would continue to behave as currently specified (http://www.khronos.org/webgl/wiki/HandlingContextLost), but the wording of the WebGL specification could be modified to refer to the parent canvas specification and would only specify differences in behavior with respect to 2d canvas. &lt;br /&gt;
&lt;br /&gt;
==== Open Issues ====&lt;br /&gt;
:* Should there be a base CanvasRenderingContext interface that defines isContextLost and is inherited by both WebGLRenderingContext and CanvasRenderingContext2D?&lt;br /&gt;
:* Should there be loseContext/restoreContext methods? These could be very useful for testing purposes, or for app-initiated resource management.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9334</id>
		<title>Canvas Context Loss and Restoration</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9334"/>
		<updated>2013-10-18T18:18:39Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:2D Canvas Rendering contexts are currently required to have persistent backing stores. This proposal aims to relax that requirement by introducing an API that allows canvases to be discarded by the browser and re-drawn by the web application on demand.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
:The 2d canvas backing store persistence requirement often leads to large amounts of RAM (or GPU memory) being consumed by canvas elements that are not actively used because they are off screen or in background tabs or occluded windows. Other types of RAM-greedy HTML elements can release resources in such cases.&lt;br /&gt;
&lt;br /&gt;
:The expectation of canvas content persistence also makes it very difficult--if not impossible--for many web apps to recover from a GPU context reset in web browsers that store 2D canvas contents in GPU memory.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
:In theory, web apps do have the capability of discarding canvas backing stores (set canvas size to zero) and regenerating canvas content. However, web apps are not and should not be expected to be responsible for resolving resource contention issues.  The browser is responsible for monitoring resource usage and availability and is expected to take all necessary and reasonable measures to avoid crashes, hangs, and catastrophic performance degradations that may be caused by resource contention.  Under the current specification, browsers have no options for evicting resources held by 2D canvases because there are no means of guaranteeing that the application will redraw the contents when needed.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
:Web apps can track events to detect when the page is no longer visible (http://www.w3.org/TR/page-visibility/) and deallocate backing stores at that time by setting the size of the canvas element to 0. Conversely, they can detect when the page is visible again and reinitialize at that time.&lt;br /&gt;
&lt;br /&gt;
:Web apps can track events that are often associated with GPU context losses (e.g. waking up from hibernation), and conservatively reinitialize the 2D canvas by resetting the context (set canvas width/height) and redrawing, just in case.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
:* Empower the browser to monitor resources to decide whether to drop canvas backing stores and in what order (LRU background tabs?) in order to achieve better performance and stability. If web apps must handle resource eviction themselves, they may often free resources when not necessary, which may lead to unnecessary tab switching lag.&lt;br /&gt;
:* Make recovery from GPU context losses more robust.&lt;br /&gt;
:* Allow GPU-accelerated 2D canvases on platforms that are known to drop graphics contexts often or unpredictably.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
:* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/CQJXpXxO6dk E-mail thread on Chromium graphics-dev mailing list]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;&amp;quot;Is there any reason why we don&#039;t add a similar optional callback to the 2D context? (in reference to WebGL context loss API)&amp;quot; --Rik Cabanier, Adobe&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
The following solutions were pondered in the discussion thread cited above:&lt;br /&gt;
:* Generalize the WebGL context lost / context recovered API, so that it applies to all types of canvases.&lt;br /&gt;
:* Add a redraw callback on the canvas element&lt;br /&gt;
&lt;br /&gt;
=== Retained Solution: Upstream the context lost/recovered API from the WebGL specification into the parent canvas specification. ===&lt;br /&gt;
&lt;br /&gt;
:General Concept:&lt;br /&gt;
:* a renderingContextLost event is fired after the context is lost.&lt;br /&gt;
:* a renderingContextRestored event is fired immediately after a previously lost canvas context is brought back to a usable state. The canvas context is returned to its initial state and the canvas&#039;s backing store is blank (transparent black) when restored.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:Rendering context losses may be intended by the user agent (to resolve resource contention), or may be forced by external factors (e.g. a graphics driver reset).&lt;br /&gt;
&lt;br /&gt;
:For convenience, the lost state should be accessible. To do so, the isContextLost method that is defined in the WebGLRenderingContext API should also exist in the CanvasRenderingContext2D API.&lt;br /&gt;
&lt;br /&gt;
:Losing contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Intentional losses are only allowed if the contextRestored event is handled on the canvas element associated with the context.&lt;br /&gt;
:* The UA is free to use any set of rules to decide which contexts are dropped when and in what order.&lt;br /&gt;
:* The return value of isContextLost() may transition from false to true before the contextLost event is dispatched (like WebGL).&lt;br /&gt;
:* All objects that depend on the content of the canvas (e.g. patterns, imageBitmaps) are neutered when the context is lost. The neutering propagates through creation dependency chains, so a Pattern created from an ImageBitmap created from an ImageBitmap created from a canvas will be neutered if the canvas&#039;s context is lost.  This rule makes it safe for implementations to optimize their memory consumption by sharing pixel buffers between objects when possible.&lt;br /&gt;
&lt;br /&gt;
:Restoring contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Between the time a context is restored and the time the contextRestored event listener is invoked, no other user code can be executed.&lt;br /&gt;
:* The app is responsible for re-creating the objects that were neutered when the context was lost.&lt;br /&gt;
:* There can only be one contextRestored event pending at a time. When there are multiple canvases to be restored, the next canvas to be restored can only be restored after any pending contextRestored events--from previously restored canvases--have been handled.&lt;br /&gt;
:* Context restoration can only be initiated by the user agent (it cannot be triggered by a script action).&lt;br /&gt;
:* A lost context can only be restored after its context lost event has been dispatched. This avoids synchronization inconsistencies with isContextLost().&lt;br /&gt;
:* The return value of isContextLost() transitions from true to false at the time the contextRestored event is dispatched (like WebGL). So the contextRestored event listener is always the first task to be executed after the transition.&lt;br /&gt;
&lt;br /&gt;
:Behavior when using a lost context (applies to 2d contexts, WebGL may behave differently):&lt;br /&gt;
:* All draw calls exit without drawing.&lt;br /&gt;
:* All rendering context API calls will throw the same exceptions as they would if called on a valid context.&lt;br /&gt;
:* All calls that read back canvas pixels from either the canvas element or the canvas rendering context (getImageData, toDataURL, toBlob, createPattern, createImageBitmap, drawImage with a lost canvas as source) will behave as they would if the canvas context were valid and blank (all transparent black pixels).&lt;br /&gt;
:* Using a Pattern or an ImageBitmap that was neutered because the context of its source canvas was lost will behave as if the Pattern or ImageBitmap were valid and blank (all transparent black pixels).&lt;br /&gt;
&lt;br /&gt;
:Behaviors specific to GPU-accelerated implementations of 2d canvas (non-normative?)&lt;br /&gt;
:* If the context was lost due to a GPU-related failure, and the browser is actively restoring GPU functionality, and expects to restore in a timely manner, then context restoration should wait until the browser is ready to resume GPU functionality and the restored canvas should continue to be GPU-accelerated.  Conversely, if GPU functionality is permanently disabled or if it is unknown whether or how long it may take to resume GPU operation, then the canvas should be restored immediately without GPU acceleration.&lt;br /&gt;
:* If an accelerated canvas is resized while the GPU is temporarily unavailable, the creation of the new canvas buffer should be postponed until the GPU functionality is restored.&lt;br /&gt;
:* When getContext() is called while GPU functionality is temporarily unavailable:&lt;br /&gt;
:: a) If the canvas does not have an associated context, create a new unaccelerated context.&lt;br /&gt;
:: b) If the canvas already has an associated context, return the existing context even if it is in a lost state.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:Web browsers will not reap the stability and performance rewards associated with this API for Web apps that do not provide at least a handler for the context restored event. In order for this feature to improve the state of the web, apps will need to opt in to this new API, which unfortunately needs to be optional for backwards compatibility reasons.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ====&lt;br /&gt;
:Browser vendors should be highly motivated to implement this API in order to improve platform resilience and performance, especially on mobile platforms where RAM contention is often a critical issue.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Web App developers should be motivated to adopt this API in order to improve the stability of their products:&lt;br /&gt;
:* Avoid triggering out-of-memory crashes in the browser.&lt;br /&gt;
:* Avoid browser performance issues associated with having a given app in a background tab.&lt;br /&gt;
:* Robustly recover from GPU failures.&lt;br /&gt;
&lt;br /&gt;
:Top tier apps (particularly from developers that track usage metrics) would be expected to enthusiastically adopt this API.&lt;br /&gt;
&lt;br /&gt;
==== Specification ====&lt;br /&gt;
: WebGL would continue to behave as currently specified (http://www.khronos.org/webgl/wiki/HandlingContextLost), but the wording of the WebGL specification could be modified to refer to the parent canvas specification and would only specify differences in behavior with respect to 2d canvas. &lt;br /&gt;
&lt;br /&gt;
==== Open Issues ====&lt;br /&gt;
:* Should there be a base CanvasRenderingContext interface that defines isContextLost and is inherited by both WebGLRenderingContext and CanvasRenderingContext2D?&lt;br /&gt;
:* Should there be loseContext/restoreContext methods? These could be very useful for testing purposes, or for app-initiated resource management.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9333</id>
		<title>Canvas Context Loss and Restoration</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9333"/>
		<updated>2013-10-18T17:59:51Z</updated>

		<summary type="html">&lt;p&gt;Junov: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:2D Canvas Rendering contexts are currently required to have persistent backing stores. This proposal aims to relax that requirement by introducing an API that allows canvases to be discarded by the browser and re-drawn by the web application on demand.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
:The 2d canvas backing store persistence requirement often leads to large amounts of RAM (or GPU memory) being consumed by canvas elements that are not actively used because they are off screen or in background tabs or occluded windows. Other types of RAM-greedy HTML elements can release resources in such cases.&lt;br /&gt;
&lt;br /&gt;
:The expectation of canvas content persistence also makes it very difficult--if not impossible--for many web apps to recover from a GPU context reset in web browsers that store 2D canvas contents in GPU memory.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
:In theory, web apps do have the capability of discarding canvas backing stores (set canvas size to zero) and regenerating the canvas content. However, web apps are not and should not be expected to be responsible for resolving resource contention issues.  The browser is responsible for monitoring resource usage and availability and is expected to take all necessary and reasonable measures to avoid crashes, hangs, and catastrophic performance degradations that may be caused by resource contention.  Under the current specification, browsers have no options for evicting resources held by 2D canvases because there are no means of guaranteeing that the application will redraw the contents when needed.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
:Web apps can track events to detect when the page is no longer visible (http://www.w3.org/TR/page-visibility/) and deallocate backing stores at that time by setting the size of the canvas element to 0. Conversely, they can detect when the page is visible again and reinitialize at that time.&lt;br /&gt;
&lt;br /&gt;
:Web apps can track events that are often associated with GPU context losses (e.g. waking up from hibernation), and conservatively reinitialize the 2D canvas by resetting the context (set canvas width/height) and redrawing, just in case.&lt;br /&gt;
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
:* Empower the browser to monitor resources to decide whether to drop canvas backing stores and in what order (LRU background tabs?) in order to achieve better performance and stability. If web apps must handle resource eviction themselves, they may often free resources when not necessary, which may lead to unnecessary tab switching lag.&lt;br /&gt;
:* Make recovery from GPU context losses more robust.&lt;br /&gt;
:* Allow GPU-accelerated 2D canvases on platforms that are known to drop graphics contexts often or unpredictably.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
:* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/CQJXpXxO6dk E-mail thread on Chromium graphics-dev mailing list]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;&amp;quot;Is there any reason why we don&#039;t add a similar optional callback to the 2D context? (in reference to WebGL context loss API)&amp;quot; --Rik Cabanier, Adobe&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
The following solutions were pondered in the discussion thread cited above:&lt;br /&gt;
:* Generalize the WebGL context lost / context recovered API, so that it applies to all types of canvases.&lt;br /&gt;
:* Add a redraw callback on the canvas element&lt;br /&gt;
&lt;br /&gt;
=== Retained Solution: Upstream the context lost/recovered API from the WebGL specification into the parent canvas specification. ===&lt;br /&gt;
&lt;br /&gt;
:General Concept:&lt;br /&gt;
:* a renderingContextLost event is fired after the context is lost.&lt;br /&gt;
:* a renderingContextRestored event is fired immediately after a previously lost canvas context is brought back to a usable state. The canvas context is returned to its initial state and the canvas&#039;s backing store is blank (transparent black) when restored.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:Rendering context losses may be intended by the user agent (to resolve resource contention), or may be forced by external factors (e.g. a graphics driver reset).&lt;br /&gt;
&lt;br /&gt;
:For convenience, the lost state should be accessible. To do so, the isContextLost method that is defined in the WebGLRenderingContext API should also exist in the CanvasRenderingContext2D API.&lt;br /&gt;
&lt;br /&gt;
:Losing contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Intentional losses are only allowed if the contextRestored event is handled on the canvas element associated with the context.&lt;br /&gt;
:* The UA is free to use any set of rules to decide which contexts are dropped when and in what order.&lt;br /&gt;
:* The return value of isContextLost() may transition from false to true before the contextLost event is dispatched (like WebGL).&lt;br /&gt;
:* All objects that depend on the content of the canvas (e.g. patterns, imageBitmaps) are neutered when the context is lost. The neutering propagates through creation dependency chains, so a Pattern created from an ImageBitmap created from an ImageBitmap created from a canvas will be neutered if the canvas&#039;s context is lost.  This rule makes it safe for implementations to optimize their memory consumption by sharing pixel buffers between objects when possible.&lt;br /&gt;
&lt;br /&gt;
:Restoring contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Between the time a context is restored and the time the contextRestored event listener is invoked, no other user code can be executed.&lt;br /&gt;
:* The app is responsible for re-creating the objects that were neutered when the context was lost.&lt;br /&gt;
:* There can only be one contextRestored event pending at a time. When there are multiple canvases to be restored, the next canvas to be restored can only be restored after any pending contextRestored events--from previously restored canvases--have been handled.&lt;br /&gt;
:* Context restoration can only be initiated by the user agent (it cannot be triggered by a script action).&lt;br /&gt;
:* A lost context can only be restored after its context lost event has been dispatched. This avoids synchronization inconsistencies with isContextLost().&lt;br /&gt;
:* The return value of isContextLost() transitions from true to false at the time the contextRestored event is dispatched (like WebGL). So the contextRestored event listener is always the first task to be executed after the transition.&lt;br /&gt;
&lt;br /&gt;
:Behavior when using a lost context (applies to 2d contexts, WebGL may behave differently):&lt;br /&gt;
:* All draw calls exit without drawing.&lt;br /&gt;
:* All rendering context API calls will throw the same exceptions as they would if called on a valid context.&lt;br /&gt;
:* All calls that read back canvas pixels from either the canvas element or the canvas rendering context (getImageData, toDataURL, toBlob, createPattern, createImageBitmap, drawImage with a lost canvas as source) will behave as they would if the canvas context were valid and blank (all transparent black pixels).&lt;br /&gt;
:* Using a Pattern or an ImageBitmap that was neutered because the context of its source canvas was lost will behave as if the Pattern or ImageBitmap were valid and blank (all transparent black pixels).&lt;br /&gt;
&lt;br /&gt;
:Behaviors specific to GPU-accelerated implementations of 2d canvas (non-normative?)&lt;br /&gt;
:* If the context was lost due to a GPU-related failure, and the browser is actively restoring GPU functionality, and expects to restore in a timely manner, then context restoration should wait until the browser is ready to resume GPU functionality and the restored canvas should continue to be GPU-accelerated.  Conversely, if GPU functionality is permanently disabled or if it is unknown whether or how long it may take to resume GPU operation, then the canvas should be restored immediately without GPU acceleration.&lt;br /&gt;
:* If an accelerated canvas is resized while the GPU is temporarily unavailable, the creation of the new canvas buffer should be postponed until the GPU functionality is restored.&lt;br /&gt;
:* When getContext() is called while GPU functionality is temporarily unavailable:&lt;br /&gt;
:: a) If the canvas does not have an associated context, create a new unaccelerated context.&lt;br /&gt;
:: b) If the canvas already has an associated context, return the existing context even if it is in a lost state.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:Web browsers will not reap the stability and performance rewards associated with this API for Web apps that do not provide at least a handler for the context restored event. In order for this feature to improve the state of the web, apps will need to opt in to this new API, which unfortunately needs to be optional for backwards compatibility reasons.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ====&lt;br /&gt;
:Browser vendors should be highly motivated to implement this API in order to improve platform resilience and performance, especially on mobile platforms where RAM contention is often a critical issue.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Web App developers should be motivated to adopt this API in order to improve the stability of their products:&lt;br /&gt;
:* Avoid triggering out-of-memory crashes in the browser.&lt;br /&gt;
:* Avoid browser performance issues associated with having a given app in a background tab.&lt;br /&gt;
:* Robustly recover from GPU failures.&lt;br /&gt;
&lt;br /&gt;
Top tier apps (particularly from developers that track usage metrics) would be expected to enthusiastically adopt this API.&lt;br /&gt;
&lt;br /&gt;
==== Specification ====&lt;br /&gt;
: WebGL would continue to behave as currently specified (http://www.khronos.org/webgl/wiki/HandlingContextLost), but the wording of the WebGL specification could be modified to refer to the parent canvas specification and would only specify differences in behavior with respect to 2d canvas. &lt;br /&gt;
&lt;br /&gt;
==== Open Issues ====&lt;br /&gt;
:* Should there be a base CanvasRenderingContext interface that defines isContextLost and is inherited by both WebGLRenderingContext and CanvasRenderingContext2D?&lt;br /&gt;
:* Should there be loseContext/restoreContext methods? These could be very useful for testing purposes, or for app-initiated resource management.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
	<entry>
		<id>https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9332</id>
		<title>Canvas Context Loss and Restoration</title>
		<link rel="alternate" type="text/html" href="https://wiki.whatwg.org/index.php?title=Canvas_Context_Loss_and_Restoration&amp;diff=9332"/>
		<updated>2013-10-18T17:57:25Z</updated>

		<summary type="html">&lt;p&gt;Junov: Created page with &amp;quot;:2D Canvas Rendering contexts are currently required to have persistent backing stores. This proposal aims to relax that requirement by introducing an API to allows canvases t...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;:2D Canvas Rendering contexts are currently required to have persistent backing stores. This proposal aims to relax that requirement by introducing an API that allows canvases to be discarded by the browser and re-drawn by the web application on demand.&lt;br /&gt;
&lt;br /&gt;
== Use Case Description ==&lt;br /&gt;
:The 2d canvas backing store persistence requirement often leads to large amounts of RAM (or GPU memory) being consumed by canvas elements that are not actively used because they are off screen or in background tabs or occluded windows. Other types of RAM-greedy HTML elements can release resources in such cases.&lt;br /&gt;
&lt;br /&gt;
:The expectation of canvas content persistence also makes it very difficult--if not impossible--for many web apps to recover from a GPU context reset in web browsers that store 2D canvas contents in GPU memory.&lt;br /&gt;
&lt;br /&gt;
=== Current Limitations ===&lt;br /&gt;
:In theory, web apps do have the capability of discarding canvas backing stores (by setting the canvas size to zero) and regenerating the canvas content. However, web apps are not, and should not be expected to be, responsible for resolving resource contention issues.  The browser is responsible for monitoring resource usage and availability and is expected to take all necessary and reasonable measures to avoid crashes, hangs, and catastrophic performance degradations that may be caused by resource contention.  Under the current specification, browsers have no options for evicting resources held by 2D canvases because there are no means of guaranteeing that the application will redraw the contents when needed.&lt;br /&gt;
&lt;br /&gt;
=== Current Usage and Workarounds ===&lt;br /&gt;
:Web apps can track events to detect when the page is no longer visible (http://www.w3.org/TR/page-visibility/) and deallocate backing stores at that time by setting the size of the canvas element to 0. Conversely, they can detect when the page is visible again and reinitialize at that time.&lt;br /&gt;
&lt;br /&gt;
:Web apps can track events that are often associated with GPU context losses (e.g. waking-up from hibernation), and conservatively reinitialize the 2D canvas by resetting the context (set canvas width/height) and redrawing, just in case.&lt;br /&gt;
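&lt;br /&gt;
:The visibility-based workaround above can be sketched as follows. This is a minimal illustration, not part of the proposal; a tiny stand-in for the document object is included so the sketch runs outside a browser (a real page would use the Page Visibility API and a real canvas element directly).&lt;br /&gt;

```javascript
// Stand-in for the parts of the DOM this sketch touches; in a real page,
// use the global document and an actual canvas element instead.
var document = {
  hidden: false,
  listeners: {},
  addEventListener: function (type, fn) { this.listeners[type] = fn; },
  dispatch: function (type) { this.listeners[type](); }
};
var canvas = { width: 300, height: 150 };

function redrawScene() {
  // Repaint from application state once the backing store exists again.
}

document.addEventListener('visibilitychange', function () {
  if (document.hidden) {
    // Setting the size to 0 lets the browser drop the backing store.
    canvas.width = 0;
    canvas.height = 0;
  } else {
    // Restore the size (which resets the context) and repaint.
    canvas.width = 300;
    canvas.height = 150;
    redrawScene();
  }
});

document.hidden = true;
document.dispatch('visibilitychange');
var releasedWidth = canvas.width;       // 0 while the page is hidden
document.hidden = false;
document.dispatch('visibilitychange');  // canvas reinitialized and redrawn
```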
&lt;br /&gt;
=== Benefits ===&lt;br /&gt;
&lt;br /&gt;
:* Empower the browser, which monitors resources, to decide whether to drop canvas backing stores and in what order (LRU background tabs?) in order to achieve better performance and stability. If web apps must handle resource eviction themselves, they may often free resources when not necessary, which may lead to unnecessary tab switching lag.&lt;br /&gt;
:* Make recovery from GPU context losses more robust.&lt;br /&gt;
:* Allow GPU-accelerated 2D canvases on platforms that are known to drop graphics contexts often or unpredictably.&lt;br /&gt;
&lt;br /&gt;
=== Requests for this Feature ===&lt;br /&gt;
&lt;br /&gt;
:* &amp;lt;cite&amp;gt;[https://groups.google.com/a/chromium.org/forum/#!topic/graphics-dev/CQJXpXxO6dk E-mail thread on Chromium graphics-dev mailing list]&amp;lt;/cite&amp;gt; &amp;lt;blockquote&amp;gt;&amp;lt;p&amp;gt;&amp;quot;Is there any reason why we don&#039;t add a similar optional callback to the 2D context? (in reference to WebGL context loss API)&amp;quot; --Rik Cabanier, Adobe&amp;lt;/p&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions ==&lt;br /&gt;
&lt;br /&gt;
The following solutions were pondered in the discussion thread cited above:&lt;br /&gt;
:* Generalize the WebGL context lost / context recovered API, so that it applies to all types of canvases.&lt;br /&gt;
:* Add a redraw callback on the canvas element&lt;br /&gt;
&lt;br /&gt;
=== Retained Solution : Upstream the context lost/recovered API from the WebGL specification into the parent canvas specification. ===&lt;br /&gt;
&lt;br /&gt;
:General Concept:&lt;br /&gt;
:* a renderingContextLost event is fired after the context is lost.&lt;br /&gt;
:* a renderingContextRestored event is fired immediately after a previously lost canvas context is brought back to a usable state. The canvas context is returned to its initial state and the canvas&#039;s backing store is blank (transparent black) when restored.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Processing Model ====&lt;br /&gt;
:Rendering context losses may be intended by the user agent (to resolve resource contention), or may be forced by external factors (e.g. a graphics driver reset).&lt;br /&gt;
&lt;br /&gt;
:For convenience, the lost state should be accessible. To do so, the isContextLost method that is defined in the WebGLRenderingContext API should also exist in the CanvasRenderingContext2D API.&lt;br /&gt;
&lt;br /&gt;
:Losing contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Intentional losses are only allowed if the contextRestored event is handled on the canvas element associated with the context.&lt;br /&gt;
:* The UA is free to use any set of rules to decide which contexts are dropped when and in what order.&lt;br /&gt;
:* The return value of isContextLost() may transition from false to true before the contextLost event is dispatched (like WebGL).&lt;br /&gt;
:* All objects that depend on the content of the canvas (e.g. patterns, imageBitmaps) are neutered when the context is lost. The neutering propagates through creation dependency chains, so a Pattern created from an ImageBitmap created from an ImageBitmap created from a canvas will be neutered if the canvas&#039;s context is lost.  This rule makes it safe for implementations to optimize their memory consumption by sharing pixel buffers between objects when possible.&lt;br /&gt;
&lt;br /&gt;
:Restoring contexts (applies to 2d contexts, some aspects different for WebGL):&lt;br /&gt;
:* Between the time a context is restored and the invocation of the listener for the contextRestored event, no other user code can be executed.&lt;br /&gt;
:* The app is responsible for re-creating the objects that were neutered when the context was lost.&lt;br /&gt;
:* There can only be one contextRestored event pending at a time. When there are multiple canvases to be restored, the next canvas to be restored can only be restored after any pending contextRestored events--from previously restored canvases--have been handled.&lt;br /&gt;
:* Context restoration can only be initiated by the user agent (it cannot be triggered by a script action).&lt;br /&gt;
:* A lost context can only be restored after its context lost event has been dispatched. This avoids synchronization inconsistencies with isContextLost().&lt;br /&gt;
:* The return value of isContextLost() transitions from true to false at the time the contextRestored event is dispatched (like WebGL), so the contextRestored event listener is always the first task to be executed after the transition.&lt;br /&gt;
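&lt;br /&gt;
:The lifecycle rules above can be exercised with a sketch like the following. It is a hypothetical illustration of the proposal, not an existing API; the event type strings, isContextLost(), and the mock canvas standing in for the user agent are all placeholders so the sketch runs anywhere.&lt;br /&gt;

```javascript
// Hypothetical sketch of the lose/restore lifecycle described above.
// None of these names describe a shipping API.
function makeMockCanvas() {
  var listeners = {};
  var lost = false;
  var ctx = {
    isContextLost: function () { return lost; }
  };
  return {
    getContext: function () { return ctx; },
    addEventListener: function (type, fn) { listeners[type] = fn; },
    // Stand-ins for UA-initiated loss and restoration.
    simulateLoss: function () {
      lost = true;                 // may flip before the event is dispatched
      if (listeners.contextlost) { listeners.contextlost(); }
    },
    simulateRestore: function () {
      lost = false;                // flips exactly when contextrestored fires
      if (listeners.contextrestored) { listeners.contextrestored(); }
    }
  };
}

var canvas = makeMockCanvas();
var ctx = canvas.getContext('2d');
var repaints = 0;

canvas.addEventListener('contextlost', function () {
  // Pause animation loops; draw calls would be silently dropped anyway.
});
canvas.addEventListener('contextrestored', function () {
  // The backing store is blank: recreate neutered Patterns/ImageBitmaps
  // and repaint from application state.
  repaints = repaints + 1;
});

canvas.simulateLoss();
var lostDuring = ctx.isContextLost();    // true between loss and restoration
canvas.simulateRestore();
```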
&lt;br /&gt;
:Behavior when using a lost context (applies to 2d contexts, WebGL may behave differently):&lt;br /&gt;
:* All draw calls exit without drawing.&lt;br /&gt;
:* All rendering context API calls will throw the same exceptions as they would if called on a valid context.&lt;br /&gt;
:* All calls that read back canvas pixels from either the canvas element or the canvas rendering context (getImageData, toDataURL, toBlob, createPattern, createImageBitmap, drawImage with the lost canvas as source) will behave as they would if the canvas context were valid and blank (all transparent black pixels).&lt;br /&gt;
:* Using a Pattern or an ImageBitmap that was neutered because the context of its source canvas was lost will behave as if the Pattern or ImageBitmap were valid and blank (all transparent black pixels).&lt;br /&gt;
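&lt;br /&gt;
:Under these rules, code using a lost context keeps running without exceptions. The following sketch uses a hypothetical mock of a lost 2d context, since no implementation of this proposal exists; it only illustrates the behaviors listed above.&lt;br /&gt;

```javascript
// Hypothetical mock of a 2d context in the lost state, illustrating the
// behaviors listed above; no shipping API is being described here.
var lostCtx = {
  isContextLost: function () { return true; },
  fillRect: function () {
    // Draw calls exit without drawing and without throwing.
  },
  getImageData: function (x, y, w, h) {
    // Read-backs behave as if the canvas were valid and blank:
    // all transparent black pixels, i.e. every byte is zero.
    return { width: w, height: h, data: new Uint8Array(w * h * 4) };
  }
};

lostCtx.fillRect(0, 0, 10, 10);              // no-op, no exception
var px = lostCtx.getImageData(0, 0, 2, 2);
var allTransparentBlack = px.data.every(function (b) { return b === 0; });
```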
&lt;br /&gt;
:Behaviors specific to GPU-accelerated implementations of 2d canvas (non-normative?)&lt;br /&gt;
:* If the context was lost due to a GPU-related failure, and the browser is actively restoring GPU functionality and expects to do so in a timely manner, then context restoration should wait until the browser is ready to resume GPU functionality, and the restored canvas should continue to be GPU-accelerated. Conversely, if GPU functionality is permanently disabled, or if it is unknown whether or how long it may take to resume GPU operation, then the canvas should be restored immediately without GPU acceleration.&lt;br /&gt;
:* If an accelerated canvas is resized while the GPU is temporarily unavailable, the creation of the new canvas buffer should be postponed until the GPU functionality is restored.&lt;br /&gt;
:* When getContext() is called while GPU functionality is temporarily unavailable:&lt;br /&gt;
:: a) If the canvas does not have an associated context, create a new unaccelerated context.&lt;br /&gt;
:: b) If the canvas already has an associated context, return the existing context even if it is in a lost state.&lt;br /&gt;
&lt;br /&gt;
==== Limitations ==== &lt;br /&gt;
:Web browsers cannot reap the stability and performance rewards of this API for web apps that do not provide at least a handler for the contextRestored event. In order for this feature to improve the state of the web, apps will need to opt in to this new API, which unfortunately must remain optional for backwards-compatibility reasons.&lt;br /&gt;
&lt;br /&gt;
==== Implementation ====&lt;br /&gt;
:Browser vendors should be highly motivated to implement this API in order to improve platform resilience and performance, especially on mobile platforms where RAM contention is often a critical issue.&lt;br /&gt;
&lt;br /&gt;
==== Adoption ==== &lt;br /&gt;
:Web App developers should be motivated to adopt this API in order to improve the stability of their products:&lt;br /&gt;
:* Avoid triggering out-of-memory crashes in the browser.&lt;br /&gt;
:* Avoid browser performance issues associated with having a given app in a background tab.&lt;br /&gt;
:* Robustly recover from GPU failures.&lt;br /&gt;
&lt;br /&gt;
Top tier apps from developers that track usage metrics would be expected to enthusiastically adopt this API.&lt;br /&gt;
&lt;br /&gt;
==== Specification ====&lt;br /&gt;
: WebGL would continue to behave as currently specified (http://www.khronos.org/webgl/wiki/HandlingContextLost), but the wording of the WebGL specification could be modified to refer to the parent canvas specification and would only specify differences in behavior with respect to 2d canvas. &lt;br /&gt;
&lt;br /&gt;
==== Open Issues ====&lt;br /&gt;
:* Should there be a base CanvasRenderingContext interface that defines isContextLost and is inherited by both WebGLRenderingContext and CanvasRenderingContext2D?&lt;br /&gt;
:* Should there be loseContext/restoreContext methods? These could be very useful for testing purposes, or for app-initiated resource management.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Proposals]]&lt;/div&gt;</summary>
		<author><name>Junov</name></author>
	</entry>
</feed>