OffscreenCanvas

This document is obsolete.

For the current specification, see OffscreenCanvas in the HTML Standard: https://html.spec.whatwg.org/multipage/canvas.html#the-offscreencanvas-interface


Provides more control over how canvases are rendered. This is a follow-on to the WorkerCanvas proposal and will be merged once agreement is reached.

Use Case Description

Feedback from web application authors using canvases has shown the need for the following controls:

  • (From ShaderToy, Sketchfab, Verold): need to be able to render to multiple regions on the page efficiently using a single canvas context. 3D model warehouse sites desire to show multiple live interactive models on the page, but creating multiple WebGL contexts per page is too inefficient. A single context should be able to render to multiple regions on the page.
  • (From Google Maps): need to be able to render WebGL from a worker, transfer the rendered image to the main thread without making any copy of it, and composite it with other HTML on the page, guaranteeing that the updates are all seen in the same rendered frame.
  • (From Mozilla and partners using Emscripten and asm.js): need to be able to render WebGL entirely asynchronously from a worker, displaying the results in a canvas owned by the main thread, without any synchronization with the main thread. In this mode, the entire application runs in the worker. The main thread only receives input events and sends them to the worker for processing.
  • (From adopters of the Push API): need to be able to dynamically create images to use as notification icons, such as compositing avatars, or adding an unread count.
  • (From the Google Docs team): need to be able to lay out and render text from a worker using CanvasRenderingContext2D and display those results on the main thread.
  • (From the Google Slides team): want to lay out and render the slide thumbnails from a worker. During initial load and heavy collaboration these update frequently, and currently cause slowdowns on the main thread.

Current Limitations

  • CanvasProxy does not provide sufficient control to allow synchronization between workers' rendering and DOM updates on the main thread. Keeping this rendering in sync is a requirement from Google's Maps team.
  • CanvasInWorkers does not allow a worker to render directly into a canvas on the main thread without running code on the main thread. Allowing completely unsynchronized rendering is a requirement from Mozilla and users of Emscripten such as Epic Games and Unity, in which the desire is to execute all of the game's rendering on a worker thread.
  • WorkerCanvas mostly addresses these two use cases, but some implementers objected to the mechanism for displaying the rendering results in image elements. The specific objection was that image elements already have complex internal state (for example, the management of the image's "loaded" state), and this mechanism would have made that state even more complex. It also did not precisely address the use case of producing new frames both on the main thread and in workers.

Current Usage and Workarounds

WebGL in Web Workers details some work attempted in the Emscripten toolchain to address the lack of WebGL in workers. Due to the high volume of calls and large amount of data that is transferred to the graphics card in a typical high-end WebGL application, this approach is not sustainable. It's necessary for workers to be able to call the WebGL API directly, and present those results to the screen in a manner that does not introduce any copies of the rendering results.

Benefits

Making canvas rendering contexts available to workers will increase parallelism in web applications, leading to increased performance on multi-core systems.

Requests for this Feature

See the abovementioned use cases:

  • Google's Maps team
  • Emscripten users such as Epic Games and Unity
  • Many others

Web IDL

typedef (OffscreenCanvasRenderingContext2D or
         WebGLRenderingContext or
         WebGL2RenderingContext) OffscreenRenderingContext;

[Constructor(unsigned long width, unsigned long height),
 Exposed=(Window,Worker)]
interface OffscreenCanvas {
  attribute unsigned long width;
  attribute unsigned long height;
  OffscreenRenderingContext? getContext(DOMString contextId, any... arguments); 

  // OffscreenCanvas, like HTMLCanvasElement, maintains an origin-clean flag.
  // ImageBitmaps created by calling this method also have an
  // origin-clean flag which is set to the value of the OffscreenCanvas's
  // flag at the time of their construction. Uses of the ImageBitmap
  // in other APIs, such as CanvasRenderingContext2D or
  // WebGLRenderingContext, propagate this flag like other
  // CanvasImageSource types do, such as HTMLImageElement.
  ImageBitmap transferToImageBitmap();

  // Throws a SecurityError if the OffscreenCanvas's origin-clean flag
  // is set to false.
  Promise<Blob> convertToBlob(optional ImageEncodeOptions options);   
};

dictionary ImageEncodeOptions {
  DOMString type = "image/png";
  unrestricted double quality = 1.0; // Defaults to 1.0 if the value is outside the [0, 1] range
};

OffscreenCanvas implements Transferable;

partial interface HTMLCanvasElement {
  OffscreenCanvas transferControlToOffscreen();
};

typedef (HTMLOrSVGImageElement or
         HTMLVideoElement or
         HTMLCanvasElement or
         ImageBitmap or
         OffscreenCanvas) CanvasImageSource;

[Exposed=(Window,Worker)]
interface OffscreenCanvasRenderingContext2D {
  // commit() can only be used when an HTMLCanvasElement has transferred control
  // to the OffscreenCanvas; otherwise, an InvalidStateError will be thrown.
  // commit() can be invoked on the main thread or a worker thread. When it is
  // invoked, the image drawn to the OffscreenCanvasRenderingContext2D is expected
  // to be displayed in the associated HTMLCanvasElement.
  void commit();
  // back-reference to the canvas
  readonly attribute OffscreenCanvas canvas;
};
OffscreenCanvasRenderingContext2D implements CanvasState;
OffscreenCanvasRenderingContext2D implements CanvasTransform;
OffscreenCanvasRenderingContext2D implements CanvasCompositing;
OffscreenCanvasRenderingContext2D implements CanvasImageSmoothing;
OffscreenCanvasRenderingContext2D implements CanvasFillStrokeStyles;
OffscreenCanvasRenderingContext2D implements CanvasShadowStyles;
// Reference filters (e.g. 'url()') are not expected to work in Workers
OffscreenCanvasRenderingContext2D implements CanvasFilters;
OffscreenCanvasRenderingContext2D implements CanvasRect;
OffscreenCanvasRenderingContext2D implements CanvasDrawPath;
// Text support in workers poses very difficult technical challenges.
// Open issue: should we forgo text support in OffscreenCanvas v1?
OffscreenCanvasRenderingContext2D implements CanvasText;
OffscreenCanvasRenderingContext2D implements CanvasDrawImage;
OffscreenCanvasRenderingContext2D implements CanvasImageData;
OffscreenCanvasRenderingContext2D implements CanvasPathDrawingStyles;
OffscreenCanvasRenderingContext2D implements CanvasTextDrawingStyles;
OffscreenCanvasRenderingContext2D implements CanvasPath; 

[Exposed=(Window,Worker)]
partial interface CanvasPattern {
};

[Exposed=(Window,Worker)]
partial interface CanvasGradient {
};

partial interface WebGLRenderingContextBase {
  // back-reference to the canvas
  readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;

  // If this context is associated with an OffscreenCanvas that was
  // created by HTMLCanvasElement's transferControlToOffscreen method,
  // causes this context's current rendering results to be pushed
  // to that canvas element. This has the same effect as returning
  // control to the main loop in a single-threaded application. Otherwise,
  // an InvalidStateError will be thrown.
  void commit();
};

// The new ImageBitmapRenderingContext is a canvas rendering context
// which only provides the functionality to replace the canvas's
// contents with the given ImageBitmap. Its context id (the first argument
// to getContext) is "bitmaprenderer".
[Exposed=(Window,Worker)]
interface ImageBitmapRenderingContext {
  // Displays the given ImageBitmap in the canvas associated with this
  // rendering context. Ownership of the ImageBitmap is transferred to
  // the canvas. The caller may not use its reference to the ImageBitmap
  // after making this call. (This semantic is crucial to enable prompt
  // reclamation of expensive graphics resources, rather than relying on
  // garbage collection to do so.)
  //
  // The ImageBitmap conceptually replaces the canvas's bitmap, but
  // it does not change the canvas's intrinsic width or height.
  //
  // The ImageBitmap, when displayed, is clipped to the rectangle
  // defined by the canvas's intrinsic width and height. Pixels that
  // would be covered by the canvas's bitmap which are not covered by
  // the supplied ImageBitmap are rendered transparent black. Any CSS
  // styles affecting the display of the canvas are applied as usual.
  void transferImageBitmap(ImageBitmap bitmap);
};
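
The following non-normative sketch illustrates how convertToBlob and ImageEncodeOptions might be used from a worker to build a notification icon (the Push API use case above). The '2d' context id and the drawBadge() helper are assumptions for illustration only; they are not specified by this proposal.

// Sketch only: drawBadge() is a hypothetical application helper, and the '2d'
// context id for OffscreenCanvasRenderingContext2D is assumed.
async function makeIconBlob(unreadCount) {
  const canvas = new OffscreenCanvas(128, 128);
  const ctx = canvas.getContext('2d');
  ctx.fillStyle = 'white';
  ctx.fillRect(0, 0, 128, 128);
  drawBadge(ctx, unreadCount); // composite the avatar and unread count
  // Asynchronously encode the canvas's current contents.
  return canvas.convertToBlob({ type: 'image/png' });
}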

Proposed Solutions

This Solution

This proposed API can be used in several ways to satisfy the use cases described above:

  • It supports zero-copy transfer of canvases' rendering results between threads, for example from a worker to the main thread. In this model, the main thread controls when to display new frames produced by the worker, so synchronization with other DOM updates is achieved.
  • It supports fully asynchronous rendering by a worker into a canvas displayed on the main thread. This satisfies certain Emscripten developers' full-screen use cases.
  • It supports using a single WebGLRenderingContext or OffscreenCanvasRenderingContext2D to efficiently render into multiple regions on the web page (a sketch follows this list).
  • It introduces ImageBitmapRenderingContext, a new canvas context type whose sole purpose is to efficiently display ImageBitmaps. This supersedes the WorkerCanvas proposal's use of HTMLImageElement for this purpose.
  • It supports asynchronous encoding of OffscreenCanvases' rendering results into Blobs which can be consumed by various other web platform APIs.
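
As a rough, non-normative sketch of the "single context, many regions" pattern mentioned above, the page below renders several models with one WebGL context backed by an OffscreenCanvas and transfers each frame into a per-model on-page canvas. The 'webgl' context id, the canvas.model-view markup, and renderModel() are illustrative assumptions.

// Sketch only: renderModel() and the model-view markup are hypothetical.
const offscreen = new OffscreenCanvas(300, 300);
const gl = offscreen.getContext('webgl');
for (const view of document.querySelectorAll('canvas.model-view')) {
  renderModel(gl, view.dataset.modelId);            // draw one model into the shared context
  const frame = offscreen.transferToImageBitmap();  // tear off the rendered frame
  view.getContext('bitmaprenderer').transferImageBitmap(frame); // display without copying
}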

Processing Model

This proposal introduces two primary processing models. The first involves synchronous display of new frames produced by the OffscreenCanvas. The application generates new frames using the RenderingContext obtained from the OffscreenCanvas. When the application is finished rendering each new frame, it calls transferToImageBitmap to "tear off" the most recently rendered image from the OffscreenCanvas -- like a Post-It note. The resulting ImageBitmap can then be used in any API receiving that data type; notably, it can be displayed in a second canvas without introducing a copy. An ImageBitmapRenderingContext is obtained from the second canvas by calling getContext('bitmaprenderer'). Each frame is displayed in the second canvas using the transferImageBitmap method on this rendering context. Note that the threads producing and consuming the frames may be the same, or they may be different. Note also that a single OffscreenCanvas may transfer frames into an arbitrary number of other ImageBitmapRenderingContexts.
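
As a non-normative sketch of this synchronous model, the worker below produces frames and transfers each resulting ImageBitmap to the page, which decides when to display it. The element id, file names, render() routine, and 'webgl' context id are illustrative assumptions; the sketch also assumes ImageBitmap objects can be transferred via postMessage.

// worker.js -- sketch only: render() is a hypothetical application routine.
const offscreen = new OffscreenCanvas(640, 480);
const gl = offscreen.getContext('webgl');
function produceFrame() {  // called whenever the worker has finished drawing a frame
  render(gl);                                       // draw the next frame
  const frame = offscreen.transferToImageBitmap();  // tear off the rendered image
  postMessage({ frame }, [frame]);                  // zero-copy transfer to the page
}

// main.js -- the page displays each frame, keeping it in sync with other DOM updates.
const worker = new Worker('worker.js');
const displayCtx = document.getElementById('map').getContext('bitmaprenderer');
worker.onmessage = (e) => displayCtx.transferImageBitmap(e.data.frame);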

The second processing model involves asynchronous display of new frames produced by the OffscreenCanvas. The main thread instantiates an HTMLCanvasElement and calls transferControlToOffscreen. getContext is used to obtain a rendering context for that OffscreenCanvas, either on the main thread, or on a worker. The application calls commit against that rendering context in order to push frames to the original HTMLCanvasElement. In this rendering model, it is not defined when those frames become visible in the original canvas element. However, if both of the following conditions apply:

  • It is a worker thread which is calling commit(), and
  • The worker is calling commit() repeatedly against exactly one rendering context

then it is required that the user agent synchronize the calls to commit() to the vsync interval. Calls to commit() conceptually enqueue frames for display, and after an implementation-defined number of frames have been enqueued, further calls to commit() will block until earlier frames have been presented to the screen. (This requirement allows porting of applications which drive their own main loop rather than using an event-driven loop.)
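
A non-normative sketch of this asynchronous model is shown below: the page transfers control of a canvas to a worker, and the worker drives its own frame loop, calling commit() to present each frame. drawScene(), the element id, and the file names are illustrative assumptions.

// main.js -- hand the canvas over to a worker; no further main-thread rendering work.
const offscreen = document.getElementById('game').transferControlToOffscreen();
const worker = new Worker('render-worker.js');
worker.postMessage({ canvas: offscreen }, [offscreen]); // OffscreenCanvas is Transferable

// render-worker.js -- a ported main loop, as in Emscripten-style applications.
onmessage = (e) => {
  const gl = e.data.canvas.getContext('webgl');
  while (true) {
    drawScene(gl);  // hypothetical application rendering
    gl.commit();    // present the frame; blocks once enough frames are queued
  }
};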

Limitations

  • A known good way to drive an animation loop from a worker is needed. requestAnimationFrame or a similar API needs to be defined on worker threads.
  • Some parts of the CanvasRenderingContext2D interface shall not be supported, due to OffscreenCanvas objects having no relation to the DOM or to a frame: hit regions, scrollPathIntoView, drawFocusIfNeeded.
  • Due to technical challenges, some implementers (Google and Mozilla; see https://bugzilla.mozilla.org/show_bug.cgi?id=801176#c29) have expressed a desire to ship without initially supporting text rendering in 2D contexts. Open issue: should text support be formally excluded from the specification until implementers are prepared to ship it (or until a more feasible API is designed)?

Implementation

This proposal has been vetted by developers of Apple's Safari, Google's Chrome, Microsoft's Internet Explorer, and Mozilla's Firefox browsers. All vendors agreed upon the basic form of the API, so it is likely it will be implemented widely and compatibly.

Adoption

Web page authors have demanded increased parallelism support from the web platform for multiple years. If support for multithreaded rendering is added, it is likely it will be rapidly adopted.

Example code

Jeff Gilbert from Mozilla has crafted some example code utilizing this API:

  • Rendering WebGL from a worker using the commit() API: https://github.com/jdashg/snippets/tree/master/webgl-from-worker
  • Using one WebGL context to render to many Canvas elements: https://github.com/jdashg/snippets/blob/master/webgl-one-to-many/index.html