CanvasInWorkers

This proposal has been abandoned. Please refer to the OffscreenCanvas proposal.

This proposal tries to solve two issues: (1) rendering to a canvas from a worker, and (2) rendering to multiple canvases using a single rendering context.

Use Case Description

There are two common use cases.

Use case #1: Building a multiple-view 3D editor (like Blender or Maya). WebGL, being based on OpenGL, has the limitation that resources belong to a single WebGLRenderingContext. That means if you have two or more canvases (left view, top view, perspective view, etc.) you currently need a separate context for each one, which means loading hundreds of megabytes of resources multiple times. Allowing a single WebGLRenderingContext to be used with more than one canvas would solve this problem.

Use case #2: You'd like to make better use of multiple cores and avoid jank when drawing to a canvas. Many canvas apps (WebGL or Canvas2D) need to make thousands of API calls per frame at 60 frames per second. Moving those calls to a worker would potentially free the main thread to do other things, and would reduce jank by keeping work off the main thread so it does not block the UI.

Current Usage and Workarounds

Games in HTML are becoming more common and we'd like to support developers making even more complex games. A few 3D editors are starting to appear and we'd like to help them be as good as their native app counterparts.

Goals

  1. Allow rendering using the Canvas2D API in a worker.
  2. Allow rendering using the WebGL API in a worker.
  3. Allow synchronization of canvas rendering with DOM manipulation.
  4. Allow using one Canvas2DRenderingContext with multiple destinations without losing state.
  5. Allow using one WebGLRenderingContext with multiple destinations without losing state.
  6. Don't waste memory.
  7. Do not break existing content (existing APIs still work as is).

Non Goals

  1. Sharing WebGL resources between contexts. That is an orthogonal issue.

Proposed Solutions

One proposed solution involves CanvasProxy and a commit method. This solution does not meet the goals above. Specifically it does not handle synchronization issues and may waste memory.

Suggested Solution

Allow rendering contexts to be created by constructor

   var ctx = new Canvas2DRenderingContext();
   var gl = new WebGLRenderingContext();

Define `DrawingBuffer`. A DrawingBuffer can be considered a 'handle' to a single texture (or bucket of pixels). A DrawingBuffer can be passed anywhere a Canvas can be passed, in particular to drawImage, texImage2D, and texSubImage2D. A DrawingBuffer also has a toDataURL method that is similar to the Canvas's toDataURL method. A DrawingBuffer can be transferred to and from a worker using the transfer-of-ownership concept, similar to an ArrayBuffer.

A DrawingBuffer is created by constructor as in

   var db = new DrawingBuffer(context, {...creation-parameters...});

The context associated with a DrawingBuffer at creation is the only context that may render to that DrawingBuffer.
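A minimal sketch of how these pieces fit together, using the proposed (never-shipped) API; the creation parameters and the `worker` object are illustrative assumptions:

   // Sketch only: this API is the proposal's, and the creation parameters are made up.
   var ctx = new Canvas2DRenderingContext();
   var target = new DrawingBuffer(ctx, {width: 256, height: 256});
   var sprite = new DrawingBuffer(ctx, {width: 64, height: 64});

   ctx.setDrawingBuffer(target);
   ctx.drawImage(sprite, 0, 0);     // a DrawingBuffer is accepted wherever a canvas is

   // Transfer 'sprite' to a worker; like a transferred ArrayBuffer, it is neutered here.
   worker.postMessage(sprite, [sprite]);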

Canvas becomes a ‘shell’ whose sole purpose is to display DrawingBuffers.

Add two functions to `Canvas`: Canvas.transferDrawingBufferToCanvas() and Canvas.copyDrawingBuffer().

Canvas.transferDrawingBufferToCanvas effectively transfers ownership of the DrawingBuffer; the user's DrawingBuffer is neutered. This is similar to how transferring a DrawingBuffer from the main thread to a worker makes the main thread no longer able to use it.

A single-threaded app that wanted to emulate the existing workflow using DrawingBuffers would do something like this:

    var canvas = document.getElementById("someCanvas");
    var gl = new WebGLRenderingContext();
    function render() {
      var db = new DrawingBuffer(gl, ...);  // the buffer is bound to this context
      gl.setDrawingBuffer(db);
      gl.drawXXX();
      canvas.transferDrawingBufferToCanvas(db);
      requestAnimationFrame(render);
    }
    render();

Canvas.copyDrawingBuffer(), on the other hand, copies the DrawingBuffer's texture/backing store to the canvas. This is a slower path but emulates the standard Canvas2D persistent backing-store style. The canvas will have to allocate a texture or bucket of pixels to hold the copy if it does not already have one.
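A short sketch of the two paths, assuming `canvas`, `db`, and `db2` are already set up as above:

    // Fast path: hand ownership of the buffer to the canvas; 'db' is neutered afterwards.
    canvas.transferDrawingBufferToCanvas(db);

    // Slow path: the canvas copies the buffer's contents into its own backing store,
    // emulating the persistent Canvas2D model; 'db2' remains usable by its context.
    canvas.copyDrawingBuffer(db2);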

Disallow 'multi-sampled' / 'anti-aliased' DrawingBuffers and instead expose GL_ANGLE_framebuffer_blit and GL_ANGLE_framebuffer_multisample. (WebGL specific)

Define 'DepthStencilBuffer' and add a function, WebGLRenderingContext.setDepthStencilBuffer. (WebGL specific)

Suggested IDL

interface HTMLCanvas {
   ...
   void transferDrawingBufferToCanvas(DrawingBuffer b);
   void copyDrawingBuffer(DrawingBuffer b);
 }

interface Canvas2DRenderingContext {
  readonly attribute DrawingBuffer drawingBuffer;
  void setDrawingBuffer(DrawingBuffer buffer);
  CanvasPattern createPattern(DrawingBuffer buffer, ...);
  void drawImage(DrawingBuffer buffer,
                 unrestricted double dx,
                 unrestricted double dy);
  void drawImage(DrawingBuffer buffer,
                 unrestricted double dx, unrestricted double dy,
                 unrestricted double dw, unrestricted double dh);
  void drawImage(DrawingBuffer buffer,
                 unrestricted double sx, unrestricted double sy,
                 unrestricted double sw, unrestricted double sh,
                 unrestricted double dx, unrestricted double dy,
                 unrestricted double dw, unrestricted double dh);
}

interface WebGLRenderingContext {
  ...
  readonly attribute DrawingBuffer drawingBuffer;
  readonly attribute DepthStencilBuffer depthStencilBuffer;
  void setDrawingBuffer(DrawingBuffer buffer);
  void setDepthStencilBuffer(DepthStencilBuffer buffer);
  void texImage2D(GLenum target, GLint level, GLenum internalformat,
                  GLenum format, GLenum type, DrawingBuffer buffer);
  void texSubImage2D(GLenum target, GLint level,
                     GLint xoffset, GLint yoffset,
                     GLenum format, GLenum type,
                     DrawingBuffer buffer);
}

[ Constructor(RenderingContext c, any contextCreationParameters) ]
interface DrawingBuffer {
   readonly attribute long width;
   readonly attribute long height;
   void setSize(long width, long height);
   DOMString toDataURL(in DOMString type)
       raises(???Exception);
}

[ Constructor(RenderingContext c, any contextCreationParameters) ]
interface DepthStencilBuffer {
   readonly attribute long width;
   readonly attribute long height;
   void setSize(long width, long height);
}

Rationale:

Q: Why get rid of a commit method in workers to propagate changes from a context rendered in a worker to a canvas in the main page?

A: Using commit, there is no way to synchronize updates in a worker with updates to the DOM in the main thread. This solution makes it possible to ensure that DOM objects positioned by the main thread stay in sync with images rendered by a worker: the worker transfers the DrawingBuffer to the main thread via postMessage, and the main thread calls canvas.transferDrawingBufferToCanvas. This solution also avoids unnecessary blits of the canvas's contents, which is essential for performance.
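For example, a sketch of a main-thread message handler that keeps a DOM overlay in sync with the worker-rendered frame; the `label` element and the `{buffer, x, y}` message shape are illustrative assumptions:

    worker.addEventListener('message', function (e) {
      // Both updates happen in the same task, so the DOM overlay and the
      // rendered frame are presented together.
      label.style.transform = 'translate(' + e.data.x + 'px,' + e.data.y + 'px)';
      canvas.transferDrawingBufferToCanvas(e.data.buffer);
    }, false);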


Q: Why disallow anti-aliased DrawingBuffers?

A: In the existing model, when you create a WebGL context by calling canvas.getContext(), a single multi-sampled renderbuffer is created by the browser. When the browser implicitly does a 'swapBuffers' for you, it resolves or "blits" this multi-sampled renderbuffer into a texture.

On a 30-inch display (or a HiDPI MacBook Pro) a full-screen multi-sampled renderbuffer requires

    Bytes per pixel = 4 (RGBA) + 4 (depth/stencil) = 8

     2560 (width)  *
     1600 (height) *
        8 (bytes per pixel) *
        4 (multi-samples)
  -----------------------------
    ≈ 125 MB

In the new model, a typical animating application will create a minimum of 2 DrawingBuffers (for double buffering, so a worker can render to one while the other is passed back to the main thread for compositing) or possibly 3 (for triple buffering). If all the DrawingBuffers are anti-aliased, that's 375 MB of VRAM used up immediately.

On the other hand, if instead we disallow anti-aliased DrawingBuffers and expose GL_ANGLE_framebuffer_blit and GL_ANGLE_framebuffer_multisample as WebGL extensions, then a typical app that wants to support anti-aliasing will create a single multisampled renderbuffer and do its own blit to non-multi-sampled DrawingBuffers. For a triple-buffered app that would be 218 MB of VRAM.
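A sketch of that blit pattern. The proposal's ANGLE-style WebGL extensions were never standardized, so the equivalent entry points that later shipped in WebGL 2 are used here purely for illustration; `width` and `height` are assumed to match the DrawingBuffer:

    // One multisampled renderbuffer, rendered to via a framebuffer object...
    var rb = gl.createRenderbuffer();
    gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
    gl.renderbufferStorageMultisample(gl.RENDERBUFFER, 4, gl.RGBA8, width, height);

    var fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, rb);
    // ...draw the scene into the multisampled renderbuffer here...

    // ...then resolved ("blitted") into the current non-multisampled DrawingBuffer.
    gl.bindFramebuffer(gl.READ_FRAMEBUFFER, fbo);
    gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null);
    gl.blitFramebuffer(0, 0, width, height, 0, 0, width, height,
                       gl.COLOR_BUFFER_BIT, gl.NEAREST);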

Another solution considered was to somehow share a single multi-sample buffer behind the scenes. In a double-buffered app you'd transfer a DrawingBuffer from the worker to the main thread. Since the DrawingBuffer is intended to be given to a canvas, and since you can't render to it while its context is back in the worker, the implementation could resolve the multi-sampled buffer at transfer time and hand that multi-sample storage to the next DrawingBuffer. Unfortunately that doesn't work under this design. Nothing prevents the user from transferring the DrawingBuffer from a worker to another thread and back to the worker; it should come back as it started, but if it were resolved on transfer it would not. There is also nothing preventing the user from making 3 or 4 DrawingBuffers for a single canvas for triple or quadruple buffering. Since you have no idea what a user is going to use a DrawingBuffer for, or how, there is no easy way for them all to share the same multi-sample buffer behind the scenes.


Q: Should we allow anti-aliased DrawingBuffers?

  1. Yes, developers who care about memory can create non-anti-aliased buffers. Developers who don't care can avoid the hassle of making a multi-sampled renderbuffer and blitting.
  2. No, all developers that want to use DrawingBuffers and get anti-aliasing must use GL_ANGLE_framebuffer_multisample and GL_ANGLE_framebuffer_blit.

Resolution: #2. Saving memory is especially important in situations like tablets, multiple tabs, and systems without virtual memory. Rather than let the bad behavior be the easy path we chose to encourage the good behavior.


Q: Why separate out DepthStencilBuffer?

A: For similar reasons as disallowing anti-aliasing. (see above)

DrawingBuffers are transferred by transferring ownership, and in the common case of transferring a DrawingBuffer to the main thread to be composited there is no reason to also transfer the depth/stencil buffer. Doing so would mean multiple depth and stencil buffers would need to be allocated so the worker can render to one while the main thread is compositing.

Apps that use GL_ANGLE_framebuffer_multisample and GL_ANGLE_framebuffer_blit to support anti-aliasing will never need to create a 'DepthStencilBuffer', as they will end up creating a gl.DEPTH_STENCIL texture or renderbuffer themselves.

Separating them out also makes more sense for Canvas2D, which never needs a depth/stencil buffer.


Q: For a worker based animated app what’s the expected code flow?

A:

    // render.js: -- worker --
    var gl = new WebGLRenderingContext();
    var dsBuffer = new DepthStencilBuffer(gl, ...);
    gl.setDepthStencilBuffer(dsBuffer);

    function render() {
       // Make a new DrawingBuffer for this frame.
       var db = new DrawingBuffer(gl, ...);

       // Render to the drawing buffer.
       gl.setDrawingBuffer(db);
       gl.drawXXX(...);

       // Pass the drawing buffer to the main thread for compositing.
       self.postMessage(db, [db]);

       // Request the next frame.
       self.requestAnimationFrame(render);
    }
    render();

    // Main thread:
    var canvas = document.getElementById("someCanvas");
    var worker = new Worker("render.js");
    worker.addEventListener('message', function(e) {
       canvas.transferDrawingBufferToCanvas(e.data);
    }, false);

The thing to notice is that the worker creates a new DrawingBuffer every requestAnimationFrame and transfers ownership to the main thread, which in turn transfers it to the canvas. The browser can, behind the scenes, keep a queue of DrawingBuffers so that allocation of new ones is fast.


Q: Why does a DrawingBuffer’s constructor take a context?

A: DrawingBuffers can only be used with the context they are created with. Putting the context in the constructor spells out this relationship. The following is illegal:

   var gl1 = new WebGLRenderingContext();
   var gl2 = new WebGLRenderingContext();
   var db = new DrawingBuffer(gl1);
   gl1.setDrawingBuffer(db);
   gl2.setDrawingBuffer(db);  // error. db belongs to gl1


Q: Can you use a Canvas2DRenderingContext without a DrawingBuffer?

A: Yes, but only to create patterns, gradients, etc. All methods that rasterize will throw an exception until the context is associated with a DrawingBuffer by calling Canvas2DRenderingContext.setDrawingBuffer().
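A minimal sketch, assuming the exception behavior described above:

   var ctx = new Canvas2DRenderingContext();
   var grad = ctx.createLinearGradient(0, 0, 0, 100);  // fine: nothing is rasterized
   ctx.fillRect(0, 0, 10, 10);                         // throws: no DrawingBuffer set yet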


Q: Can you use a WebGLRenderingContext without a DrawingBuffer?

A: Yes, you can create WebGL resources (textures, buffers, programs, etc.). You can render to framebuffer objects and call readPixels on them. Rendering to the default framebuffer (the null bind target) will generate gl.INVALID_FRAMEBUFFER_OPERATION if no valid DrawingBuffer is set. A neutered DrawingBuffer, i.e. one that has been transferred to another thread or to a canvas, is not a valid drawing buffer.
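A sketch of that distinction (the texture setup is elided for brevity):

   var gl = new WebGLRenderingContext();
   var fbo = gl.createFramebuffer();           // resource creation works with no DrawingBuffer
   gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
   // ...attach a texture, render, readPixels: all fine on a framebuffer object...

   gl.bindFramebuffer(gl.FRAMEBUFFER, null);   // default framebuffer, but no DrawingBuffer set
   gl.clear(gl.COLOR_BUFFER_BIT);              // generates gl.INVALID_FRAMEBUFFER_OPERATION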


Q: Do you need to call setDrawingBuffer if there is only 1 buffer?

A: No, creating a DrawingBuffer implicitly calls setDrawingBuffer.

   gl = new WebGLRenderingContext();
   db = new DrawingBuffer(gl, ...);
   gl.clear(gl.COLOR_BUFFER_BIT); // renders to db.


Q: Is any context state lost when setDrawingBuffer is called?

A: No. The context’s state is preserved across calls to setDrawingBuffer for both Canvas2DRenderingContext and WebGLRenderingContext.
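For example, a sketch assuming two DrawingBuffers dbA and dbB created for the same context:

   var ctx = new Canvas2DRenderingContext();
   ctx.fillStyle = 'red';
   ctx.setDrawingBuffer(dbA);
   ctx.fillRect(0, 0, 10, 10);   // red
   ctx.setDrawingBuffer(dbB);
   ctx.fillRect(0, 0, 10, 10);   // still red: context state survives the switch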


Q: Can you render to a DrawingBuffer that has been passed to another thread?

A: No, DrawingBuffers pass ownership. The DrawingBuffer in the thread that passed it is now neutered, just like a transferred ArrayBuffer is neutered.


Q: Can you transfer a DrawingBuffer to 2 canvases?

A: No, Canvas.transferDrawingBufferToCanvas takes ownership of the DrawingBuffer. The DrawingBuffer left for the user has been neutered.
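For example (the exact failure mode of the second call is not specified by this proposal):

   canvasA.transferDrawingBufferToCanvas(db);  // ok: canvasA now owns the buffer
   canvasB.transferDrawingBufferToCanvas(db);  // fails: db was neutered by the first transfer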


Q: What happens if I transferDrawingBufferToCanvas DrawingBuffers of different sizes?

A: The canvas does not change its display size; it just displays the transferred DrawingBuffer at the size defined by CSS or, if no CSS is specified, at the canvas's original size.


Q: Can you call getContext on a canvas that has had transferDrawingBufferToCanvas called on it?

A: No


Q: Can you call transferDrawingBufferToCanvas on a canvas that has had its getContext method called?

A: No. It might be possible to make this work, but it's probably not worth it.


Q: Can you use these features in “shared workers”?

A: No (or at least not for now)


Q: What happens to the Canvas2DRenderingContext.canvas and WebGLRenderingContext.canvas properties?

A: For contexts created by constructor they are set to undefined.


Q: Should you be able to reference the current DrawingBuffer on a RenderingContext?

In other words, should there be a getDrawingBuffer or a ‘drawingBuffer’ property?

A: Yes, but it is only set if you call setDrawingBuffer. In other words, if you call getContext to make your context, this property would be undefined (or, if it is a function, it would return undefined).


Q: Should you be able to change the size of a DrawingBuffer?

A:

  1. Yes, set width and/or height and its size will change
    Issue: allocating DrawingBuffers is a slow operation, so implementations would like to avoid re-allocating once when width is set and again when height is set. Deferring that allocation is no fun to implement.
  2. Yes, use a setSize(width, height) method
    This avoids the complications of using the writable properties
  3. No, just allocate a new DrawingBuffer
    The only issue here is quick GCing.

Resolution: #2
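A one-line sketch of the resolved approach (#2), assuming a DrawingBuffer `db` already exists:

   db.setSize(1920, 1080);  // one explicit reallocation; width and height stay read-only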

Issues:

Workers can flood the graphics system with too much work.

In the main thread you can write code like this:

   // render as fast as you can
   function render() {
      for (var i = 0; i < 1000; ++i) {
        gl.drawXXX(...);
      }
   }
   setInterval(render, 0);

This will DoS many systems by saturating the GPU with draw calls. The solution on the main thread is the implicit 'SwapBuffers': every time JavaScript exits the interval event, the system can pause or block. But this is not true in workers. As there is no implicit swap, and workers can run in infinite loops, there is no way to prevent this situation. While preventing infinite loops is outside the scope of what we can deal with, consider a worker that generates frames at 90 fps and a main thread that composites them at 60 fps: there is nothing to stop the worker from generating too much work, or a giant backlog of GPU work.

Ideas

  1. So what; the worker will run out of memory.
    Unfortunately, until that happens the entire system may be unresponsive (not just the browser).
  2. Allow rendering in workers only inside some callback.
    For example, if it is only possible to render inside a worker during a requestAnimationFrame event, the browser can throttle the worker by sending fewer events.
    The minor problem with this solution is that it makes non-animating apps slightly convoluted to write. Say you want to make a Maya- or Blender-style app, so you only render on demand: you end up getting a mousemove event, posting a message to a worker, and having the worker issue a requestAnimationFrame so that its callback can do the rendering (see the sketch after this list). Maybe that's not too convoluted.
  3. Other?
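A sketch of that render-on-demand flow under idea #2; `renderFrame` is the app's own drawing code, and requestAnimationFrame in workers is part of this hypothetical design:

    // Main thread: forward input to the worker.
    canvas.addEventListener('mousemove', function (e) {
      worker.postMessage({type: 'pointer', x: e.clientX, y: e.clientY});
    });

    // Worker: only render inside a requestAnimationFrame callback so the
    // browser can throttle how fast frames are produced.
    var pendingInput = null;
    var rafQueued = false;
    self.onmessage = function (e) {
      pendingInput = e.data;
      if (!rafQueued) {
        rafQueued = true;
        self.requestAnimationFrame(function () {
          rafQueued = false;
          renderFrame(pendingInput);
        });
      }
    };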

Note: Exposing DrawingBuffer in the main thread causes the same problem.

This suggests that perhaps even the main thread should not be allowed to render to contexts created by a constructor except inside requestAnimationFrame?

How should GL_ANGLE_framebuffer_multisample be specified w.r.t. the number of samples? (WebGL specific)

We'd like apps not to fail based on the "samples" parameter to renderbufferStorageMultisample, but GL_ANGLE_framebuffer_multisample is specified such that it must allocate a renderbuffer with the user-specified number of "samples" or greater. That means if an app passes the wrong number (say it hardcodes 4) and the user's GPU does not support 4 samples, or the user's GPU multi-sample support is blacklisted, the app will fail.

We'd prefer a more permissive API that lets the implementation choose the number of samples, so that more apps will succeed.

Ideas

  1. Leave the API as-is. Apps may suddenly fail on different hardware, or on the same hardware when multi-sampling is blacklisted.
  2. Leave the API the same but let the implementation choose the actual number of samples. Apps that need to know how many samples were chosen can query how many they got with getRenderbufferParameter (see the sketch after this list).
  3. Change the API slightly by providing an enum (high, medium, low, none) as a quality input to renderbufferStorageMultisample instead of a specific number of samples. Implementations can choose their own interpretation of 'low', 'medium', and 'high'.
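A sketch of idea #2; the constant names follow the later WebGL 2 API and are used here only to illustrate the query:

   var rb = gl.createRenderbuffer();
   gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
   gl.renderbufferStorageMultisample(gl.RENDERBUFFER, 4, gl.RGBA8, width, height);  // ask for 4 samples
   var actual = gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_SAMPLES);
   // 'actual' is whatever sample count the implementation actually allocated.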