= Real world uses of Canvas =
List some real-world examples of uses of canvas that are examples of things best done with canvas and not other features of HTML:
* http://www.htmlstack.com/checkbox/
* http://shapecatcher.com/index.html
* http://www.ludei.com/sumon (have no idea if it's better done in canvas)
* http://www.ludei.com/games (other games on this site)
* https://www.lucidchart.com/documents/edit?button#4766-6fcc-4f18275d-b546-71450a7ac5be?demo=on&branch=5a613773-81d2-48fb-b3a5-4fe780978ab4 (The drawing objects could be done with SVG but SVG has more accessibility deficiencies at this time)
* http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/gfx/demos/circles.html Dojo GFX Circles Example. Dojo GFX is used extensively by Cognos Business Analytics software to produce charts, etc.
* http://archive.dojotoolkit.org/nightly/dojotoolkit/dojox/gfx/demos/career_test.html Career Aptitude Test Example. Dojo GFX is used extensively by Cognos Business Analytics software to produce charts, etc.
* http://www.phpied.com/canvas-pie/ Pie charts are a common output of Cognos BI, which makes use of Dojo GFX, which can use either canvas or SVG. So it is not a standalone application but would be used in a real-world application.
* http://www.kevs3d.co.uk/dev/asteroids/
= Needs of AI Squared Magnifier to assist in change proposals =
Here are the needs from AISquared for ZoomText:
* In general having a bounding rectangle for the path will allow us to track the object into the magnified view.
* When ZoomText tracks, it aligns the object in a certain way. ZoomText users can specify center, parent or edge alignment for text cursor, focus and window objects differently. In order to provide the correct alignment for the user we need a way to understand the role of a given path.
* For the parent alignment option we need to be able to retrieve the location of the parent object of the path. The goal of parent alignment is to keep as much of the parent object as possible in view while making sure the current object is displayed within the magnified view.
* When ZoomText navigates or reads a web page, we scroll elements that are currently not displayed into view. We have access to the elements in fallback content but not their location.
* Another reason why ZoomText needs to understand the role of the path is the screen enhancements that we provide. ZoomText provides different shaped enhancements for the text cursor and for keyboard focus, and (currently) no enhancement for text selection.


= Limitations of real-world use cases =
In this section, discuss specific examples from the list above and explore what those use cases fail to do (e.g. in terms of accessibility) which they should do.


;https://zewt.org/curves/
:* Keyboard users can't tab to specific points and move them with the keyboard.
:* Should show a focus ring around the selected point when it is being moved by keyboard.
:* Limited-vision users can't zoom in on the specific area that the user is manipulating.
:* It's a pity the mouse cursor has to be manually changed onmousemove.
:* Finger users can't drag their finger across the canvas to find the various interactive parts of the document, because the user agent doesn't know ahead of time which parts are interactive.
 
;http://www.ludei.com/sumon
:* Can't navigate and activate buttons using the keyboard.
:* Various UI controls are not identified as controls or operable for a range of users.
:* Can't zoom in to specific controls in the UI.
:* With scrollIntoView the author could bring the numbers into view within the canvas, but the assistive technology cannot assess how to place the magnification point around the element because it does not know the corresponding role of the element. Many magnifier users are able to use a mouse, so they would not be relying on keyboard focus. '''It would be good to have examples of how ATs treat different roles when it comes to magnification.'''
:* A magnifier vendor will want to allow the user to search the grid of numbers to find an appropriate match. However, the magnifier cannot magnify around the number because it does not know the location of the element.
 
;https://www.lucidchart.com/documents/edit?button#4766-6fcc-4f18275d-b546-71450a7ac5be?demo=on&branch=5a613773-81d2-48fb-b3a5-4fe780978ab4
:* With scrollPathIntoView the author could bring the numbers into view within the canvas, but the assistive technology cannot assess how to place the magnification point around the element because it does not know the corresponding role of the element. Many magnifier users are able to use a mouse.
:* A magnifier vendor will want to allow the user to search the drawing surface of the flow chart to find an appropriate match. However, the magnifier cannot magnify around the number because it does not know the location of the element.
:* The magnifier cannot assess the location of the element in order to position the magnifier within the context of a list box.
:* When images are turned off, the canvas disappears. With fallback content you now have semantic content. Should the fallback content be rendered? Should the text labels for the fallback content be rendered? We now have interactive content, so we need a better solution.
:* Lucidchart created separate canvas instances to facilitate hit testing, because all the hit testing is handled by the application at the canvas element. When this happens we have canvas elements spread all over the DOM, and associating them, accessibly, with a single canvas application is a mess.
: * ''This section originally had the following note, but as far as I (Hixie) can tell, this note is exactly backwards: "An advantage canvas has over SVG is that it shares the same DOM and keyboard navigation model as HTML. So, if one were to mix the best features of HTML (interactive widgets) with a drawing technology, including things like semantic relationships between elements, canvas is a better choice. So, SVG is not necessarily the preferred technology for accessibility. So, where we have a flow chart region of drawing objects the author could implement an HTML or an HTML/ARIA-enabled list box in fallback content inside a navigation section and a screen reader could bring it up in the list of navigable landmarks and it could be included in the keyboard navigation order."''
 
;http://www.phpied.com/canvas-pie/
:* If a pie chart were scrolled into view, it would be treated as a list box in fallback at this time. The location of the slices is not discernible by a magnifier, nor are the bounds of the circle. Consequently, the magnifier does not have the knowledge to properly position itself while navigating the list box. The author can call scrollPathIntoView, but that does not provide enough information to determine whether this is something to magnify to or how to magnify to it. It does not know what is being scrolled into view. In fact, it's exactly as inaccessible as an image of the pie graph would be.
 
;http://www.kevs3d.co.uk/dev/asteroids/
:* A magnifier user cannot find the location of the asteroids to be able to adjust the zoom level as the asteroids approach the target. Typically, focus would be on the ship with the artillery.
 
= Use cases that are already handled but are sometimes mistakenly thought to show limitations of the platform =
 
; http://www.libreoffice.org/download/
:* A magnifier cannot follow the caret or selection location while the user is editing the LibreOffice document. Online demo: http://www.youtube.com/watch?v=YdJu59bSBpI ''Actually this is already possible: use contenteditable instead of canvas. Using canvas here is inappropriate.''
:* A magnifier cannot locate the text or embedded drawing objects while editing the LibreOffice document.
 
==Discussion==
Rich: ''Actually this is already possible: use contenteditable instead of canvas. Using canvas here is inappropriate.'' ''We are told that contenteditable is insufficient - at least for Google Docs. It would be good to find out why they feel contenteditable is inadequate before we can say something is already possible. Otherwise, it is foolish to make an investment to create the alternative. Has anyone asked for the details as to why Google Docs or LibreOffice did not use contenteditable?''
 
Hixie: contenteditable is in roughly the same state as canvas right now in terms of being immature. Work is rapidly progressing to make contenteditable more usable. Even if we weren't working on it, though, it doesn't make sense to add a feature to canvas to address a limitation in contenteditable. That would be like saying "Oh no! My house is on fire! I will buy a fire extinguisher for my school instead of calling the fire brigade".
 
Rich: ''So, what do you say to the Google Docs team and Michael Meeks, who works on LibreOffice? "Stop production and wait for contenteditable - my solution is ultimately going to be far better"? From what I hear, Google has only one guy working on contenteditable. Industry is not going to wait for contenteditable, and for whatever reason they chose not to contribute to it. You should know at this point that IBM is NOT creating a rich text editor in canvas.''


= Proposals =
In this section, propose alternatives to improve canvas to make it easier to fill in the limitations listed in the previous section.


== Path primitives - DONE ==
This addresses:
* creating more than one path and stamping them out in different places
* drawing text on a path
* drawing text to a path
* stamping a path on another path with transformations applied
Doesn't yet address:
* how to manipulate a path once created, e.g. to make a path be the intersection or union of all its subpaths, or to make it possible to apply noise to a path created from text, or some such.
(See spec for final proposal.)
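A minimal sketch of the "stamping" idea, assuming the Path constructor described later on this page, path-building methods such as moveTo()/lineTo()/closePath(), and a fill(path) overload on the context; the names are illustrative rather than normative:
<pre>
// Build a path once, then stamp it in several places by changing the transform.
var ctx = document.querySelector('canvas').getContext('2d');
var arrow = new Path();
arrow.moveTo(0, 0);
arrow.lineTo(30, 10);
arrow.lineTo(0, 20);
arrow.closePath();
for (var i = 0; i < 5; i++) {
  ctx.save();
  ctx.translate(20 + i * 40, 20);
  ctx.rotate(i * Math.PI / 10);
  ctx.fill(arrow);   // the same Path object, stamped with a different transform each time
  ctx.restore();
}
</pre>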
== Ellipses - DONE ==
This addresses:
* Drawing ellipses, a common request
Add two extra arguments to arcTo() and add ellipse():
<pre>  void arcTo(double x1, double y1, double x2, double y2, double radiusX, double radiusY, double rotation);
  void ellipse(double x, double y, double radiusX, double radiusY, double rotation, double startAngle, double endAngle, boolean anticlockwise);</pre>
Make sure to define this such that the transformation is applied to the resulting arc, not to the coordinates before drawing the arc.
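A short usage sketch of the ellipse() signature above, assuming an ordinary 2D context:
<pre>
var ctx = document.querySelector('canvas').getContext('2d');
ctx.beginPath();
// centre (100, 75), radii 60 and 30, rotated 45 degrees, a full turn drawn clockwise
ctx.ellipse(100, 75, 60, 30, Math.PI / 4, 0, 2 * Math.PI, false);
ctx.stroke();
</pre>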
== SVG Path - DONE ==
<pre>new Path(DOMString d)</pre>
Somehow this needs to be defined as adding the given path(s) to the path as described in SVG.
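A sketch of what authoring against this constructor might look like, assuming the fill(path) overload used elsewhere on this page; the path data string is just an example:
<pre>
// Build a Path from SVG path data and fill it.
var ctx = document.querySelector('canvas').getContext('2d');
var wedge = new Path('M 100 75 L 160 75 A 60 60 0 0 1 130 127 Z');
ctx.fillStyle = 'teal';
ctx.fill(wedge);
</pre>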
== Dashed lines - DONE ==
<pre>
  void setLineDash(sequence<Number>);  // array of on/off dash lengths
  sequence<Number> getLineDash();      // return the current dash array, freshly allocated
  attribute Number lineDashOffset;    // default 0; offset within dash pattern to begin stroking
</pre>
Need to define how this is applied. Probably that any paths drawn with this are cut into subpaths by removing components of subpaths that are in "off" segments.
Need to define how to handle zero-length resulting subpaths (drop them).
Segment lengths are in pixels, affected by the transform.
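A usage sketch of the API above: a 5-on/3-off dash pattern, offset two pixels into the pattern:
<pre>
var ctx = document.querySelector('canvas').getContext('2d');
ctx.setLineDash([5, 3]);         // 5px dash, 3px gap
ctx.lineDashOffset = 2;          // start 2px into the pattern
ctx.strokeRect(20, 20, 160, 80);
console.log(ctx.getLineDash());  // [5, 3] - a fresh copy, not the live array
</pre>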
== Regions - DONE ==
This addresses:
* complex hit region support without the author having to use isPointInPath, maintain a scene graph, or maintain a shadow DOM
* make content in canvas discoverable to users of ATs with an AT-specific focus mechanism (e.g. VO)
* make content in canvas discoverable to users of touch screen interfaces that read what's under the finger
* the last two even in the case of hierarchical AT structures (e.g. menus with menu items, toolbars with buttons)
* make content in canvas discoverable to AT users where the content is backed by HTML elements, without needing the user to move the system focus
* make it easy for text drawn to the canvas to be made discoverable (should be addressed)
* make it easy to specify custom cursors for different parts of the canvas
This does not address:
* making it possible to select, or cursor through, text on canvas (probably not an issue, e.g. you can't do that with VO on Mac OS X system text labels anyway)
<pre>
// add a method to 2D that adds a region to the canvas:
addHitRegion({
  path: path, // Path to use as region description, defaults to the context's default path
  element: element, // Element to send events to; limited at hit-test time to specific interactive elements
  id: id, // DOMString to use as the ID in events fired on the canvas for this region (MouseEvent gets new attribute for this purpose); also used for parentID references
  // if element is present, label, ariaRole, and parentID must not be:
  label: label, // DOMString to use as a label when the user strokes a touch display or focuses the hit region with an AT
  ariaRole: ariaRole, // DOMString limited to specific roles, AT uses this to decide how to expose the region
  parentID: parentID, // DOMString, AT uses this to decide which region to use as this region's parent (defaults to canvas as parent)
  // one of element, label, id, or ariaRole must be present; exception otherwise? no-op?
  // ariaRole must be present if parentID is present; exception otherwise? ignore ariaRole?
  // if parentID refers to a region that no longer exists, exception? no-op? ignore parentID?
  cursor: cursor, // a CSS cursor specification that, if given, overrides the canvas' default cursor
});
// when mouse events go to the canvas, they check to see if their coordinates are in a region, and if so, they include the ID.
// if the region has an element, the event is [also?] dispatched to the element [instead of the canvas?]
// when a region is completely overlapped by one or more other regions, or by a clearRect(), then it is forgotten
// make fillText() and strokeText() generate regions automatically unless a new argument is passed in disabling this behaviour, e.g. {decorative:true}.
</pre>
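A sketch of how an author might use the proposed addHitRegion() API: one region backed by a fallback &lt;button&gt; element, and one described directly with id/ariaRole/label. The option names follow the proposal text above; the event attribute name (region) is an assumption, since the proposal only says MouseEvent gets a new attribute for this purpose.
<pre>
var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');

// Region backed by a fallback element: hits are routed to the <button>.
ctx.beginPath();
ctx.rect(10, 10, 100, 40);
ctx.fill();
ctx.addHitRegion({ element: canvas.querySelector('button.ok') });

// Region with no backing element: described directly for ATs and touch exploration.
ctx.beginPath();
ctx.rect(10, 60, 100, 40);
ctx.fill();
ctx.addHitRegion({ id: 'cancel', ariaRole: 'button', label: 'Cancel', cursor: 'pointer' });

canvas.addEventListener('click', function (event) {
  if (event.region === 'cancel') {   // assumed name of the new MouseEvent attribute
    // handle the canvas-only region
  }
});
</pre>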
=== Comments ===
==== Rich Schwerdtfeger ====
I have some clarifications and requests. As you know, the accessibility information is provided to the assistive technology through the fallback content DOM.
''This is an oversimplification. It's provided to ATs either through proprietary mechanisms or through an accessibility API. There's no reason it has to be through a DOM. Most applications on Windows and Mac, for instance, do not have a DOM. There's no reason we need to constrain ourselves in this manner. In fact, since the whole point of canvas is to do graphics without a scene graph, it's significantly better if we can avoid having a fallback DOM too, as that is a kind of scene graph. We should only require the use of a fallback DOM where such use makes things simpler, as e.g. when the content being represented is an interactive control for which existing HTML elements exist. -Hixie''
''Ian, I understand, but what you are overlooking is the fact that the AT looks at a whole lot of things through the accessibility API, not just labels, roles, and parents. It also looks at children, relationships, states and properties. You do not have an exhaustive list here. Also, all platform accessibility APIs provide a parent-child hierarchy. You don't have to call it a DOM, but it is in essence the same thing: you have a tree in which events are propagated, etc. In Windows you bind the accessibility API to that tree. If you want to be broader than the DOM, I suggest you simply provide an "object", of which an element would be one kind. I am assuming you are doing this because of the other work going on in WebKit on a separate model that bound different rendering engines (SVG, etc.) to it. You and I talked about that at TPAC. Those could apply attributes similar to the ones used in the DOM. Having to constantly change the same API to add new accessibility features is a maintenance issue. Also, if you don't have an element, what are you binding those parameters to? - Rich''
''This API proposal does handle this, that's what the "parent" field is for, amongst other things. -Hixie''
''Hixie, how are states and properties added to objects that are not bound to DOM elements with roles assigned? How is focus exposed for objects not bound to DOM elements? - Stevef''
''Use elements if you need states, properties, or focus. -Hixie''
''Can you provide examples of roles that don't require states or properties? - Stevef''
''Accessible objects must supply state and property information, in addition to roles, much the same way that a checkbox supplies a checked state. This is not limited to HTML5 elements. - Rich ''
The labels, roles, states, properties, and parent-child information are acquired from that DOM. This DOM also feeds accessibility APIs on each of the platforms. The ATs access the accessibility information as follows:
*JAWS: IE: DOM+MSAA; Chrome: MSAA+IA2; Firefox: MSAA+IA2
*NVDA: Chrome: MSAA+IA2; Firefox: MSAA+IA2; IE: UIA, but not sure about other APIs
*MAGIC Magnifier: IE: DOM+MSAA; Chrome: MSAA+IA2; Firefox: MSAA+IA2
*ChromeVox: DOM (for Chrome on desktop and Android)
*VoiceOver: Mac OS X accessibility protocol (whose mapping is in large part directly tied to the DOM)
*ZoomText Magnifier: IE: DOM+MSAA; Firefox: MSAA+IA2
*Narrator for Metro: IE: UI Automation (whose mapping in large part comes from the DOM)
*Magnifier for Metro: IE: UI Automation (whose mapping in large part comes from the DOM)
So, the author should not provide these attributes in the hit-testing function call, as they interfere with the native host language semantics of the fallback DOM, which supplies the assistive technology with all the information in the use case:
<pre>
  // if element is present, label, ariaRole, and parentID must not be:
  label: label, // DOMString to use as a label when the user strokes a touch display or focuses the hit region with an AT
  ariaRole: ariaRole, // DOMString limited to specific roles, AT uses this to decide how to expose the region
  parentID: parentID, // DOMString, AT uses this to decide which region to use as this region's parent (defaults to canvas as parent)
// one of element, label, id, or ariaRole must be present; exception otherwise? no-op?
  // ariaRole must be present if parentID is present; exception otherwise? ignore ariaRole?
  // if parentID refers to a region that no longer exists, exception? no-op? ignore parentID?
</pre>
I am also concerned that the author could write a label here that would interfere with native host language features such as <label> and other native standard HTML interactive controls.
Also, please consider additional methods to:
* remove a hit region's association with an element
* remove all hit region associations with elements
I spoke with Kenneth Russell from the Chrome 3D Team at SXSW and he also agreed we need functions to clear hit-testing associations.
''These already exist, in the form of clearRect(). -Hixie''
''That does not explicitly clear associations with elements or objects, should you so choose. - Rich''
''It does if we define that it does... -Hixie''
''Yes it would. I will look for it in your next change proposal. -Rich''
== Pattern offsets - DONE ==
CanvasPattern gets all the transformation methods. To create the pattern, it is aligned as now, then transformed, then the path is applied, then the whole thing is transformed and painted.
See https://www.w3.org/Bugs/Public/show_bug.cgi?id=10132
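A sketch of how this might be used from script. Of the "transformation methods", only setTransform() ended up in the canvas spec, and the DOMMatrix interface shown here postdates this page (the proposals elsewhere on this page speak of SVGMatrix), so treat this purely as an illustration:
<pre>
var ctx = document.querySelector('canvas').getContext('2d');
var pattern = ctx.createPattern(document.querySelector('img#tile'), 'repeat');
// Offset the tiling by (8, 8) and rotate it 45 degrees before it is painted.
pattern.setTransform(new DOMMatrix().translate(8, 8).rotate(45));
ctx.fillStyle = pattern;
ctx.fillRect(0, 0, 300, 150);
</pre>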
== Image smoothing - DONE ==
This addresses:
* Being able to show an image's pixels, e.g. in an image editor
This does not address:
* Performance concerns. Browsers should handle optimisation of image drawing themselves.
<pre>
Context:
attribute boolean imageSmoothingEnabled;
</pre>
See https://www.w3.org/Bugs/Public/show_bug.cgi?id=12044
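A usage sketch for the image-editor case above: draw a small source region scaled up with smoothing turned off so the individual pixels stay hard-edged (the image selector is illustrative):
<pre>
var ctx = document.querySelector('canvas').getContext('2d');
var img = document.querySelector('img#source');    // hypothetical source image
ctx.imageSmoothingEnabled = false;                  // show crisp pixels instead of interpolating
ctx.drawImage(img, 0, 0, 16, 16,                    // 16x16 source region...
              0, 0, 256, 256);                      // ...drawn at 16x magnification
</pre>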
== TextMetrics - DONE ==
This addresses:
* baseline aligning text with non-text over multiple lines
* drawing selection rectangles that cover the f in Zapfino
<pre>
TextMetrics:
  readonly attribute double fontBoundingBoxAscent; // distance from textBaseline to top of highest font bounding box of all the fonts used to render the text
  readonly attribute double fontBoundingBoxDescent; // same but down to bottom of font bounding box
  readonly attribute double actualBoundingBoxAscent; // distance from textBaseline to top of bounding box of the given text
  readonly attribute double actualBoundingBoxDescent; // same but down to bottom of bounding box
  // emHeightAscent + emHeightDescent = font-size
  readonly attribute double emHeightAscent; // distance from textBaseline to top of em height (zero if textBaseline is top, half of font-size if textBaseline is middle)
  readonly attribute double emHeightDescent; // same but down to bottom of em height (zero if textBaseline is bottom, half of font-size if textBaseline is middle)
  readonly attribute double hangingBaseline; // distance from textBaseline to hanging baseline; up is negative, down is positive (zero if textBaseline is hanging)
  readonly attribute double alphabeticBaseline; // distance from textBaseline to alphabetic baseline; up is negative, down is positive (zero if textBaseline is alphabetic)
  readonly attribute double ideographicBaseline; // distance from textBaseline to ideographic baseline; up is negative, down is positive (zero if textBaseline is ideographic)
</pre>
See https://www.w3.org/Bugs/Public/show_bug.cgi?id=7798
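A sketch of the "selection rectangle that covers the f in Zapfino" case, using the actual-bounding-box fields above; the font and coordinates are arbitrary:
<pre>
var ctx = document.querySelector('canvas').getContext('2d');
ctx.font = '64px Zapfino, cursive';
ctx.textBaseline = 'alphabetic';
var text = 'f', x = 50, y = 150;
var m = ctx.measureText(text);
// The selection rectangle spans from the top of the glyph's bounding box to its bottom.
ctx.fillStyle = 'rgba(0, 120, 215, 0.4)';
ctx.fillRect(x, y - m.actualBoundingBoxAscent, m.width,
             m.actualBoundingBoxAscent + m.actualBoundingBoxDescent);
ctx.fillStyle = 'black';
ctx.fillText(text, x, y);
</pre>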
== Misc other proposals - DONE ==
<pre>
Context:
  void resetClip(); // resets clip to canvas extent without affecting save/restore stack - see https://www.w3.org/Bugs/Public/show_bug.cgi?id=14499 - DONE
  attribute SVGMatrix currentTransform; // see https://www.w3.org/Bugs/Public/show_bug.cgi?id=12140 and http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2011-November/033745.html - DONE
  void resetTransform(); // resets the transform to the identity matrix - DONE
Path:
  new Path(path) // copy constructor - DONE
</pre>
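A brief sketch of these additions as proposed here (currentTransform as a matrix snapshot and the Path copy constructor are shown as written above; Path-building methods such as rect() are assumed, and later spec work changed or dropped some of these names):
<pre>
var ctx = document.querySelector('canvas').getContext('2d');
ctx.translate(50, 50);
ctx.rotate(Math.PI / 6);
var m = ctx.currentTransform;   // snapshot of the current transformation matrix
ctx.resetTransform();           // back to the identity matrix without save()/restore()

var original = new Path();
original.rect(0, 0, 40, 40);    // assumed path-building method
var copy = new Path(original);  // copy constructor: an independent Path with the same subpaths
</pre>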


= Examples =
Take the pages from the first section and show how they would be changed to use the proposals in the previous section.