
Canvas

From WHATWG Wiki
Revision as of 00:39, 28 February 2012 by Hixie (talk | contribs) (Regions)

Real world uses of Canvas

List real-world examples of canvas uses that are best done with canvas rather than with other features of HTML:

Needs of AI Squared Magnifier to assist in change proposals

Here are the needs from AISquared for ZoomText:

  • In general, having a bounding rectangle for the path will allow us to track the object into the magnified view.
  • When ZoomText tracks, it aligns the object in a particular way. ZoomText users can specify center, parent, or edge alignment separately for the text cursor, focus, and window objects. To provide the correct alignment for the user, we need a way to understand the role of a given path.
  • For the parent alignment option we need to be able to retrieve the location of the path's parent object. The goal of parent alignment is to keep as much of the parent object in view as possible while making sure the current object is displayed within the magnified view.
  • When ZoomText navigates or reads a web page, we scroll elements that are not currently displayed into view. We have access to the elements in fallback content, but not to their locations.
  • Another reason ZoomText needs to understand the role of the path is the screen enhancements that we provide. ZoomText provides differently shaped enhancements for the text cursor and for keyboard focus, and (currently) no enhancement for text selection.
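To make the first requirement concrete, here is a sketch (all names and values hypothetical) of the computation a bounding rectangle would enable: given an object's rectangle, a magnifier can compute the pan offset for the user's chosen alignment. "Center" alignment, for example:

```javascript
// Hypothetical sketch: compute where a magnifier should pan so that a
// tracked object (given by its bounding rectangle) is centered in the
// magnified viewport. This is the calculation the bounding-rectangle
// requirement above would make possible for canvas paths.
function centerAlignOffset(objRect, viewWidth, viewHeight) {
  const cx = objRect.x + objRect.width / 2;   // object center, x
  const cy = objRect.y + objRect.height / 2;  // object center, y
  // Pan offset that puts the object's center at the viewport's center.
  return { x: cx - viewWidth / 2, y: cy - viewHeight / 2 };
}

// A 40x20 object at (100, 50), viewed in a 200x100 magnified viewport:
const off = centerAlignOffset({ x: 100, y: 50, width: 40, height: 20 }, 200, 100);
// off.x === 20, off.y === 10
```

Parent and edge alignment are variations on the same calculation, which is why the requirements above ask for the locations of both the path and its parent.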

Limitations of real-world use cases

In this section, discuss specific examples from the list above and explore what those use cases fail to do (e.g. in terms of accessibility) which they should do.

https://zewt.org/curves/

  • Keyboard users can't tab to specific points and move them from the keyboard.
  • Should show a focus ring around the selected point when it is moved by keyboard.
  • Limited-vision users can't zoom in on the specific area that the user is manipulating.
  • It's a pity the mouse cursor has to be manually changed onmousemove.
  • Finger users can't drag their finger across the canvas to find the various interactive parts of the document, because the user agent doesn't know ahead of time which parts are interactive.
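The cursor complaint above reflects what authors must do today: hit-test every control point on each mousemove and swap the CSS cursor by hand. A sketch of that manual approach (the point data and names are hypothetical):

```javascript
// Manual hit testing, as authors must do it today: check each control
// point on every mousemove and change the cursor accordingly.
const HIT_RADIUS = 6; // px tolerance around each control point

// Control points the application is tracking (hypothetical data).
const points = [{ x: 40, y: 60 }, { x: 120, y: 80 }];

// Return the first point within HIT_RADIUS of (x, y), or null.
function hitTest(x, y) {
  for (const p of points) {
    const dx = x - p.x, dy = y - p.y;
    if (dx * dx + dy * dy <= HIT_RADIUS * HIT_RADIUS) return p;
  }
  return null;
}

// Browser wiring (commented out so the sketch runs standalone):
// canvas.addEventListener('mousemove', (e) => {
//   const rect = canvas.getBoundingClientRect();
//   const hit = hitTest(e.clientX - rect.left, e.clientY - rect.top);
//   canvas.style.cursor = hit ? 'pointer' : 'default';
// });
```

Because the user agent never learns where these points are, it cannot offer the same affordance to touch or AT users; that is the gap the Regions proposal targets.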
http://www.ludei.com/sumon
  • Can't navigate to and activate buttons using the keyboard.
  • Various UI controls are not identified as controls, or operable, for a range of users.
  • Can't zoom in to specific controls in the UI.
  • With scrollIntoView the author could bring the numbers into view within the canvas, but the assistive technology cannot assess how to place the magnification point around the element because it does not know the element's corresponding role. Many magnifier users are able to use a mouse, so they would not be relying on the keyboard for focus.
  • A magnifier vendor will want to allow the user to search the grid of numbers to find an appropriate match. However, the magnifier cannot magnify around the number because it does not know the element's location.
https://www.lucidchart.com/documents/edit?button#4766-6fcc-4f18275d-b546-71450a7ac5be?demo=on&branch=5a613773-81d2-48fb-b3a5-4fe780978ab4
  • With scrollPathIntoView the author could bring the numbers into view within the canvas, but the assistive technology cannot assess how to place the magnification point around the element because it does not know the element's corresponding role. Many magnifier users are able to use a mouse.
  • A magnifier vendor will want to allow the user to search the drawing surface of the flow chart to find an appropriate match. However, the magnifier cannot magnify around the number because it does not know the element's location.
  • NOTE: An advantage canvas has over SVG is that it shares the same DOM and keyboard navigation model as HTML. If one were to mix the best features of HTML (interactive widgets) with a drawing technology, including things like semantic relationships between elements, canvas is therefore the better choice; SVG is not necessarily the preferred technology for accessibility. Where we have a flow-chart region of drawing objects, the author could implement an HTML or HTML/ARIA-enabled list box in fallback content inside a navigation section; a screen reader could bring it up in the list of navigable landmarks, and it could be included in the keyboard navigation order.
  • Following from the previous bullet, the magnifier cannot assess the location of the element in order to position the magnifier within the context of a list box.
  • When images are turned off, canvas disappears. With fallback content you now have semantic content. Should the fallback content be rendered? Should the text labels for the fallback content be rendered? We now have interactive content, so we need a better solution.
  • Lucidchart created separate canvas instances to facilitate hit testing, because all hit testing is handled by the application at the canvas element. When this happens we have canvas elements spread all over the DOM, and associating them, accessibly, with a single canvas application is a mess.
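A minimal sketch of the fallback pattern described in the NOTE above (all element names, labels, and items are illustrative):

```html
<canvas id="flowchart" width="800" height="600">
  <!-- Fallback content: an ARIA list box mirroring the drawn flow chart,
       inside a navigation section so it appears as a navigable landmark. -->
  <nav aria-label="Flow chart objects">
    <ul role="listbox" aria-label="Flow chart objects">
      <li role="option" tabindex="0">Start</li>
      <li role="option" tabindex="-1">Review request</li>
      <li role="option" tabindex="-1">End</li>
    </ul>
  </nav>
</canvas>
```

A screen reader can navigate this list box, but, as the bullets above note, a magnifier still has no way to map each option back to a location on the drawing surface.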
http://www.phpied.com/canvas-pie/
  • If a pie chart were scrolled into view, it would currently be treated as a list box in fallback. Neither the location of the slices nor the bounds of the circle is discernible by a magnifier. Consequently, the magnifier does not have the knowledge to position itself properly while navigating the list box. The author can call scrollPathIntoView, but that does not provide enough information to determine whether this is something to magnify to, or how to magnify to it; the magnifier does not know what is being scrolled into view. In fact, it's exactly as inaccessible as an image of the pie chart would be.
http://www.kevs3d.co.uk/dev/asteroids/
  • A magnifier user cannot find the location of the asteroids to be able to adjust the zoom level as the asteroids approach the target. Typically, focus would be on the ship with the artillery.

Use cases that are already handled but are sometimes mistakenly thought to show limitations of the platform

http://www.libreoffice.org/download/
  • A magnifier cannot follow the caret or selection location while the user is editing the LibreOffice document. Online demo: http://www.youtube.com/watch?v=YdJu59bSBpI Actually, this is already possible: use contenteditable instead of canvas. Using canvas here is inappropriate.
  • A magnifier cannot locate the text, or embedded drawing objects while editing the LibreOffice document.
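The alternative mentioned above is, in markup terms, a one-attribute change; a browser-native editing surface exposes the caret, selection, and text content to assistive technology directly (illustrative snippet):

```html
<div contenteditable="true">
  Document text that the user edits in place. The browser tracks the
  caret and selection, so magnifiers and screen readers can follow them.
</div>
```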

Discussion

Rich: On "this is already possible: use contenteditable instead of canvas; using canvas here is inappropriate": we are told that contenteditable is insufficient, at least for Google Docs. It would be good to find out why they feel contenteditable is inadequate before we say something is already possible. Otherwise, it is foolish to make an investment in creating the alternative. Has anyone asked for the details of why Google Docs or LibreOffice did not use contenteditable?

Hixie: contenteditable is in roughly the same state as canvas right now in terms of being immature. Work is rapidly progressing to make contenteditable more usable. Even if we weren't working on it, though, it doesn't make sense to add a feature to canvas to address a limitation in contenteditable. That would be like saying "Oh no! My house is on fire! I will buy a fire extinguisher for my school instead of calling the fire brigade".

Proposals

In this section, propose alternatives to improve canvas to make it easier to fill in the limitations listed in the previous section.

Regions

This addresses:

  • complex hit region support without the author having to use isPointInPath, maintain a scene graph, or maintain a shadow DOM
  • make content in canvas discoverable to users of ATs with an AT-specific focus mechanism (e.g. VO)
  • make content in canvas discoverable to users of touch screen interfaces that read what's under the finger
  • the last two even in the case of hierarchical AT structures (e.g. menus with menu items, toolbars with buttons)
  • make content in canvas discoverable to AT users where the content is backed by HTML elements, without needing the user to move the system focus
addHitRegion({
  path: path, // Path to use as region description, defaults to the context's default path
  // only one of element and id may be present; either id is ignored if element is present, or an exception is raised if both are present
  element: element, // Element to send events to; limited at hit-test time to specific interactive elements
  id: id, // DOMString to use as the ID in events fired on the canvas for this region (MouseEvent gets new attribute for this purpose)
  // if element is present, label, ariaRole, and parentID must not be:
  label: label, // DOMString to use as a label when the user strokes a touch display or focuses the hit region with an AT
  ariaRole: ariaRole, // DOMString limited to specific roles, AT uses this to decide how to expose the region
  parentID: parentID, // unsigned long or DOMString, AT uses this to decide which region to use as this region's parent (defaults to canvas as parent)
  // all arguments optional, but no-op if none of element, label, id, or ariaRole are given (or, throw if all are missing)
  // ariaRole must be present if parentID is present
  // if parentID refers to a region that no longer exists, exception? no-op? ignore parentID?
});
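For illustration, here is how an author might invoke the proposed API for one slice of a pie chart. This is pseudocode against the proposal above, not a shipping API; the slice geometry, id, label, and role are made up:

```javascript
// Pseudocode: register one pie slice as a hit region with an AT-visible
// label and role, per the proposed dictionary above. Values are illustrative.
const ctx = canvas.getContext('2d');
ctx.beginPath();
ctx.moveTo(100, 100);                   // center of the pie
ctx.arc(100, 100, 80, 0, Math.PI / 3);  // a 60-degree slice
ctx.closePath();
ctx.fill();
ctx.addHitRegion({
  id: 'slice-q1',        // reported on MouseEvents fired for this region
  label: 'Q1: 35%',      // read when a touch stroke or AT focus lands here
  ariaRole: 'listitem',  // tells the AT how to expose the region
});
```

Under this proposal, a magnifier could use the region's path bounds to position the magnified view, and a MouseEvent on the slice would carry the region's id, addressing the pie-chart and Lucidchart limitations listed earlier.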

Path primitives

Examples

Take the pages from the first section and show how they would be changed to use the proposals in the previous section.