Microdata Problem Descriptions


Dirty

It's difficult to price-check music and to purchase it without having to go through a particular retailer's website or application.


A mechanism to annotate which proteins and genes one is referencing in a blog entry, so that colleagues can determine if they are talking about the same thing without having to read long series of numbers (or whatnot).


Paul wants to publish a large vocabulary in RDFS and/or OWL. Paul also wants to provide a clear, human-readable description of the same vocabulary that mixes the terms with descriptive text in HTML.


As a browser interface developer, I find it really annoying that I have to keep creating new screen scrapers for websites in order to build UIs that work with page data differently than the page developer intended. The data is all there on the page, but it takes a great amount of effort to extract it into a usable form. Even worse, the screen scraper breaks whenever a major update is made to the page, requiring me to solve the scraping problem yet again. Microformats were a step in the right direction, but I keep having to create a new parser and special rules for every new Microformat that is created. Every time I develop a new parser it takes precious time away from making the browser actually useful. Can we create a world where we don't have to worry about the data model anymore and instead focus on the UI?

Browser UIs for working with web page data suck. We can add an RSS feed and bookmark a page, but many other more complex tasks force us to tediously cut and paste text instead of working with the information on the page directly. It would increase productivity and reduce frustration for many people if we could use the data in a web page to generate a custom browser UI for calling a phone number using our cellphone, adding an event to our calendaring software, including a person in our address book, or buying a song from our favorite online music store.


How do you merge statements made in multiple languages about a single subject or topic into a particular knowledge base? If there are a number of thoughts about George W. Bush made by people who speak Spanish and a number of statements made by people who speak English, how do you coalesce those statements into one knowledge base? How do you differentiate those statements from statements made about George H.W. Bush? One approach would be to use a shared underlying vocabulary to describe each person and to identify each person using a universally unique string. This would allow the natural language to vary while ensuring that the semantics of what is being expressed stays the same.
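
A minimal sketch of that approach, using a hypothetical itemscope/itemprop/itemid syntax (illustrative, not a settled design); the subject identifier borrows DBpedia's URI style, and the property URL is made up:

  <!-- English page -->
  <div itemscope itemid="http://dbpedia.org/resource/George_W._Bush">
    <span itemprop="http://example.org/vocab#office">43rd President of the United States</span>
  </div>

  <!-- Spanish page: same subject identifier and property name, so the two
       statements can be merged into one knowledge base -->
  <div itemscope itemid="http://dbpedia.org/resource/George_W._Bush">
    <span itemprop="http://example.org/vocab#office">43.er presidente de los Estados Unidos</span>
  </div>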


When Google answers a search like "Who is Napoleon" you get an answer, but where is the disambiguation? How does it determine the context for the search? There are many dimensions to "Napoleon", and Google statistically guesses one based on the link density of its subjectively assembled index and its PageRank algorithm. How do you, as a writer or reader, efficiently navigate the many aspects/facets associated with the pattern "Napoleon"? What if the answer you are looking for is in the statistically insignificant links and not the major ones?


Sam has posted a video tutorial on how to grow tomatoes on his video blog. Jane uses the tutorial and would like to leave feedback for others who view the video regarding the parts of the video she found most helpful. Since Sam has comments disabled on his blog, Jane cannot comment on particular sections of the video other than by linking to it from her blog and entering the information there. This is not useful to most people viewing the video, as they would have to go to every blogger's site to read each comment. Luckily, Jane has a video player that is capable of finding comments distributed around blogs on the net. The video player shows the comments as the video is being watched (displayed as subtitles). How can Jane specify her comments on parts of a video in a distributed manner?
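
One hedged sketch of how Jane's blog could carry such a comment, using a hypothetical itemscope/itemprop syntax (not a settled design) and a media-fragment-style time range in the link; the vocabulary URL and addresses are made up:

  <!-- on Jane's blog: a comment scoped to a time range of Sam's video -->
  <p itemscope itemtype="http://example.org/vocab#video-comment">
    Re: <a itemprop="target" href="http://sam.example.net/tomatoes.ogv#t=90,120">1:30-2:00</a>:
    <span itemprop="text">the pruning tip in this section saved my plants.</span>
  </p>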


Arbitrarily extensible by authors


Mapping to RDF


Ability to create groups of name-value pairs (i.e. triples with a common subject) without requiring that the name-value pairs be given on elements with a common parent
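
As a sketch of one possible shape for this, assuming hypothetical itemscope/itemprop/itemref attributes (illustrative, not a settled syntax) and a made-up vocabulary URL, the group's name-value pairs are pulled in by id from unrelated parts of the page:

  <h1 id="band-name" itemprop="name">The Example Band</h1>
  <!-- ...elsewhere on the page... -->
  <div itemscope itemtype="http://example.org/vocab#band"
       itemref="band-name band-genre"></div>
  <!-- ...elsewhere again... -->
  <p id="band-genre" itemprop="genre">Indie rock</p>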


Ability to have name-value pairs with values that are arbitrary strings, dates and times, URIs, and further groups of name-value pairs
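
A sketch of the same hypothetical syntax covering those value types: a plain string, a date (via a machine-readable attribute), a URI (via href), and a nested group of name-value pairs; all names and URLs are made up:

  <div itemscope itemtype="http://example.org/vocab#concert">
    <span itemprop="name">Spring Tour Opening Night</span> on
    <time itemprop="date" datetime="2009-05-01">May 1st</time>
    (<a itemprop="tickets" href="http://example.com/tickets">tickets</a>),
    at <span itemprop="venue" itemscope itemtype="http://example.org/vocab#venue">
      <span itemprop="name">The Example Hall</span>
    </span>
  </div>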


Encoding of machine-readable equivalents for times, lengths, durations, telephone numbers, languages, etc.
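
HTML5 drafts already include one such hook for dates and times: the time element carries the machine-readable value in an attribute while the content stays human-readable. Nothing comparable exists for lengths, durations, or telephone numbers, which is the gap this item describes.

  <p>Doors open at
  <time datetime="2009-04-23T19:30+01:00">half past seven on Thursday evening</time>.</p>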


API


Discourage data duplication (e.g. discourage people from saying <title>...</title> <meta name="dc.title" content="..."> <h1 property="http://...dc...title">...</h1>).
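
The preferred shape is a single element that serves both audiences, as in this illustrative sketch (the property attribute and the Dublin Core URL are just one possible spelling):

  <h1 property="http://purl.org/dc/terms/title">My Page Title</h1>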


The Microformats community has been struggling with the abbr design pattern when attempting to specify certain machine-readable object attributes that differ from the human-readable content. For example, when specifying times, dates, weights, countries and other data in French, Japanese, or Urdu, it is helpful to use the ISO format to express the data to a machine and associate it with an object property, but to specify the human-readable value in the speaker's natural language.
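
Concretely, the hCalendar abbr design pattern puts the ISO value in the title attribute while the element content stays in the speaker's natural language, for example for an event start date:

  <abbr class="dtstart" title="2009-04-23T19:00:00Z">le 23 avril à 19 heures</abbr>

One known cost of this pattern is that some screen readers expand the title attribute aloud, reading the ISO string to the user.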


  • Pages should be able to expose nested lists of name-value pairs on a page-by-page basis.
  • It should be possible to define globally-unique names, but the syntax should be optimised for a set of predefined vocabularies.
  • Adding this data to a page should be easy.
  • The syntax for adding this data should encourage the data to remain accurate when the page is changed.
  • The syntax should be resilient to intentional copy-and-paste authoring: people copying data into the page from a page that already has data should not have to know about any declarations far from the data.
  • The syntax should be resilient to unintentional copy-and-paste authoring: people copying markup from the page who do not know about these features should not inadvertently mark up their page with inapplicable data.
  • Generic syntax and parsing mechanism for Microformats.
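
A minimal sketch of what a generic parsing mechanism could look like, assuming the hypothetical itemscope/itemprop syntax used in the sketches above; itemref, typed values, and repeated properties are omitted for brevity:

  <script>
  // Collect every top-level item on the page into a plain object tree.
  function extractItem(scope) {
    var item = {};
    for (var c = scope.firstElementChild; c; c = c.nextElementSibling)
      collectProperties(c, item);
    return item;
  }
  function collectProperties(el, item) {
    if (el.hasAttribute("itemscope") && !el.hasAttribute("itemprop"))
      return; // a separate top-level item; handled by the loop below
    if (el.hasAttribute("itemprop")) {
      var name = el.getAttribute("itemprop");
      if (el.hasAttribute("itemscope")) { // the value is a nested group
        item[name] = extractItem(el);
        return;                           // its children belong to the nested item
      }
      item[name] = el.textContent;        // the value is a plain string
    }
    for (var c = el.firstElementChild; c; c = c.nextElementSibling)
      collectProperties(c, item);
  }
  var items = [];
  var scopes = document.querySelectorAll("[itemscope]:not([itemprop])");
  for (var i = 0; i < scopes.length; i++) items.push(extractItem(scopes[i]));
  </script>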

Cleaned up

USE CASE: Copy-and-paste should work between Web apps and native apps and between Web apps and other Web apps.

SCENARIOS:

  • Fred copies an e-mail from Apple Mail into GMail, and the e-mail survives intact, including headers, attachments, and multipart/related parts.
  • Fred copies an e-mail from GMail into Hotmail, and the e-mail survives intact, including headers, attachments, and multipart/related parts.

USE CASE: Web browsers should be able to help users find information related to the items discussed by the page that they are looking at.

SCENARIOS:

  • Finding more information about a movie when looking at a page about the movie, when the page contains detailed data about the movie.
    • For example, where the movie is playing locally.
    • For example, what your friends thought of it.
  • Exposing music samples on a page so that a user can listen to all the samples.
  • Students and teachers should be able to discover each other -- both within an institution and across institutions -- via their blogging.

REQUIREMENTS:

  • Should be discoverable, because otherwise users will not use it, and thus users won't be helped.
  • Should be consistently available, because if it only works on some pages, users will not use it (see, for instance, the rel=next story).
  • Should be bootstrappable (rel=next failed because UAs didn't expose it because authors didn't use it because UAs didn't expose it).

USE CASE: Exposing calendar events so that users can add those events to their calendaring systems.

SCENARIOS:

  • A user visits the Avenue Q site and wants to make a note of when tickets go on sale for the tour's stop in his home town. The site says "October 3rd", so the user clicks this and selects "add to calendar", which causes an entry to be added to his calendar.
  • A student is making a timeline of important events in Apple's history. As he reads Wikipedia entries on the topic, he clicks on dates and selects "add to timeline", which causes an entry to be added to his timeline.
  • TV guide listings - browsers should be able to expose to the user's tools (e.g. calendar, DVR, TV tuner) the times that a TV show is on.
  • Paul sometimes gives talks on various topics, and announces them on his blog. He would like to mark up these announcements with proper scheduling information, so that his readers' software can automatically obtain the scheduling information and add it to their calendar. Importantly, some of the rendered data might be more informal than the machine-readable data required to produce a calendar event. Also of importance: Paul may want to annotate his event with a combination of existing vocabularies and a new vocabulary of his own design. (why?)
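
A sketch of what Paul's announcement might look like in the hypothetical syntax from the sketches above (the vocabulary URL and details are made up); note that the machine-readable datetime can be more precise than the informal prose:

  <div itemscope itemtype="http://example.org/vocab#talk">
    <h2 itemprop="title">Upcoming talk: HTML and structured data</h2>
    <p>I'll be speaking
      <time itemprop="start" datetime="2009-06-12T18:30">on a Friday evening in June</time>
      at <span itemprop="venue">the Downtown Library</span>.</p>
  </div>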

REQUIREMENTS:

  • Should be discoverable.
  • Should be compatible with existing calendar systems.
  • Should be unlikely to get out of sync with prose on the page.
  • Shouldn't require the consumer to write XSLT or server-side code to read the calendar information.
  • Machine-readable event data shouldn't be on a separate page from human-readable dates.

USE CASE: Exposing contact details so that users can add people to their address books or social networking sites.

SCENARIOS:

  • Instead of giving a colleague a business card, someone gives their colleague a URL, and that colleague's user agent extracts basic profile information such as the person's name along with references to other people that person knows and adds the information into an address book.
  • A scholar and teacher wants other scholars (and potentially students) to be able to easily extract information about who he is to add it to their contact databases.
  • Fred copies the name of one of his Facebook friends and pastes it into his OS address book; the contact information is imported automatically.
  • Fred copies the name of one of his Facebook friends and pastes it into his Webmail's address book feature; the contact information is imported automatically.
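
The existing hCard Microformat already covers the markup half of these scenarios; the missing piece is user agents carrying the structure through copy-and-paste. A minimal card (all details made up):

  <div class="vcard">
    <a class="fn url" href="http://example.com/~alice/">Alice Jones</a>,
    <span class="tel">+1-555-0123</span>
  </div>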

REQUIREMENTS:

  • A user joining a new social network should be able to identify himself to the new social network in a way that enables the new social network to bootstrap his account from existing published data (e.g. from another social network) rather than having to re-enter it, without the new site having to coordinate with (or know about) the pre-existing site, without the user having to give either site credentials for the other, and without the new site finding out about relationships that the user has intentionally kept secret. (http://w2spconf.com/2008/papers/s3p2.pdf)
  • Data should not need to be duplicated between machine-readable and human-readable forms (i.e. the human-readable form should be machine-readable).
  • Shouldn't require the consumer to write XSLT or server-side code to read the contact information.
  • Machine-readable contact information shouldn't be on a separate page from human-readable contact information.

USE CASE: Getting data out of poorly written Web pages, so that the user can find more information about the page's contents.

SCENARIOS:

  • Alfred merges data from various sources in a static manner, generating a new set of data. Bob later uses this static data in conjunction with other data sets to generate yet another set of static data. Julie then visits Bob's page and wants to know where the various sources of data that Bob used came from, and when, so that she can evaluate their quality. (In this instance, Alfred and Bob are assumed to be uncooperative, since creating a static mashup would be an example of a poorly-written page.)
  • TV guide listings - If the TV guide provider does not render a link to IMDB, the browser should recognise TV shows and give implicit links. (In this instance, it is assumed that the TV guide provider is uncooperative, since it isn't providing the links the user wants.)
  • Students and teachers should be able to discover each other -- both within an institution and across institutions -- via their blogging. (In this instance, it is assumed that the teachers and students aren't cooperative, since they would otherwise be able to find each other by listing their blogs in a common directory.)

REQUIREMENTS:

  • Does not need cooperation of the author (if the page author was cooperative, the page would be well-written).
    • → Can only rely on the content on the page, not markup, by definition.
  • Shouldn't require the consumer to write XSLT or server-side code to derive this information from the page.

USE CASE: Search engines and other site categorisation and aggregation engines should be able to determine the contents of pages with more accuracy than today.

SCENARIOS:

  • Students and teachers should be able to discover each other -- both within an institution and across institutions -- via their blogging.
  • A blogger wishes to categorise his posts such that he can see them in the context of other posts on the same topic, including posts by unrelated authors (i.e. not via a pre-agreed tag or identifier, not via a single dedicated and preconfigured aggregator).

REQUIREMENTS:

  • Should not disadvantage pages that are more useful to the user but that have not made any effort to help the search engine.
    • → Can't rely on special markup or annotations.
  • Should not be more susceptible to spamming than today's markup.
    • → Can't rely on hidden metadata.

USE CASE: Allow users to maintain bibliographies or otherwise keep track of sources of quotes or references.

SCENARIOS:

  • Frank copies a sentence from Wikipedia and pastes it into a word processor: it would be great if the word processor offered to automatically create a bibliographic entry.
  • Patrick keeps a list of his scientific publications on his web site. He would like to provide structure within this publications page so that Frank can automatically extract the information and use it to cite Patrick's papers without having to transcribe the bibliographic details (see the sketch after this list).
  • A scholar and teacher wants other scholars (and potentially students) to be able to easily extract information about what he has published to add it to their bibliographic applications.
  • A scholar and teacher wants to publish scholarly documents or content that includes extensive citations that readers can then automatically extract so that they can find them in their local university library. These citations may be for a wide range of different sources: an interview posted on YouTube, a legal opinion posted on the Supreme Court web site, a press release from the White House.
  • A blog, say htmlfive.net, copies content wholesale from another, say blog.whatwg.org (as permitted and encouraged by the license). The author of the original content would like the reader of the reproduced content to know the provenance of the content. The reader would like to find the original blog post so he can leave comments for the original author.
  • Chaals could improve the Opera intranet if he had a mechanism for identifying the original source of various parts of a page. (why?)
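
A sketch of one entry on Patrick's publications page, reusing the hypothetical itemscope/itemprop syntax from earlier sketches, with a made-up bibliographic vocabulary and made-up details:

  <li itemscope itemtype="http://example.org/vocab#citation">
    <span itemprop="author">P. Smith</span>,
    <cite itemprop="title">Structured Data on the Web</cite>,
    <span itemprop="journal">Journal of Web Engineering</span>,
    <time itemprop="published" datetime="2008-11-15">November 2008</time>.
  </li>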


REQUIREMENTS:

  • Machine-readable bibliographic information shouldn't be on a separate page from human-readable bibliographic information.

USE CASE: Site owners want a way to provide enhanced search results to the engines, so that an entry in the search results page is more than just a bare link and a snippet of text, and instead offers users additional resources straight on the search page, without their having to click through to the page and discover those resources themselves.

SCENARIOS:

  • For example, in response to a query for a restaurant, a search engine might want to have the result from yelp.com provide additional information, e.g. info on price, rating, and phone number, along with links to reviews or photos of the restaurant.
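
A sketch of the markup the restaurant's page could carry for this (hypothetical vocabulary, made-up details); the search engine would read the same elements the user sees:

  <div itemscope itemtype="http://example.org/vocab#restaurant">
    <h1 itemprop="name">Luigi's Pizzeria</h1>
    Rating: <span itemprop="rating">4.5</span>/5,
    price: <span itemprop="price">$$</span>,
    phone: <span itemprop="tel">555-0100</span>,
    <a itemprop="reviews" href="/luigis/reviews">read reviews</a>
  </div>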

REQUIREMENTS:

  • Information for the search engine should be on the same page as information that would be shown to the user if the user visited the page.

USE CASE: Annotate structured data that HTML has no semantics for, and which nobody has annotated before and may never annotate again, for private use or for use in a small self-contained community.

SCENARIOS:

  • A group of users want to mark up their iguana collections so that they can write a script that collates all their collections and presents them in a uniform fashion (see the sketch after this list).
  • A scholar and teacher wants other scholars (and potentially students) to be able to easily extract information about what he teaches to add it to their custom applications.
  • The list of specifications produced by W3C, for example, and various lists of translations, are produced by scraping source pages and outputting the result. This is brittle. It would be easier if the data was unambiguously obtainable from the source pages. This is a custom set of properties, specific to this community.
  • Chaals wants to make a list of the people who have translated W3C specifications or other documents, and then use this to search for people who are familiar with a given technology at least at some level, and happen to speak one or more languages of interest.
  • Chaals wants to have a reputation manager that can determine which of the many emails sent to the WHATWG list might be "more than usually valuable", and would like to seed this reputation manager from information gathered from the same source as the scraper that generates the W3C's TR/ page.
  • A user wants to write a script that finds the price of a book from an Amazon page.
  • Todd sells an HTML-based content management system, where all documents are processed and edited as HTML, sent from one editor to another, and eventually published and indexed. He would like to build up the editorial metadata used by the system within the HTML documents themselves, so that it is easier to manage and less likely to be lost.
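
A sketch of one collector's markup for the iguana scenario above (the community-specific vocabulary URL and the details are made up); a collation script could then reuse the generic extraction pass sketched earlier:

  <li itemscope itemtype="http://iguana-club.example.org/vocab#iguana">
    <span itemprop="name">Ziggy</span>,
    a <span itemprop="species">green iguana</span>,
    hatched <time itemprop="hatched" datetime="2007-08-01">August 2007</time>
  </li>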

REQUIREMENTS:

  • Vocabularies can be developed in a manner that won't clash with future more widely-used vocabularies, so that those future vocabularies can later be used in a page making use of private vocabularies without making the earlier annotations ambiguous.
  • Using the data should not involve learning a plethora of new APIs, formats, or vocabularies (today it is possible, e.g., to get the price of an Amazon product, but it requires learning a new API; similarly it's possible to get information from sites consistently using 'class' values in a documented way, but doing so requires learning a new vocabulary).
  • Shouldn't require the consumer to write XSLT or server-side code to process the annotated data.
  • Machine-readable annotations shouldn't be on a separate page from human-readable annotations.

USE CASE: Kill DBpedia.

SCENARIOS:

  • A user wants to have information in RDF form. The user visits Wikipedia, and his user agent can obtain the information without relying on DBpedia's interpretation of the page.

REQUIREMENTS:

  • All the data exposed by DBpedia should be derivable from Wikipedia without using DBpedia.

USE CASE: Replace Atom.

SCENARIOS:

  • Paul maintains a blog and wishes to write his blog in such a way that tools can pick up his blog post tags, authors, titles, and his blogroll directly from his blog, so that he does not need to maintain a parallel version of his data in a "structured format." In other words, his HTML blog should be usable as its own structured feed.
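
The existing hAtom Microformat is one sketch of this: the blog post's own markup doubles as the feed entry (details made up). Note that the published date below relies on the abbr design pattern whose drawbacks are described above.

  <div class="hentry">
    <h2 class="entry-title"><a rel="bookmark" href="/2009/04/my-post">My Post Title</a></h2>
    by <span class="author vcard"><a class="fn url" href="/about">Paul</a></span> on
    <abbr class="published" title="2009-04-20">April 20th</abbr>,
    tagged <a rel="tag" href="/tags/html">html</a>
    <div class="entry-content">...</div>
  </div>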

USE CASE: Allow users to share data between sites (e.g. between an online store and a price comparison site).

SCENARIOS:

  • Lucy is looking for a new apartment and some items with which to furnish it. She browses various web pages, including apartment listings, furniture stores, kitchen appliances, etc. Every time she finds an item she likes, she points to it and transfers its details to her apartment-hunting page, where her picks can be organized, sorted, and categorized.
  • Lucy uses a website called TheBigMove.com to organize all aspects of her move, including items that she is tracking for the move. She goes to her "To Do" list and adds some of the items she collected during her visits to various Web sites, so that TheBigMove.com can handle the purchasing and delivery for her.

REQUIREMENTS:

  • Should be discoverable, because otherwise users will not use it, and thus users won't be helped.
  • Should be consistently available, because if it only works on some pages, users will not use it (see, for instance, the rel=next story).
  • Should be bootstrappable (rel=next failed because UAs didn't expose it because authors didn't use it because UAs didn't expose it).

USE CASE: Help people searching for content to find content covered by licenses that suit their needs.

SCENARIOS:

  • If a user is looking for pie recipes to reproduce on his blog, he might want to exclude from his results any recipes that are not available under a license allowing non-commercial reproduction.
  • Lucy wants to publish her papers online. She includes an abstract of each one in a page, but because they are under different copyright rules, she needs to clarify what the rules are. A harvester such as the Open Access project can collect and index some of them with no problem, but may not be allowed to index others. Meanwhile, a human finds it more useful to see the abstracts on a page than to have to guess from a bunch of titles whether to look at each abstract.
  • There are mapping organisations, data producers, and people who take photos, and each may attach different policies to their work. Keeping that policy information available helps people making further mashups avoid violating a policy. For example, if GreatMaps.com has a public-domain policy on its maps, CoolFotos.org has a policy that you can use data other than images for non-commercial purposes, and Johan Ichikawa has a photo there of my brother's café, which he has licensed as "must pay money", then it would be reasonable for me to copy the map and put it in a brochure for the café, but not to copy the data and photo from CoolFotos. On the other hand, if I am producing a non-commercial guide to cafés in Melbourne, I can add the map and the location of the café photo, but not the photo itself.
  • At the University of Mary Washington, many faculty encourage students to blog about their studies to encourage more discussion, using an instance of WordPress MultiUser. A student with a blog might be writing posts relevant to more than one class. Professors would then like to aggregate the relevant posts into one blog.
  • Tara runs a video sharing web site for people who want licensing information to be included with their videos. When Paul wants to blog about a video, he can paste a fragment of HTML provided by Tara directly into his blog. The video is then available inline in his blog, along with any licensing information about the video.
  • Fred's browser can tell him what license a particular video on a site he is reading has been released under, and advise him on what the associated permissions and restrictions are (can he redistribute this work for commercial purposes, can he distribute a modified version of this work, how should he assign credit to the original author, what jurisdiction the license assumes, whether the license allows the work to be embedded into a work that uses content under various other licenses, etc).
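
A sketch of per-work licensing on a single page, reusing the hypothetical itemscope/itemprop syntax from earlier sketches so that each license statement is scoped to one work rather than to the whole page; the vocabulary URL and file names are made up, while the license URLs are real Creative Commons addresses:

  <div itemscope itemtype="http://example.org/vocab#photo">
    <img itemprop="work" src="cafe.jpg" alt="The café">
    License: <a itemprop="license"
      href="http://creativecommons.org/licenses/by-nc/3.0/">CC BY-NC 3.0</a>
  </div>
  <div itemscope itemtype="http://example.org/vocab#map">
    <img itemprop="work" src="map.png" alt="Map of Melbourne">
    License: <a itemprop="license"
      href="http://creativecommons.org/licenses/by/3.0/">CC BY 3.0</a>
  </div>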

REQUIREMENTS:

  • Content on a page might be covered by a different license than other content on the same page.
    • The current rel=license Microformat cannot be re-used within these drafts, because virtually all existing rel=license implementations will just assume that the license applies to the whole page rather than just part of it.
  • License proliferation should be discouraged.
  • License information should be able to survive from one site to another as the data is transferred.
  • It should be easy for content creators, publishers, and redistributors to express copyright licensing terms.
  • It should be more convenient for the users (and tools) to find and evaluate copyright statements and licenses than it is today.
  • Shouldn't require the consumer to write XSLT or server-side code to process the license information.
  • Machine-readable licensing information shouldn't be on a separate page from human-readable licensing information.

USE CASE: Allow authors to annotate their documents to highlight the key parts, e.g. as when a student highlights parts of a printed page, but in a hypertext-aware fashion.

SCENARIOS:

  • Fred writes a page about Napoleon. He can highlight the word Napoleon in a way that indicates to the reader that that is a person. Fred can also annotate the page to indicate that Napoleon and France are related concepts.
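
A sketch of such highlighting in the hypothetical syntax from earlier sketches; the global identifiers (DBpedia-style, illustrative) are what would let tools see that the two highlighted concepts are related:

  <p>In 1799, <span itemscope itemtype="http://example.org/vocab#person"
        itemid="http://dbpedia.org/resource/Napoleon">Napoleon</span>
     seized power in <span itemscope itemtype="http://example.org/vocab#country"
        itemid="http://dbpedia.org/resource/France">France</span>.</p>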

USE CASE: Allow sites to offer digital media (music, TV shows, etc) for sale without the site having to select a merchant (i.e. allowing the user to use his favourite merchant).

SCENARIOS:

  • Joe wants to sell his music, but he doesn't want to sell it through a specific retailer; he wants to allow the user to pick a retailer. So he forgoes the chance of an affiliate fee, negotiates to have his music available in all the retail stores that his users might prefer, and then puts a generic link on his page that identifies the product but doesn't identify a retailer. Kyle, a fan, visits his page and clicks the link; Amazon charges his credit card and puts the music into his Amazon album downloader. Leo instead clicks the link and is automatically charged by Apple, and later finds the music in his iTunes library.

REQUIREMENTS:

  • Should not be easily prone to clickjacking (sites shouldn't be able to charge the user without the user's consent).