Microdata Problem Descriptions

From WHATWG Wiki
Revision as of 07:18, 23 April 2009 by Hixie


1. Service and product providers can't include the meaning of the things they publish in HTML. For example, how do you find out where the price of a book is located in, say, a page from Amazon? Today, people who want to use this data are forced to perform *screen scraping*; what is needed is publisher-push semantics rather than consumer-pull.

2. People doing data mash-ups need to learn a plethora of APIs/formats, while all they would likely want is *one data model* plus a bunch of vocabularies covering the domain.

When writing HTML (by hand or indirectly via a program) I want to isolate and describe what the content is about in terms of people, places, and other real-world things. I want to isolate "Napoleon" from a paragraph or heading, and state that the aforementioned entity is of type "Person" and is associated with another entity, "France". The use case above is like taking a highlighter and making notes while reading about "Napoleon". This is what we all do when studying, but as kids we never actually shared that part of our endeavors, since it was typically the route to competitive advantage, i.e., being top student in the class.

Simple programs should thus be able to answer questions like:

  • Under what license has a copyright holder released her work, and what are the associated permissions and restrictions?
    • Can I redistribute this work for commercial purposes?
    • Can I distribute a modified version of this work?
    • How should I assign credit to the original author?

Paul maintains a blog and wishes to "mark up" his existing page with structure so that tools can pick up his blog post tags, authors, titles, and his blogroll, and so that he does not need to maintain a parallel version of his data in "structured format." His HTML blog should be usable as its own structured feed.

Paul sometimes gives talks on various topics, and announces them on his blog. He would like to mark up these announcements with proper scheduling information, so that his readers' software can automatically obtain the scheduling information and add it to their calendar. Importantly, some of the rendered data might be more informal than the machine-readable data required to produce a calendar event. Also of importance: Paul may want to annotate his event with a combination of existing vocabularies and a new vocabulary of his own design.

Tod sells an HTML-based content management system, where all documents are processed and edited as HTML, sent from one editor to another, and eventually published and indexed. He would like to build up the editorial metadata within the HTML document itself, so that it is easier to manage and less likely to be lost.

Tara runs a video sharing web site. When Paul wants to blog about a video, he can paste a fragment of HTML provided by Tara directly into his blog. The video is then available inline, in his blog, along with any licensing information (Creative Commons?) about the video.

Lucy is looking for a new apartment and some items with which to furnish it. She browses various web pages, including apartment listings, furniture stores, kitchen appliances, etc. Every time she finds an item she likes, she can point to it, extract the locally-relevant structured data, and transfer it to her apartment-hunting page, where it can be organized, sorted, and categorized.

Extracting relevant information from web pages is still a very manual process. Unless a particular site allows you to add items to a shopping cart or a "favorites list", it is very difficult to store relevant details for later use. Using a web browser to remember items from multiple sites is even more daunting, usually resulting in dropping web tools in favor of desktop tools such as a text editor. There is no reason why copying concepts to a web-based clipboard should be so difficult; the idea has failed to gain traction until now because there has not been an easy-to-implement data model and mark-up mechanism allowing people to right-click and store items in a semantic clipboard.

Lucy could then use a website called TheBigMove.com to organize all aspects of her move, including items that she is tracking for the move. She would go to her "To Do" list and add the semantic objects she had cut from other places. To ensure that sites don't try to steal any of her web clipboard objects, she would be required to click a browser-activated button labeled "Upload Web Objects", which would ask her which web objects she would like to share with the web page.

A mechanism to mark up music, video and other digital content in a blog or website. The Bitmunk Firefox plug-in would then detect the purchase information required from the embedded meta-data in the same web page that the browser is viewing. For example, while browsing the Scissorkick website, it would be nice to be able to purchase the music directly from one's favorite online music store without leaving the page. Marking up the music information in a way that works across websites would hopefully help drive a universal set of tools to enable this use case.

It's difficult to price-check music and purchase it without having to go through a special website or application.

A mechanism to annotate which proteins and genes one is referencing in a blog entry, so that colleagues can determine if they are talking about the same thing without having to read long series of numbers (or whatnot).

Paul wants to publish a large vocabulary in RDFS and/or OWL. Paul also wants to provide a clear, human readable description of the same vocabulary, that mixes the terms with descriptive text in HTML.

As a browser interface developer, I find it really annoying that I have to keep creating new screen scrapers for websites in order to build UIs that work with page data differently than the page developer intended. The data is all there on the page, but it takes a great amount of effort to extract it into a usable form. Even worse, the screen scraper breaks whenever a major update is made to the page, requiring me to solve the scraping problem yet again. Microformats were a step in the right direction, but I keep having to create a new parser and special rules for every new Microformat that is created. Every time I develop a new parser it takes precious time away from making the browser actually useful. Can we create a world where we don't have to worry about the data model anymore and instead focus on the UI? Browser UIs for working with web page data suck. We can add an RSS feed and bookmark a page, but many other more complex tasks force us to tediously cut and paste text instead of working with the information on the page directly. It would increase productivity and reduce frustration for many people if we could use the data in a web page to generate a custom browser UI for calling a phone number using our cellphone, adding an event to our calendaring software, including a person in our address book, or buying a song from our favorite online music store.

How do you merge statements made in multiple languages about a single subject or topic into a particular knowledge base? If there are a number of thoughts about George W. Bush made by people that speak Spanish and there are a number of statements made by people that speak English, how do you coalesce those statements into one knowledge base? How do you differentiate those statements from statements made about George H.W. Bush? One approach would be to use a similar underlying vocabulary to describe each person and specify the person using a universally unique string. This would allow the underlying language to change, but ensure that the semantics of what is being expressed stays the same.
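The identifier-based approach described above can be sketched with a few triples. All URIs and property names below are hypothetical, invented purely for illustration:

```python
# Sketch: statements from different languages merge cleanly when they share
# a globally unique subject identifier. URIs and properties are invented.
GWB = "http://example.org/id/GeorgeWBush"    # the 43rd president
GHWB = "http://example.org/id/GeorgeHWBush"  # the 41st president

# Statements gathered from English-language pages
english = {(GWB, "spouse", "Laura Bush"), (GHWB, "spouse", "Barbara Bush")}
# Statements gathered from Spanish-language pages: the prose was Spanish,
# but the subject identifier and vocabulary are the same
spanish = {(GWB, "birthplace", "New Haven")}

kb = english | spanish  # set union: same-subject statements coalesce
assert len([t for t in kb if t[0] == GWB]) == 2   # two facts about one subject
assert len([t for t in kb if t[0] == GHWB]) == 1  # kept distinct from his father
```

Because the two presidents carry distinct identifiers, their statements never conflate, no matter which natural language the surrounding prose used.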

When Google answers a search like "Who is Napoleon?" you get an answer, but where is the disambiguation? How does it determine the context for the search? There are many dimensions to "Napoleon", and Google statistically guessed one based on the link density of its subjectively assembled index and its PageRank algorithm. How do you, as a writer or reader, efficiently navigate the many aspects/facets associated with the pattern "Napoleon"? What if the answer you are looking for is in the statistically insignificant links and not the major ones?

Sam has posted a video tutorial on how to grow tomatoes on his video blog. Jane uses the tutorial and would like to leave feedback for others who view the video about the parts she found most helpful. Since Sam has comments disabled on his blog, Jane cannot comment on particular sections of the video other than by linking to it from her blog and entering the information there. This is not useful to most people viewing the video, as they would have to go to every blogger's site to read each comment. Luckily, Jane has a video player that is capable of finding comments distributed around blogs on the net. The video player shows the comments as the video is being watched (as subtitles). How can Jane specify her comments on parts of a video in a distributed manner?

Arbitrarily extensible by authors

Mapping to RDF

Ability to create groups of name-value pairs (i.e. triples with a common subject) without requiring that the name-value pairs be given on elements with a common parent

Ability to have name-value pairs with values that are arbitrary strings, dates and times, URIs, and further groups of name-value pairs

Encoding of machine-readable equivalents for times, lengths, durations, telephone numbers, languages, etc.


Discourage data duplication (e.g. discourage people from saying <title>...</title> <meta name="dc.title" content="..."> <h1 property="http://...dc...title">...</h1>)

The Microformats community has been struggling with the abbr design pattern when attempting to specify certain machine-readable object attributes that differ from the human-readable content. For example, when specifying times, dates, weights, countries and other data in French, Japanese, or Urdu, it is helpful to use the ISO format to express the data to a machine and associate it with an object property, but to specify the human-readable value in the speaker's natural language.
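For concreteness, the abbr design pattern puts the machine-readable value in the title attribute, which assistive technology may announce as an expansion of the visible text. A sketch of the pattern, and of the kind of alternative being asked for (the element and attribute in the second snippet are illustrative, not a concrete proposal):

```html
<!-- The Microformats abbr design pattern: the ISO date hides in title,
     which a screen reader may speak aloud as if it were an abbreviation. -->
<abbr class="dtstart" title="2009-04-25T20:00:00+09:00">4月25日 午後8時</abbr>

<!-- The kind of alternative being asked for: a machine-readable value
     carried separately from the human-readable text, without abusing
     the abbreviation mechanism. (Names here are illustrative only.) -->
<time class="dtstart" datetime="2009-04-25T20:00:00+09:00">4月25日 午後8時</time>
```

The human-readable text can then be in any natural language while the machine-readable value stays in a single, well-defined format.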

  • Pages should be able to expose nested lists of name-value pairs on a page-by-page basis.
  • It should be possible to define globally-unique names, but the syntax should be optimised for a set of predefined vocabularies.
  • Adding this data to a page should be easy.
  • The syntax for adding this data should encourage the data to remain accurate when the page is changed.
  • The syntax should be resilient to intentional copy-and-paste authoring: people copying data into the page from a page that already has data should not have to know about any declarations far from the data.
  • The syntax should be resilient to unintentional copy-and-paste authoring: people copying markup from the page who do not know about these features should not inadvertently mark up their page with inapplicable data.
  • Generic syntax and parsing mechanism for Microformats.
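The last requirement (a generic syntax and parsing mechanism) can be illustrated with a short sketch: one parser that extracts arbitrary name-value pairs without knowing the vocabulary in advance. The data-prop attribute and the sample page below are hypothetical, standing in for whatever syntax is eventually chosen:

```python
# Sketch of a *generic* extraction mechanism: a hypothetical attribute
# ("data-prop") names a property, and the element's text content is its
# value. One parser serves every vocabulary.
from html.parser import HTMLParser

class NameValueExtractor(HTMLParser):
    """Collects (name, value) pairs from elements carrying data-prop."""
    def __init__(self):
        super().__init__()
        self._pending = []   # stack of property names awaiting text
        self.pairs = []      # extracted (name, value) tuples, in order

    def handle_starttag(self, tag, attrs):
        # Push the property name, or None for elements without one
        self._pending.append(dict(attrs).get("data-prop"))

    def handle_endtag(self, tag):
        if self._pending:
            self._pending.pop()

    def handle_data(self, data):
        if self._pending and self._pending[-1] and data.strip():
            self.pairs.append((self._pending[-1], data.strip()))

page = ('<p>The book <span data-prop="title">Moby-Dick</span> costs '
        '<span data-prop="price">$8.99</span>.</p>')
extractor = NameValueExtractor()
extractor.feed(page)
print(extractor.pairs)  # [('title', 'Moby-Dick'), ('price', '$8.99')]
```

A consumer who has never seen the "title" or "price" vocabulary before can still pull the pairs out; interpreting them is a separate, vocabulary-level concern.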

Cleaned up

USE CASE: Copy-and-paste should work between Web apps and native apps and between Web apps and other Web apps.


  • Fred copies an e-mail from Apple Mail into GMail, and the e-mail survives intact, including headers, attachments, and multipart/related parts.
  • Fred copies an e-mail from GMail into Hotmail, and the e-mail survives intact, including headers, attachments, and multipart/related parts.

USE CASE: Web browsers should be able to help users find information related to the items discussed by the page that they are looking at.


  • Finding more information about a movie when looking at a page about the movie, when the page contains detailed data about the movie.
    • For example, where the movie is playing locally.
    • For example, what your friends thought of it.
  • Exposing music samples on a page so that a user can listen to all the samples.
  • Students and teachers should be able to discover each other -- both within an institution and across institutions -- via their blogging.


  • Should be discoverable, because otherwise users will not use it, and thus users won't be helped.
  • Should be consistently available, because if it only works on some pages, users will not use it (see, for instance, the rel=next story).
    • → Should work on existing content.
    • → Can't rely on author annotations.
  • Should be bootstrappable (rel=next failed because UAs didn't expose it because authors didn't use it because UAs didn't expose it).

USE CASE: Exposing calendar events so that users can add those events to their calendaring systems.


  • A user visits the Avenue Q site and wants to make a note of when tickets go on sale for the tour's stop in his home town. The site says "October 3rd", so the user clicks this and selects "add to calendar", which causes an entry to be added to his calendar.
  • A student is making a timeline of important events in Apple's history. As he reads Wikipedia entries on the topic, he clicks on dates and selects "add to timeline", which causes an entry to be added to his timeline.
  • TV guide listings - browsers should be able to expose to the user's tools (e.g. calendar, DVR, TV tuner) the times that a TV show is on.


  • Should be discoverable.
  • Should be compatible with existing calendar systems.
  • Should be unlikely to get out of sync with prose on the page.
  • Shouldn't require the consumer to write XSLT or server-side code to read the calendar information.
  • Machine-readable event data shouldn't be on a separate page from the human-readable dates.
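As an illustration of these requirements, today's hCalendar Microformat keeps the event data in the same markup as the prose the user reads, so the two cannot drift apart (the date is taken from the Avenue Q scenario above; the markup is a sketch, not a proposal):

```html
<!-- hCalendar-style markup: the machine-readable date lives on the same
     element as the visible text, satisfying the "same page, won't drift"
     requirements, at the cost of the abbr pattern's accessibility issues. -->
<p class="vevent">
  Tickets go on sale
  <abbr class="dtstart" title="2009-10-03">October 3rd</abbr>
  for <span class="summary">Avenue Q (tour stop)</span>.
</p>
```

A browser or calendar tool can read the dtstart value directly from the page the user is looking at, with no separate feed or server-side code.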

USE CASE: Exposing contact details so that users can add people to their address books or social networking sites.


  • Instead of giving a colleague a business card, someone gives their colleague a URL, and that colleague's user agent extracts basic profile information such as the person's name along with references to other people that person knows and adds the information into an address book.
  • A scholar and teacher wants other scholars (and potentially students) to be able to easily extract information about who he is to add it to their contact databases.
  • Fred copies the name of one of his Facebook friends and pastes it into his OS address book; the contact information is imported automatically.
  • Fred copies the name of one of his Facebook friends and pastes it into his Webmail's address book feature; the contact information is imported automatically.


  • A user joining a new social network should be able to identify himself to the new social network in a way that enables the new social network to bootstrap his account from existing published data (e.g. from another social network) rather than having to re-enter it, without the new site having to coordinate with (or know about) the pre-existing site, without the user having to give either site's credentials to the other, and without the new site finding out about relationships that the user has intentionally kept secret. (http://w2spconf.com/2008/papers/s3p2.pdf)
  • Data should not need to be duplicated between machine-readable and human-readable forms (i.e. the human-readable form should be machine-readable).
  • Shouldn't require the consumer to write XSLT or server-side code to read the contact information.
  • Machine-readable contact information shouldn't be on a separate page from human-readable contact information.

USE CASE: Getting data out of poorly written Web pages, so that the user can find more information about the page's contents.


  • Alfred merges data from various sources in a static manner, generating a new set of data. Bob later uses this static data in conjunction with other data sets to generate yet another set of static data. Julie then visits Bob's page later, and wants to know where and when the various sources of data Bob used came from, so that she can evaluate their quality. (In this instance, Alfred and Bob are assumed to be uncooperative, since creating a static mashup would be an example of a poorly-written page.)
  • TV guide listings - If the TV guide provider does not render a link to IMDB, the browser should recognise TV shows and give implicit links. (In this instance, it is assumed that the TV guide provider is uncooperative, since it isn't providing the links the user wants.)
  • Students and teachers should be able to discover each other -- both within an institution and across institutions -- via their blogging. (In this instance, it is assumed that the teachers and students aren't cooperative, since they would otherwise be able to find each other by listing their blogs in a common directory.)


  • Does not need cooperation of the author (if the page author was cooperative, the page would be well-written).
    • → Can only rely on the content on the page, not markup, by definition.
  • Shouldn't require the consumer to write XSLT or server-side code to derive this information from the page.

USE CASE: Search engines and other site categorisation and aggregation engines should be able to determine the contents of pages with more accuracy than today.


  • Students and teachers should be able to discover each other -- both within an institution and across institutions -- via their blogging.
  • A blogger wishes to categorise his posts such that he can see them in the context of other posts on the same topic, including posts by unrelated authors (i.e. not via a pre-agreed tag or identifier, not via a single dedicated and preconfigured aggregator).


  • Should not disadvantage pages that are more useful to the user but that have not made any effort to help the search engine.
    • → Can't rely on special markup or annotations.
  • Should not be more susceptible to spamming than today's markup.
    • → Can't rely on hidden metadata.

USE CASE: Allow users to maintain bibliographies or otherwise keep track of sources of quotes or references.


  • Frank copies a sentence from Wikipedia and pastes it in some word processor: it would be great if the word processor offered to automatically create a bibliographic entry.
  • Patrick keeps a list of his scientific publications on his web site. He would like to provide structure within this publications page so that Frank can automatically extract this information and use it to cite Patrick's papers without having to transcribe the bibliographic information.
  • A scholar and teacher wants other scholars (and potentially students) to be able to easily extract information about what he has published to add it to their bibliographic applications.
  • A scholar and teacher wants to publish scholarly documents or content that includes extensive citations that readers can then automatically extract so that they can find them in their local university library. These citations may be for a wide range of different sources: an interview posted on YouTube, a legal opinion posted on the Supreme Court web site, a press release from the White House.
  • A blog, say htmlfive.net, copies content wholesale from another, say blog.whatwg.org (as permitted and encouraged by the license). The author of the original content would like the reader of the reproduced content to know the provenance of the content. The reader would like to find the original blog post so he can leave comments for the original author.

  • Chaals could improve the Opera intranet if he had a mechanism for identifying the original source of various parts of a page, because ...?


  • Machine-readable bibliographic information shouldn't be on a separate page from human-readable bibliographic information.

USE CASE: Site owners want a way to provide enhanced search results to the engines, so that an entry in the search results page is more than just a bare link and snippet of text, and provides additional resources for users straight on the search page without them having to click into the page and discover those resources themselves.


  • For example, in response to a query for a restaurant, a search engine might want to have the result from yelp.com provide additional information, e.g. info on price, rating, and phone number, along with links to reviews or photos of the restaurant.


  • Information for the search engine should be on the same page as information that would be shown to the user if the user visited the page.

USE CASE: Annotate structured data that HTML has no semantics for, and which nobody has annotated before, and may never again, for private use or use in a small self-contained community.


  • A group of users want to mark up their iguana collections so that they can write a script that collates all their collections and presents them in a uniform fashion.
  • A scholar and teacher wants other scholars (and potentially students) to be able to easily extract information about what he teaches to add it to their custom applications.
  • The list of specifications produced by W3C, for example, and various lists of translations, are produced by scraping source pages and outputting the result. This is brittle. It would be easier if the data was unambiguously obtainable from the source pages. This is a custom set of properties, specific to this community.
  • Chaals wants to make a list of the people who have translated W3C specifications or other documents, and then use this to search for people who are familiar with a given technology at least at some level, and happen to speak one or more languages of interest.
  • Chaals wants to have a reputation manager that can determine which of the many emails sent to the WHATWG list might be "more than usually valuable", and would like to seed this reputation manager from information gathered from the same source as the scraper that generates the W3C's TR/ page.
  • A user wants to write a script that finds the price of a book from an Amazon page.


  • Vocabularies can be developed in a manner that won't clash with future more widely-used vocabularies, so that those future vocabularies can later be used in a page making use of private vocabularies without making the earlier annotations ambiguous.
  • Using the data should not involve learning a plethora of new APIs, formats, or vocabularies (today it is possible, e.g., to get the price of an Amazon product, but it requires learning a new API; similarly it's possible to get information from sites consistently using 'class' values in a documented way, but doing so requires learning a new vocabulary).
  • Shouldn't require the consumer to write XSLT or server-side code to process the annotated data.
  • Machine-readable annotations shouldn't be on a separate page from human-readable annotations.

USE CASE: Kill DBpedia.


  • A user wants to have information in RDF form. The user visits Wikipedia, and his user agent can obtain the information without relying on DBpedia's interpretation of the page.


  • All the data exposed by DBpedia should be derivable from Wikipedia without using DBpedia.

USE CASE: Help people searching for content to find content covered by licenses that suit their needs.


  • If a user is looking for pie recipes to reproduce on his blog, he might want to exclude from his results any recipes that are not available under a license allowing non-commercial reproduction.
  • Lucy wants to publish her papers online. She includes an abstract of each one in a page, but because they are under different copyright rules, she needs to clarify what the rules are. A harvester such as the Open Access project can actually collect and index some of them with no problem, but may not be allowed to index others. Meanwhile, a human finds it more useful to see the abstracts on a page than have to guess from a bunch of titles whether to look at each abstract.
  • There are mapping organisations and data producers and people who take photos, and each may apply different policies. Being able to keep that policy information helps people making further mashups avoid violating a policy. For example, if GreatMaps.com has a public domain policy on their maps, CoolFotos.org has a policy that you can use data other than images for non-commercial purposes, and Johan Ichikawa has a photo there of my brother's café, which he has licensed as "must pay money", then it would be reasonable for me to copy the map and put it in a brochure for the café, but not to copy the data and photo from CoolFotos. On the other hand, if I am producing a non-commercial guide to cafés in Melbourne, I can add the map and the location of the café photo, but not the photo itself.
  • At the University of Mary Washington, many faculty encourage students to blog about their studies, using an instance of WordPress MultiUser, to encourage more discussion. A student with a blog might be writing posts relevant to more than one class. Professors would like to then aggregate relevant posts into one blog.


  • Content on a page might be covered by a different license than other content on the same page.
    • The current rel=license Microformat cannot be reused within these drafts, because virtually all existing rel=license implementations will just assume that the license applies to the whole page rather than just part of it.
  • License proliferation should be discouraged.
  • License information should be able to survive from one site to another as the data is transferred.
  • It should be easy for content creators, publishers, and redistributors to express copyright licensing terms.
  • It should be more convenient for the users (and tools) to find and evaluate copyright statements and licenses than it is today.
  • Shouldn't require the consumer to write XSLT or server-side code to process the license information.
  • Machine-readable licensing information shouldn't be on a separate page from human-readable licensing information.
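The rel=license problem noted above can be made concrete. The existing Microformat marks a license for the page as a whole; there is no widely-implemented way to scope it to one item on the page. The scoping attribute in the second snippet is a hypothetical sketch, not an actual proposal:

```html
<!-- Existing practice: rel=license. Consumers treat this as covering
     the entire page, even if the author meant only one photo. -->
<a rel="license" href="http://creativecommons.org/licenses/by-nc/3.0/">
  CC BY-NC
</a>

<!-- What the requirements above call for: a license statement attached
     to a single item on a page that carries several, each under its own
     terms. (The about attribute here is illustrative only.) -->
<div about="#photo1">
  <img id="photo1" src="cafe.jpg" alt="My brother's café">
  <a rel="license" href="http://creativecommons.org/licenses/by-nc/3.0/">
    CC BY-NC
  </a>
</div>
```

With per-item scoping, GreatMaps.com's public-domain map and CoolFotos.org's pay-to-use photo could sit on the same mashup page without their licensing statements contaminating each other.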