Parser Tests
This page documents the unit-test format(s) being used for implementations of the HTML5 parsing spec. The aim is to produce implementation-independent, self-describing tests that can be shared between any groups working on these technologies.
Tokenizer Tests
The test format is JSON. This has the advantage that the syntax allows backward-compatible extensions to the tests and the disadvantage that it is relatively verbose.
Basic Structure
{"tests": [ {"description":"Test description", "input":"input_string", "output":[expected_output_tokens]}, "contentModelFlags":[content_model_flags], "lastStartTag":last_start_tag, "ignoreErrorOrder":ignore_error_order ]}
description, input and output are always present. The other values are optional.
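As a rough sketch of how a test harness might consume this structure (the file name and variable names here are hypothetical, and the defaults applied are the ones described below):

import json

# Load a tokenizer test file (the file name is hypothetical).
with open("tokenizer_tests.json", encoding="utf-8") as f:
    suite = json.load(f)

for test in suite["tests"]:
    description = test["description"]
    input_string = test["input"]
    expected_tokens = test["output"]
    # Optional keys, with the defaults described below.
    content_model_flags = test.get("contentModelFlags", ["PCDATA"])
    last_start_tag = test.get("lastStartTag", None)
    ignore_error_order = test.get("ignoreErrorOrder", False)
    # ... run the tokenizer once per content model flag and compare ...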
input_string is a string literal containing the input string to pass to the tokenizer.
expected_output_tokens is a list of tokens, ordered so that the token produced first by the tokenizer appears first (leftmost) in the list. The list must match the complete list of tokens that the tokenizer should produce. Valid tokens are:
["DOCTYPE", name, public_id, system_id, correctness] ["StartTag", name, {attributes}]) ["EndTag", name] ["Comment", data] ["Character", data] "ParseError"
public_id and system_id are either strings or null. correctness is either true (correct) or false (incorrect).
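For instance, a complete (hypothetical) test for a simple element might look like this; the expected tokens are purely illustrative:

{"tests": [
    {"description": "Simple element with text content",
     "input": "<h1>Hello</h1>",
     "output": [["StartTag", "h1", {}],
                ["Character", "Hello"],
                ["EndTag", "h1"]]}
]}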
content_model_flags is a list of strings from the set:
PCDATA
RCDATA
CDATA
PLAINTEXT
The test case applies when the tokenizer begins with its content model flag set to any of those values. If content_model_flags is omitted, it defaults to ["PCDATA"].
last_start_tag is a lowercase string that should be used as "the tag name of the last start tag token emitted" in the tokenizer algorithm. If it is omitted, it is treated as if "no start tag token has ever been emitted by this instance of the tokeniser".
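As a hypothetical illustration of these two fields, a test exercising end-tag handling in RCDATA content might look like:

{"description": "Appropriate end tag in RCDATA content",
 "input": "foo</title>",
 "output": [["Character", "foo"], ["EndTag", "title"]],
 "contentModelFlags": ["RCDATA"],
 "lastStartTag": "title"}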
ignore_error_order is a boolean value indicating that the order of ParseError tokens relative to other tokens in the output stream is unimportant, and implementations should ignore such differences between their output and expected_output_tokens. (This is used for errors emitted by the input stream preprocessing stage: it is useful to test that code, but it is undefined exactly when those errors occur.) If it is omitted, it defaults to false.
Multiple tests per file are allowed simply by adding more objects to the "tests" list.
All adjacent character tokens are coalesced into a single ["Character", data] token.
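One way an implementation might normalize and compare its own output against expected_output_tokens, taking both of these rules into account (a sketch only; the helper names are not part of the format):

def coalesce_characters(tokens):
    # Merge runs of adjacent ["Character", data] tokens into one token.
    result = []
    for token in tokens:
        if (token != "ParseError" and token[0] == "Character"
                and result and result[-1] != "ParseError"
                and result[-1][0] == "Character"):
            result[-1] = ["Character", result[-1][1] + token[1]]
        else:
            result.append(token)
    return result

def tokens_match(actual, expected, ignore_error_order=False):
    actual = coalesce_characters(actual)
    if not ignore_error_order:
        return actual == expected
    # Compare ParseError counts and the remaining tokens separately.
    non_errors = lambda toks: [t for t in toks if t != "ParseError"]
    return (actual.count("ParseError") == expected.count("ParseError")
            and non_errors(actual) == non_errors(expected))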
Open Issues
- Is the format too verbose?
- Do we want to allow the test to pass if only a subset of the actual tokens emitted matches the expected_output_tokens list?
Tree Construction Tests
Each file containing tree construction tests consists of any number of tests separated by two newlines (LF), with a single newline (LF) before the end of the file. For instance:
[TEST]LF
LF
[TEST]LF
LF
[TEST]LF
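A sketch of splitting such a file into individual tests (this assumes no test contains a blank line of its own; the function name is hypothetical):

def split_tests(path):
    with open(path, encoding="utf-8") as f:
        content = f.read()
    # Drop the final newline, then split on the blank lines between tests.
    return content.rstrip("\n").split("\n\n")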
Each [TEST] has the following format:
Each test must begin with the string "#data" followed by a newline (LF). All subsequent lines until a line that says "#errors" are the test data and must be passed to the system being tested unchanged, except with the final newline (on the last line) removed.

Then there must be a line that says "#errors". It must be followed by one line per parse error that a conformant checker would return. It doesn't matter what those lines say; the only thing that matters is that there be the right number of parse errors.

Then there must be a line that says "#document", which must be followed by a dump of the tree of the parsed DOM. Each node must be represented by a single line. Each line must start with "| ", followed by two spaces per parent node that the node has before the root document node. Element nodes must be represented by "<" then the tag name then ">", and all the attributes must be given, sorted lexicographically by UTF-16 code unit, on subsequent lines, as if they were children of the element node. Attribute nodes must have the attribute name, then an "=" sign, then the attribute value in double quotes ("). Text nodes must be the string, in double quotes. Newlines aren't escaped. Comments must be "<" then "!-- " then the data then " -->". DOCTYPEs must be "<!DOCTYPE " then the name then ">".
For example:
#data
<p>One<p>Two
#errors
3: Missing document type declaration
#document
| <html>
|   <head>
|   <body>
|     <p>
|       "One"
|     <p>
|       "Two"
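A sketch of how one of these tests might be broken into its parts, under the same assumptions as above (it additionally assumes the test data itself never contains a line reading "#errors" or "#document"):

def parse_test(test):
    # Split one test into its "#data", "#errors" and "#document" sections.
    sections, current = {}, None
    for line in test.split("\n"):
        if line in ("#data", "#errors", "#document"):
            current = line[1:]
            sections[current] = []
        else:
            sections[current].append(line)
    data = "\n".join(sections["data"])        # input markup, final newline removed
    error_count = len(sections["errors"])     # only the number of errors matters
    expected_dump = sections["document"]      # lines of the "| " tree dump
    return data, error_count, expected_dump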
Tests can be found here: http://html5lib.googlecode.com/svn/trunk/testdata/tree-construction/