[https://github.com/html5lib/html5lib-tests html5lib-tests] is a suite of unit tests for use by implementations of the HTML spec. The aim is to produce implementation-independent, self-describing tests that can be shared between any groups working on these technologies. The parser tests live in the <code>tokenizer</code> and <code>tree-construction</code> directories, both of which contain README files describing the test format.
This page documents the test formats used within the suite.
 
=Tokenizer Tests=
The test format is [http://www.json.org/ JSON]. This has the advantage that the syntax allows backward-compatible extensions to the tests and the disadvantage that it is relatively verbose.
 
==Basic Structure==
 
{"tests": [
    {"description":"Test description",
    "input":"input_string",
    "output":[expected_output_tokens],
    "initialStates":[initial_states],
    "lastStartTag":last_start_tag,
    "ignoreErrorOrder":ignore_error_order
    }
]}
 
Multiple tests per file are allowed simply by adding more objects to the "tests" list.
 
<tt>description</tt>, <tt>input</tt> and <tt>output</tt> are always present. The other values are optional.
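As a sketch of these rules (Python assumed; <tt>load_tokenizer_tests</tt> is a hypothetical helper, not part of the suite), a test file can be parsed and its optional fields given their defaults like this:

```python
import json

def load_tokenizer_tests(text):
    """Parse the JSON text of a tokenizer .test file.

    Checks that the required fields are present and fills in the
    defaults for the optional ones, per the format described above.
    """
    tests = json.loads(text)["tests"]
    for test in tests:
        missing = {"description", "input", "output"} - test.keys()
        if missing:
            raise ValueError("test is missing required fields: %s"
                             % sorted(missing))
        # Optional fields and their documented defaults.
        test.setdefault("initialStates", ["data state"])
        test.setdefault("lastStartTag", None)
        test.setdefault("ignoreErrorOrder", False)
    return tests
```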
 
===Test set-up===
 
<tt>test.input</tt> is a string containing the characters to pass to the tokenizer.
Specifically, it represents the characters of the '''input stream''', and so implementations are expected to perform the processing described in the spec's '''Preprocessing the input stream''' section before feeding the result to the tokenizer.
 
If <tt>test.doubleEscaped</tt> is present and <tt>true</tt>, then <tt>test.input</tt> is not quite as described above.
Instead, it must first be subjected to another round of unescaping (i.e., in addition to any unescaping involved in the JSON import), and the result of ''that'' represents the characters of the input stream.
Currently, the only unescaping required by this option is to convert each sequence of the form \uHHHH (where H is a hex digit) into the corresponding Unicode code point.
(Note that this option also affects the interpretation of <tt>test.output</tt>.)
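The extra unescaping round might be sketched as follows (Python assumed; this simple version converts each <tt>\uHHHH</tt> escape independently and does not attempt to pair surrogates):

```python
import re

def unescape_double(s):
    """Apply the extra round of unescaping required when
    test.doubleEscaped is true: each literal \\uHHHH sequence becomes
    the code point it names."""
    return re.sub(r"\\u([0-9A-Fa-f]{4})",
                  lambda m: chr(int(m.group(1), 16)), s)
```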
 
<tt>test.initialStates</tt> is a list of strings, each being the name of a tokenizer state.
The test should be run once for each string, using it to set the tokenizer's initial state for that run.
If <tt>test.initialStates</tt> is omitted, it defaults to <tt>["data state"]</tt>.
 
<tt>test.lastStartTag</tt> is a lowercase string that should be used as "the tag name of the last start tag to have been emitted from this tokenizer", referenced in the spec's definition of '''appropriate end tag token'''. If it is omitted, it is treated as if "no start tag has been emitted from this tokenizer".
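Putting the set-up fields together, a minimal harness might look like this (Python assumed; <tt>tokenize</tt> stands in for the implementation under test and its signature is an assumption of this sketch):

```python
def run_test(test, tokenize):
    """Run one tokenizer test, once per requested initial state.

    `tokenize` is assumed to be a callable taking
    (input, initial_state, last_start_tag) and returning a token list.
    Returns a list of (state, passed) pairs, one per run.
    """
    results = []
    for state in test.get("initialStates", ["data state"]):
        tokens = tokenize(test["input"], state, test.get("lastStartTag"))
        results.append((state, tokens == test["output"]))
    return results
```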
 
===Test results===
 
<tt>test.output</tt> is a list of tokens, ordered so that the first token produced by the tokenizer is the first (leftmost) entry in the list. The list must match the '''complete''' list of tokens that the tokenizer should produce. Valid tokens are:
 
["DOCTYPE", name, public_id, system_id, correctness]
["StartTag", name, {attributes}'', true'']
["StartTag", name, {attributes}]
["EndTag", name]
["Comment", data]
["Character", data]
"ParseError"
 
<tt>public_id</tt> and <tt>system_id</tt> are either strings or <tt>null</tt>. <tt>correctness</tt> is either <tt>true</tt> or <tt>false</tt>; <tt>true</tt> corresponds to the force-quirks flag being false, and vice-versa.
 
When the self-closing flag is set, the <tt>StartTag</tt> array has <tt>true</tt> as its fourth entry. When the flag is not set, the array has only three entries for backwards compatibility.
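A small predicate for this encoding might look like (Python assumed; <tt>self_closing</tt> is a hypothetical helper):

```python
def self_closing(token):
    """True if a StartTag token has its self-closing flag set.

    The flag is encoded as an optional fourth entry that is `true`;
    when the flag is not set, the entry is absent (the array has only
    three entries, for backwards compatibility)."""
    return token[0] == "StartTag" and len(token) == 4 and token[3] is True
```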
 
All adjacent character tokens are coalesced into a single <tt>["Character", data]</tt> token.
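An implementation comparing its output against the expected tokens therefore needs to merge character runs first; a sketch (Python assumed):

```python
def coalesce(tokens):
    """Merge runs of adjacent Character tokens into one, as the
    expected-output format requires. Non-Character tokens (including
    the bare "ParseError" string) pass through unchanged."""
    out = []
    for tok in tokens:
        if tok[0] == "Character" and out and out[-1][0] == "Character":
            out[-1] = ["Character", out[-1][1] + tok[1]]
        else:
            out.append(list(tok) if isinstance(tok, list) else tok)
    return out
```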
 
If <tt>test.doubleEscaped</tt> is present and <tt>true</tt>, then every string within <tt>test.output</tt> must be further unescaped (as described above) before comparing with the tokenizer's output.
 
<tt>test.ignoreErrorOrder</tt> is a boolean value indicating that the order of <tt>ParseError</tt> tokens relative to other tokens in the output stream is unimportant, and implementations should ignore such differences between their output and <tt>expected_output_tokens</tt>. (This is used for errors emitted by the input stream preprocessing stage, since it is useful to test that code but it is undefined when the errors occur). If it is omitted, it defaults to <tt>false</tt>.
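One way to implement this comparison (Python assumed; <tt>tokens_match</tt> is a hypothetical helper) is to require that the non-error tokens agree in order and that the <tt>ParseError</tt> counts agree:

```python
def tokens_match(expected, actual, ignore_error_order=False):
    """Compare tokenizer output against expected_output_tokens.

    With ignore_error_order, only the number of ParseError tokens and
    the relative order of the remaining tokens must agree."""
    if not ignore_error_order:
        return expected == actual
    strip = lambda toks: [t for t in toks if t != "ParseError"]
    count = lambda toks: sum(1 for t in toks if t == "ParseError")
    return (strip(expected) == strip(actual)
            and count(expected) == count(actual))
```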
 
== xmlViolation tests ==
 
<tt>tokenizer/xmlViolation.test</tt> differs from the above in a couple of ways:
* The name of the single member of the top-level JSON object is "xmlViolationTests" instead of "tests".
* Each test's expected output assumes that the implementation applies the tweaks given in the spec's "Coercing an HTML DOM into an infoset" section.
 
== Open Issues ==
* Is the format too verbose?
* Do we want to allow the test to pass if only a subset of the actual tokens emitted matches the expected_output_tokens list?
 
=Tree Construction Tests=
 
Each file containing tree construction tests consists of any number of tests, each separated from the next by two newlines (LF), with a single newline (LF) at the end of the file. For instance:
 
<pre>[TEST]LF
LF
[TEST]LF
LF
[TEST]LF</pre>
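As a sketch (Python assumed; <tt>split_tests</tt> is a hypothetical helper), a file can be divided into its [TEST] blocks by splitting at blank lines that start a new test, so that a blank line inside a test's own data is left alone:

```python
import re

def split_tests(text):
    """Split the contents of a tree-construction test file into
    individual [TEST] blocks. Tests are separated by a blank line
    (two LFs); the file ends with a single trailing LF."""
    # Split only at blank lines immediately followed by a new "#data"
    # header, so blank lines within a test's data do not break it apart.
    return re.split(r"\n\n(?=#data\n)", text.rstrip("\n"))
```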
 
Where [TEST] is the following format:
 
Each test must begin with a string "#data" followed by a newline (LF). All subsequent lines until a line that says "#errors" are the test data and must be passed to the system being tested unchanged, except with the final newline (on the last line) removed.
 
Then there must be a line that says "#errors". It must be followed by one line per parse error that a conformant checker would return. It doesn't matter what those lines are (although they can't be "#document-fragment", "#document", or empty); the only thing that matters is that there be the right number of them.
 
Then there *may* be a line that says "#document-fragment", followed by a newline (LF), followed by a string of characters that indicates the context element, followed by a newline (LF). If this line is present, the "#data" must be parsed using the HTML fragment parsing algorithm, with the context element as context.
 
Then there must be a line that says "#document", which must be followed by a dump of the tree of the parsed DOM. Each node must be represented by a single line. Each line must start with "| ", followed by two spaces per parent node that the node has before the root document node.
* Element nodes must be represented by a "<tt><</tt>", then the ''tag name string'', then "<tt>></tt>". All the attributes must be given, sorted lexicographically by UTF-16 code unit according to their ''attribute name string'', on subsequent lines, as if they were children of the element node.
* Attribute nodes must have the ''attribute name string'', then an "=" sign, then the attribute value in double quotes (").
* Text nodes must be the string, in double quotes. Newlines aren't escaped.
* Comments must be "<tt><</tt>" then "<tt>!-- </tt>" then the data then "<tt> --></tt>".
* DOCTYPEs must be "<tt><!DOCTYPE </tt>", then the name, then, if either the public id or the system id is non-empty: a space, the public id in double quotes, another space, and the system id in double quotes; then, in any case, "<tt>></tt>".
* Processing instructions must be "<tt><?</tt>", then the target, then a space, then the data and then "<tt>></tt>". (The HTML parser cannot emit processing instructions, but scripts can, and the WebVTT to DOM rules can emit them.)
 
The ''tag name string'' is the local name prefixed by a namespace designator. For the HTML namespace, the namespace designator is the empty string, i.e. there's no prefix. For the SVG namespace, the namespace designator is "svg ". For the MathML namespace, the namespace designator is "math ".
 
The ''attribute name string'' is the local name prefixed by a namespace designator. For no namespace, the namespace designator is the empty string, i.e. there's no prefix. For the XLink namespace, the namespace designator is "xlink ". For the XML namespace, the namespace designator is "xml ". For the XMLNS namespace, the namespace designator is "xmlns ". Note the difference between "xlink:href" which is an attribute in no namespace with the local name "xlink:href" and "xlink href" which is an attribute in the xlink namespace with the local name "href".
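The designator rules above can be captured in a pair of small helpers (Python assumed; the function names are hypothetical, while the namespace URLs are the standard ones for each namespace):

```python
SVG_NS = "http://www.w3.org/2000/svg"
MATHML_NS = "http://www.w3.org/1998/Math/MathML"
XLINK_NS = "http://www.w3.org/1999/xlink"
XML_NS = "http://www.w3.org/XML/1998/namespace"
XMLNS_NS = "http://www.w3.org/2000/xmlns/"

# Namespaces not listed here get the empty designator (no prefix).
TAG_DESIGNATORS = {SVG_NS: "svg ", MATHML_NS: "math "}
ATTR_DESIGNATORS = {XLINK_NS: "xlink ", XML_NS: "xml ", XMLNS_NS: "xmlns "}

def tag_name_string(namespace, local_name):
    """Local name prefixed by the tag's namespace designator
    (empty for the HTML namespace)."""
    return TAG_DESIGNATORS.get(namespace, "") + local_name

def attr_name_string(namespace, local_name):
    """Local name prefixed by the attribute's namespace designator
    (empty for no namespace)."""
    return ATTR_DESIGNATORS.get(namespace, "") + local_name
```

Note how this preserves the distinction called out above: an attribute in no namespace whose local name is "xlink:href" serializes unchanged, whereas an XLink-namespace "href" serializes as "xlink href".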
 
If there is also a "#document-fragment" line, the content following "#document" must be a representation of the HTML fragment serialization for the context element given by "#document-fragment".
 
For example:
<pre>
#data
<p>One<p>Two
#errors
3: Missing document type declaration
#document
| <html>
|   <head>
|   <body>
|     <p>
|       "One"
|     <p>
|       "Two"
</pre>
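A sketch of parsing one such [TEST] block into its sections (Python assumed; <tt>parse_test</tt> is a hypothetical helper, not part of the suite):

```python
def parse_test(block):
    """Split one [TEST] block into its sections.

    Returns a dict with the raw data string, the number of expected
    parse errors, the optional fragment context, and the expected
    tree-dump lines."""
    lines = block.split("\n")
    assert lines[0] == "#data"
    i = lines.index("#errors")
    # The data is every line up to "#errors"; the final newline is
    # already gone because split() consumed it.
    data = "\n".join(lines[1:i])
    j = i + 1
    while lines[j] not in ("#document", "#document-fragment"):
        j += 1  # error lines can't be "#document(-fragment)" or empty
    errors = j - (i + 1)
    fragment = None
    if lines[j] == "#document-fragment":
        fragment = lines[j + 1]
        j += 2
    assert lines[j] == "#document"
    return {"data": data, "errors": errors,
            "fragment": fragment, "tree": lines[j + 1:]}
```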
 
Tests can be found here: http://code.google.com/p/html5lib/source/browse/#hg%2Ftestdata%2Ftree-construction
 
== Open Issues ==
* Should the order constraint be relaxed?
