Browser Tests Project

From WHATWG Wiki
Latest revision as of 19:00, 23 September 2009

Cross browser tests project

The goals of this project are to:

  • create a repository of browser test cases and test suites
  • develop tools to run these tests automatically on several platforms/browsers
  • provide infrastructure to run these tests
  • make the test results visible and browsable (view tests by specification, specification coverage, compare browser implementation status, ...)

The current project scope covers automated tests: tests that can be run automatically on several browsers without human intervention. In the future the project could extend to non-automated tests. Performance testing would also be a nice feature to add.

Test sources

Lots of test cases and suites already exist. Some are automated while others require human intervention. To build on this existing work, the project should provide ways to import and run existing automated tests. Additional tests can be contributed by specification writers, browser vendors or web developers.

Benefits:

  • Specification authors and web developers can check which features are implemented in which browsers
  • Browser vendors can run these tests for regression and conformance checking

Overview of existing automated tests and frameworks

Mozilla

Reference:

xpcshell
  implementation specific
compiled-code tests
  implementation specific
Mochitest
  • Tests running in the browser using frames. Contains implementation-specific and browser-neutral tests.
  • Server side:
chrome tests / browser chrome tests
  implementation specific
Reftest
  Visual comparison of two HTML/SVG/... files. Lots of these tests are cross-browser.
Crash tests
  Pages that crashed the browser once. They could be useful for testing the stability of other browsers.
JavaScript tests
  TODO: look in more detail at http://www.mozilla.org/js/tests/library.html

WebKit

Reference:

The WebKit regression test harness
  It loads pages one at a time in an off-screen window without any use of frames. This tool can run tests faster than a browser-based solution.
  Output can be in several formats:
  • Textual representation of the page: this contains the console error messages, alert messages and a text dump of the DOM at the end of the test. Tests may optionally ask that the content of frames be included recursively in the dump. The textual representation uses "innerText", which only dumps visible content and may add newlines to match the page layout (this should match IE behavior). The textual dump is compared against an expected file to detect mismatches.
    • Some of these tests are cross-browser. Some tests may require browser extensions for features that can't be accessed from content-accessible APIs: dumping the console messages, cross-domain frame dump, ...
  • Textual hierarchy of the Render objects: implementation specific.
  • Pixel image of the loaded page: while this is useful for regression testing, it is not very suitable for cross-browser tests.
  • Server side:
    • Most of the tests run on file:// without requiring an HTTP server. For the other tests, an Apache instance is launched. The following features are used:
      • listens on several ports, used for cross-domain testing
      • listens on an SSL port
      • PHP scripts
      • Perl CGI scripts
JavaScriptCore Tests
  TODO (same tests as Mozilla?)

Observations

Mozilla mochitest and WebKit page textual dumps are rather different approaches. Some differences:

  • Mochitests: all assertions are made in the page; the content of the page at the end of the test is not important.
  • WebKit DumpRenderTree: assertions have to modify the page content, which is compared with an expected rendering. This can be useful for integrating non-automated tests: to convert an existing test that outputs PASS/FAIL on a page, it only requires adding a text file containing the string "PASS" alongside the test.
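To make the DumpRenderTree-style comparison concrete, here is a minimal sketch (function and variable names are illustrative, not WebKit's actual code): the harness dumps the page's visible text and compares it against an -expected.txt file, so converting an existing PASS/FAIL test only needs an expected file containing "PASS".

```javascript
// Sketch of an expected-file comparison as described above.
// Trailing-newline normalization is an assumption here, not WebKit's exact rule.
function compareDump(actualText, expectedText) {
  return actualText.trim() === expectedText.trim();
}

// A test page whose final visible content is the word PASS...
const pageDump = "PASS\n";
// ...matches an expected file that simply contains "PASS".
const expected = "PASS";
const ok = compareDump(pageDump, expected);
```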

Proposal for a testing API

This section is obsolete. See http://omocha.w3.org/wiki/newformat for a format that could be used for writing JavaScript-based client-side tests which can be run automatically.

Constraints:

  • Browser vendors should be able to use these tests in their infrastructure.
  • For WebKit, this means it is not possible to rely on the test running inside a frame; their testing tool loads toplevel pages for performance reasons.
  • Be flexible for running existing automated tests/suites and importing existing tests without too much effort.
  • See also: http://wiki.whatwg.org/wiki/Testsuite#Requirements

Proposal: 3 kinds of tests

1) Assertion based tests (based on Mozilla MochiTests)

One way to implement an automated testing framework is to provide two entry points:

  • assertion checking
  • end of test notification.

The end of test notification is used to let the framework know that the test finished without exceptions. The alternative to having an end-of-test notification is to catch all errors and notify the testing framework; this can be done by registering an "onerror" handler in browsers that support it. However, some browsers (Safari and Opera, for instance) do not support it. An alternative is to surround all the testing code with a try/catch block, which is more verbose than calling an end-of-test function. Another alternative is to declare how many assertions are going to run: if the framework sees fewer than this number, it can flag the testcase as failed. However, maintaining the number of assertions can require a lot of effort.

Assertion checking should take a boolean parameter which is true in case of success and false otherwise, plus a string parameter for the assertion message. The end of test notification does not need any parameters.

The implementation of these entry points must be in a JavaScript file that is included in the test page. This means that multiple testing APIs could be used. The testing framework has to call these two entry points at the right time.

Proposal: use the MochiTest .js files. The base template looks like this: http://mxr.mozilla.org/mozilla/source/testing/mochitest/static/test.template.txt?raw=1, with the following modifications:

  • Use relative paths instead of absolute ones, so that tests can be run from file:// when HTTP is not needed (however, some browsers restrict access to parent paths), or can be installed in a subdirectory on a public web server.
  • Always call finish() at the end of the test (add an alias finish = SimpleTest.finish to make it shorter).

If there's interest for it, a version of the API which does not require MochiKit could be created.

2) Reftests

The manifest used for cross-browser tests would not contain the implementation-specific variables (see the "flagging tests/assertions" section below).

3) LayoutTests

If we want to be compatible with the existing -expected.txt files from WebKit, browsers need to implement .innerText so that we get the exact same output in all browsers. Firefox and Opera would need to simulate this in .js (this looks difficult to implement in .js, judging by the way TextIterator.cpp handles it). See also the next section about browser-provided testing APIs.
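To illustrate why a script-level innerText emulation is hard, here is a deliberately simplified sketch over plain objects standing in for DOM nodes (the node shape is invented for this example; the real TextIterator logic also tracks layout, visibility and line breaking, which is exactly what is difficult to reproduce in .js):

```javascript
// Simplified innerText-like dump: skips hidden content and adds newlines
// for block-level nodes, which is roughly how the textual dump tracks layout.
function innerTextOf(node) {
  if (node.type === "text") return node.value;
  if (node.hidden) return "";              // display:none content is not dumped
  const childText = (node.children || []).map(innerTextOf).join("");
  // Block-level elements contribute newlines to match the page layout.
  return node.block ? childText + "\n" : childText;
}

const page = {
  type: "element", block: true, children: [
    { type: "element", block: true, children: [{ type: "text", value: "PASS" }] },
    { type: "element", hidden: true, children: [{ type: "text", value: "ignored" }] },
  ],
};
const dump = innerTextOf(page);
```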

Server side

Mozilla's httpd.js can be bundled as a XULRunner application, making it cross-platform and easy to install. Scripts like the one in run-webkit-tests can be used for managing the Apache server. Apache on Windows will require Cygwin.

Browser provided testing APIs

Some tests may require features that are normally not available to web pages. The LayoutTests API is one example: an additional object is made available in the scope of pages and can be used for performing testing-specific tasks. Browsers can provide this using extensions (WebKit uses a dedicated testing tool, Firefox can use an extension to provide such objects, and Internet Explorer also provides an add-on API). However, not all browsers can provide such an API, so this should not be a requirement for all tests.

Tests could be flagged to advertise which features they need in order to run. Some possible flags:

ahem
  test requires the Ahem font (maybe make this required and drop the requirement)
proxy
  test requires proxy autoconfig for cross-domain testing of the domains available in the MochiTests
httpdjs
  test needs the Mozilla HTTP server
apache
  test needs the Apache server
topframe
  test is required to run in a top frame (used for security tests)
layouttests
  test requires the layouttests object to be available (TODO: separate this into more granular requirements; a browser could implement only part of the LayoutTests API)

Other requirements: API for event testing, ...

Test metadata

Link tests to the specifications and sections they refer to. This can be useful for checking the test coverage of a given specification, and for checking which browsers implement a given feature. This metadata does not need to be stored in the test itself; it could be maintained separately. Test tagging could also be a useful way to browse tests.

Other metadata is the browser requirements for running the test; see the previous section.
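A hypothetical side-car metadata file could combine both kinds of metadata: the spec section a test covers and the flags from the previous section. All test names, spec identifiers and flags below are invented for illustration:

```javascript
// Metadata maintained separately from the tests themselves (assumed format).
const metadata = {
  "dom/getElementById-001.html": {
    spec: "dom-core#getelementbyid",   // illustrative spec-section identifier
    flags: [],
  },
  "security/top-frame-001.html": {
    spec: "html5#security",            // illustrative spec-section identifier
    flags: ["topframe", "apache"],
  },
};

// Example queries this enables: coverage per spec section, and selecting
// only the tests a given harness can actually run.
const needApache = Object.keys(metadata)
  .filter(name => metadata[name].flags.includes("apache"));
```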

Integration of existing automated tests

  • Mozilla Mochitest: filter out the cross-browser tests. All tests should use waitForExplicitFinish() / finish() to avoid false positives in browsers without an "onerror" implementation. An alternative to modifying the tests is to store how many assertions are expected and compare the results against that number.
  • Mozilla reftest: filter out the cross-browser tests.
  • WebKit LayoutTests: filter out the cross-browser tests in categories:
    • cross-browser tests with no layoutTestController needed (a parent frame retrieves something like document.body.innerHTML and compares it with the -expected.txt file at the end of the test)
    • cross-browser tests which require a layoutTestController object, or need to be run in a top-level frame (security tests, ...).
  • Others: TODO build a list of test candidates available around the Web.

Integration of the tests in browsers: flagging tests/assertions as "this fails, return later"

Implementations may want to integrate testcases, but flag some of them (or parts of them) as not passing yet. This means that the assertion does not make the test fail, but is kept in the suite so that it can be turned on once the feature is implemented. For instance, this is implemented using the todo()/todo_is()/todo_isnot() assertions in Mozilla MochiTests. See also the thread at http://groups.google.com/group/mozilla.dev.quality/browse_frm/thread/b2a959c7547b9877/69dd526a2c8f73ea
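A sketch of this "known failure" mechanism, modeled loosely on MochiTest's todo() (the bookkeeping here is invented for illustration): a todo assertion is recorded but does not fail the suite while the condition is still false; once the feature lands and the condition turns true, it shows up as an unexpected pass so it can be promoted to a real assertion.

```javascript
// Two kinds of assertions: ok() must pass, todo() is expected to fail.
const log = [];
function ok(cond, msg)   { log.push({ kind: "ok",   passed: !!cond, msg }); }
function todo(cond, msg) { log.push({ kind: "todo", passed: !!cond, msg }); }

ok(true, "implemented feature still works");
todo(false, "unimplemented feature: expected failure, does not fail the suite");
todo(true, "unexpected pass: time to turn this todo() into ok()");

// Only failing ok() assertions fail the suite; passing todo() assertions
// are surfaced separately so the test can be updated.
const suiteFailed = log.some(r => r.kind === "ok" && !r.passed);
const unexpectedPasses = log.filter(r => r.kind === "todo" && r.passed).length;
```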

Assertion based tests
  Flag by assertion message or assertion index.
RefTests
  Manage a parallel manifest file with flags for each test. TODO: this means we need a way to uniquely identify a given reftest (or one manifest per file?).
LayoutTests
  The same mechanism as WebKit's can be used: a parallel hierarchy of expected files which overrides the -expected.txt file located in the same directory as the test.
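The parallel-hierarchy lookup for LayoutTests can be sketched as follows (paths, file contents and the two-level fallback are illustrative assumptions, not WebKit's exact algorithm):

```javascript
// Resolve the expected file for a test: a browser-specific tree of results
// overrides the -expected.txt stored next to the test itself.
function expectedFileFor(test, platformResults, sharedResults) {
  const name = test.replace(/\.html$/, "-expected.txt");
  return platformResults[name] !== undefined
    ? platformResults[name]   // browser-specific override
    : sharedResults[name];    // default, next to the test
}

// Hypothetical file trees represented as path -> contents maps.
const shared   = { "fast/dom/title-expected.txt": "PASS" };
const platform = { "fast/dom/title-expected.txt": "FAIL (known issue)" };

const withOverride    = expectedFileFor("fast/dom/title.html", platform, shared);
const withoutOverride = expectedFileFor("fast/dom/title.html", {}, shared);
```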

Integration of the tests in browsers: what to require server side

Would Mozilla want to require Apache for running the HTTP tests? The alternative is to port the .php and .pl files to something equivalent running under httpd.js.

Same for WebKit: would the integration of httpd.js be wanted in the testing infrastructure?

Running the tests in Mozilla

  • Assertion based tests: OK, uses the MochiTest framework. Needs to manage the todo() assertions list separately.
  • Reftests: OK, uses the reftest framework. Needs to manage the reftest conditions list separately.
  • LayoutTests: needs to be implemented. See http://crypto.stanford.edu/websec/cross-testing/ which is a starting point.
  • Server side: see above

Running the tests in WebKit

  • Assertion based tests: needs to be done.
    • scenario 1: the JavaScript testing API outputs messages in the page for each assertion. An -expected.txt file can be used for checking the result.
      • pro: no change needed for layoutTestController/run-webkit-tests
      • con: work is needed to generate the -expected.txt file for each imported test
    • scenario 2: new methods are added to layoutTestController for recording assertions. run-webkit-tests is modified to show tests with failed assertions in the test results.
      • pro: no need to create an -expected.txt file for each test
      • con: more work needed to modify layoutTestController/run-webkit-tests
  • Reftests: DumpRenderTree should already be able to generate images for each reftest. run-webkit-tests will need to be modified to deal with image comparison and managing reftest results (this should be similar to how the pixel tests are managed).
  • LayoutTests: already implemented
  • Server side: see above

Exposing tests and results

http://www.browsertests.org website

Feature list

  • ...

Related discussions

First announcement and discussions:


First release announcement