
RE: [svg-developers] test suite evaluation

  • David Dailey
    Dec 11, 2011
      Someone who is closer to the process may set me straight on how it is done these days, but I recall having discussions with members of the SVG Working Group who argued that test suites should be “automatable” in the sense that the spec should specify exactly how browsers present content, down to the level of pixel-perfect rendering. Others did not seem to feel that way, and I don’t know whether the issue has been resolved to everyone’s satisfaction.

      A few things seem to be left to the viewing software’s discretion: how to support text (to call this a zoo at present would be an understatement), compound filters, and perhaps antialiasing among them. My first foray into cross-browser SVG, in what must have been the 18th century, revealed enormous differences between Opera’s and ASV’s implementations of filters, and much of it was latitude that might have been acceptable under the language of the spec but would certainly not have been acceptable to an author. The spec has tightened considerably from SVG 1.1 (2003) to SVG 1.1 (2011), and browser implementations have converged on many issues, excepting a few (text being the most notorious).

      What does impress me about the test suite is the cleverness of many of the tests: they create situations in which passing the test means, in the end, either displaying something or not. A simple binary outcome removes much of the judgment.
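
      For instance (an illustrative sketch of the pattern, not an actual test from the suite): paint a red “fail” rectangle first, then let the feature under test paint green over it. Testing <use>, say:

      <svg xmlns="http://www.w3.org/2000/svg"
           xmlns:xlink="http://www.w3.org/1999/xlink"
           width="200" height="200">
        <defs>
          <rect id="pass" width="200" height="200" fill="green"/>
        </defs>
        <!-- Painted first: shows through (fail) only if <use> is broken -->
        <rect width="200" height="200" fill="red"/>
        <!-- Feature under test: a working <use> covers the red entirely -->
        <use xlink:href="#pass"/>
      </svg>

      Any red pixel means fail and a solid green square means pass, so even a naive screenshot check can decide the outcome.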

      In the Acid3 tests, complex series of things (for much of the HTML world) were combined, though the SVG tests therein were kept rudimentary, I suppose to avoid political complications. In the end, though, the politics proved too much for any attempt at objectivity to prevail, and magic won out, once more, over science, as the tests were amended so that everyone could pass and, I suppose, so that shrapnel would not fly.

      I’m not sure that the same folks who write the specs (largely the companies developing browsers and mobile viewers, with the exception of course of W3C members) are the proper folks to oversee test suites or evaluations, but so long as no one important fusses, I suppose all remains cheerful. In truth, it is very hard to convince others to participate in such a “mundane activity”, and we are lucky that the members of the SVG Working Group do this relatively thankless work!

      Jeff Schiller has for years kept an evaluation of much of the first round of 280 SVG 1.1 tests at http://www.codedread.com/svg-support.php and you’ll note that there is a category of “almost pass”, implying some human judgment. Others have spawned some tests of their own (like the CooL tests), and the SVG Interest Group had a healthy initiative for a year or so to create “torture tests” at http://code.google.com/p/svgtorture/ .

      I have a sense that, given any document as complex as the SVG 1.1 (2011) spec, no finite collection of tests will actually evaluate every browser’s (or user agent’s) degree of compliance; the test suite is perhaps more for spec evaluation than for browser evaluation. With the competition in the space of “who has the best browser” ultimately boiling down to SVG (despite everyone’s tiptoeing around the issue and pretending that it is all about JavaScript, canvas, and the paltry set of things that the HTML5 folks imagined), it will be interesting to see how this all shakes out in years to come. I remember, in some of those same conversations with working group members, arguing that improperly administered evaluations could unfairly represent certain browsers, hence setting the stage for complaints that make patent concerns look lighthearted. I am prone to hyperbole, though.

      Cheers

      David

      From: svg-developers@yahoogroups.com [mailto:svg-developers@yahoogroups.com] On Behalf Of Zdenek Kedaj
      Sent: Sunday, December 11, 2011 1:44 PM
      To: svg-developers@yahoogroups.com
      Subject: [svg-developers] test suite evaluation

      I took a look at the official SVG test suites and I wonder about
      the evaluation. Given just a bunch of HTML and SVG files, how do
      the implementors run and evaluate the tests? Do they have to
      implement their own evaluation environment?

      There are also lots of tests whose result cannot simply be
      compared with the expected final-state SVG, because it matters how
      the final state was reached (animations, etc.). Evaluation of
      these must be especially hard to automate. Is it done by human
      beings (not automated)?
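
      For example (a sketch I made up to show the kind of case I mean),
      the final frame here is the same whether or not the animation ever
      ran, so comparing end states alone proves nothing:

      <svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
        <rect x="0" y="25" width="50" height="50" fill="blue">
          <!-- Slides right and back; fill="remove" restores the base
               value, so the last frame matches a renderer that ignores
               <animate> altogether -->
          <animate attributeName="x" values="0; 150; 0"
                   dur="2s" fill="remove"/>
        </rect>
      </svg>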

      Please share any insight on how it's done; I am curious.
      Zdeněk