On 12/6/05, Paul Downey <paul.downey@...
> Hi Steve!
> > I see we have a whole twenty test cases that define the required
> > behaviour of a WS-A stack.
> The test suite is still a work in progress, and the
> Working Group invites people to contribute more test cases:
> However, the primary focus of 'CR' is to prove it is possible
> to build implementations based upon the specifications, but
> most importantly, that it enjoys support from four or more
> vendors (two for features identified as being optional).
> That is, the 20 or so tests (expect more) are not intended as
> a conformance suite to test a single implementation, rather as
> a vehicle for demonstrating multiple implementations
> interoperating based upon the Candidate Recommendations for
Well, that's just it, you see. "Demonstrates that systems can be made
to interoperate in controlled circumstances" is fundamentally different
from "passes a standard conformance suite", and from "demonstrates
that interoperability is possible in the field".
> > Above and beyond those 20 test cases, if two stacks fail to
> > interoperate, who is going to be at fault? In the absence of any
> > broad, normative test suite, it's going to devolve to blame assignment
> > or the more agile/niche stacks having to adapt to the de-facto
> > standards set by the mainstream implementations. This is
> > unsatisfactory, as the normative specification will end up being
> > supplemented by the informal rules on how to work with WSA-3.0,
> > indigo, Axis1.4, etc.
> That's an interesting discussion for SOAPbuilders in general. I've
> heard several people I know, who should know better, assert that the
> WS-I performs 'bake-offs' in the manner that SOAPbuilders undertook
> a few years ago. Others believe it is a certification organisation.
> In reality, little could be further from the truth.
WS-I punted on the entire problem of which subsets of XSD people
should be using. The SOAPBuilders rounds are the only reliable tests
out there, and all WS-I has done is cause them to stagnate. Where are
the WS-A EPRs from SOAPBuilders? Where are the WSRF endpoints? Whose
endpoints are supporting one-way MTOM? Whose two-way endpoints take
3 minutes and 15 seconds to respond, just to see what timeouts the
callers and the proxy servers have turned on?
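That last probe is easy to sketch. Below is a scaled-down, self-contained version of the idea: a deliberately slow local server stands in for the 3m15s endpoint, and the caller's socket timeout decides who gives up first. The class name, delays, and loopback server are all stand-ins of mine, not real SOAPBuilders endpoints:

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutProbe {
    // Returns true if the caller gives up before the slow server answers.
    static boolean clientTimesOutFirst(int serverDelayMs, int clientTimeoutMs)
            throws Exception {
        final ServerSocket server = new ServerSocket(0); // any free port
        Thread slow = new Thread(() -> {
            try (Socket s = server.accept()) {
                Thread.sleep(serverDelayMs); // the deliberately slow endpoint
                s.getOutputStream().write("HTTP/1.0 200 OK\r\n\r\n".getBytes());
            } catch (Exception ignored) {
            }
        });
        slow.start();
        try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
            client.setSoTimeout(clientTimeoutMs); // the caller's patience
            client.getOutputStream().write("GET / HTTP/1.0\r\n\r\n".getBytes());
            client.getInputStream().read(); // blocks until data or timeout
            return false;  // the server answered in time
        } catch (SocketTimeoutException e) {
            return true;   // the caller timed out first
        } finally {
            server.close();
        }
    }

    public static void main(String[] args) throws Exception {
        // A two-second endpoint against a 200ms caller timeout:
        System.out.println(clientTimesOutFirst(2000, 200)); // prints true
    }
}
```

Run the same probe through a proxy and you learn whose timeout fires first in the real deployment, which is exactly the kind of fact no paper spec tells you.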
> There is a need for more testing and test-driven work in Web
> services in general. I'm not sure how to achieve that given the
> current attitude from the industry. Then there is the sad but
> true fact that most everybody wants to interop with one de facto
> implementation, and many see that as being somehow 'good enough'.
Test-driven development currently has the lead among modern
software development processes. The technical term used to describe
the other processes is "behind schedule".
To move it into the standards arena, we need to adopt the same notion
of write-test-first, as a formalisation of the specification. More to
the point, we, the stack implementors, need to be ruthless and bounce
back to the standards authority those specifications that get released
without any tests. It won't take long for the message to get through:
any standard without tests doesn't exist.
> Whilst I think this kind of one-off event is going to have its
> part to play, I hope the test-suite, public end-points provided
> by vendors and logs of example messages will go at least some way
> to improving interoperability. And let's face it 'interoperability'
> is the only reason to even consider using this stuff.
A one-off propaganda event to demonstrate interop between vendors does
make for good PR, but where are the regression tests?
> > Have the WS-A WG noticed that inconsistent implementations of
> > the spec are an inevitable outcome of having no test suite developed
> > alongside the specification, and do they or the W3C plan to change
> > their process to be more test-driven in future?
> FWIW I personally agree 100% with the notion of "test driven
> specifications" and as Chair of the XML Schema Patterns for
> Databinding WG*, that's something I'd like to promote - that we
> don't have anything in the spec that isn't testable, and for which
> we don't have a test-case. But that is a far more constrained
> specification, without the multiplicity of 'bindings' and
> message exchanges with which WS-Addressing may be used.
Well I'm pleased to see a standard that is being test driven; I hear
that some of the RDF work is done that way too.
We are currently working on the test infrastructure for our
distributed deployment standard. The language for describing
configurations is relatively testable: all you need is a wrapper XSD
to describe inputs and outputs, consistent fault codes across
implementations, and a test runner to push JUnit to its limits. By the
end of the week all three Java impls will be using the same JUnit test
classes, leaving the .NET implementation to reimplement that bit from
scratch.
Developing that test system has shown up all the bits we forgot from
the spec: those consistent fault codes, reference importation rules,
and other surprises. If we had written the tests earlier, we would have
found the problems earlier.
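The shared-test-class pattern is worth spelling out. Here is a minimal sketch of it: the assertions live in one abstract class, and each implementation supplies only its endpoint and transport. Every name here (DeploymentFault, "InvalidReference", the stub endpoint) is a hypothetical stand-in of mine; a real runner would extend junit.framework.TestCase and do actual SOAP I/O rather than call a stub:

```java
// Hypothetical fault type -- stands in for a SOAP fault with a code.
class DeploymentFault extends Exception {
    final String code;
    DeploymentFault(String code) { this.code = code; }
}

// The shared test logic: identical assertions across every stack.
abstract class SharedDeploymentTest {
    abstract String endpointUrl();

    // Stand-in for a SOAP call to the endpoint; a real runner would do I/O.
    abstract void deploy(String descriptor) throws DeploymentFault;

    // A bad descriptor must raise the *same* fault code on every
    // implementation -- exactly the consistency the spec has to mandate.
    void testUnknownReferenceFault() {
        try {
            deploy("descriptor-with-unknown-reference.xml");
            throw new AssertionError("expected a DeploymentFault");
        } catch (DeploymentFault f) {
            if (!"InvalidReference".equals(f.code)) {
                throw new AssertionError("inconsistent fault code: " + f.code);
            }
        }
    }
}

// A stub standing in for one stack's endpoint.
class StubStackTest extends SharedDeploymentTest {
    String endpointUrl() { return "http://localhost:8080/deploy"; }
    void deploy(String descriptor) throws DeploymentFault {
        throw new DeploymentFault("InvalidReference"); // reject bad reference
    }
}

public class SharedTestMain {
    static String run() {
        new StubStackTest().testUnknownReferenceFault();
        return "shared test passed";
    }
    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

The point of the design is that nothing implementation-specific leaks into the test class; swap the stub for a real stack and any divergence in fault codes fails the build immediately.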
IMO, it is competitive pressure to get a version 1.0 'standard' out
the door that places emphasis on a code-frozen XSD, WSDL and matching
document. However, that is just the classic waterfall process, and
the lack of tests increases the delay between gold standard and
> For WS-Addressing, features which don't result in implementations
> interoperating or are not testable will generate CR comments as
> a direct outcome of the interoperability testing and therefore be
> under risk of being reworked or removed from the specification.
> To that end, I would encourage comments on the suite, contribution
> of tests and above all participation at the WS-Addressing event.
I do plan to contribute tests. Given that the team has gone from zero
to twenty tests since September, with a few contributions they should
be at, what, thirty by Christmas!