WADL as a hypertext
- Hello list! :-D
I've read a lot of criticism of WADL (see for example http://bitworking.org/news/193/Do-we-need-WADL), since it could lead to something like WSDL/SOAP/RPC/Berlusconi & other human failings.
BTW, I'd like to use it as hypertext, as in http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
Would that be possible?
Actually, a WADL file should have its own MIME type (does it have one?), but it seems quite good as a hypertext language, as long as the client can handle it (through, for example, some code on demand).
I'm not considering it a tool to generate such code (even if that would be possible, and as long as the code is downloaded with the WADL, still RESTful), but just a simple and clean way to connect the resources.
It seems to me that there's nothing RESTfully wrong with using an alternative hypertext language instead of HTML, is there?
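To make the "WADL as hypertext" idea concrete, here is a toy WADL fragment; the base URI and resource paths are invented for illustration. The nesting of resource elements and methods is what would let a client follow links between resources rather than hard-code URIs:

```xml
<application xmlns="http://wadl.dev.java.net/2009/02">
  <resources base="http://example.org/api/">
    <!-- a collection resource the client can GET -->
    <resource path="orders">
      <method name="GET"/>
      <!-- a child resource reachable from the collection -->
      <resource path="{orderId}">
        <method name="GET"/>
        <method name="DELETE"/>
      </resource>
    </resource>
  </resources>
</application>
```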
- On Mon, 2010-07-26 at 15:52 -0700, Will Hartung wrote:
>Very good analysis.
> On Sun, Jul 25, 2010 at 2:40 AM, Eric J. Bowman
> <eric@...> wrote:
> > The biggest clash between Old Testament and New right now seems to
> > be the issue of media type proliferation. On that point, please refer
> > to http://roy.gbiv.com/untangled/2008/paper-tigers-and-hidden-dragons
> > Notice that Roy's solution to the problem space is a sparse-bit array.
> > Instead of creating a new media type, Roy's thought process is to
> > consider what ubiquitous media type may be repurposed to this need.
> > His choice is image/gif. That's so REST!
> It seems to me the conflict is coming from two distinct visions.
> One vision is to model the world as you see fit, and make the
> world work with it. The other is to take the world's models and make
> your software work with them.
> Your discussion of using HTML is a simple example. You've
> mentioned that before, and I never quite grokked how you went about it
> until recently. Effectively what you are doing is using semantic, HTML
> markup combined with RDFa style annotations to augment the markup, and
> using that as a representation for your data.
> When I looked at the RDFa primer
> (http://www.w3.org/TR/xhtml-rdfa-primer/) it became much clearer to me.
> But it still prompted my confusion about identifying the data to the
> system, since application/xhtml+xml simply doesn't tell me, at least,
> enough about how to process the data. But to your point, it does tell
> me what it is, and if it were my standard data type, then I would
> proceed to mine the payload for the interesting attributes.
> Apparently, that's what you're doing, correct? The XML payload that
> happens to be XHTML is not processed in total. Rather you dig your
> data out of it guided by XHTML and RDF annotations.
> If it were some defined XML, I'd be tempted to take the schema,
> generate a bunch of JAXB annotations, and have the framework
> marshal/unmarshal the document to internal Java objects, and
> manipulate those rather than, perhaps, pull chunks out of the document
> using a bunch of, say, XPath expressions.
> That's when the light hit me. Effectively, if your path of approach is
> using something like XPath as your accessor technique, then the
> difference between an XML document and an XHTML/RDFa document is the
> actual paths used, but really little else. The RDFa can impose enough
> structure that static XPath expressions are effective and precise
> enough to get the data you want out of the payloads. Once that
> decision has been made, XML vs XHTML becomes a bike shed color, and
> it's easy to see the extra value XHTML provides "for free" over XML.
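The extraction approach described above — static XPath-style expressions keyed off RDFa `property` attributes in an XHTML payload — can be sketched roughly like this. The markup, the `dc:title`/`ex:price` property names, and the helper function are invented for illustration; a real system would use the vocabulary its representations actually define:

```python
# Sketch: mining an XHTML+RDFa payload with static path expressions,
# using only the standard library. The RDFa @property attributes give
# enough structure that the paths work regardless of surrounding markup.
import xml.etree.ElementTree as ET

xhtml = """\
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <div about="/books/42">
      <span property="dc:title">RESTful Web Services</span>
      <span property="ex:price">29.95</span>
    </div>
  </body>
</html>"""

NS = {"x": "http://www.w3.org/1999/xhtml"}
root = ET.fromstring(xhtml)

def rdfa_value(prop):
    # A static path expression: find the first span annotated with
    # the given RDFa property, wherever it sits in the document.
    el = root.find(".//x:span[@property='%s']" % prop, NS)
    return el.text if el is not None else None

title = rdfa_value("dc:title")
price = float(rdfa_value("ex:price"))
```

The point of the sketch is the one made above: once you access data this way, swapping XML for XHTML only changes the paths, not the technique.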
> But I think it's clear when you're model making, and particularly from
> a world where binding documents to objects is common, automated, and
> "free", the XHTML option never comes on the radar. Arguably, it's not
> even an option at that point. Who wants the complexity of a generic
> XHTML DOM, even if mapped to an object in the system, compared to a
> "simpler", specific DOM/mapping?
> XHTML also (potentially) loses the value that things like Schema
> validation can bring to the table.
> Now, technically, you could make a "sub schema", where your document
> IS XHTML, it's just a specific subset of it that you (the designer)
> have decided is enough to represent your data. You can schema this,
> potentially map this (not many mappers do well with XML attributes to
> specific object slots), etc. "Cake and eat it too". If the goal of
> XHTML is for those intermediaries (i.e. it's not for the clients'
> benefit, nor the servers'), that can work. But if you go this
> route, you can't take "arbitrary" XHTML that happens to have your
> interesting data embedded within it, since the overall document may
> not match your subset schema.
> But I don't think this is contrary to what you've been discussing. I
> don't think you've ever advocated a system being able to take
> arbitrary documents that meet the higher level specification of the
> data type you're leveraging, vs the more specific subset that your
> system supports. Might be a handy feature, but it's not a requirement.
> However, whether you use XHTML or XML, the semantics of the payload
> still need to be defined. That's always hard work.
> In that light, though, I want to take up Roy's example that you cited.
> While a GIF is a clever media type to reuse, I think for many
> folks interested in this data it's wrong on many levels.
> First, it's not a sparse array, as was suggested, it's just compact.
> You're still sending all 1M bits whether it's 1 user change or
> 10000. Yes, it compresses, but that's not relevant as that's only a
> transport issue.
> But most importantly, many systems that happen to use the GIF media
> type DON'T use it at the level for which it's being suggested.
> Specifically, at the bit level. I don't know PHP, but is it really
> straightforward to get the color of pixel 100,100 of a received GIF?
> In the browser, with the canvas element it can be done, but that's a
> pretty recent development. But
> either way, it sure is a lot of hoops to jump through to find out if
> bit #100100 is set. Most systems present the artifact instantiated
> from a GIF datatype as an opaque blob with very simple properties
> rather than as a list of Bits.
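The "is bit #100100 set" question can be shown without the GIF wrapper. Below is a minimal sketch of the underlying idea — a 1M-entry membership map packed into a byte array, where checking a user is a shift-and-mask; the point is that this is exactly the bit-level access most GIF libraries don't hand you directly. The helper names and the 1,000,000-bit size are taken from the discussion above, not from any real API:

```python
# Sketch: a packed 1M-bit map (the decoded 1000x1000 1-bit image,
# minus the image format). One bit per user; bit i set means user i
# changed.
SIZE = 1_000_000
bitmap = bytearray(SIZE // 8)  # 125000 bytes, all bits clear

def set_bit(bits, i):
    # Mark bit i: pick the byte, OR in the bit within it.
    bits[i // 8] |= 1 << (i % 8)

def bit_is_set(bits, i):
    # Test bit i with the same byte/offset arithmetic.
    return bool(bits[i // 8] & (1 << (i % 8)))

set_bit(bitmap, 100_100)                  # "user 100100 changed"
changed = bit_is_set(bitmap, 100_100)     # True
unchanged = bit_is_set(bitmap, 100_101)   # False
```

With a GIF representation, the same two-line check turns into decoding an image and sampling a pixel — the "hoops" the paragraph above describes.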
> I see the conflict between the reuse of what is, vs the creation of
> what one wants, as the difference between the folks wanting full-boat
> OO systems and typing within JS instead of just passing around hashes
> of hashes. Bags of hashes of bags of hashes. The conflict between the
> strongly typed crowd and the dynamically typed crowd (the battles
> between which are legion). Some make do, others want specific
> abstractions to work with.
> We're actually seeing the phenomenon of reusing data types, even in
> the SOAP world here in health care. Leveraging a few "common" data
> formats for many uses. A common data type today is the Document
> Submission Set payload. It's based on ebXML, which is used by another
> standards committee, and therefore adopted by yet another standards
> committee.
> Ideally this is what standard formats are for. But, at the same time,
> the format is so onerous, that there is already push back from the
> "simpler" crowd. For a simple exchange, there is a huge amount of
> "boiler plate" using this format. Just like the pushback from SOAP,
> and the boiler plate it brings with it (outside of semantics of SOAP).
> "Why can't I just send a PDF" they say.
> So, standards or no, they're not necessarily easy to use. Tooling made
> SOAP "easy to use". REST is "harder" for many to use because of the
> lack of tooling. Throwing an XSD against some tools and getting free
> Java classes is "easier" than crafting and testing DOM code or XPath
> expressions. That's where the pressure for many media types is coming
> from, IMHO.
> They're "cheap" to make, and "easy" to use.
I wish it were 2022. HTML5 would be finished, and maybe the world would
have moved off media types for APIs in favour of Higher Order HTML that
allowed you to express your data clearly and specifically in a single
interlingua. Imagine being able to describe the domain, range, and
cardinality of your data. That would be mappable to code.
Life would be grand :)