Does REST Provide Deep Interoperability?
Document Type: ZapFlash
By: Jason Bloomberg | Posted: December 2, 2010
We at ZapThink were encouraged that our recent ZapFlash on Deep Interoperability generated some intriguing responses. Deep Interoperability is one of the Supertrends in the ZapThink 2020 vision for enterprise IT (now available as a poster for free download or purchase). In essence, the Deep Interoperability Supertrend is the move toward software products that truly interoperate, even over time as standards and products mature and requirements evolve. ZapThink's prediction is that customers will increasingly demand Deep Interoperability from vendors, and eventually vendors will have to figure out how to deliver it.
One of the key points in the recent ZapFlash was that the Web Services standards don't even guarantee interoperability, let alone Deep Interoperability. We had a few responses from vendors who picked up on this point. They had a few different angles, but the common thread was: hey, we support REST, so we have Deep Interoperability out of the box! So buy our gear, forget the Web Services standards, and your interoperability issues will be a thing of the past!
Not so fast. Such a perspective misses the entire point of Deep Interoperability. For two products to be deeply interoperable, they should be able to interoperate even if their primary interface protocols are incompatible. Remember the "modem negotiation on steroids" illustration: a 56K modem would still be able to communicate with an older 2400-baud modem because it knew how to negotiate with older modems, and could support the slower protocol. Similarly, a REST-based software product would have to be able to interoperate with another product that didn't support REST by negotiating some other set of protocols that both products did support.
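The modem-style fallback above can be sketched as a simple negotiation: each endpoint advertises the protocols it speaks, and both settle on the richest one they share. This is a minimal sketch, not any vendor's actual handshake; the protocol names and preference order are hypothetical.

```python
# Hypothetical preference order, from richest/newest to oldest fallback.
PREFERENCE_ORDER = ["rest+json/v2", "rest+json/v1", "soap/1.2", "soap/1.1", "xml-rpc"]

def negotiate(ours, theirs):
    """Return the most-preferred protocol both endpoints support, or None."""
    shared = set(ours) & set(theirs)
    for proto in PREFERENCE_ORDER:  # walk from richest to oldest
        if proto in shared:
            return proto
    return None

# A REST-native product talking to a legacy SOAP-only product
# falls back to the one protocol both speak:
print(negotiate(["rest+json/v2", "soap/1.1"], ["soap/1.2", "soap/1.1"]))
# prints "soap/1.1"
```

If the intersection is empty, negotiation fails outright, which is exactly the situation Deep Interoperability is meant to rule out.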
But this “least common denominator” negotiation model is still not the whole Deep Interoperability story. Even if all interfaces were REST interfaces we still wouldn’t have Deep Interoperability. If REST alone guaranteed Deep Interoperability, then there could be no such thing as a bad link.
Bad links on Web pages are ubiquitous, of course. Put a perfectly good link in a Web page that connects to a valid resource. Wait a few years. Click the link again. Chances are, the original resource was deleted or moved or had its name changed. 404 not found.
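One mitigation for link rot is indirection: publish links to a stable identifier rather than a concrete URL, and let a resolver map identifiers to wherever the resource currently lives (the idea behind PURLs and DOI-style handles). A minimal sketch, with hypothetical identifiers and URLs:

```python
# Hypothetical resolver table: stable IDs on the left, current URLs on the right.
resolver = {"doc:zapflash-2010-12": "http://zapthink.com/flashes/rest-deep-interop.html"}

def resolve(stable_id):
    """Translate a stable identifier into the resource's current URL."""
    try:
        return resolver[stable_id]
    except KeyError:
        raise LookupError(f"404: no mapping for {stable_id}")

# Years later the resource moves; only the resolver entry changes,
# and every published link that used the stable ID keeps working:
resolver["doc:zapflash-2010-12"] = "http://zapthink.com/2010/12/rest-deep-interop/"
print(resolve("doc:zapflash-2010-12"))
```

Of course, this only relocates the problem: now the resolver itself must stay available and maintained, which is part of why link rot remains unsolved.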
OK, all you RESTafarians out there, how do we solve this problem? What can we do when we create a link to prevent it from ever going bad? How do we keep existing links from going bad? And what do we do about all the bad links that are already out there? The answers to these questions are all part of the Deep Interoperability Supertrend.
One important point is that the modem negotiation example is only a part of the story, since in that case, you already have the two modems, and the initiating one can find the other one. But Deep Interoperability also requires discoverability and location independence. You can’t interoperate with a piece of software you can’t find.
But we still don’t have the whole story yet, because we must still deal with the problem of change. What if we were able to interoperate at one point in time, but then one of our endpoints changed? How do we ensure continued interoperability? The traditional answer is to put something in the middle: either a broker in a middleware-centric model, or a registry or other discovery agency that can resolve abstract endpoint references in a lightweight model (either REST or non-middleware SOA). The problem with such intermediary-based approaches, however, is that they relieve the vendors of the need to build products with Deep Interoperability built in. Instead, they simply offer one more excuse to sell middleware.
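To make the intermediary pattern concrete, here is a minimal sketch of registry-based late binding: consumers hold an abstract endpoint reference, and the registry resolves it to a concrete address at call time, absorbing change in the middle. The service names and URLs are hypothetical; note that this is exactly the pattern the paragraph above critiques, since the interoperability lives in the registry, not in the products themselves.

```python
# Hypothetical registry mapping abstract references to concrete endpoints.
registry = {"orders-service": "http://host-a.example.com/orders/v1"}

def call(abstract_ref):
    """Resolve an abstract endpoint reference at call time, then invoke it."""
    concrete = registry[abstract_ref]  # late binding via the registry
    return f"invoking {concrete}"

before = call("orders-service")
# The endpoint changes; only the registry entry is updated:
registry["orders-service"] = "http://host-b.example.com/orders/v2"
after = call("orders-service")  # consumers are unaffected
```

The consumers never noticed the change, but only because the intermediary did the work the endpoints could not do for themselves.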
The ZapThink Take
At its core, Deep Interoperability is a peer-to-peer model, in that we’re requiring two products to be deeply interoperable with each other. But peer-to-peer Deep Interoperability is just the price of admission. If we have two products that are deeply interoperating, and we add a third product to the mix, it should be able to negotiate with the other two, not just to establish the three pairwise relationships, but to form the most efficient way for all three products to work together. Add a fourth product, then a fifth, and so on, and the same process should take place.
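Extending the earlier pairwise idea to a group, the negotiation above amounts to intersecting every peer's capabilities and picking the most-preferred protocol the whole ecosystem shares. A minimal sketch, with hypothetical protocol names (real ecosystem-level negotiation would be far richer than a single shared protocol):

```python
from functools import reduce

# Hypothetical preference order, richest first.
PREFERENCE = ["rest+json/v2", "rest+json/v1", "soap/1.1"]

def group_negotiate(peers):
    """Pick the most-preferred protocol that every peer in the group supports."""
    if not peers:
        return None
    shared = reduce(set.intersection, peers)  # capabilities common to all peers
    for proto in PREFERENCE:
        if proto in shared:
            return proto
    return None

trio = [{"rest+json/v2", "soap/1.1"},
        {"rest+json/v1", "soap/1.1"},
        {"soap/1.1"}]
print(group_negotiate(trio))  # the whole group falls back together
```

Adding or removing a peer simply reruns the negotiation, which is the small-scale analogue of the ecosystem-level adjustment described below.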
The end result will be IT environments of arbitrary size and complexity supporting Deep Interoperability across the entire architecture. Add a product, remove a product, or change a product, and the entire ecosystem adjusts accordingly. And if you’re wondering whether this ecosystem-level adjustment is an emergent property of our system of systems, you’ve hit the nail on the head. That’s why Deep Interoperability and Complex Systems Engineering are adjacent on our ZapThink 2020 poster.
Comments

- Steve: Can I have this too please... quite frankly, MDM and the assembly of federated information views is a brilliant example of where REST _should_ be used over a functional/WS approach. It would be great if we could have some of this stuff standardised into enterprise-grade REST.

  On 31 December 2010 17:05, Stuart Charlton <stuartcharlton@...> wrote:

  Sorry for the belated response; been on holidays. Happy New Year!

  To be clear - REST is fine as it is and very valuable, but in the Enterprise, here is what I want.

  Short run (i.e. yesterday)...

  1. Media type for resource lifecycle contracts (e.g. a more web-friendly version of SCXML). Basically this is the v1.0 answer to "if I shouldn't bind RESTful resources to RPC methods, then what?" The answer is, I think, hierarchical state machines, also known as programming-by-difference. This would preserve the anarchic scalability and serendipity of the web for "write"-oriented uses. The new "REST in Practice" book starts down this path with what they call Domain Application Protocols (DAP). But I think it needs to be a media type (and suitable language bindings/frameworks) on its own. I've been sitting on a half-written blog post for six months here because I would like to actually finish writing some code to support this hypothesis, but of course, life got in the way.

  2. A simple semantic language for use with URI link relations. This complements #1. You can't get programming by difference if you can't agree on a couple of standard link relations like "extends" or "replaces".

  Medium run (i.e. next two to four years)...

  3. Secure media type containers (i.e. the successor to S/MIME); see http://blog.jclark.com/2007/10/why-not-smime.html. XML can do this on its own, of course, but the kids these days like JSON and may like something else in the future. Signatures and encryption are hard, so it would be best to do this once.

  And if you want to really look down the horizon...

  1. Scalable resource pub/sub with intermediaries (WATCH/NOTIFY; perhaps waka will go here)
  2. HTML/CSS features that better support multitouch devices
  3. Media type for logical data exchange (e.g. logical pre/post conditions a la RuleML, a logical data model like RDF but without the conceptual & political baggage)