
[ZapFlash] Does REST Provide Deep Interoperability?

  • Gervas Douglas
    Message 1 of 35 , Dec 2, 2010

      Does REST Provide Deep Interoperability?

      Document ID: | Document Type: ZapFlash
      By: Jason Bloomberg | Posted: December 2, 2010

      We at ZapThink were encouraged by the fact that our recent ZapFlash on Deep Interoperability generated some intriguing responses. Deep Interoperability is one of the Supertrends in the ZapThink 2020 vision for enterprise IT (now available as a poster for free download or purchase). In essence the Deep Interoperability Supertrend is the move toward software products that truly interoperate, even over time as standards and products mature and requirements evolve. ZapThink’s prediction is that customers will increasingly demand Deep Interoperability from vendors, and eventually vendors will have to figure out how to deliver it.

One of the key points in the recent ZapFlash was that the Web Services standards don't even guarantee interoperability, let alone Deep Interoperability. We had a few responses from vendors who picked up on this point. They came at it from different angles, but the common thread was: hey, we support REST, so we have Deep Interoperability out of the box! So buy our gear, forget the Web Services standards, and your interoperability issues will be a thing of the past!

Not so fast. Such a perspective misses the entire point of Deep Interoperability. For two products to be deeply interoperable, they should be able to interoperate even if their primary interface protocols are incompatible. Remember the modem-negotiation-on-steroids illustration: a 56K modem could still communicate with an older 2400-baud modem because it knew how to negotiate with older modems and could fall back to the slower protocol. Similarly, a REST-based software product would have to be able to interoperate with another product that didn't support REST by negotiating some other set of protocols that both products did support.
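The modem analogy suggests a simple capability negotiation: each product advertises the protocols it supports in preference order, and a pair falls back to the best one both understand. A hypothetical sketch (protocol names invented for illustration):

```python
# Hypothetical sketch of "modem-style" protocol negotiation between two
# products: each advertises its supported protocols best-first, and the
# pair falls back to the best protocol both understand.

def negotiate(ours, theirs):
    """Return the first protocol in `ours` that `theirs` also supports,
    or None if the two products share no protocol at all."""
    theirs_set = set(theirs)
    for proto in ours:  # `ours` is ordered best-first
        if proto in theirs_set:
            return proto
    return None

new_product = ["rest+json", "soap-1.2", "soap-1.1", "xml-rpc"]
old_product = ["soap-1.1", "xml-rpc"]

print(negotiate(new_product, old_product))  # soap-1.1
```

The point of the sketch is only the shape of the handshake: the newer product does not force its preferred protocol, it degrades gracefully to what the older peer can handle.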

      But this “least common denominator” negotiation model is still not the whole Deep Interoperability story. Even if all interfaces were REST interfaces we still wouldn’t have Deep Interoperability. If REST alone guaranteed Deep Interoperability, then there could be no such thing as a bad link.

      Bad links on Web pages are ubiquitous, of course. Put a perfectly good link in a Web page that connects to a valid resource. Wait a few years. Click the link again. Chances are, the original resource was deleted or moved or had its name changed. 404 not found.

      OK, all you RESTafarians out there, how do we solve this problem? What can we do when we create a link to prevent it from ever going bad? How do we keep existing links from going bad? And what do we do about all the bad links that are already out there? The answers to these questions are all part of the Deep Interoperability Supertrend.
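One long-standing answer to the first two questions is a layer of indirection, the idea behind PURLs and handle systems: publish a stable identifier and resolve it to the resource's current location at request time. A hypothetical sketch, not ZapThink's prescription:

```python
# Hypothetical sketch of link indirection: published links carry a stable
# permanent identifier; a resolver maps it to wherever the resource
# currently lives. When the resource moves, only the resolver entry
# changes -- the published links never go bad.

resolver = {
    "urn:example:whitepaper:42": "https://example.com/docs/whitepaper.pdf",
}

def resolve(permanent_id):
    """Translate a stable identifier into the resource's current URL."""
    try:
        return resolver[permanent_id]
    except KeyError:
        raise LookupError(f"unknown identifier: {permanent_id}")

# The document moves; only the resolver is updated, not any links.
resolver["urn:example:whitepaper:42"] = "https://cdn.example.com/wp/42.pdf"

print(resolve("urn:example:whitepaper:42"))
```

Of course, this only relocates the problem to keeping the resolver current, which is exactly why the third question, what to do about links that are already bad, remains open.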

      One important point is that the modem negotiation example is only a part of the story, since in that case, you already have the two modems, and the initiating one can find the other one. But Deep Interoperability also requires discoverability and location independence. You can’t interoperate with a piece of software you can’t find.

But we still don't have the whole story, because we must still deal with the problem of change. What if we were able to interoperate at one point in time, but then one of our endpoints changed? How do we ensure continued interoperability? The traditional answer is to put something in the middle: either a broker in a middleware-centric model, or a registry or other discovery agency that can resolve abstract endpoint references in a lightweight model (either REST or non-middleware SOA). The problem with such intermediary-based approaches, however, is that they relieve vendors of the need to build products with Deep Interoperability built in. Instead, they simply offer one more excuse to sell middleware.
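The intermediary model just described can be sketched as a registry that resolves an abstract service name to its current concrete endpoint at call time (all names hypothetical):

```python
# Hypothetical sketch of the intermediary model: consumers bind to an
# abstract service name, and a registry resolves it to the current
# concrete endpoint at call time, so endpoint changes never touch the
# consumer's code.

class Registry:
    def __init__(self):
        self._endpoints = {}

    def register(self, service_name, endpoint):
        """Publish (or re-publish) the current endpoint for a service."""
        self._endpoints[service_name] = endpoint

    def lookup(self, service_name):
        """Resolve the abstract name to its current concrete endpoint."""
        return self._endpoints[service_name]

registry = Registry()
registry.register("order-service", "https://host-a.example.com/orders")

# The provider moves; consumers keep using the abstract name unchanged.
registry.register("order-service", "https://host-b.example.com/v2/orders")
print(registry.lookup("order-service"))
```

Which illustrates the critique as well as the mechanism: the interoperability smarts live in the box in the middle, not in the products themselves.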

      The ZapThink Take

      At its core Deep Interoperability is a peer-to-peer model, in that we’re requiring two products to be deeply interoperable with each other. But peer-to-peer Deep Interoperability is just the price of admission. If we have two products that are deeply interoperating, and we add a third product to the mix, it should be able to negotiate with the other two, not just to establish the three pairwise relationships, but to form the most efficient way for all three products to work together. Add a fourth product, then a fifth, and so on, and the same process should take place.
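Extending the pairwise negotiation to a whole ecosystem can be sketched as recomputing the best protocol common to all members whenever a product joins (a deliberately naive illustration; real group negotiation would be far richer than a set intersection):

```python
# Hypothetical sketch of ecosystem-level negotiation: as each product
# joins, the group recomputes the best protocol that every member
# supports, rather than maintaining N*(N-1)/2 pairwise agreements.

def group_negotiate(preferences_by_product):
    """Given each product's protocols in preference order, return the
    highest-preference protocol common to all products, or None."""
    products = list(preferences_by_product.values())
    if not products:
        return None
    common = set(products[0]).intersection(*map(set, products[1:]))
    # Rank the common protocols by the first product's preference order.
    for proto in products[0]:
        if proto in common:
            return proto
    return None

ecosystem = {
    "product-a": ["rest+json", "soap-1.2", "soap-1.1"],
    "product-b": ["soap-1.2", "soap-1.1"],
}
print(group_negotiate(ecosystem))  # soap-1.2

ecosystem["product-c"] = ["soap-1.1"]  # a third product joins
print(group_negotiate(ecosystem))  # soap-1.1 -- the group re-settles
```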

      The end result will be IT environments of arbitrary size and complexity supporting Deep Interoperability across the entire architecture. Add a product, remove a product, or change a product, and the entire ecosystem adjusts accordingly. And if you’re wondering whether this ecosystem-level adjustment is an emergent property of our system of systems, you’ve hit the nail on the head. That’s why Deep Interoperability and Complex Systems Engineering are adjacent on our ZapThink 2020 poster.

    • Steve Jones
      Message 35 of 35 , Jan 1, 2011
Can I have this too please... quite frankly MDM and the assembly of federated information views is a brilliant example of where REST _should_ be used over a functional/WS approach. It would be great if we could have some of this stuff standardised into enterprise-grade REST.


        On 31 December 2010 17:05, Stuart Charlton <stuartcharlton@...> wrote:

        Sorry for the belated response; been on holidays.   Happy New Year!

        To be clear - REST is fine as it is and very valuable, but in the Enterprise, here is what I want:

        Short run (i.e. yesterday)....

        1.  Media type for resource lifecycle contracts (e.g. a more web-friendly version of SCXML).  

        Basically this is the v1.0 answer to "if I shouldn't bind RESTful resources to RPC methods, then what?"

The answer is, I think, hierarchical state machines, also known as programming by difference. This would preserve the anarchic scalability and serendipity of the web for "write"-oriented uses.

        The new "REST in Practice" book starts down this path with what they call Domain Application Protocols (DAP).  But I think it needs to be a media type (and suitable language bindings/frameworks) on its own.   I've been sitting on a half-written blog post for 6 months here because I would like to actually finish writing some code to support this hypothesis, but of course, life got in the way.   
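As a rough illustration of the programming-by-difference idea, a child lifecycle contract might be expressed as a delta against a parent contract. The state names and the `extend` helper are invented for the example, not taken from any proposed media type:

```python
# Hypothetical sketch of a hierarchical resource lifecycle: a child
# contract is defined as a *difference* against a parent contract,
# inheriting its transitions and adding only what changes.

base_lifecycle = {
    "draft":     ["submitted"],
    "submitted": ["approved", "rejected"],
    "approved":  [],
    "rejected":  [],
}

def extend(parent, delta):
    """Programming by difference: copy the parent contract, then layer
    the delta's extra states and transitions on top of it."""
    child = {state: list(nexts) for state, nexts in parent.items()}
    for state, nexts in delta.items():
        child.setdefault(state, [])
        child[state].extend(n for n in nexts if n not in child[state])
    return child

order_lifecycle = extend(base_lifecycle, {
    "approved": ["shipped"],  # new transition on an inherited state
    "shipped":  [],           # new state
})

print(order_lifecycle["approved"])  # ['shipped']
```

A generic client that understands the parent contract can still drive the child through every inherited state; only the difference is new.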

        2.   A simple semantic language for use with URI link relations.
This complements #1. You can't get programming by difference if you can't agree on a couple of standard link relations like "extends" or "replaces".
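To illustrate, here is a hypothetical sketch of how a standard "extends" link relation could let a generic client assemble a contract by following links. The document shapes and URIs are invented for the example:

```python
# Hypothetical sketch: lifecycle documents point at one another through a
# standard "extends" link relation, so a generic client can walk the
# chain and merge contracts without any service-specific knowledge.

documents = {
    "/contracts/base": {
        "states": {"draft": ["submitted"], "submitted": []},
        "links": [],
    },
    "/contracts/order": {
        "states": {"submitted": ["shipped"], "shipped": []},
        "links": [{"rel": "extends", "href": "/contracts/base"}],
    },
}

def assemble(uri):
    """Merge a contract with everything it transitively extends."""
    doc = documents[uri]
    merged = {}
    for link in doc["links"]:
        if link["rel"] == "extends":
            merged.update(assemble(link["href"]))
    for state, nexts in doc["states"].items():
        merged.setdefault(state, [])
        merged[state] = sorted(set(merged[state]) | set(nexts))
    return merged

print(assemble("/contracts/order"))
```

The client code above knows only the relation name, not the services involved, which is exactly the kind of agreement a small semantic language for link relations would pin down.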

        Medium run (i.e. next two to four years)...

        3. Secure media type containers (i.e. the successor to S/MIME), see http://blog.jclark.com/2007/10/why-not-smime.html   

        XML can do this on its own, of course, but the kids these days like JSON and may like something else in the future.   Signatures & Encryption are hard, so it would be best to do this once.  
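As a rough, format-agnostic sketch of the idea, here is a container that carries opaque payload bytes plus a detached MAC, built only from Python's standard library. A shared-key HMAC stands in for real public-key signatures, and all names are illustrative:

```python
# Hypothetical sketch of a secure media-type container: the payload is
# carried as opaque bytes alongside its media type and a detached MAC,
# so the same envelope works for XML, JSON, or whatever comes next.
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # illustration only; a real design would use PKI

def seal(payload_bytes, media_type):
    """Wrap raw payload bytes in a signed, format-agnostic envelope."""
    mac = hmac.new(SHARED_KEY, payload_bytes, hashlib.sha256).hexdigest()
    return {
        "media_type": media_type,
        "payload": base64.b64encode(payload_bytes).decode(),
        "mac": mac,
    }

def unseal(container):
    """Verify the MAC and return the original payload bytes."""
    payload = base64.b64decode(container["payload"])
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, container["mac"]):
        raise ValueError("signature check failed")
    return payload

box = seal(json.dumps({"order": 42}).encode(), "application/json")
print(json.loads(unseal(box)))  # {'order': 42}
```

Because the envelope treats the payload as bytes, the hard part (signing and verification) is done once, independent of whether the kids are using XML, JSON, or something else entirely.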

        And if you want to really look down the horizon...

        1. Scalable resource pub/sub with intermediaries (WATCH/NOTIFY, perhaps waka will go here)
2.  HTML/CSS features that better support multitouch devices
        3.  Media type for logical data exchange (e.g. logical pre/post conditions ala RuleML, a logical data model like RDF but without the conceptual & political baggage)

        From: jasonbaragry <Jason.Baragry@...>
        Sent: Tue, December 14, 2010 11:44:44 AM
        Subject: [service-orientated-architecture] Re: [ZapFlash] Does REST Provide Deep Interoperability?



        > REST might be the future, but there's a *huge* amount of work left to realize the promises made for it.

        I'd find it very useful if you could expand on your experiences concerning what is still left to do. Is it just contracts? Is it also standard ways to specify QoS issues? Which tradeoffs do you need to make to model the lifecycle of business entities using standard http ops? etc.

        Especially items which go beyond what you have already blogged.


        --- In service-orientated-architecture@yahoogroups.com, Stuart Charlton <stuartcharlton@...> wrote:
        > Three notes.
> 1. I'm in charge of a project that will be delivering a set of RESTful services
> for Order to Cash in logistics & shipping, which will be put into production at
> a large organization in that space next year. How widely it will be reused
> remains to be seen, but the work is beginning.
> 2. I agree with Steve that from the business perspective, there's nearly always
> a specific contract at play. The problem is the degree to which you bake that
> into your code, because it will always change over time. IT rigidity
> eliminates a lot of opportunity when the implication of a business circumstance
> is a 9-month, multi-million-dollar integration project.
        > The goal of REST for enterprise integration is to change that interface change
        > equation into modifying a bunch of hyperlinks, switching a media type or two,
        > and letting the client/user agents automatically sort out the technical
        > differences. The challenge is that we are nowhere near getting there. REST
        > might be the future, but there's a *huge* amount of work left to realize the
        > promises made for it.
        > 3. The core technical issue with REST in business integration is that there has
        > been very little work on "how to design and reuse media types", which seems to
        > be the undiscovered frontier, and an essential one. Unfortunately there's not
        > a lot of investment being made to do this because people are just busy doing
        > their own thing, and most vendors really don't see a lot of upside to designing
        > another middleware stack at the moment, given how well that usually turns out.
> For example, you *cannot* build a global (enterprise-wide) contract for
> certain business processes, because it's nearly impossible to gain that level
> of agreement across projects, departments, and management teams, and it's
> constantly changing & evolving by a dozen projects, each doing their own thing
> simultaneously. There will always be specifics in terms of data definitions
> and process activities or systems interactions, because of this disagreement.
> There will never be an "Order to Cash" media type that everybody agrees to.
        > Firstly, that's the wrong granularity for interoperability -- a generic media
        > type for defining a process or business entity lifecycle would be much more
        > useful, for example, one that complements specific media types for data.
        > Secondly, minting new media types for every business process or domain
        > application protocol leads to similar challenges of service-specific interfaces
        > - an explosion of complexity with limited interoperability. It is better than
        > what we had with WS*, because the ability to GET documents is a useful step
        > towards global contracts, but we're still stuck with the effects of
        > PUT/POST/DELETEing nearly always requiring service-specific client code for
        > system-to-system integration.
        > I think it's quite possible to fix this situation, through a media type that
        > describes state lifecycles and/or pre+post conditions, but it's going to take
        > longer than a lot of people would like. The closest example I've seen to this
        > kind of thing is rather academic (OWL-S with SWRL atoms to define pre/post
        > conditions as part of Clark & Parsia's HotPlanner), so I know it's possible,
        > just far from the mainstream.
        > Cheers
        > Stu
        > ________________________________
        > From: Steve Jones <jones.steveg@...>
        > To: service-orientated-architecture@yahoogroups.com
        > Sent: Wed, December 8, 2010 4:49:04 AM
        > Subject: Re: [service-orientated-architecture] Re: [ZapFlash] Does REST Provide
        > Deep Interoperability?
        > On 6 December 2010 14:03, Jan Algermissen <algermissen1971@...> wrote:
        > >
        > >On Dec 6, 2010, at 11:43 AM, Steve Jones wrote:
        > >
> >> The AtomPub thing for blogs is a standard interface for a SINGLE service which
> >> has proved very successful, I'd argue specifically because it's a standard
> >> interface for a standard service. However, using the same interface for an
> >> Order to Cash process is liable to prove problematic.
        > >
        > >
> Well, sure. So, if your domain is Order to Cash, the task is to specify media
> type(s) that enable the use cases commonly found with Order to Cash (UBL comes
> to mind). You just do not need service-specific contracts to do very specific
> things.
        > >
> Which is great in theory, but what I'm really looking for these days is examples
> of where it's been done (ala AIA for WS), particularly in terms of the
> interaction model.
        > >That's basically all that REST is about: move all service specific stuff to a
        > >global (could also mean 'enterprise-wide') contract and thereby eliminate all
        > >coupling between individual clients and services.
        > >
        > >I am failing to see how having *service specific* contracts is better. Can you
        > >explain?
        > >
> Because people work better with specific contracts, and even with a REST GET
> "telling you the map" you still have a specific contract for that resource, and
> changing that contract will have a downstream impact (potentially invalidating a
> real-world contract). At any point in time, from a business perspective, there is
> always a service-specific contract even if technically the interface appears
> service-independent.
        > Steve
        > >Jan
        > >
        > >
