
Re: [rest-discuss] Re: Why HATEOAS?

  • Bill de hOra
    Message 1 of 27 , Apr 6 3:10 PM
      wahbedahbe wrote:
      >
      >
      > Ok, but I'm more wondering about the specific gains folks are seeing in
      > practice in the systems they are building. The reason I'm curious is
      > because there are a lot of frameworks like Rails which claim
      > "RESTfulness" but seem to just deliver REST - HATEOAS (well at least on
      > the "machine to machine" ActiveResource side of things when I last
      > looked at it). Lots of folks seem to think this is really great and is
      > light years better than RPC but I don't really understand why.
      >
      > Also, things like the idempotency of PUT and DELETE have never yielded
      > any practical benefits to me (though I get how they can in _theory_) so
      > I'm also really curious to know how people are making practical use of
      > them in the systems they are building.
      >
      > I have personally seen huge gains with "full" REST in systems I've built
      > -- chiefly in decoupling clients and servers (a lot of the stuff Craig
      > McClanahan brings up in this thread) -- and so I really "get" that. REST
      > - HATEOAS -- not so much.

So for me, some practical things come to mind.

      - the methods give you high level support for potential
      operations/scaling pain. Just knowing a system could internally be
      partitioned at the http level into HEAD/OPTIONS/GET and PUT/POST/DELETE
      makes me sleep better at night. Much easier to do it at the load
      balancers than in application code imvho.
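The split the post describes can be sketched in a few lines. This is a hypothetical illustration, not any particular load balancer's config; the pool names are invented:

```python
# Hypothetical sketch: routing requests to read or write pools by HTTP
# method -- the kind of rule a load balancer can apply without touching
# application code.
READ_METHODS = {"GET", "HEAD", "OPTIONS"}
WRITE_METHODS = {"PUT", "POST", "DELETE"}

def choose_pool(method):
    """Return the backend pool name for a request method."""
    if method in READ_METHODS:
        return "read-replicas"   # safe/cacheable, can fan out
    return "write-masters"       # writes (and unknowns) go to the
                                 # authoritative store
```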

      - PUT and DELETE are useful to have as I don't have to disambiguate
      POST. I believe that when smart developers are encouraged to use the
      full method set from the get go, they will naturally use POST well and
      for dealing with the inevitable corner cases (also forms posting tends
      to get used well, which is a big thing for me). So I think having a
      method complement helps you fall into the pit of success.
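One concrete payoff of PUT's idempotency is blind retry after a timeout, which is unsafe with an undisambiguated POST. A minimal sketch (the `send` callable is a stand-in for whatever HTTP client you use):

```python
# Hypothetical sketch: because PUT is idempotent, replaying it after a
# timeout produces the same resource state, so the client can retry
# without knowing whether the first attempt actually landed.
def put_with_retries(send, url, body, attempts=3):
    """send(url, body) performs one PUT and may raise TimeoutError."""
    last_error = None
    for _ in range(attempts):
        try:
            return send(url, body)   # safe to replay: PUT is idempotent
        except TimeoutError as err:
            last_error = err
    raise last_error
```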

      - URL construction (or the lack of). I was reviewing an API today and
      realized it could be geolocated by allowing a server to supply URLs to
      different domains/administrations. If the clients were putting the URLs
      together, that would not work. It also means basic stuff like media
      serving/cdns will work when you need them to.
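A client that follows server-supplied links rather than assembling URLs itself might look like this. The link structure and domains are invented for illustration:

```python
# Hypothetical sketch: the client picks the link out of the
# representation, so the server is free to point media at a CDN or a
# geolocated domain without breaking any client.
def media_url(representation):
    """Return the href of the 'media' link, if the server supplied one."""
    for link in representation.get("links", []):
        if link.get("rel") == "media":
            return link["href"]   # whatever domain the server chose
    return None

doc = {"links": [
    {"rel": "self", "href": "https://eu.example.org/items/1"},
    {"rel": "media", "href": "https://cdn.example.net/items/1.jpg"},
]}
# media_url(doc) -> "https://cdn.example.net/items/1.jpg"
```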

      - Well known formats. Or at least well specified ones. You get so much
      futureproofing against versioning by making the media type explicit. I'm
      not a huge conneg fan (think it doesn't get used well), but the Accept
      header is a huge win if you're building something that has to evolve and
      support already deployed clients for years to come.
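The versioning win can be made concrete with explicit media types in the Accept header. A simplified sketch (the vendor media types are invented, and real Accept parsing with q-values is more involved):

```python
# Hypothetical sketch: old clients keep asking for (and getting) the
# format they were built against, while new clients get the new one.
SUPPORTED = [
    "application/vnd.example.thing+json;version=2",  # preferred
    "application/vnd.example.thing+json;version=1",  # legacy clients
]

def negotiate(accept_header):
    """Return the first supported media type the client accepts."""
    accepted = [part.strip() for part in accept_header.split(",")]
    for media_type in SUPPORTED:        # server preference order
        if media_type in accepted or "*/*" in accepted:
            return media_type
    return None                          # would map to 406 Not Acceptable
```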

      - Caching, but this is well known.

- Organisation of application v resource state. Giving non-domain, codey
type things URLs is a big win. Jim Webber does a good job here explaining
      the practical benefits:
      http://www.infoq.com/articles/webber-rest-workflow. I don't know whether
      you can express the full BPM/BPEL/piCalculus thing via REST's notions of
      state, but I do suspect in many cases you don't need that level of
      expressive power.
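Loosely in the spirit of Webber's workflow article, the available transitions of a process can be advertised on the resource itself rather than hard-coded in clients. The states and rel names below are invented for illustration:

```python
# Hypothetical sketch: the server decides which transitions exist for
# the current state; the client just reads them off the representation.
def next_transitions(order):
    """Return the link rels a client may follow from this order's state."""
    transitions = {
        "payment-expected": ["payment", "cancel"],
        "preparing": ["status"],
        "ready": ["receipt"],
    }
    return transitions.get(order["status"], [])
```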


      Ultimately what I get via REST is the notion of applying constraints to
      obtain systemic properties. The REST community have done a good job
      articulating what happens when you add and remove constraints. It's
      objective architectural/systems analysis, not the flimflam I see coming
      from EAI/SOA which tend to describe /desirable outcomes/ and not /how to
      obtain them/. You don't have to like REST as a style (I personally don't
      have much time for the current hype), but you can at least analyse the
      design.


      > On another note: I think HATEOAS is much more than "links in content"
      > unless your client is something like a spider.

      Granted, but 'lick' is a better acronym (links in content are king) than
      'hateoas' ;)


      > What's your take on the
      > discussion here:
      > http://www.intertwingly.net/blog/2008/03/23/Connecting
      > <http://www.intertwingly.net/blog/2008/03/23/Connecting>

      I sympathise with Sam's view on things, but still find "connectedness" a
      bit abstract. So I distill it even further by asking/cajoling people to
      put links in content, to increase the likelihood that a format will be
      useful across as many clients as possible.

      Bill
    • Bill Burke
      Message 2 of 27 , Apr 7 4:45 PM
        Bill de hOra wrote:
        > - Well known formats. Or at least well specified ones. You get so much
        > futureproofing against versioning by making the media type explicit. I'm
        > not a huge conneg fan (think it doesn't get used well), but the Accept
        > header is a huge win if you're building something that has to evolve and
        > support already deployed clients for years to come.
        >

        Just curious. Why not a big conneg fan? Is it that you prefer
        existing, well defined formats? To me, conneg seems to be one of the
        most powerful features of HTTP. Since REST pushes complexity into the
        data format, conneg seems uber critical.

        --
        Bill Burke
        JBoss, a division of Red Hat
        http://bill.burkecentral.com
      • Andrew S. Townley
        Message 3 of 27 , Apr 8 1:50 AM
          On Mon, 2009-04-06 at 10:25 -0400, Bill Burke wrote:
          >
          > Andrew S. Townley wrote:
          > > Alternatively, you invert the approach and implement common behavior
          > > based on the clients "detecting" the state of the application from the
          > > representation.
          >
          >
          > Sorry to pick out one tiny piece of your excellent post...But...
          >
          > IMO, there are very very few applications/clients that can approach
          > integration in this manner. In production systems, things have to be
          > well planned out and predictable or it will just be a disaster.

          I certainly agree with you that there are very, very few apps that *do*
          approach integration in this manner, but I don't agree that you can't
          have apps that *can* approach integration in this manner, even in highly
          structured, regulated and mission-critical deployments.

          You bring up a great point: "if things aren't well planned out and
          predictable...things will be a disaster." However, if you stop and
          think about it, do you know why this is true?

          I've been doing both large & small system integration for over 10 years,
          and I've both lived through the reality you described, and also been
          trying to find ways to make systems less brittle and more resilient to
change, because there are two core axioms of large systems development:

          1) Things are going to change between when you start the system and when
          you get it "finished", if ever....

          2) The system is likely to live far longer than you expect it to

          If your integration is based on lots of out-of-band shared knowledge
          about the system state transitions, you do need a lot of formal planning
          and predictability in the way they work, because you code the endpoints
          based on those assumptions. It's really a self-fulfilling prophecy,
          actually.

However, if you take the approach of identifying and expecting a
particular set of states and transitions, each of which is well
specified, rather than working from an API reference manual and system
user guide (or functional specification), then you can potentially have
more scalable, flexible and long-lived systems. You can also end up with
a big mess if you don't manage it properly...

I'm not saying you'll face any less organizational management, politics
and pain to arrive at these "interface" definitions than you would with
a more traditional integration approach, but it's the difference in
perspective (and outputs) that matters.

          Again, I'm not saying that this approach makes sense for every system on
          the planet, but I think it's critical to start thinking differently
          about the way we design, implement and extend the large-scale, cross
          organisational (and multi-national, in some cases) systems used by every
          one of us as businesses and individuals (directly or indirectly) each
          day.

          Another key point to remember: I'm not talking about altering the
          mission profile of the particular application or system, I'm simply
          focused on taking a (perhaps radically) different approach to how those
          systems interact to deliver the system's mission profile and the
          corresponding long-term business objectives of those who built and
          operate it.

          Cheers,

          ast
          --
          Andrew S. Townley <ast@...>
          http://atownley.org
        • Bill Burke
          Message 4 of 27 , Apr 8 12:43 PM
            Andrew S. Townley wrote:
            > On Mon, 2009-04-06 at 10:25 -0400, Bill Burke wrote:
            >> Andrew S. Townley wrote:
            >>> Alternatively, you invert the approach and implement common behavior
            >>> based on the clients "detecting" the state of the application from the
            >>> representation.
            >>
            >> Sorry to pick out one tiny piece of your excellent post...But...
            >>
            >> IMO, there are very very few applications/clients that can approach
            >> integration in this manner. In production systems, things have to be
            >> well planned out and predictable or it will just be a disaster.
            >
            > I certainly agree with you that there are very, very few apps that *do*
            > approach integration in this manner, but I don't agree that you can't
            > have apps that *can* approach integration in this manner, even in highly
            > structured, regulated and mission-critical deployments.
            >
            > You bring up a great point: "if things aren't well planned out and
            > predictable...things will be a disaster." However, if you stop and
            > think about it, do you know why this is true?
            >

It's true because stable systems are well tested. You can't test
            variability.

            > I've been doing both large & small system integration for over 10 years,
            > and I've both lived through the reality you described, and also been
            > trying to find ways to make systems less brittle and more resilient to
            > change because there's two core axioms of large systems development:
            >
            > 1) Things are going to change between when you start the system and when
            > you get it "finished", if ever....
            >
            > 2) The system is likely to live far longer than you expect it to
            >
            > If your integration is based on lots of out-of-band shared knowledge
            > about the system state transitions, you do need a lot of formal planning
            > and predictability in the way they work, because you code the endpoints
            > based on those assumptions. It's really a self-fulfilling prophecy,
            > actually.
            >
            > However, if you take the approach that you need to identify and expect a
            > particular number of states and transitions, each of which are specified
            > well rather than working from an API reference manual and system user
            > guide (or functional specification), then you can potentially have more
            > scalable, flexible and long-lived systems. You can also end up with a
            > big mess if you don't manage it properly...
            >

            FYI, I wasn't bashing HATEOAS. I think it is extremely useful to have
            relationship links embedded in your messages and to traverse these
links. I just don't think it's realistic to think that a client is going
            to be able to make state transition decisions dynamically based on
            looking at a self-describing message. Machines aren't humans.


            --
            Bill Burke
            JBoss, a division of Red Hat
            http://bill.burkecentral.com