
Re: [rest-discuss] REST toolkits

  • Tony Butterfield
    Message 1 of 37 , Mar 1, 2004
      There are lots of interesting fragments in this broad discussion. If I
      could summarise the main threads, in order of importance, from my
      perspective:
      1) Toolkits versus Frameworks (the pros and cons of each)
      2) Service orchestration (declaring, sequencing and pipelining requests)
      3) Meta data (what is needed and why)
      4) REST versus HTTP (HTTP is the definitive REST implementation but
      isn't the essence)

      On Fri, 2004-02-27 at 13:53, Josh Sled wrote:
      > On Mon, Feb 09, 2004 at 11:09:09AM +0000, Tony Butterfield wrote:
      > | Are you thinking of RDF for defining, declaring and documenting the
      > | interfaces?
      > Likely... maybe ... too early to say, though.
      Yeah, I agree. Certainly at the moment XML suffices for all our
      meta-data; RDF/XML requires a whole new technology stack to process it,
      but if it has sufficient merit then why not. Ideally it is just another
      data type.
      > | Looking at the service, there are at least three aspects:
      > | 1) implementing the service
      > | 2) providing a public interface to the service
      > | 3) providing meta-data about the service
      > |
      > | As far as implementing the service goes, I think it would be great if the
      > | toolkit would provide all abstractions to implement the service
      > | independent of the public interface, provide implementations of the
      > | common web datatypes and provide tools for the management of the
      > | service.
      > Hmm. My biggest issue with this is that the developer is likely already
      > going to have a set of idioms and tools [data-structures, libraries, &c.]
      > with which to deal with common web datatypes [be they images or XML]. Hmmm.
      > "Management of the service" I'm going to read as "basic lifecycle and
      > network-connection accepting" ... in that case, I also think that most people
      > already have some tool for dealing with this, as well. At the same time, a
      > lightweight HTTP client/server might be a good optional provision of the
      > toolkit.

      I think our difference of perspective is because I have been thinking
      more framework than toolkit. You talk about this further down too. To me
      this is a key aspect to raising the abstraction bar on developing REST
      services: I don't want to be hand-crafting code to extract fields from
      HTTP headers or to parse XML.

      I don't think of it as a black-and-white thing though. By using an OS,
      a JVM, or servlets you are effectively using a framework, just a fairly
      non-invasive one. You mention a few problems further down which, to me,
      hint at framework solutions. I understand that a framework is more
      subsuming of the developer's world than a toolkit is, and makes
      adoption harder, but the tradeoffs are worth considering. From the
      little I understand about Aspect Oriented Programming, it also seems
      that framework-like features can be added to code in a less invasive
      manner.

      I think that a toolkit encourages a from-the-ground-up structure. This
      isn't as bad as the wheel re-invention that a lack of toolkits causes,
      but it doesn't easily encourage good patterns of usage. I guess an
      attribute of a good framework is that it can be repeatedly "peeled
      back" to reveal more control and flexibility, thus ideally converging
      on a toolkit.

      For me the bottom line is that if you mostly like and trust the pattern
      that a framework is encouraging, then a framework can offer more than a
      toolkit.
      > I don't know what meta data there necessarily is. In fact, I think that
      > RESTful services should try to minimize the volume of meta-data necessary to
      > function; anything apart from the interface itself is another point of
      > "failure".
      > | As far as providing the public interface, it would be good to be able
      > | declaratively map the service onto some public interface. I.e. lets say
      > | put the ImageTransformService at http://imagetools.org/transform
      > Hmm. Let's come to agreement on a sketch of what the RESTful API for this
      > service would look like...
      > C: POST /transform
      > C: <transform>
      > <image resource="http://host/path/image.png" />
      > <format>image/png</format>
      > <transforms>
      > <scale x="0.5" y="0.5" />
      > <crop x0="0" y0="0" x1="-20" y1="-50" />
      > <sharpen q="0.217" />
      > </transforms>
      > </transform>
      > S: 201; Location: /transform/1
      > C: GET /transform/1
      > S: 200
      > S: <transform>
      > <source resource="http://host/path/image.png" />
      > <image resource="/transform/1/image" />
      > <transforms>
      > <scale x="0.5" y="0.5" />
      > <crop x0="0" y0="0" x1="-20" y1="-50" />
      > <sharpen q="0.217" />
      > </transforms>
      > </transform>
      > C: GET /transform/1/image
      > S: 200; Content-Type: image/png
      > S: [binary image data]
      > Seem reasonable?
      Yes, I see how it would work. What are your motivations for having a
      two-stage process: register a transform, then get the result of the
      transform? Is this something that you have come across as a necessary
      pattern?
      > | Providing meta-data. I'd like to explore this issue more. To use RDF or
      > | WSDL to describe an interface is nice but useful only for automatically
      > | building static client stubs or providing documentation.
      > Those are pretty useful things, and I think exactly what most people want out
      > of such a toolkit. :)
      SOAP-style object bindings are not needed though, are they? Providing
      human-readable definitions of the interfaces I can understand.

      Does anyone have experience of defining REST interfaces in terms of the
      MIME types posted or got, XML schemas (if XML), response codes and the
      verbs applicable on URIs? Also, is it currently more work than the
      benefits justify?

      > There is an elephant in the room, though, of actually modeling a
      > resource-space such that it matches the nouns of the application domain, but
      > with the necessary ... formalism ... to support a REST implementation.
      > Perhaps a "resource-space-builder" [a-la a GUI builder] isn't out of the
      > question. This may be REST's hardest sell ... one needs to do some work not
      > directly related to writing code to get the thing off the ground.
      I like the idea! Have you got one?

      > | parameters to a request using standard web datatypes- in this example
      > | say an XML specification plus a PNG image stream. Then pass them to the
      > | service.
      > Damn; I'm torn between getting out of the developer's way and letting her use
      > whatever tools for both XML/data-handling and http-{client,server}age she
      > wants, and the necessities of actually having a toolkit that provides value.
      > I think the right thing is to do the latter, and try to abstract it out
      > toward the former.
      > I think it's important to focus on a toolkit, rather than a framework. That
      > is: focus less on subsuming the developer's world, instead focus on providing
      > a set of small tools to be used and integrated into their world.

      See above.

      > From recent experience building a RESTful [internal] client/server system,
      > these [were] the more useful parts we had spawned:
      > 1/ 'http[-test]-client' -- a simple [~300line] command-line wrapper around
      > the Jakarta HttpClient library, allowing manual execution of all 4 verbs:
      > e.g.:
      > # htc [-h localhost] [-p 8180] GET '/transform?created=today'
      > <xml>
      > # htc -f new-transform.xml POST /transform
      > Location: /transform/42
      > # htc GET /transform/42
      > <xml>
      > # htc DELETE /transform/42
      > 200
      > Basically: wget += [ put, post, delete ];
      > 2/ a resource/service-method mapping dispatcher -- a relatively simple
      > Servlet dispatcher which was configured by a simple text file of lines of
      > the form:
      > --
      > ### public void getTransforms( Map m )
      > GET /transform?[query-spec] in:null out:TransformList TransformService.getTransforms
      > ### public TransformUri postTransform( Transform t )
      > POST /transform in:Transform out:TransformURI TransformService.postTransform
      > ### public Transform getTransform( TransformURI turi )
      > GET /transform/1 in:TransformURI out:Transform TransformService.getTransform
      > --
      > The Dispatcher had support for:
      > * URI-pattern matching.
      > * required query parameters which modified dispatching.
      > * input/output data validation/[un]marshaling.
      > I think the data behind this particular file is better placed in a simple
      > data structure directly in the code, in the simple case. One problem we
      > had was the separation of the sections of this file from the 20-or-so
      > services which backed it ... they could then more easily change
      > independently, which sucks.
      > In our system, we autogenerated all those in: and out: types ... but it
      > was more important that they implemented a simple interface which allowed
      > the retrieval of their "xml-delegate", which was then used to do
      > [un]marshaling.
      > This is the trickiest part, for me...
      > i/ I don't want a tool that autogenerates data-carrier objects: there's
      > already too many of them, and they all suck.
      > ii/ I personally don't want to deal with XML-enabled objects anymore.

      These look like ways of orchestrating/pipelining service requests in a
      declarative way. I think this is a key feature; however:
      1) if you use HTTP/network as the only inter-service communication
      mechanism you'll be reluctant to reduce the granularity of your
      services. If you create a request abstraction that maps to HTTP and
      anything else you might have you get more flexibility.
      2) these higher-level orchestrations of services will soon want to be
      services in their own right too. How would you wrap them up?
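Josh remarked above that the data behind his mapping file is better placed in a simple data structure directly in the code. Moved into code, the dispatch table might look something like the sketch below; the `Route` and `Dispatcher` names are assumptions of mine, not from his system:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.regex.Pattern;

// Hypothetical in-code version of the dispatcher mapping file:
// each route pairs an HTTP verb and a URI pattern with a handler.
public class Dispatcher {
    record Route(String verb, Pattern uri, Function<String, String> handler) {}

    private final List<Route> routes = new ArrayList<>();

    public void map(String verb, String uriRegex, Function<String, String> handler) {
        routes.add(new Route(verb, Pattern.compile(uriRegex), handler));
    }

    // Returns the matched handler's response, or null for "404".
    public String dispatch(String verb, String path) {
        for (Route r : routes) {
            if (r.verb().equals(verb) && r.uri().matcher(path).matches()) {
                return r.handler().apply(path);
            }
        }
        return null;
    }
}
```

Keeping the table in code keeps the routes and the services that back them in one place, which addresses the independent-drift problem Josh describes.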
      > 3/ A set of defined error-codes, exceptions and consistent handling. We only
      > ended up using the "common subset" of HTTP response codes... specifically:
      > 200 [ok], 201 [created], 202 [accepted], 204 [no content]
      > 400 [bad req], 401 [unauthorized], 404, 409 [object conflict]
      > 500 [server error]
      > We handled these with exceptions in java; there was one in particular that
      > we all felt sad about throwing the exception, since it wasn't an
      > exceptional case. :/ At the same time, you don't want every function
      > return value to be "HttpResponse"...
      It's a tough one. Most languages only like one return type/value from a
      function/service. Exceptions are a let-out, but it starts to get messy
      if you start putting real data into exception subclasses. Putting
      HttpResponse objects throughout your code binds the whole thing tightly
      to HTTP and the servlet infrastructure.

      Our approach has been, in effect, to eliminate the dichotomy of
      response code and response representation in HTTP, i.e. to consider the
      response as just one value: either the code is the relevant part or the
      representation in the body is.
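That "one value" idea could be sketched with a small response type. Everything here is a hypothetical illustration of the principle, not the API of any actual toolkit:

```java
// Hypothetical sketch: status code and representation collapsed into
// one value, so callers handle a single return type; whichever part
// is relevant (code or body) is the value.
public record Response(int code, String body) {
    public static Response ok(String body)  { return new Response(200, body); }
    public static Response status(int code) { return new Response(code, null); }

    // The response "is" its body when one exists, otherwise its code.
    public String value() {
        return body != null ? body : Integer.toString(code);
    }
}
```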

      > 4/ request/response logger -- we did this on the client side, but should have
      > done it on the server side, too. It was basically a fixed-size FILO queue
      > of request objects, which was trivially HTMLized. It made debugging and
      > understanding the system a breeze.
      > * We also should have extended it to save a replayable log for testing
      > purposes, which would have been real nice.
      I guess it was because of the servlet framework that you had a clean
      single point at which to add the logging. To me this is one of the key
      pro points for frameworks: if abstracted well, they provide single
      points where changes can be made that have global effect.
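Josh's fixed-size request log is easy to sketch with a bounded deque. Names and structure below are my own guesses at what such a logger looks like, not his code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of a fixed-size request/response log: the
// newest N entries are kept and older ones are evicted.
public class RequestLog {
    private final Deque<String> entries = new ArrayDeque<>();
    private final int capacity;

    public RequestLog(int capacity) { this.capacity = capacity; }

    public void record(String requestLine) {
        if (entries.size() == capacity) {
            entries.removeFirst(); // evict the oldest entry
        }
        entries.addLast(requestLine);
    }

    // Snapshot in arrival order; trivially rendered as HTML, or saved
    // out as a replayable log for testing.
    public List<String> snapshot() {
        return List.copyOf(entries);
    }
}
```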

      > 5/ ScriptableHttpXml -- a simple tool which accepted a script file with
      > directives like:
      > --
      > GET /transform/0 expect:404
      > GET /transform/-1 expect:404
      > POST /transform file:transform-test0.xml expect:201 location:test0uri
      > GET ${test0uri} expect:200 saveIn:test0data compareTo:transform-test0-get.xml
      > XPATHREPLACE ${test0data} /transform/@about="/transform/42"
      > PUT ${test0uri} ${test0data} expect:409
      > # ...
      > --
      > * It started with the 4 verbs plus a couple of other data-handling
      > directives, and grew as we got more sophisticated. I'll see what I can
      > do about getting the tool from its owner. :)
      More service orchestration, with some simple XML primitives thrown in.
      I really think this approach works. You have:
      1) request definitions
      2) response assertions
      3) temporary variables
      4) xml comparisons
      5) xml manipulations

      I've been down this path too. Then I got to the point where I thought:
      why build all these primitives into the language? Make them services
      too. Then you boil it down to a simple URI request assembly language.

      > As well, it would have been really nice to have the following:
      > a/ some auto-population of the resource-space. I always wanted [GET /]
      > to return a list of the top-level resources. The ResourceDispatcher
      > should have been able to do that.
      I guess there would be a few aspects to this: mapping a listing service
      onto all paths that end with a slash, then using your meta-data about
      your services to create the listing resource (maybe in XML).

      > b/ a standard way to return "4xx, but here's how you use it...", in a
      > developer-understandable form.
      Mmm. Can you return a meaningful representation in the body of a 4xx
      response?

      > c/ a useful URI class -- we ended up doing the URI-pattern matching in
      > multiple places. It'd sure have been nice to have a consistent mechanism
      > for it.
      I've done lots of regex matching on URIs too. It works well. As for a
      consistent mechanism: having a framework where you define URI address
      spaces as sets of services that map onto them has worked for me.
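A consistent URI-matching mechanism of the kind Josh asks for could be built on regex named groups. The class below is a guess at what such a "useful URI class" might look like; the template syntax and all names are assumptions of mine:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of a reusable URI matcher: one place that turns
// "/transform/{id}" style templates into regex matching.
public class PatternUri {
    private final Pattern pattern;

    public PatternUri(String template) {
        // Convert {name} segments into named capture groups,
        // e.g. "/transform/{id}" -> "/transform/(?<id>[^/]+)".
        this.pattern = Pattern.compile(
            template.replaceAll("\\{(\\w+)\\}", "(?<$1>[^/]+)"));
    }

    // Returns the value bound to a template variable, or null on no match.
    public String match(String uri, String name) {
        Matcher m = pattern.matcher(uri);
        return m.matches() ? m.group(name) : null;
    }
}
```

Centralising this in one class is exactly the "consistent mechanism" fix: every dispatcher, lister, and test harness matches URIs the same way.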

      > d/ etags/timestamps + conditional-GET support -- we solved multiple
      > performance problems with poor-man's caching, but it would have been nice
      > to have had an easy way to plumb etags/timestamps through the server to
      > the API.
      Having a non-HTTP request abstraction solves this one too. The
      equivalent of the HTTP header becomes meta-data that travels around with
      the representation of a resource.
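The "meta-data travels with the representation" idea, applied to Josh's etag/conditional-GET case, might look like the sketch below; the type and its fields are hypothetical, not from any framework mentioned here:

```java
import java.util.Map;

// Hypothetical sketch: a representation carries its meta-data (the
// moral equivalent of HTTP headers) wherever it travels, so etags
// survive outside the HTTP layer.
public record Representation(byte[] body, Map<String, String> metadata) {

    // Conditional GET: if the caller's etag still matches, there is
    // no need to return the body again (HTTP would answer 304).
    public boolean isUnmodified(String callerEtag) {
        return callerEtag != null && callerEtag.equals(metadata.get("etag"));
    }
}
```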

      > So what toolkit pieces do I see out of this, at this point?
      > * "httpc"
      > * ScriptableHttpXml
      > * ResourceDispatcher
      > * PatternURI
      httpc: a good HTTP client, yes; I've been recommended Apache's HttpClient.
      ScriptableHttpXML: take a look at our equivalent, DPML:
      ResourceDispatcher/PatternURI: take a look at:

      Tony Butterfield <tab@...>
      1060 Research
    • Tyler Close
      Message 37 of 37 , Mar 5, 2004
        On Tue March 2 2004 01:22 am, Tony Butterfield wrote:
        > However IMHO there are a large number of processing problems that can
        > easily be solved by scripting services (behind REST interfaces)

        I agree.

        > together into pipelines, as described by Josh, without resorting
        > to coupling your XML to objects. If you can do this you save
        > yourself a lot of work, particularly when the system evolves.

        I think I understand what you're trying to say here, but you are
        misusing the term 'coupling'.

        I think you're saying that many processing tasks can be
        accomplished without using an OOP environment, and that doing so
        can often be a good design. I agree. In fact, I designed the
        Waterken Webizer <http://www.waterken.com/dev/SQL/> based on this
        principle.

        It's interesting to note that the XML interface used by the
        Webizer is the same as that used by the XML to Java mapping. This
        property comes from the fact that the XML interface is designed to
        be loosely coupled to the AST.

        Many people seem to misunderstand what 'coupling' is. A piece of
        code, or data, is coupled to something if it uses that something.
        The coupling is 'loose' if there are many possible substitutes for
        that something. Otherwise, the coupling is tight.
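Tyler's definition can be made concrete with a small example of my own (not from his mail): code coupled to an interface is loosely coupled, because many substitutes exist; code coupled to one concrete class has no substitutes and is tightly coupled.

```java
// Illustration of the "substitutes" definition of coupling (my own
// example). render() uses only the Ast interface, so any
// implementation can be substituted: that is loose coupling.
interface Ast {
    String toText();
}

class XmlAst implements Ast {
    public String toText() { return "<doc/>"; }
}

class JavaAst implements Ast {
    public String toText() { return "new Doc()"; }
}

class Renderer {
    // Coupled to Ast, not to any one AST: every substitute works.
    static String render(Ast ast) {
        return "rendered: " + ast.toText();
    }
}
```

Had `render()` taken an `XmlAst` parameter instead, there would be exactly one usable type, i.e. no substitutes: tight coupling by Tyler's definition.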

        When I demonstrate an XML to Java binding for an interface, I am
        demonstrating loose coupling. The interface is loosely coupled to
        the AST used to represent it. The examples in the Waterken Message
        Tutorial <http://www.waterken.com/dev/Message/Tutorial/> show
        automated transformation between the XML AST and the Java AST.
        These ASTs have become substitutes for each other. That's loose
        coupling.

        The flipside of this argument is that not having an XML to Java
        binding is a demonstration of tight coupling. The interface is
        tightly coupled to the XML AST, because there are no substitutes
        for the XML AST.

        Taking this argument further, I claim that data represented in the
        Waterken Doc Model <http://www.waterken.com/dev/Doc/Model/> is
        more loosely coupled than the XML documents used in Josh's
        examples. This loose coupling comes from the fact that the Doc
        Model was designed to support mapping between many different ASTs.
        This makes it easy to build mappings into a wide variety of
        execution environments.

        Take a look at the grammar for the Doc Model:


        Notice how simple this grammar is compared to the XML infoset,
        Java Serialization Streams, etc. This simplicity creates loose
        coupling by making it possible to represent the data using any AST
        whose grammar is a superset of the Doc Model grammar. The
        complexity of the XML infoset grammar precludes this kind of loose
        coupling.

        In general, I've read a lot of nonsense about 'coupling'. I hope
        this email gives people an objective basis on which to reason
        about coupling. When people claim loose coupling, ask them to list
        the available substitutes. When people accuse tight coupling, show
        them the list of available substitutes.


        The union of REST and capability-based security.