
Re: [rest-discuss] Using "hypertext as the engine of application state" in "data-centric" services

  • Subbu Allamaraju
    Message 1 of 24, Jan 5, 2009
      > On Jan 4, 2009, at 10:08 PM, Subbu Allamaraju wrote:
      >
      >> On Jan 4, 2009, at 9:38 PM, Steve Bjorg wrote:
      >>
      >>> For RESTful applications, the content type should only convey what
      >>> hypermedia representation was used (XHTML vs. Atom vs. RDF etc.).
      >>
      >> Can you explain how you came to that conclusion?
      >
      > Conclusion is a strong word. That's more the way I'm leaning
      > currently. Regardless, this exchange has motivated me enough to
      > finally commit some of my thoughts to a wiki page entitled "The
      > Hypermedia Scale".
      > http://restpatterns.org/Articles/The_Hypermedia_Scale
      >
      > The driving question behind it is that if HATEOAS is the style to
      > follow, then how does one translate the HATEOAS principles that have
      > worked so well for human-to-machine interactions to machine-to-
      > machine interactions? Surprisingly, while there are multiple,
      > established hypermedia types, none are either complete or
      > constrained enough for this use case. Atom lacks the crucial
      > ability to describe how to create new entries in the presence of
      > extensions, and HTML has so much expressive power that it's causing
      > headaches. It would be interesting to have a discussion on how to
      > improve on this (or, just as importantly, correct the article where
      > it's wrong).

      Steve - allow me to refer back to my previous comment that there is
      no yes/no answer to this question. You seem to be alluding that it is
      "incorrect" to create new media types, which is not the case.

      There are two ways to let clients learn about the contents of a
      representation and neither is wrong. One is less optimal than the other.
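
      Roughly, from the client's side, the choice looks like this (a small
      Python sketch; the media type, namespace and element names are made
      up for illustration, not taken from any real service):

      import urllib.request
      import xml.etree.ElementTree as ET

      def fetch(url):
          with urllib.request.urlopen(url) as resp:
              ctype = resp.headers.get_content_type()
              body = resp.read()

          # Option 1: a specific media type - the Content-Type header alone
          # tells the client what kind of thing it has received.
          if ctype == "application/vnd.example.order+xml":
              return parse_order(ET.fromstring(body))

          # Option 2: a generic media type - the client has to look inside
          # the representation (root element, namespace, links) to find out.
          if ctype in ("application/xml", "text/xml"):
              root = ET.fromstring(body)
              if root.tag == "{http://example.org/orders}order":
                  return parse_order(root)
              raise ValueError("unrecognized XML document: " + root.tag)

          raise ValueError("unexpected media type: " + ctype)

      def parse_order(root):
          # Application-specific handling would go here.
          return {child.tag: child.text for child in root}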

      Subbu
      ---
      http://subbu.org
    • Subbu Allamaraju
      Message 2 of 24, Jan 5, 2009
        >>>
        > I'm with Steve here. I mean, if we're trying to stick to the specs,
        > how about http://www.w3.org/Protocols/rfc2616/rfc2616-
        > sec3.html#sec3.7 :
        > "Use of non-registered media types is discouraged." ?

        You may be reading too much into that. Since that RFC was written, a
        number of new types were introduced. Note that standardization happens
        usually after a discovered need for interop.

        > I think the Obasanjo article supports the idea that OpenSocial is a
        > good approach - no coupling to specific URI schemes, and no client
        > "guessing" either. And it's based on using the well-standardized
        > (though not IANA-registered?) 'application/xrds+xml' media type,
        > rather than inventing a new media type. Same with some other
        > well-designed RESTful API's that have been mentioned.

        Please look at the JSON/XML examples - not the XRDS part.

        Subbu
        ---
        http://subbu.org
      • amsmota@gmail.com
        Message 3 of 24, Jan 5, 2009
          On Jan 5, 2009 3:53pm, Subbu Allamaraju <subbu@...> wrote:

          >
          > There are two ways to let clients learn about the contents of a
          > representation and neither is wrong. One is less optimal than the other.
          >

          From what I read in this thread and the other one, "MIME properties instead +", it seems to me that BOTH are less than optimal... :) I mean, I know there is no "the" solution, but it's a bit frustrating for me to have to do things that are "less than optimal", or at least "less than good".

          Nevertheless, since this is not an urgent matter for us, I'll keep looking and reading, and maybe discussing.

          Cheers.
        • groovepapa82
          Message 4 of 24, Jan 5, 2009
            Well I'm not much for following specs to the letter in the first
            place. :)

            And I don't think I'm reading into the spec any more so than if we
            extrapolate sections 7.1 and 14.17 - which describe *what* a
            content-type is - to answer questions about *why* you should or
            should not express metadata as a new content-type. ;)

            So let's forget about the spec ... you nailed it a long time ago -
            there's no single answer and the decision should be based on a number
            of design factors. I'm still trying to learn about all this stuff, but
            one factor that stands out to me is whether the metadata is semantic
            or technical.

            IMO, metadata to express whether content is visual, audible, or
            textual seems clearly technical, right? Hence the obvious choice of
            using different content-types like image/*, audio/*, text/*. Other
            content-type metadata seems to be technical in nature as well - file
            formats, character sets, etc.
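
            To make that concrete, the kind of handling I have in mind needs
            only the technical parts of the header - top-level type, subtype,
            charset - and nothing application-specific (a Python sketch with
            hypothetical values):

            from email.message import Message

            def decode_body(content_type_header, body_bytes):
                # Parse the technical metadata out of a Content-Type header.
                msg = Message()
                msg["Content-Type"] = content_type_header
                maintype = msg.get_content_maintype()  # "text", "image", ...

                if maintype == "text":
                    charset = msg.get_param("charset", "utf-8")
                    return body_bytes.decode(charset)
                if maintype in ("image", "audio"):
                    return body_bytes   # hand the raw bytes to a codec
                return body_bytes       # otherwise treat as opaque data

            # decode_body("text/plain; charset=iso-8859-1", b"caf\xe9")
            # -> "café"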

            But metadata to express *semantics* seems like a very different issue?
            Semantics cut across the technical differences in some areas, but are
            highly specialized in others. For example, the semantics of having
            alternative R-, PG-13-, PG-, and G-rated resources could apply to
            images, audio, or text. On the other hand, a semantic meaning of
            "synonyms" is particular to language-, i.e. text-, based data.

            So I personally apply this to hypertext as an engine for application
            state by preferring to put any *semantic* metadata that will drive
            state transitions (hyperlinks!) into standardized content formats -
            microformats and semantically-aware formats; and to use as many
            pre-existing content-types as possible.
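
            Something like this, say (a Python sketch; the rel values and URIs
            are hypothetical, just to show the shape of it) - the semantics
            live in rel-typed links inside ordinary text/html, not in a custom
            media type:

            from html.parser import HTMLParser

            class LinkCollector(HTMLParser):
                def __init__(self):
                    super().__init__()
                    self.links = {}            # rel -> href

                def handle_starttag(self, tag, attrs):
                    if tag in ("a", "link"):
                        attrs = dict(attrs)
                        if attrs.get("rel") and attrs.get("href"):
                            self.links[attrs["rel"]] = attrs["href"]

            doc = """<html><body>
              <a rel="payment" href="/orders/42/payment">Pay</a>
              <a rel="cancel" href="/orders/42/cancel">Cancel</a>
            </body></html>"""

            collector = LinkCollector()
            collector.feed(doc)
            # The state transitions on offer are exactly the links the server
            # chose to include, e.g. collector.links.get("cancel") drives the
            # next request.
            print(collector.links)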

            If I do manage to come up with a genuinely new *technical* type of
            data, I'll register a new content-type value. Though I'm not sure that
            would have anything to do with state transitions. ;)

            -L

            --- In rest-discuss@yahoogroups.com, Subbu Allamaraju <subbu@...> wrote:
            >
            > >>>
            > > I'm with Steve here. I mean, if we're trying to stick to the specs,
            > > how about http://www.w3.org/Protocols/rfc2616/rfc2616-
            > > sec3.html#sec3.7 :
            > > "Use of non-registered media types is discouraged." ?
            >
            > You may be reading too much into that. Since that RFC was written, a
            > number of new types were introduced. Note that standardization happens
            > usually after a discovered need for interop.
            >
            > > I think the Obasanjo article supports the idea that OpenSocial is a
            > > good approach - no coupling to specific URI schemes, and no client
            > > "guessing" either. And it's based on using the well-standardized
            > > (though not IANA-registered?) 'application/xrds+xml' media type,
            > > rather than inventing a new media type. Same with some other
            > > well-designed RESTful API's that have been mentioned.
            >
            > Please look at the JSON/XML examples - not the XRDS part.
            >
            > Subbu
            > ---
            > http://subbu.org
            >
          • Aristotle Pagaltzis
            Message 5 of 24, Jan 10, 2009
              * Stefan Tilkov <stefan.tilkov@...> [2009-01-05 10:00]:
              > I don't think there's a "right" or "wrong" here: both options
              > are valid, it's really a design choice in every specific
              > situation.

              Exactly.

              I just caught up with the whole Steve Bjorg vs Subbu Allamaraju
              thread, watching their polar positions play out, and I don’t
              understand why either of them is taking such a dogmatic stance.
              Sticking to known media types while you figure out what kinds of
              things you want to provide to clients and what kinds of things
              client will need from you is good. Consolidating and formalising
              that knowledge once it exists is also good.

              I would say that most of the time you should err on the side of
              using well-established media types until you have a feel for the
              issue. “Innovating” in a vacuum is bad. It doesn’t help anyone.
              You make your mistakes while flying blind because there are few
              implementations at all ends and they all have to upgrade in lock
              step. (How often do we have to learn the lesson that this is a
              recipe for failure?)

              But people with similar apps should occasionally sit down at a
              table together and find out how they can standardise their
              approaches into a separate format. I didn’t read Dare’s post
              about OpenSocial but from what I get from this thread, this is
              what happened there. This is good.

              There is no correct dogma to answer the question of how specific
              one’s media type should be. All options are valid, each with pros
              and cons, and you need to decide on a case-by-case basis which
              side to pick. This sort of tradeoff is what engineering is about
              (and REST is the closest we have to it in software development).
              Sorry to the cookie cutter brigade. :-)

              In passing, though, I have to note that it would be nice if we
              could do a better job of what media types tried to do with their
              type/subtype separation, i.e. have a standardised way to specify a
              layering of specificity of formats, including multiple formats, so
              that it would be possible to say that a document is text, and
              specifically HTML, and specifically a combination of hCard+hTag+
              hEXIF+image-link, and specifically a Flickr photo, so as to allow
              clients to know what the representation means without having to
              parse it, at whatever their level of understanding of the
              specified format.
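
              To make that a little less abstract: a purely hypothetical
              sketch in Python, with an invented “layers” parameter (nothing
              of the sort is standardised), of a client matching as many
              layers, from the outside in, as it understands:

              # Suppose the layering were carried most-general-first, e.g.
              #   Content-Type: text/html;
              #       layers="text html hcard+htag+hexif flickr-photo"

              def understood_layers(layers_param, known):
                  layers = layers_param.split()
                  understood = []
                  for layer in layers:
                      if layer not in known:
                          break
                      understood.append(layer)
                  return understood

              layering = "text html hcard+htag+hexif flickr-photo"

              # A plain HTML client stops after the second layer:
              print(understood_layers(layering, {"text", "html"}))
              # -> ['text', 'html']

              # A microformat-aware client gets one layer deeper, without
              # having to parse the markup to find out:
              print(understood_layers(layering, {"text", "html",
                                                 "hcard+htag+hexif"}))
              # -> ['text', 'html', 'hcard+htag+hexif']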

              I don’t know if this would work in practice; after all, the
              type/subtype thing in media types is mostly a failure. Maybe
              that was just because it tried to constrain types to just two
              layers. It would also be necessary to do a better job of what
              media types tried to accommodate with the `+xml` suffix
              contortion, i.e. make sure that types reliant on possibly
              multiple lower-level formats are expressible in a sensible fashion.

              If it did work, it would resolve the tradeoff issue nicely.

              Regards,
              --
              Aristotle Pagaltzis // <http://plasmasturm.org/>