
weighting and boundaries of evolvability and loose coupling?

  • Jakob Strauch
    Message 1 of 15, Dec 2, 2011
      One major aspect of using hypermedia is loose coupling and evolvability. But where are the boundaries? Which server-side changes may or may not affect hypermedia-aware clients?

      For example: as a human user, I can easily adapt to a change in a web shop system, e.g. when the order procedure suddenly includes an additional step like providing an optional voucher code.

      A hypermedia client can easily be redirected, but I'm sure the agent can't do anything useful there; at least not until it is taught how to deal with the new domain concept (e.g. a new media type), which means updating to a new version.

      So which aspects can really be decoupled? As far as I can see, there are "only" technical details like internal structures and URLs. Speaking of evolvability, I think most changes to a growing API are domain-related rather than technical.

      If my assumptions are correct, is it perhaps more important to develop hypermedia clients that can be updated by hot-deploy mechanisms?
    • mike amundsen
      Message 2 of 15, Dec 2, 2011
        Jakob:

        i've been doing some work in this area (evolvability for hypermedia-based systems) and, while my experiments are still not completed, i can pass along some observations that might give you some ideas.

        first, IMO, you are correct to state that almost all of the "evolvability" is due to changes in the problem domain. IOW, not the protocol (HTTP) and not the message formats (media types).

        since REST focuses on sharing understanding through response representations that contain hypermedia to advance application flow,  the focus of evolvability is (in my work) on the media type and the response representations.

        the important task of writing hypermedia applications is mapping the problem domain details to elements in the media type. IOW, to evolve the system to match changes in the problem domain, you modify the representations and the hypermedia within those representations.

        so, with that as a basis...

        there are two different cases to consider:
        Human-driven user-agents (or Human-to-Machine - H2M) and,
        Machine-driven user-agents (or Machine-to-Machine - M2M).

        H2M evolvability for hypermedia
        in this case the "human" driving the user agent (UA) has "knowledge in the head" that the user agent does not have. the UA can focus just on recognizing, parsing, and rendering the media type representations and allowing the human to interpret the results and make choices based on the human's knowledge of the problem domain and the hypermedia affordances (links and forms) presented.

        since the act of mapping intention (what i want to get done) to action (the links and forms available) is all handled by a human, servers are free to make quite a wide range of changes and the system will still function well.  

        for example, in H2M cases, the server is free to add/remove input elements in forms, add/remove links, change the "order" in which links/forms are presented, even introduce entirely new forms and inputs. All these things are not likely to "break" the system since the human can be reasonably expected to "know" the problem domain (or a similar domain) well enough to make decisions along the way.

        M2M evolvability for hypermedia
        in this case there is no human *directly* driving the interactions between client and server. the UA is a 'bot' and has only the "knowledge in the code" to work with. This knowledge has to be "put" there by some human, of course.

        for this scenario, the server has a much more limited set of evolvability options. servers can remove inputs, remove links/forms, and/or change the order of their appearance and still expect the system to "work properly." IOW, the server cannot add any new inputs, links, or forms and expect the 'bot' to "know" or "understand" these new elements.
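        to make that "safe changes only" contract concrete, here is a minimal sketch (in Python, with hypothetical rel names and a simplified link structure, not any particular media type) of an M2M client that survives removed links and reordering but simply ignores anything the server adds:

```python
# Hypothetical M2M client loop: the bot only acts on link relations it was
# taught at build time. Removals and reordering are tolerated; additions
# are invisible until a human updates the vocabulary.

KNOWN_RELS = {"next", "payment", "receipt"}  # "knowledge in the code"

def choose_action(links):
    """Pick the first link whose rel this bot understands.

    links: list of {"rel": ..., "href": ...} dicts parsed from a response.
    Unknown rels (newly added by the server) are skipped; no known rel at
    all means the transition is currently unavailable, not a hard error.
    """
    for link in links:                 # order of appearance does not matter
        if link["rel"] in KNOWN_RELS:
            return link
    return None                        # nothing actionable; stop or retry

# Server response after it REMOVED "receipt" and ADDED the unknown "voucher":
response_links = [
    {"rel": "voucher", "href": "/voucher"},   # new element: safely ignored
    {"rel": "payment", "href": "/payment"},
]
print(choose_action(response_links))  # follows the "payment" link
```

        the point is that the bot's knowledge is a fixed vocabulary of rels; the server can shrink or reorder its hypermedia without breaking the bot, but anything new requires a human to update KNOWN_RELS.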

        FWIW, i think there are a number of ways to improve the M2M case, but i am not yet prepared to talk about that since i have not made much progress yet in this area.

        i hope this gives you some ideas on how to tackle this problem and would be interested in other POVs and observations on this topic.

        mca
        http://amundsen.com/blog/
        http://twitter.com/mamund
        http://mamund.com/foaf.rdf#me




      • Kevin Duffey
        Message 3 of 15, Dec 2, 2011
          Mike,

          I think I get the gist of what you are saying, but I still struggle understanding the various aspects of writing a good REST API as well as consuming one. In your example of M2M, if I am a developer writing a web application with a UI front end that users visit/log in/etc., and I want to provide them with Facebook login to access the site, am I now considered the M2M in this equation, in that I will be writing code on my web app to interface with the Facebook API?

          What continually confuses me is the idea of trying to remain HATEOAS compliant as I write my own API, and consuming an API as a developer. What I mean is, as the API developer, I am trying to provide a HATEOAS-compliant API: one that returns responses with links that MUST be followed by the consumer, and only those links. But as of now, I still have to provide documentation that explains to a developer the possible links that can be returned from each resource.

          For example, my API provides access control to a degree: user- and admin-level resources. If the request is being made to an admin resource, the auth user must be one that is authorized to use that resource. If they are, the response has the resource link(s) that allow them to further do other things that a normal user cannot. Without documenting which resources will result in a successful (or failed) attempt at an admin resource, the developer won't know what to scan for in the <links> elements I return and what they can do next. I have to document the returned links, the rel="" string value, and what each href resource pointer will allow them to do, so that the developer knows ahead of time and can make use of the resources as needed.

          To me, this is much like the Facebook API: I can't just go to facebook.com/api and from there magically know how to use whatever resources come back. I have to, as a developer providing my end users with the ability to use Facebook to log in, know what resources to call, what params to pass, etc. Am I wrong in this assumption? If so, please enlighten me such that I might understand how this would not be needed.

          What confuses me about all this is the idea that we can write (and consume) evolvable APIs that we know nothing about. We simply need the entry URL and from there we just know what to do based on what is returned. Unless I am missing something, there is no standard set of link/rel values that work the same way for every API. Just because one rel="login" might indicate a resource to log in to doesn't mean it won't do something else on another site. Likewise, any given API could return a variety of other rel="" values in the response links, or return entirely different element names, and without some sort of documentation explaining all of this, I would not be able to consume it.

          I realize a HATEOAS API should be just like a web site, such that a web bot could traverse html <a> elements; likewise we return <link> elements allowing a bot to traverse the API. What throws me there is: some links may be POST only, or UPDATE only, some may support GET, POST, etc. A bot could be written in such a way to try every method type, see where it leads, and crawl its way through every link. But as a developer using someone's API to provide my users a GUI for my site, I can't just go crawling through an API blindly and give my end users some sort of useful functionality from it. I have to know exactly what resource to call (or how to navigate to it) and what it does. If I want to get the weather, I need to know how I pass my user's location to the API, and what resource to call that supports me passing in the location and returns the weather for that location. Don't I?

          Thanks.


        • mike amundsen
          Message 4 of 15, Dec 2, 2011
            Kevin:

            (my regrets for not responding sooner)

            this is long, my apologies ahead of time. Hopefully the content will be worth the time<g>.

            <snip>
            I think I get the gist of what you are saying, but I still struggle understanding the various aspects of writing a good rest api as well as consuming one.
            </snip>
            i think that is a common POV. there is not much guidance on this process. this is probably a good place to discuss it. I'd also encourage you to join the Hypermedia-Web discussion list[1] where some other folks working in this area also hang out.

            <snip>
            I want to provide them with Facebook login to access the site.. am I now considered the M2M in this equation.. in that I will be writing code on my web app to interface with facebook api?
            </snip>
            Well, it turns out facebook's API is not very "hypermedia-aware" is it? Actually almost all the OAuth examples I've seen are very difficult to "automate" in an M2M environment; I suspect that's the goal.  Often we can "cover" an RPC implementation w/ a hypermedia-aware one (I do this quite a bit), but sometimes you can't. 

            FWIW, I don't think the Facebook API is a good place to exercise your hypermedia skills.

            <snip>
            What continually confuses me is the idea of trying to remain HATEOAS compliant as I write my own API, and consuming an API as a developer.
            </snip>
            Designing a hypermedia API is, essentially, designing a media type (or applying semantics to an existing media type). That's the API. It's a big difference from most implementations. Some think it's not worth the trouble.

            Once the media type is designed & documented, the work of implementing servers and clients begins. Servers are pretty straightforward. Tooling is weak in most cases, but for the most part servers just wait for a request, do some work, and craft a response (which may or may not contain one or more hypermedia controls (links & forms)).

            Writing a client is more involved; not terribly difficult, but more work is done by hypermedia clients (HC) than RPC clients. The HC must "know" the media type (not the app) before it can function successfully. And yes, as you say, this means writing clients that are prepared for just about any reasonable response in that media type. You can limit the effort by creating a restrictive, small-scope media type design. My Maze+XML design has only ten elements (five are for errors and debugging), six attributes, and nine link relations. Creating clients to navigate mazes is pretty simple, too.

            The HAL media type design is even more compact[2]. Now, implementing an HC that can handle HTML is quite a feat. There is a wide spectrum between Maze+XML and HTML, though.

            <snip>
            Without documenting what resources will result in a successful (or failed) attempt at an admin resource, the developer won't know what to scan for in the <links> elements I return and what they can do next. I have to document the returned links, the rel="" string value, and what each href resource pointer will allow them to do, so that the developer knows ahead of time and can make use of the resources as needed.
            </snip>
            Yes, you need to document the media type. There are a number of examples out there to use as a guide. There is no need to document "all the possible responses" for a media type (can you imagine what that would entail for HTML?). Instead, you document the possible elements that can appear in a response and the rules for those elements (MUST be child elements of X, MAY have the following children, etc.). 

            <snip>
            What confuses me about all this is the idea that we can write (and consume) evolveable APIs that we know nothing about. 
            </snip>
            Yeah, that confuses me, too. I don't talk like that, and I suggest anyone telling you this ("you can write and consume an API that you know nothing about") is full of it. If you hear me saying that, call BS on me ASAP!

            <snip>
            there is no standard set of link/rel values that work the same way for every API.
            </snip>
            first, just as there is no standard semantic for every problem domain, you're not likely to find a single set of standard rel values for every API. However, there are a couple of sources for standardized rels, including the IANA[3], the Microformats group[4], and the Dublin Core[5]. Many media types also define their own rel sets (including HTML).

            It is also possible to define and standardize your own rels (in cases where you think an important one is missing). I've done that at the Microformats site and am in the process of doing the same via an IETF Internet Draft.

            In the end, you'll find that rels provide the key mapping between the problem domain and the media type. this means, unless your problem domain is incredibly common, you'll be using some unique rels in order to express unique problem-domain semantics.

            <snip>
            What throws me there is.. some links may be POST only, or UPDATE only, some may support GET,POST, etc. 
            </snip>
            Technically, the *links* don't hold the rules; the markup *around* the links does. HTML.FORM@method="get" tells you what you need to know. So does atom.link@rel="edit". Now, when you design your own API (XML, JSON, etc.) you'll be responsible for taking care of these same protocol-level details. If you are using HTTP, the possibilities are few and it's not at all hard to design media type elements that clients can easily recognize (<update href="..." /> OR {"delete": {"href": "..."}}, etc.).
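            as a small sketch of that idea (the control "kinds" here are illustrative, mirroring the <update/> and {"delete": ...} examples above, not any real media type), a client can derive the HTTP method entirely from the hypermedia control that carries the link:

```python
# Sketch: the hypermedia control around the URL, not the bare URL itself,
# tells the client which protocol method to use.

def method_for(control):
    """Map a parsed hypermedia control (a dict) to an HTTP method."""
    kind = control["kind"]
    if kind == "form":
        # mirrors HTML.FORM@method; HTML defaults to GET when absent
        return control.get("method", "get").upper()
    # element-name conventions like <update/>, <delete/>, <create/>
    return {"update": "PUT", "delete": "DELETE", "create": "POST"}.get(kind, "GET")

print(method_for({"kind": "form", "method": "post"}))     # POST
print(method_for({"kind": "delete", "href": "/item/1"}))  # DELETE
print(method_for({"kind": "link", "href": "/item/1"}))    # GET
```

            the client never guesses or probes methods; it reads them off the control, which is what keeps "blind crawling" unnecessary.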

            Again, this way of designing APIs (the way that includes the hypermedia possibilities in responses, not just the data) is not at all common right now.

            <snip>
            As a developer using someone's API to provide my user's a GUI to use my site, I can't just go crawling through an API blindly and give my end users some sort of useful functionality from the API.
            </snip>
            Yep, as stated earlier, anyone telling you to "blindly crawl" is tossing BS. That's not at all needed.

            <snip>
            I have to know exactly what resource to call (or how to navigate to it) and what it does.
            </snip>
            Well, your version of "exactly" may vary, but yes, client apps will need to know how to convert "intention" into "action." That's what APIs are for. This is the same whether you use SOAP, URI-RPC, Hypermedia, etc. The key is "how does the client know?" With most forms of API, the client knows because a document sez so and the developer hard-codes this "knowing" into the client. With hypermedia, the document sez "this is how you will 'know' where the weather can be found" and describes the bits that can appear in a response, even the link relation to use to get those bits:

            <!-- this is the representation for current weather -->
            <p class="current-weather">
            <span class="zipcode" />
            <span class="location-name" />
            <span class="current-temp" />
            </p>

            <!-- this affordance allows clients to get weather reports based on zipcode -->
            <form class="weather" action="..." method="get">
            <input type="text" name="zipcode" value="" />
            </form>

            <!-- this affordance allows clients to find the form that allows clients to get weather reports -->
            <a href="..." rel="weather">weather</a>

            <!-- this affordance allows clients to find weather affordances[grin] -->
            <form class="api-list" action="..." method="get">
            <input type="text" name="rel-or-class" value="" />
            </form>

            <!-- this is the only URI needed to use the API -->

            I bet most people will understand this HTML-based "Hypermedia API" and I bet most people can write both a client and server implementation for it. I even bet the server and client implementations can be done independently, on different platforms, at different times, etc. and still work together just fine. I also bet this particular design would work for both H2M and M2M implementations. And yes, all my other ramblings about the possible evolvability (for H2M and M2M) of this design still apply.

            Sure, this example is incomplete and trivial, but it has the basics for all complete, non-trivial implementations.
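            to give a feel for how little client code the sketch above demands, here is a rough stdlib-only Python parser for its affordances (the rel and class names follow the example; the document string and everything else is an assumption, and a real client would add the HTTP transport on top):

```python
# Sketch of a client-side affordance parser for the HTML "Hypermedia API"
# above: collect links by @rel and forms by @class, then act on the ones
# the client was taught ("weather"), ignoring everything else.
from html.parser import HTMLParser

class Affordances(HTMLParser):
    """Collect hypermedia affordances from an HTML response."""
    def __init__(self):
        super().__init__()
        self.links = {}   # rel  -> href
        self.forms = {}   # class -> {"action", "method", "inputs"}
        self._form = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "rel" in a:
            self.links[a["rel"]] = a.get("href")
        elif tag == "form":
            self._form = {"action": a.get("action"),
                          "method": a.get("method", "get"),
                          "inputs": []}
            self.forms[a.get("class")] = self._form
        elif tag == "input" and self._form is not None:
            self._form["inputs"].append(a.get("name"))

    def handle_endtag(self, tag):
        if tag == "form":
            self._form = None

# a made-up response in the shape of the example above
doc = '''
<a href="/weather-form" rel="weather">weather</a>
<form class="weather" action="/reports" method="get">
  <input type="text" name="zipcode" value="" />
</form>
'''
p = Affordances()
p.feed(doc)
print(p.links["weather"])            # /weather-form
print(p.forms["weather"]["inputs"])  # ['zipcode']
```

            note the client hard-codes only the rel/class vocabulary, never the URLs; the server can move /reports anywhere it likes, which is exactly the decoupling Jakob asked about.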

            I hope this gives you some ideas.

            Mike

             

            On Fri, Dec 2, 2011 at 15:14, Kevin Duffey <andjarnic@...> wrote:


            Mike,

            I think I get the gist of what you are saying, but I still struggle understanding the various aspects of writing a good rest api as well as consuming one. In your example of M2M, if I am a developer writing a web application with a UI front end that users visit/log in/etc, and I want to provide them with Facebook login to access the site.. am I now considered the M2M in this equation.. in that I will be writing code on my web app to interface with facebook api? What continually confuses me is the idea of trying to remain HATEOAS compliant as I write my own API, and consuming an API as a developer. What I mean is, as the API developer, I am trying to provide a HATEOAS compliant API.. one that returns response with links that MUST be followed by the consumer, and only those links. But as of now, I still have to provide documentation that explains to a developer the possible links that can be returned from each resource. For example, my API provides access control to a degree.. user and admin level resources. IF the request is being made to an admin resource, the auth user must be one that is authorized to use that resource. IF they are, the response has the resource link(s) that allow them to further do other things that a normal user can not. Without documenting what resources will result in a successful (or failed) attempt at an admin resource, the developer won't know what to scan for in the <links> elements I return and what they can do next. I have to document the returned links, the rel="" string value, and what each href resource pointer will allow them to do, so that the developer knows ahead of time and can make use of the resources as needed. To me, this is much like the facebook API.. I can't just go to facebook.com/api and from there magically know how to use whatever resources come back. I have to, as a developer providing my end users with the ablity to use facebook to log in, know what resources to call, what params to pass, etc. 
Am I wrong on this assumption? IF so, please enlighten me such that I might understand how this would not be needed.

            What confuses me about all this is the idea that we can write (and consume) evolveable APIs that we know nothing about. We simply need the entry URL and from there we just know what to do based on what is returned. Unless I am missing something, there is no standard set of link/rel values that work the same way for every API. Just because one rel="login" might indicate a resource to log in to, doesn't mean it won't do something else on another site. Like wise, any given API could return a variety of other rel="" values in the response links, or return entirely different element names and without some sort of documentation explaining all of this, I would not be able to consume it. I realize a HATEOAS API should be just like a web site..such that a web bot could traverse html <a> elements.. likewise we return <link> elements allowing a bot to traverse it. What throws me there is.. some links may be POST only, or UPDATE only, some may support GET,POST, etc. A bot could be written in such a way to try every method type, see where it leads and crawl it's way through every link. As a developer using someone's API to provide my user's a GUI to use my site, I can't just go crawling through an API blindly and give my end users some sort of useful functionality from the API. I have to know exactly what resource to call (or how to navigate to it) and what it does. If I want to get the weather, I need to know how I pass my users location to the api, and what resource to call that supports me passing in the location and returns the weather for that location. Don't I?

            Thanks.


            --- On Fri, 12/2/11, mike amundsen <mamund@...> wrote:

            From: mike amundsen <mamund@...>
            Subject: Re: [rest-discuss] weighting and boundaries of evolvability and loose coupling?
            To: "Jakob Strauch" <jakob.strauch@...>
            Cc: rest-discuss@yahoogroups.com
            Date: Friday, December 2, 2011, 7:48 AM


             

            Jakob:


            i've been doing some work in this area (evolvability for hypermedia-based systems) and, while my experiments are still not completed, i can pass along some observations that might give you some ideas.

            first, IMO, you are correct to state that most all the "evolvability" is due to changes in the problem-domain. IOW, not the protocol (HTTP) and not the message formats (media types).

            since REST focuses on sharing understanding through response representations that contain hypermedia to advance application flow,  the focus of evolvability is (in my work) on the media type and the response representations.

            the important task of writing hypermedia applications is mapping the problem domain details to elements in the media type. IOW, to evolve the system to match changes in the problem domain, you modify the representations and the hypermedia within those representations.

            so, with that as a basis...

            there are two different cases to consider:
            Human-driven user-agents (or Human-to-Machine - H2M) and,
            Machine-driven user-agents (or Machine-to-Machine - M2M).

            H2M evolvability for hypermedia
            in this case the "human" driving the user agent (UA) has "knowledge in the head" that the user agent does not have. the UA can focus just on recognizing, parsing, and rendering the media type representations and allowing the human to interpret the results and make choices based on the human's knowledge of the problem domain and the hypermedia affordances (links and forms) presented.

            since the act of mapping intention (what i want to get done) to action (the links and forms available) is all handled by a human, servers are free to make quite a wide range of changes and the system will still function well.  

            for example, in H2M cases, the server is free to add/remove inputs elements in forms, add/remove links, change the "order" in which links/forms are presented, even introduce entirely new forms and inputs. All these things are not likely to "break" the system since the human can be reasonably expected to "know" the problem domain (or a similar domain) enough to make decisions along the way. 

            M2M evolvability for hypermedia
            in this case there is no human *directly* driving the interactions between client and server. the UA is a 'bot' and has only the "knowledge in the code" to work with. This knowledge has to be "put" there by some human, of course.

            for this scenario, the server has a much more limited set of evolvability options. servers can remove inputs, remove links/forms, and/or change the order of their appearance and still expect the system to "work properly." IOW, the server cannot add any new inputs, links, or forms and expect the 'bot' to "know" or "understand" these new elements.
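as a concrete (and entirely hypothetical) sketch of that constraint: the "knowledge in the code" of a simple bot can be modeled as a whitelist of transition names. servers may remove, reorder, or add affordances; the bot activates what it recognizes and stops, rather than guesses, when nothing matches. the transition names below are invented for illustration only.

```python
# hypothetical M2M client sketch; all names here are invented for
# illustration and not part of any real media type.

KNOWN_TRANSITIONS = {"weather", "search"}  # the "knowledge in the code"

def choose_transition(affordances):
    """Pick the first affordance this bot understands.

    `affordances` is a list of (name, href) pairs pulled from a
    hypermedia response. The server may reorder entries, drop some,
    or add new ones; only a new *required* element breaks this bot.
    """
    for name, href in affordances:
        if name in KNOWN_TRANSITIONS:
            return name, href
    return None  # nothing recognized: stop rather than guess

# the server reordered things and added an unknown link -- still fine
response = [("promo-banner", "/ads"), ("weather", "/reports?zip=")]
assert choose_transition(response) == ("weather", "/reports?zip=")
```

note that the bot's failure mode (returning None) is a design choice: it refuses to activate a transition it was never taught.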

            FWIW, i think there are a number of ways to improve the M2M case, but i am not yet prepared to talk about that since i have not made much progress yet in this area.

            i hope this gives you some ideas on how to tackle this problem and would be interested in other POVs and observations on this topic.
            mca
            http://amundsen.com/blog/
            http://twitter.com/mamund
            http://mamund.com/foaf.rdf#me







            ------------------------------------

            Yahoo! Groups Links

            <*> To visit your group on the web, go to:
               http://groups.yahoo.com/group/rest-discuss/

            <*> Your email settings:
               Individual Email | Traditional

            <*> To change settings online go to:
               http://groups.yahoo.com/group/rest-discuss/join
               (Yahoo! ID required)

            <*> To change settings via email:
               rest-discuss-digest@yahoogroups.com
               rest-discuss-fullfeatured@yahoogroups.com

            <*> To unsubscribe from this group, send an email to:
               rest-discuss-unsubscribe@yahoogroups.com

            <*> Your use of Yahoo! Groups is subject to:
               http://docs.yahoo.com/info/terms/





          • Erik Wilde
            Message 5 of 15 , Dec 2, 2011
              hello mike.

              just adding something here that might add an extra design dimension.

              On 2011-12-02 07:48 , mike amundsen wrote:
              > M2M evolvability for hypermedia
              > in this case there is no human *directly* driving the interactions
              > between client and server. the UA is a 'bot' and has only the "knowledge
              > in the code" to work with. This knowledge has to be "put" there by some
              > human, of course.
              > for this scenario, the server has a much more limited set of
              > evolvability options. servers can remove inputs, remove links/forms,
              > and/or change the order of their appearance and still expect the system
              > to "work properly." IOW, the server cannot add any new inputs, links, or
              > forms and expect the 'bot' to "know" or "understand" these new elements.

              well, that's not entirely true. media formats should be designed with
              extensibility in mind, so that servers can add stuff without breaking
              clients. and then there are two options:

              - extensions are allowed and are ignored by definition. this allows
              servers to add stuff without breaking clients. it does not allow servers
              to make sure that old clients will understand that they shouldn't be
              doing things the old way.

              - extensions are allowed and there are switches that allow servers to
              communicate whether an extension is mandatory. HTML (and thus option one
              presented above) implicitly says "mustIgnore" for all extensions. media
              types can define "mustIgnore" and/or "mustUnderstand" labels that
              clients must interpret, so that an extension can be safely ignored by an
              old client, or so that an old client knows it should stop because there
              is an extension in a representation that it does not understand, but
              that is labeled "mustUnderstand".

              this latter design allows more nuances in evolving media types, but of
              course makes both the media type and the client implementation more complex.
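a rough sketch of how a client might honor such labels. the (name, must_understand, payload) element shape is an assumption for illustration, not taken from any real media type:

```python
# hypothetical sketch of mustIgnore/mustUnderstand processing.
# each extension element is modeled as (name, must_understand, payload).

class MustUnderstandError(Exception):
    """Raised when a mandatory extension is not supported."""

def process(elements, handlers):
    results = []
    for name, must_understand, payload in elements:
        handler = handlers.get(name)
        if handler is not None:
            results.append(handler(payload))
        elif must_understand:
            # old client: safe to stop, unsafe to proceed blindly
            raise MustUnderstandError(name)
        # else: unknown but "mustIgnore" -- skip silently
    return results
```

with handlers = {"temp": float}, a response containing an unknown optional element is processed with the unknown part skipped; flip that element's flag to mandatory and the client stops with MustUnderstandError instead.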

              cheers,

              dret.

              --
              erik wilde | mailto:dret@... - tel:+1-510-2061079 |
              | UC Berkeley - School of Information (ISchool) |
              | http://dret.net/netdret http://twitter.com/dret |
            • mike amundsen
              Message 6 of 15 , Dec 2, 2011
                Erik:

                I stated: "the server cannot add any new inputs, links, or forms and expect the 'bot' to "know" or "understand" these new elements."

                Does your response: "well, that's not entirely true...." apply to my statement above?

                mca
                http://amundsen.com/blog/
                http://twitter.com/mamund
                http://mamund.com/foaf.rdf#me





              • Erik Wilde
                Message 7 of 15 , Dec 2, 2011
                  hello mike.

                  On 2011-12-02 17:40 , mike amundsen wrote:
                  > I stated: "the server cannot add any new inputs, links, or forms and
                  > expect the 'bot' to "know" or "understand" these new elements."
                  > Does your response: "well, that's not entirely true...." apply to my
                  > statement above?

                  yes it does, on a meta level, but my main intent was definitely not to
                  say that you're wrong. if the media type is designed for it, the server
                  can communicate to the client "you must understand this extension to
                  proceed", or it can say "you can safely ignore this and proceed". that
                  is a level of understanding, but admittedly only in a very restricted
                  way. it's not understanding the semantics of the extension, but
                  understanding how it has to be handled as an extension.

                  cheers,

                  dret.

                  --
                  erik wilde | mailto:dret@... - tel:+1-510-2061079 |
                  | UC Berkeley - School of Information (ISchool) |
                  | http://dret.net/netdret http://twitter.com/dret |
                • mike amundsen
                  Message 8 of 15 , Dec 2, 2011
                    Erik:

                    Ok, i think i understand your POV. you're saying that a media type designer can, for example, "bake in" a design element (which all clients/servers must support) that signals a "MustUnderstand" rule. Thus, a M2M client can recognize that a response contains new "MustUnderstand" information and, if that client doesn't "understand" it, can act appropriately (stop processing, etc.).

                    In the example above, the M2M client cannot "evolve" to process the new information, but _can_ tell anyone who cares to know that it has failed to do so.

                    right?

                    mca
                    http://amundsen.com/blog/
                    http://twitter.com/mamund
                    http://mamund.com/foaf.rdf#me





                  • Erik Wilde
                    Message 9 of 15 , Dec 2, 2011
                      hello mike.

                      On 2011-12-02 18:00 , mike amundsen wrote:
                      > In the example above, the M2M client cannot "evolve" to process the new
                      > information, but _can_ tell anyone who cares to know that it has failed
                      > to do so. right?

                      exactly. it might sound like a minor thing, but it's actually pretty
                      major if a client knows when it shouldn't proceed and can signal an
                      error condition, instead of blindly continuing down a path that's not a
                      safe route to go without understanding the new stuff. still, it's added
                      complication, and most generic media types seem to go the route of
                      baking in "mustIgnore" as the only possible semantics of how to handle
                      unknown extensions. HTML and Atom are two popular examples.

                      cheers,

                      dret.

                      --
                      erik wilde | mailto:dret@... - tel:+1-510-2061079 |
                      | UC Berkeley - School of Information (ISchool) |
                      | http://dret.net/netdret http://twitter.com/dret |
                    • mike amundsen
                      Message 10 of 15 , Dec 2, 2011
                        Erik:

                        ok, i'm getting you. 

                        thanks.

                        mca
                        http://amundsen.com/blog/
                        http://twitter.com/mamund
                        http://mamund.com/foaf.rdf#me





                      • Erik Wilde
                        Message 11 of 15 , Dec 2, 2011
                          hello again...

                          On 2011-12-02 18:00 , mike amundsen wrote:
                          > Ok, i think i understand your POV. you're saying that a media type
                          > designer can, for example, "bake in" a design element (which all
                          > clients/servers must support) that signals a "MustUnderstand" rule.

                          as a corollary to what i just said: i was thinking about, let's say in
                          XML/XSD terms, a global attribute you can put on elements to signal
                          that. but oftentimes, a version attribute somewhere does this for all of
                          the representation, effectively preventing a client from proceeding if
                          it encounters an unknown version. the big disadvantage of this
                          "document-level attribute" is that it disallows the use of
                          *everything*, including old stuff that might still be safe for the
                          client to use. which is why version attributes often are a bit
                          too disruptive in a loosely coupled scenario.

                          another approach to this would be to remove this from representation
                          design altogether and use relations to communicate extensions, something
                          that has been discussed by mark nottingham in his recent blog post
                          http://www.mnot.net/blog/2011/10/25/web_api_versioning_smackdown . you
                          could possibly extend his method to also qualify those links as being
                          mandatory or not. the link-based extension mark proposes is an
                          interesting approach, but doesn't work too well in cases where
                          extensions need to be put in certain places in existing representations
                          (think documents instead of data), instead of just being a bag of
                          additional data clients can get to, if they want to get it.
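a sketch of that "qualify the links as mandatory" idea. the (rel, mandatory) pair shape and the rel URIs are assumptions for illustration; mark's proposal does not define a mandatory flag:

```python
# hypothetical sketch: extensions advertised as links, each carrying
# an assumed "mandatory" qualifier, as suggested above.

def safe_to_proceed(extension_links, supported_rels):
    """extension_links: list of (rel, mandatory) pairs from a response.

    An old client may ignore any optional extension, but must stop as
    soon as it sees a mandatory extension it does not support.
    """
    for rel, mandatory in extension_links:
        if mandatory and rel not in supported_rels:
            return False  # an old client should stop here
    return True
```

this keeps the per-extension nuance erik describes while avoiding a document-level version switch that invalidates everything at once.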

                          cheers,

                          dret.

                          --
                          erik wilde | mailto:dret@... - tel:+1-510-2061079 |
                          | UC Berkeley - School of Information (ISchool) |
                          | http://dret.net/netdret http://twitter.com/dret |
                        • mike amundsen
                          Message 12 of 15 , Dec 2, 2011
                            Erik:

                            to-date, my approach for handling modifying media type designs (over time) has been as follows:

                            - Extend (compatible w/ existing implementations)
                            no changes to existing features (appearance, required/optional, processing, or meaning)
                            all new features are optional
                            * optionally add "schema" identifiers to show which extension(s) you are using

                            - Version (incompatible w/ existing implementations)
                            can change existing features
                            can add new required elements
                            * required to use new media type identifier

                            this has allowed me a great deal of flexibility and stability on projects that have evolved over several years.
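from the client side, that policy can be sketched roughly as follows (the media type identifier is invented for illustration): extensions reuse the existing identifier and pass this check, while a breaking version ships under a new identifier and is refused by old clients rather than misread.

```python
# hypothetical media type identifier; only the pattern matters here.
SUPPORTED = {"application/vnd.example.weather+xml"}

def can_process(content_type):
    """Gate processing on the media type identifier alone.

    Compatible extensions keep the identifier, so old clients accept
    them (and ignore unknown optional parts); an incompatible version
    uses a new identifier, so old clients fail fast instead.
    """
    # strip parameters like "; charset=utf-8" before comparing
    base = content_type.split(";", 1)[0].strip().lower()
    return base in SUPPORTED
```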

                            mca
                            http://amundsen.com/blog/
                            http://twitter.com/mamund
                            http://mamund.com/foaf.rdf#me





                          • Andjarnic
                            Message 13 of 15 , Dec 2, 2011
                              Hi, thank you for all the good replies. It has helped my understanding. With regards to this reply, can you give an example of how you use (and document) extend and version attributes? I am trying to figure out for example what a response xml might look like and how a consumer might use either/or attribute. Thanks


                            • Mike Kelly
                              Message 14 of 15 , Dec 3, 2011
                                On Sat, Dec 3, 2011 at 2:17 AM, Erik Wilde <dret@...> wrote:
                                > hello mike.
                                >
                                > On 2011-12-02 18:00 , mike amundsen wrote:
                                >> In the example above, the M2M client cannot "evolve" to process the new
                                >> information, but _can_ tell anyone who cares to know that it has failed
                                >> to do so. right?
                                >
                                > exactly. it might sound like a minor thing,

                                No, it just sounds like an unrealistic expectation - the odds of
                                developers building machine clients that follow this advice, in
                                practice, are quite low. This means you are still going to have to
                                find a safe way to deal with disrespectful client behaviour anyway, at
                                which point you've effectively achieved nothing.

                                If you're using link relations there's a much easier way of dealing
                                with this: create a new relation and 'decommission' the old relation.
                                i.e. old clients won't find the relation they're looking for and will
                                bomb out.
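a minimal sketch of that failure mode from the client side (the rel URIs are hypothetical):

```python
# hypothetical sketch of relation decommissioning: the old client only
# knows the old rel, so once the server stops serving it, the lookup
# fails fast instead of half-understanding a newer transition.

def find_rel(links, rel):
    """links: dict of rel -> href from a response; None if absent."""
    return links.get(rel)

# server decommissioned ".../place-order" in favour of ".../place-order-2"
links = {"http://example.org/rels/place-order-2": "/orders"}
href = find_rel(links, "http://example.org/rels/place-order")
assert href is None  # old client stops here and reports the failure
```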

                                Cheers,
                                Mike
                              • mike amundsen
                                Message 15 of 15 , Dec 3, 2011
                                  Kevin:

                                  as an example, i'll riff on the "weather" design i posted earlier in this thread.

                                  below is an "extension" of the weather media type design (i.e. adding this will not break existing implementations). Note the new optional HTML.INPUT@name="include-five-day-forecast" state transition element that MAY appear in the HTML.FORM@class="weather" block and the new HTML.SPAN@class="five-day-forecast" element that MAY appear in the HTML.P@class="current-weather" response.
                                  <!-- this is the representation for current weather -->
                                  <p class="current-weather">
                                  <span class="zipcode" />
                                  <span class="location-name" />
                                  <span class="current-temp" />
                                  <!-- new OPTIONAL element -->
                                  <span class="five-day-forecast" />
                                  </p>

                                  <!-- this affordance allows clients to get weather reports based on zipcode -->
                                  <form class="weather" action="..." method="get">
                                  <input type="text" name="zipcode" value="" />
                                  <!-- new OPTIONAL element, defaults to "false" -->
                                  <input type="checkbox" name="include-five-day-forecast" />
                                  </form>

                                  Now, here is a design alteration that is a "breaking change" - a new "version" - of the weather media type design:
                                  <!-- this affordance allows clients to get weather reports based on zipcode -->
                                  <form class="weather2" action="..." method="get">
                                  <input type="text" name="zipcode" value="" />
                                  <!-- new REQUIRED element -->
                                  <select name="temperature-scale">
                                    <option value="Fahrenheit">Fahrenheit</option>
                                    <option value="Celsius">Celsius</option>
                                  </select>
                                  <!-- new optional element, defaults to "false" -->
                                  <input type="checkbox" name="include-five-day-forecast" />
                                  </form>

                                  Enforcing the Version Change
                                  Enforcing the Version Change
                                  In this example, since I used an existing media type (text/html), changing the media type identifier to enforce the version change is not a reasonable option. Instead, servers that want to "force" this new version must change the identifier for the transition ("weather" -> "weather2") - which is, essentially, a *new* transition - and stop including the "old" version transition in responses. M2M clients will no longer be able to find (and activate) the expected transition ("weather"), preventing them from participating with the server (they are now "broken"). H2M clients will likely be able to depend on the human driver to successfully handle this evolution and will be able to continue talking with this server (they have "evolved").
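                                  The client side of this enforcement can be sketched too: a machine-to-machine client looks for the transition identifier it was built against and refuses to guess when it is gone. (A minimal Python sketch, stdlib only; the action URL is invented for illustration.)

```python
from html.parser import HTMLParser

# A response in which the server now only offers the *new*
# "weather2" transition (the breaking change above).
RESPONSE = """
<form class="weather2" action="/weather-reports" method="get">
<input type="text" name="zipcode" value="" />
</form>
"""

class TransitionFinder(HTMLParser):
    """Record the class identifier of every FORM transition in a response."""
    def __init__(self):
        super().__init__()
        self.transitions = set()

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.transitions.add(dict(attrs).get("class"))

finder = TransitionFinder()
finder.feed(RESPONSE)

# An M2M client built against "weather" should fail fast rather
# than guess at the semantics of the unknown "weather2" form.
if "weather" in finder.transitions:
    status = "ok"      # expected transition still present
else:
    status = "broken"  # transition gone: stop, report, await an update
print(status)  # → broken
```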

                                  mca
                                  http://amundsen.com/blog/
                                  http://twitter.com@mamund
                                  http://mamund.com/foaf.rdf#me




                                  On Sat, Dec 3, 2011 at 02:29, Andjarnic <andjarnic@...> wrote:
                                  Hi, thank you for all the good replies. It has helped my understanding. With regards to this reply, can you give an example of how you use (and document) extend and version attributes? I am trying to figure out for example what a response xml might look like and how a consumer might use either attribute. Thanks

                                  Sent from my ASUS Eee Pad


                                  mike amundsen <mamund@...> wrote:


                                  Erik:


                                  to-date, my approach for handling modifying media type designs (over time) has been as follows:

                                  - Extend (compatible w/ existing implementations)
                                  no changes to existing features (appearance, required/optional, processing, or meaning)
                                  all new features are optional
                                  * optionally add "schema" identifiers to show which extension(s) you are using

                                  - Version (incompatible w/ existing implementations)
                                  can change existing features
                                  can add new required elements
                                  * required to use new media type identifier

                                  this has allowed me a great deal of flexibility and stability on projects that have evolved over several years.
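                                  To make the Extend/Version distinction concrete on the wire, a sketch of the two cases (the vnd.example media type identifiers are invented for illustration, not taken from the thread):

```
# Extend: same media type identifier; existing clients keep
# negotiating as before and ignore the new optional elements
GET /weather HTTP/1.1
Accept: application/vnd.example.weather+xml

# Version: a *new* media type identifier; clients that only
# know the old one will never ask for (or accept) this
GET /weather HTTP/1.1
Accept: application/vnd.example.weather.v2+xml
```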

                                  mca
                                  http://amundsen.com/blog/
                                  http://twitter.com@mamund
                                  http://mamund.com/foaf.rdf#me




                                  On Fri, Dec 2, 2011 at 21:29, Erik Wilde <dret@...> wrote:
                                  hello again...


                                  On 2011-12-02 18:00 , mike amundsen wrote:
                                  Ok, i think i understand your POV. you're saying that a media type
                                  designer can, for example, "bake in" a design element (which all
                                  clients/servers must support) that signals a "MustUnderstand" rule.

                                  as a corollary to what i just said: i was thinking about, let's say in XML/XSD terms, a global attribute you can put on elements to signal that. but oftentimes, a version attribute somewhere does this for all of the representation, effectively disallowing a client from proceeding if it encounters an unknown version. the big disadvantage of this "document-level attribute" is that it disallows the use of *everything*, including old stuff that still might be safe for the client to use. which is the reason why version attributes often are a bit too disruptive in a loosely coupled scenario.
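                                  A sketch of the two signalling styles just described (element and attribute names are invented for illustration): a per-element must-understand marker lets a client keep using the parts it knows, while a document-level version attribute forces an all-or-nothing decision:

```xml
<!-- per-element: client may skip only the marked, unknown element -->
<weather>
  <current-temp>18</current-temp>
  <five-day-forecast ext:mustUnderstand="true">...</five-day-forecast>
</weather>

<!-- document-level: an unknown version blocks even the old,
     still-safe elements such as current-temp -->
<weather version="2.0">
  <current-temp>18</current-temp>
</weather>
```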

                                  another approach to this would be to remove this from representation design altogether and use relations to communicate extensions, something that has been discussed by mark nottingham in his recent blog post http://www.mnot.net/blog/2011/10/25/web_api_versioning_smackdown . you could possibly extend his method to also qualify those links as being mandatory or not. the link-based extension mark proposes is an interesting approach, but doesn't work too well in cases where extensions need to be put in certain places in existing representations (think documents instead of data), instead of just being a bag of additional data clients can get to, if they want to get it.
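                                  In an HTML-based design, the link-based approach from that post might look like this (the relation URI and href are invented for illustration): the extension is advertised as a link, and a client that does not recognize the relation simply never follows it.

```html
<!-- extension exposed as a link relation rather than baked into the format -->
<link rel="http://example.org/rels/five-day-forecast"
      href="/weather/90210/5day" />
```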


                                  cheers,

                                  dret.

                                  --
                                  erik wilde | mailto:dret@...  -  tel:+1-510-2061079 |
                                            | UC Berkeley  -  School of Information (ISchool) |
                                            | http://dret.net/netdret http://twitter.com/dret |

