Re: [rest-discuss] client keeps its state
- Stuart Charlton wrote:
>
> > The problem is, how does a machine user deduce which of the presented
> > state-transition options will advance it towards its goal? This is a
> > problem orthogonal to REST, which is not to say off-topic to rest-
> > discuss. Once the client component arrives at the proper steady-
> > state REST doesn't enter the equation again, until the user requests
> > some transition to the next steady-state in their specific
>
> That's rather extreme.

No, saying this has nothing to do with REST, or declaring it off-topic
to rest-discuss, would be extreme; I did neither. ;-)

> Implementers clearly are curious how to retain the constraints of the
> architecture and build m2m agents. While the techniques for building
> goal-directed agents aren't particular to REST, they're certainly of
> interest to this audience, and it's been a sorely lacking area of
> exploration, IMO.

Agreed. To more explicitly state my position: discussions of m2m REST
consistently violate the layered-system and self-descriptive-messaging
constraints. We need to change the discussion so these m2m agents are
manipulating the API, not the other way 'round...
First, user and user-agent are combined into a single client component.
This
leads to (amongst other horrors) APIs where a separate media type is
used to represent each resource state, enforcing a 1:1 relationship
between resource state and application state -- itself a violation of
the layered-system constraint -- by solving a vocabulary problem over
the wire, i.e. at the protocol layer.
Solving a vocabulary problem over the wire with custom media types
results from a violation of the layered-system constraint, and carries
that error forward. The resulting API violates the self-descriptive-
messaging constraint and HTTP by failing to use well-known, registered
media types to derive application steady-states.
All of this follows from the notion that an m2m client is a user-agent,
not a user, and that a REST application instructs these m2m user-agents
what to do -- which is entirely backwards from what a REST application
*is*. The user informs the user-agent of the next step; the series of
steps from initial URI to completion of some task is what defines a
"REST application". Not the other way around! Not even for m2m! No!
This mess may be avoided from the get-go by applying some REST
discipline and recognizing that a user and a user-agent are indeed
separate layers in a system, regardless of the nature of the user.
So, in order to have a discussion about m2m REST, we must distinguish
between user and user-agent, avoiding the paper tiger of machine vs.
human user-agents -- such a distinction being a violation of the
layered-system constraint.
The distinction between human and machine belongs in the user component
of a REST system. The problem is, how do we inform the user of the
meaning of the possible state transitions? When the user is human, the
solution is simple -- natural language. When the user is a machine,
the solution is no less simple -- machine language -- just harder to
implement. Either way, these domain-specific (even if standardized)
vocabularies must be embedded within the standard methods, media types
and link relations making up the REST API.
The first thing you need in a REST API is a set of standard link
relations, methods and media types that instruct user-agents how to
arrive at an application steady-state when a URI is dereferenced.
Within that steady-state, domain-specific vocabularies allow the
user-agent to inform the user what options there are and what
information is required to proceed -- e.g. natural-language
descriptions of form fields and submission buttons in a shopping-cart
system.
It's the human user instructing the user-agent how to proceed.
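As a minimal sketch of that flow -- the representation structure, field
names and labels below are invented for illustration, not part of any
registered media type -- a user-agent might surface a shopping-cart
steady-state's options to a human user like so:

```python
# Hypothetical sketch: a user-agent presenting a shopping-cart
# steady-state to a human user. All names here are invented; a real
# API would carry this in a registered media type such as HTML.

# A steady-state: resource content plus the transitions it affords,
# with natural-language labels embedded as content.
steady_state = {
    "content": {"cart-total": "42.00 USD"},
    "transitions": [
        {"rel": "payment", "method": "POST", "href": "/checkout",
         "fields": [{"name": "card-number", "label": "Credit card number"}],
         "label": "Proceed to checkout"},
        {"rel": "edit", "method": "POST", "href": "/cart",
         "fields": [{"name": "quantity", "label": "New quantity"}],
         "label": "Update cart"},
    ],
}

def inform_human(state):
    """Render the available transitions as natural language; the human
    user, not the server, decides which one to follow."""
    lines = []
    for t in state["transitions"]:
        fields = ", ".join(f["label"] for f in t["fields"])
        lines.append(f'{t["label"]} (requires: {fields})')
    return lines

for option in inform_human(steady_state):
    print(option)
```

Note the user-agent only *informs*; nothing in the representation tells
the user which transition to pick.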
Domain-specific vocabularies which allow the user-agent to inform a
machine user what options there are and what information is required
to proceed -- e.g. machine-language descriptions of form fields and
submission buttons in a shopping-cart system -- are embedded within
the steady-state just like natural-language vocabularies, except as
metadata instead of as content.
It's the machine user instructing the user-agent how to proceed.
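A comparable sketch for the machine case, again with invented "rel"
values and structure: the user-agent exposes the same transitions, and
the machine user selects one by matching a link relation from its goal
vocabulary (metadata), never by parsing natural-language labels
(content):

```python
# Hypothetical sketch: a machine user choosing among the transitions a
# user-agent surfaces from a steady-state. The "rel" values stand in
# for a standardized domain vocabulary carried as metadata.

steady_state = {
    "transitions": [
        {"rel": "payment", "method": "POST", "href": "/checkout",
         "fields": ["card-number"]},
        {"rel": "edit", "method": "POST", "href": "/cart",
         "fields": ["quantity"]},
    ],
}

def machine_user_choose(state, goal_rel):
    """The user-agent informs the machine user of its options; the
    machine user picks the transition whose link relation matches its
    goal, then instructs the user-agent to follow it."""
    for t in state["transitions"]:
        if t["rel"] == goal_rel:
            return t["method"], t["href"], t["fields"]
    return None  # no transition advances this goal from this state

# A machine user whose goal vocabulary includes "payment" proceeds:
assert machine_user_choose(steady_state, "payment") == \
    ("POST", "/checkout", ["card-number"])
```

The symmetry with the human case is the point: only the vocabulary's
encoding changes, not the direction of control.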
This is RESTful m2m development and must be emphasized. It must also be
emphasized that "user decides what to do" isn't part of a REST
application -- it *defines* any given REST application (what the user
is trying to do). So please, folks, stop writing m2m HTTP APIs which
instruct the *user* how to proceed and calling the result a REST
application.

REST ends at "user-agent informs the user what it can do", while "user
decides what to do" is out-of-scope. This isn't extremist, it's central
to having the entire m2m discussion; the point is, the discussion must
be framed properly as "how does the user-agent inform the user of its
options" not "how does the API instruct the user of the next step"
(which leaps right across the user-agent layer, while standing the
definition of "REST application" on its head, you see).
- On Tue, Apr 6, 2010 at 6:07 PM, Eric J. Bowman <eric@...> wrote:

Andrew Wahbe wrote:
> But from a REST perspective, you could think of them being part of a
> single distributed client...
Not sure what you mean. In REST, "client" specifically means "client
connector", so do you mean a single distributed client connector, or a
single distributed user agent? Or is it a single distributed user,
driving numerous user agents (like Google driving googlebot)?
Andrew Wahbe replied:

Yes, I see how that's confusing. By "client" I mean the "thing running
the application" -- perhaps "distributed user-agent" is the right
terminology here. Consider an application that consists of multiple
hypermedia formats -- could be VoiceXML + CCXML, or Atom + HTML. It
could be the case that the markup is processed by a single process, or
different processes could be handling the individual markup languages
and coordinating somehow. The server just sees the HTTP requests and
shouldn't really care how the user agent is internally constructed.

Of course, as I mentioned, cookies break this -- it's another way in
which they are not ideal. VoiceXML/CCXML systems can sometimes be
broken into as many as 3 separate components, all making requests
related to a single application session: the CCXML processor, the
VoiceXML processor and a speech processor (performing speech
recognition and fetching grammar files). Some of the related protocols
have mechanisms to try and coordinate cookies, e.g.:
http://tools.ietf.org/html/draft-ietf-speechsc-mrcpv2-20#section-6.2.15
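To illustrate the coordination problem -- this is a hypothetical sketch,
not the MRCPv2 mechanism itself -- the components of a distributed
user-agent could share one cookie store, so a session cookie set on any
component's response accompanies every component's subsequent requests:

```python
# Hypothetical sketch: three processes (CCXML, VoiceXML, speech) act as
# one logical user-agent, so a session cookie received by one component
# must be sent by the others. The store and cookie names are invented.
from http.cookies import SimpleCookie

class SharedCookieStore:
    """A cookie store shared by every component of a distributed
    user-agent, standing in for whatever IPC a real system would use."""
    def __init__(self):
        self.cookies = {}

    def absorb(self, set_cookie_header):
        # Parse a Set-Cookie header and remember the name/value pairs.
        parsed = SimpleCookie(set_cookie_header)
        for name, morsel in parsed.items():
            self.cookies[name] = morsel.value

    def header(self):
        # Build the Cookie header any component attaches to its requests.
        return "; ".join(f"{k}={v}" for k, v in self.cookies.items())

store = SharedCookieStore()
# The CCXML processor receives the session cookie on some response...
store.absorb("SESSION=abc123; Path=/")
# ...and the speech processor's next grammar fetch must send it back.
assert store.header() == "SESSION=abc123"
```

The point of the sketch is only that the server sees one user-agent; how
the components share the jar is invisible to it.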
Eric J. Bowman replied:

Actually, at second glance, CCXML seems more akin to XForms -- is it an
MVC application the server transfers to the user agent? MVC on the
user agent is a powerful REST design pattern that can be adapted to...
Andrew Wahbe replied:

That's maybe one way to think about it. It is a finite state machine
that communicates via messages/events with resources in an underlying
client platform. Events cause state transitions; transition handlers
can send messages back to the platform, or place HTTP requests to
transition to a new page (or do various other things). I see parallels
between this model and an Ajax application -- which can be thought of
as a state machine: each "view" is a different state, often labelled
with a URI fragment (e.g. #inbox in Gmail).
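The state-machine analogy can be sketched concretely; the views, events
and transitions below are invented (Gmail's #inbox is just the example
from above):

```python
# Hypothetical sketch of an Ajax application as a finite state machine:
# each view is a state labelled with a URI fragment, and events drive
# transitions between views, much like CCXML event handlers.

TRANSITIONS = {
    ("#inbox", "open-message"): "#message",
    ("#message", "back"): "#inbox",
    ("#inbox", "compose"): "#compose",
}

def handle_event(view, event):
    """An event either causes a view transition or is ignored,
    leaving the application in its current state."""
    return TRANSITIONS.get((view, event), view)

view = "#inbox"
view = handle_event(view, "open-message")  # -> "#message"
view = handle_event(view, "back")          # -> "#inbox"
assert view == "#inbox"
```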