Re: [rest-discuss] Restful Approaches to some Enterprise Integration Problems
Jul 8, 2010

On Jul 6, 2010, at 1:00 AM, Jan Algermissen wrote:
> Roy,
>
> On Jul 6, 2010, at 3:03 AM, Roy T. Fielding wrote:
> > Reliable upload of multiple files can be
> > performed using a single zip file, but the assumption being made
> > here is that the client has a shared understanding of what the
> > server is intending to do with those files. That's coupling.
>
> Trying to test my understanding:
> By 'client' you are referring to 'user agent'?

In this case, yes, though it is true for any client.
> My understanding is that the user agent has no shared understanding beyond how to construct the submission request upon the activation of a hypermedia control. (Web browsers know how to create a POST request from a user's submission of a form.)

Which it gets from the media type definition, yes.
> The user however does have an understanding (expectation) of what the server is intending to do with those files. This expectation is the basis for choosing to activate the hypermedia control in the first place.

A user (or configured robot) will understand their own intent,
yes, but not necessarily how the server intends to accomplish that
functionality. A user is unlikely to know that a given service
needs guaranteed delivery, since best-effort delivery is the norm.
One would have to add that to the interaction requirements, which
means standardizing that kind of interaction through additional
definitions in the media type or link relations and sending
enough information with the request to enable the recipient to
verify the received message integrity, and both sides need to
know that the request needs to be repeated automatically if
the checks fail. And that still doesn't tell us what to put in
the representations being sent. That's why this kind of
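The integrity-check-and-retry loop described above can be sketched in a few lines. This is only an illustration, not part of any standard: the sender computes a message integrity check (here a SHA-256 digest, an assumption) over the body, the receiver recomputes it, and the sender repeats the request automatically when the two disagree.

```python
import hashlib


def send_with_retry(send, body, max_attempts=3):
    """Deliver `body` reliably over an unreliable `send` callable.

    `send` stands in for the actual transfer and returns the digest the
    receiver computed over what it received. If that digest does not
    match the sender's own MIC, the request is repeated automatically,
    up to `max_attempts` times.
    """
    mic = hashlib.sha256(body).hexdigest()
    for _ in range(max_attempts):
        received_mic = send(body)
        if received_mic == mic:
            return True   # receiver verifiably got an intact copy
    return False          # delivery could not be confirmed
```

Note that both sides must agree in advance on the digest algorithm and on the retry behavior — exactly the kind of shared understanding that would have to be written into the media type or link relation definitions.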
There is also no need to limit yourself to one interface.
Look at all the interfaces on Apache ActiveMQ, for example.
The so-called REST protocol calls for POST to a given
queue URI, which I'll just assume isn't guaranteed delivery.
Guaranteed delivery could probably be added with a simple
message integrity check if the messages are unique, but I
would prefer a more explicit pattern.
For example, we might define a message sink with a URI such
that each client knows (by definition) that it should append
its own client-id (perhaps set by cookie) and a message counter
to the request URI, as in
PUT URI/client-id/count HTTP/1.1
and then the client can send as many messages as it wants,
provided the count is incremented for each new message, and
the server must verify (and store) the MIC before responding
with a success code. Each message can therefore be logged,
verified, etc., just like a message queue with guarantees.
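A minimal sketch of that pattern, with the sink path, the MIC header name, and the in-memory "server" all being illustrative assumptions rather than any real ActiveMQ interface:

```python
import hashlib


def make_put_request(client_id, count, body):
    """Build the request line and headers for one message.

    The /sink prefix and the X-Message-MIC header are hypothetical;
    the MIC here is a SHA-256 digest of the body.
    """
    mic = hashlib.sha256(body).hexdigest()
    request_line = f"PUT /sink/{client_id}/{count} HTTP/1.1"
    headers = {"Content-Length": str(len(body)), "X-Message-MIC": mic}
    return request_line, headers, body


class MessageSink:
    """Toy server side: verify and store the MIC before responding."""

    def __init__(self):
        self.log = {}   # (client_id, count) -> body, i.e. the message log

    def handle_put(self, client_id, count, headers, body):
        if hashlib.sha256(body).hexdigest() != headers["X-Message-MIC"]:
            return 400  # integrity check failed; client must retry
        # PUT is idempotent: a retried message overwrites itself, so
        # each (client-id, count) slot is delivered exactly once.
        self.log[(client_id, count)] = body
        return 201
```

Because each message has its own URI, a retry after a lost response is a harmless overwrite of the same slot, which is what makes the delivery guarantee cheap to provide.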
We could try to standardize something like what I describe above,
but it would require multiple independent implementations and a
lot more free time than it probably deserves. In any case, it also
raises the question of why we would want to do this using HTTP
[aside from just avoiding firewall blocks, which is not a
good reason].
The fact is that most people write message queues for systems
that are more operational than informational -- i.e., they are
doing something, usually at a high rate of speed, that isn't
intended to be viewed as an information service, except in
the form of an archive or summary of past events. Would a
more RESTful message queue have significant architectural
properties that outweigh the trade-off on performance, or
would it be better to use a tightly coupled eventing protocol
and merely provide the resulting archive and summaries via
normal RESTful interaction? That kind of question needs to
be answered by an architect familiar with all of the design
constraints for the proposed system.