Re: Restful Approaches to some Enterprise Integration Problems
- trimmed quotes for brevity
--- In firstname.lastname@example.org, Roy T. Fielding <fielding@...> wrote:
> On Jul 6, 2010, at 12:22 AM, bryan_w_taylor wrote:
> > --- In email@example.com, "Roy T. Fielding" <fielding@> wrote:
> I guess it depends on how you define guaranteed delivery. You can
> certainly do such things with HTTP, but doing CRUD ops via HTTP does
> not automatically make it a RESTful paradigm.
> > But that answers "how" and I think you are getting at "why". I'm imagining that we have two servers A and B, where A plays the role of the client in the interaction. Events happen on server A and server B must receive some representation related to each event or unacceptable business consequences occur.
> Ah, typical event-based integration. That's a good architectural
> style for some applications. Why use REST to do that?
Good question. I think using other tools for eventing makes a lot of sense in some cases. But there are sometimes disadvantages too: platform interoperability, additional infrastructure, and development or runtime complexity can all get in the way. So there are times when it would be nice to at least have a straightforward HTTP-based mechanism.
> > Why can't we merge the functionality of server A and B? Lots of reasons: Security, regulatory compliance, use of 3rd party systems, organizational boundaries and/or politics are a few. The way a company manages its systems engineering work is to partition business functionality into pieces, give ownership of each piece to a team, and align physical resources like servers to those teams. If this imposes constraints not found in RESTful systems, then I have no choice but to deal with those.
> Yes, but the RESTful solution is not to pretend that REST is an
> event-based integration style. What you want to do with REST is
> re-architect the system into more isolated parts that are event-based
> (usually a very small communication subsystem) and the remainder
> as a layered information system. The reason to do this, presumably,
> is to expose the RESTful interface to consumers instead of exposing
> the much more complex (and brittle) event interface.
Well said, and I think this is what I will take away and promote.
> > > Any resource can behave as a long-running service. Just program it that way.
> > Right, the question is how, exactly. Good solutions have been posted in this thread for this. Subbu's RESTful Web Services Cookbook solves this in examples 1.10 and 1.11. I think this was another slam dunk.
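For readers without the book at hand: those recipes describe the asynchronous-request pattern, where the initial POST returns 202 Accepted plus a Location header naming a status resource that the client polls. A minimal sketch of the client-side logic (the URIs and the 303-to-result convention here are illustrative assumptions, not lifted from the book):

```python
def accept_task(status_code, headers):
    """Handle the response to the initial POST: a long-running
    service answers 202 Accepted with a Location header naming a
    status resource for the pending task."""
    if status_code != 202:
        raise RuntimeError("expected 202 Accepted, got %d" % status_code)
    return headers["Location"]

def poll_task(status_code, headers):
    """Interpret one GET of the status resource: 200 means the task
    is still in progress (poll again later); 303 See Other redirects
    to the finished result.  (The 303 convention is one common
    choice, not the only possible one.)"""
    if status_code == 200:
        return ("pending", None)
    if status_code == 303:
        return ("done", headers["Location"])
    raise RuntimeError("unexpected status %d" % status_code)
```

The point being that the "long-running service" is just a pair of ordinary resources, which is what "just program it that way" amounts to.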
> > I'm curious what you think about using so called "web hooks" for this kind of thing. Would you consider this a violation of the client-server constraint?
> No, web hooks is just someone's marketing term for registering
> notifications. The components that act on them are still either
> clients or servers during the communication (i.e., they are not
> trying to do both at the same time and functionality is still
> split across components). This is not a new concept. E.g.,
Good to know. I like section 5.1.3 of that 2nd one from 12 years ago.
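To make "registering notifications" concrete for myself (all names and field conventions below are invented for illustration): the subscriber POSTs a callback URI to a subscription resource, and when an event later occurs the event source takes the client role and POSTs a representation to that callback. The roles swap between interactions, never within one.

```python
from urllib.parse import urlencode

def build_subscription_request(subscriptions_uri, callback_uri, topic):
    """Subscriber acting as client: register a callback URI with the
    event source.  The field names are hypothetical; a real service
    would define them in its media type or form."""
    body = urlencode({"callback": callback_uri, "topic": topic})
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return ("POST", subscriptions_uri, headers, body)

def build_notification_request(callback_uri, event_body):
    """Event source acting as client, later and in a separate
    interaction: deliver a representation of the event to the
    registered callback."""
    headers = {"Content-Type": "application/json"}
    return ("POST", callback_uri, headers, event_body)
```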
> As much as I like doing things in HTTP, there are many closed systems
> that are better implemented in an efficient RPC syntax or a wire
> protocol specifically designed for message queues. Use whatever
> works best for the specific architecture behind the resource interface
> and then apply REST as the external facade to support large-scale
> integration and reusability of the information produced/consumed.
OK, I think this is very practical. Thanks for some good input.
- On Jul 6, 2010, at 1:00 AM, Jan Algermissen wrote:
> Roy,
> On Jul 6, 2010, at 3:03 AM, Roy T. Fielding wrote:
> > Reliable upload of multiple files can be
> > performed using a single zip file, but the assumption being made
> > here is that the client has a shared understanding of what the
> > server is intending to do with those files. That's coupling.
> Trying to test my understanding:
> By 'client' you are referring to 'user agent'?
In this case, yes, though it is true for any client.
> My understanding is that the user agent has no shared understanding beyond how to construct the submission request upon the activation of a hypermedia control. (Web browsers know how to create a POST request from a user's submission of a form)
which it gets from the media type definition, yes.
> The user however does have an understanding (expectation) of what the server is intending to do with those files. This expectation is the basis for choosing to activate the hypermedia control in the first place.
A user (or configured robot) will understand their own intent,
yes, but not necessarily how the server intends to accomplish that
functionality. A user is unlikely to know that a given service
needs guaranteed delivery, since best-effort delivery is the norm.
One would have to add that to the interaction requirements, which
means standardizing that kind of interaction through additional
definitions in the media type or link relations and sending
enough information with the request to enable the recipient to
verify the received message integrity, and both sides need to
know that the request needs to be repeated automatically if
the checks fail. And that still doesn't tell us what to put in
the representations being sent. That's why this kind of
There is also no need to limit yourself to one interface.
Look at all the interfaces on Apache ActiveMQ, for example.
The so-called REST protocol calls for POST to a given
queue URI, which I'll just assume isn't guaranteed delivery.
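For concreteness, that interface amounts to a plain HTTP POST; a sketch of building such a request (the /api/message path and default port are from ActiveMQ's bundled web console, so check your broker's actual configuration):

```python
def build_queue_post(host, queue, body):
    """Build a POST that enqueues one message via ActiveMQ's demo
    REST interface.  Reliability is whatever the single HTTP
    exchange gives you: there is no acknowledgement protocol on
    top, which is why guaranteed delivery can't be assumed."""
    uri = "http://%s/api/message/%s?type=queue" % (host, queue)
    headers = {"Content-Type": "text/plain"}
    return ("POST", uri, headers, body)
```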
Guaranteed delivery could probably be added with a simple
message integrity check if the messages are unique, but I
would prefer a more explicit pattern.
For example, we might define a message sink with a URI such
that each client knows (by definition) that it should append
its own client-id (perhaps set by cookie) and a message counter
to the request URI, as in
PUT URI/client-id/count HTTP/1.1
and then the client can send as many messages as it wants,
provided the count is incremented for each new message, and
the server must verify (and store) the MIC before responding
with a success code. Each message can therefore be logged,
verified, etc., just like a message queue with guarantees.
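A sketch of that pattern with both sides' obligations made explicit (the MIC header name and the choice of SHA-256 are my own assumptions; a real design would standardize them in the media type or link relations):

```python
import hashlib

def build_message_put(sink_uri, client_id, count, body):
    """Client side: append our own client-id and a per-client
    message counter to the sink URI, and include a message
    integrity check (MIC) computed over the body."""
    uri = "%s/%s/%d" % (sink_uri, client_id, count)
    mic = "sha-256=" + hashlib.sha256(body).hexdigest()
    # Hypothetical header name; a standard would define the real one.
    headers = {"X-Message-MIC": mic}
    return ("PUT", uri, headers, body)

def receive_message(store, client_id, count, headers, body):
    """Server side: verify (and store) the MIC before responding
    with a success code.  Returns an HTTP status.  Because the
    counter makes each message URI unique, a retry of an already
    stored count is simply acknowledged again, so the client can
    repeat the PUT until it sees success."""
    mic = "sha-256=" + hashlib.sha256(body).hexdigest()
    if headers.get("X-Message-MIC") != mic:
        return 400                         # integrity check failed; client retries
    if count in store.get(client_id, {}):
        return 200                         # duplicate of a stored message
    store.setdefault(client_id, {})[count] = (mic, body)  # log it
    return 201                             # stored and acknowledged
```

Each message can then be audited from the store, which is what gives this the properties of a queue with guarantees while staying within HTTP's uniform interface.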
We could try to standardize something like what I describe above,
but it would require multiple independent implementations and a
lot more free time than it probably deserves. In any case, it also
begs the question of why we would want to do this using HTTP
[aside from just avoiding firewall blocks, which is not a
The fact is that most people write message queues for systems
that are more operational than informational -- i.e., they are
doing something, usually at a high rate of speed, that isn't
intended to be viewed as an information service, except in
the form of an archive or summary of past events. Would a
more RESTful message queue have significant architectural
properties that outweigh the trade-off on performance, or
would it be better to use a tightly coupled eventing protocol
and merely provide the resulting archive and summaries via
normal RESTful interaction? That kind of question needs to
be answered by an architect familiar with all of the design
constraints for the proposed system.