
Re: [json] Re: jsonrequest and HTTP/1.1 message pipelining

  • Mark Nottingham
    Dec 18, 2007
      On 2007/12/18, at 4:53 PM, Tyler Close wrote:
      > Hi Mark,
      >
      > Thanks for the response. I've got a few questions about your comments
      > and am also wondering if it's feasible to work around the issues you
      > raise.
      >
      > On Dec 17, 2007 4:19 PM, Mark Nottingham <mnot@...> wrote:
      > > Pipelining is often regarded as problematic, especially from the
      > > client side, because of uneven support in proxies and servers, as
      > > well as some uncomfortable decisions you need to make about
      > > optimisation.
      >
      > Could you elaborate on the optimization issues?
      >
      There are several aspects, but if you have an outstanding request on
      a connection and another request is queued, deciding whether it's
      more efficient to pipeline or to open a new connection (or to wait
      for another connection to clear) isn't always simple. If the
      outstanding request takes a long time to process (either because the
      response is very large, or because it needs a lot of server-side
      processing time), it may be better to use another connection.
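
      As a rough illustration of that trade-off, here is a minimal sketch
      (in Python; the thresholds and the Connection shape are invented for
      the example, not taken from any real client) of the decision a client
      has to make when a request is queued behind one already in flight:

      from dataclasses import dataclass

      @dataclass
      class Connection:
          outstanding: int          # requests sent but not yet fully answered
          est_remaining_bytes: int  # guess at bytes left in the in-flight response

      def choose(conn: Connection, method: str,
                 max_depth: int = 3, big_response: int = 256 * 1024) -> str:
          # Non-idempotent methods shouldn't be pipelined at all (RFC 2616 8.1.2.2).
          if method not in ("GET", "HEAD"):
              return "serialize-or-new-connection"
          # Queueing behind a large or slow response means head-of-line
          # blocking; opening another connection may well be cheaper.
          if (conn.est_remaining_bytes > big_response
                  or conn.outstanding >= max_depth):
              return "new-connection"
          return "pipeline"

      # choose(Connection(outstanding=1, est_remaining_bytes=1_000_000), "GET")
      # -> "new-connection": better not to queue behind a big response.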

      In cases where the resources on the server have low processing
      overhead and are relatively homogeneous in size, pipelining works well.
      Subversion is a good example of this, and indeed it benefits from the
      use of pipelining. I'm personally not convinced it's a great solution
      when that isn't the case. YMMV.

      > > Also, non-idempotent methods (e.g., POST, PUT) shouldn't be
      > > pipelined, so this effectively limits it to GET.
      >
      > I remember reading something along these lines in RFC 2616, but the
      > argument never made any sense to me. Perhaps you could clarify the
      > issue. RFC 2616 contains some language about the client not knowing
      > what state the server is in if the connection died with multiple
      > outstanding POST requests, but the same is true if there is even one
      > outstanding POST request. Also, the situation seems to be the same if
      > the client is using multiple non-pipelined connections, since there
      > may be multiple outstanding POST requests.
      >
      Well, it's a SHOULD NOT, not a MUST NOT, but consider a sequence of
      PUT and DELETE requests: if they're pipelined and the connection
      drops in the middle, the client has no idea what state the world is
      in. If it doesn't pipeline, it only has to figure out whether the
      one outstanding request was applied; the earlier ones aren't in
      doubt.
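
      To make the difference concrete, here is a small sketch of what a
      pipelined pair of non-idempotent requests looks like at the byte
      level (host, paths and body are invented for the example):

      # Two requests written back-to-back in one send(); at the wire level
      # this is all that "pipelining" means.
      pipelined = (
          b"PUT /items/1 HTTP/1.1\r\n"
          b"Host: example.com\r\n"
          b"Content-Length: 4\r\n"
          b"\r\n"
          b"abcd"
          b"DELETE /items/2 HTTP/1.1\r\n"
          b"Host: example.com\r\n"
          b"\r\n"
      )
      # sock.sendall(pipelined)
      #
      # If the connection resets after only one "200 OK" has been read, the
      # fate of the DELETE is unknown: it may or may not have been applied.
      # Sent one at a time, only the single request in flight is ever in
      # that state; pipelined, every request past the last complete
      # response read is.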

      Also, keep in mind that connections in an intermediary aren't
      necessarily "sticky" to one client; a proxy may be using a single
      persistent connection to send requests from several clients to a
      single server. If pipelining of non-idempotent requests were allowed
      here, the failure cases would get really ugly.
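
      Purely as an illustration (no real proxy is built from this code),
      the "non-sticky" case looks something like the following: one
      upstream keep-alive connection carrying requests that arrived from
      two different downstream clients.

      upstream_pipeline = [
          ("client-A", b"POST /orders HTTP/1.1\r\nHost: origin.example\r\n"
                       b"Content-Length: 2\r\n\r\nhi"),
          ("client-B", b"POST /orders HTTP/1.1\r\nHost: origin.example\r\n"
                       b"Content-Length: 2\r\n\r\nok"),
      ]
      # If the upstream connection drops after the first response, the
      # proxy can tell client A its POST succeeded, but what does it tell
      # client B? Retrying on B's behalf risks applying the POST twice;
      # reporting failure may be wrong if the origin already processed it.
      # Neither client ever asked for pipelining, yet both inherit the
      # ambiguity.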

      > I expect all the POST
      > requests queued by the client also get sent out regardless of the
      > status of the previous requests, so it seems like the client is in
      > much the same predicament regardless of the use of pipelining.
      >
      Hopefully not...

      > > Even with pipelining on a single connection, you can't make
      > > assumptions about messaging ordering. Intermediaries are allowed
      > > to (and do) split requests up and put them on different
      > > connections, which may have different routes back to the origin.
      > > Somewhat pathological, but entirely possible (I've seen
      > > configurations which would allow -- or even encourage -- this to
      > > happen).
      >
      > How about this: If there is no HTTP proxy, pipeline requests;
      > otherwise, send the requests one at a time. So if the client asked
      > that requests be ordered, this is guaranteed and performance is best
      > effort. If the client doesn't care about ordering, but wants best
      > performance, then it uses separately instantiated JSONRequest objects.
      > Sound good?
      >
      You don't always know whether there's an intermediary there;
      interception proxies (aka "transparent proxies") and HTTP accelerators
      aren't apparent to the client.

      --
      Mark Nottingham mnot@...