
Message 19196: Re: [rest-discuss] RESTful order-status API (was: URI design, part 2)

  • Eric J. Bowman
    Nov 30, 2012
      Will Hartung wrote:
      >
      > >
      > > The downside of being
      > > an additional round-trip is mitigated by using compression such
      > > that the /status 200 OK response fits in one IP packet.
      > >
      > Unless you're in a mobile environment where latency murders you and
      > compression is a secondary benefit.
      >

      Right, YMMV with compression, but the main point is that the bulk of
      the order is static data and therefore may be made highly persistent in
      the client cache (Cache-Control: private). _That_ is how both round
      trips and the transfer of a significant number of bytes may be avoided,
      even without optionally compressing certain traffic into one IP packet.
      Architecture vs. implementation. That being said...
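      The split described above might look roughly like this (hypothetical
      URIs, header values, and payloads, chosen only for illustration): a
      large, rarely-changing order representation that lives in the client's
      private cache, plus a tiny status resource that is the only thing
      revalidated per request:

      ```
      GET /orders/1234 HTTP/1.1

      HTTP/1.1 200 OK
      Cache-Control: private, max-age=86400
      Content-Type: application/json

      { ...full order representation, the static bulk... }

      GET /orders/1234/status HTTP/1.1

      HTTP/1.1 200 OK
      Cache-Control: private, no-cache
      Content-Type: application/json

      {"status":"shipped"}
      ```

      The first response is served from the client cache on subsequent views
      (no round trip at all); only the second, single-packet-sized exchange
      repeats.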

      I don't understand why compression is bad for latency, though. I cache
      compressed data and unzip it on the fly, so the expensive zip operation
      is only done once, by the server, each time the resource is updated.
      That means the only user who experiences zip latency is the user who
      updates the resource -- but does another second (if that) even get
      perceived by the user on PUT/POST/PATCH operations, and is there any
      real benefit to optimizing those operations when GET is only like a
      billion times more common a request method? I think not.
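      The compress-once scheme above can be sketched in a few lines of Python
      (a minimal illustration with made-up names, not any particular server's
      implementation):

      ```python
      import gzip

      class Resource:
          """Hypothetical server-side resource that gzips its
          representation once per update, not once per GET."""

          def __init__(self, body: bytes):
              self.update(body)

          def update(self, body: bytes):
              # The expensive zip happens here, only when the
              # resource changes -- i.e. on PUT/POST/PATCH.
              self.body = body
              self.gzipped = gzip.compress(body)

          def get(self) -> bytes:
              # Every GET just reuses the cached compressed bytes.
              return self.gzipped

      order = Resource(b'{"status":"shipped"}')
      wire_bytes = order.get()              # what crosses the network
      plain = gzip.decompress(wire_bytes)   # client unzips on the fly
      ```

      The point being that GETs pay no compression cost at all; only the
      (vastly rarer) update operation does.
      
      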

      Are mobile clients still so underpowered that their zip/unzip latency
      exceeds the transfer gains? Especially if we're talking about
      manipulating resources and updating representations using one single-
      packet-each-way round trip, double especially if we're talking about
      users who are, you know, *mobile*. If the client is moving from one
      access point to another, caching the bulk of the data on the client and
      using single-packet messaging is guaranteed to avoid the huge latency
      hit of changing IP address/routing while manipulating a resource, which
      seems to me like a huge gain in user-perceived performance.

      -Eric