
It's the architecture, stupid.

  • Eric J. Bowman
    Message 1 of 15, Oct 15, 2010
      This is a shot across the bow of Web Sockets Protocol (or, as I call it,
      Google Wave Protocol), followed by some RESTful alternatives. Roy, of
      course, has the money quote:

      "Generally speaking, REST is designed to avoid tying a server's
      connection-level resources to a single client using an opaque protocol
      that is indistinguishable from a denial of service attack. Go figure."

      http://tech.groups.yahoo.com/group/rest-discuss/message/15818

      I don't think it's possible for any protocol to constrain its
      implementations to be RESTful. All I really require from any extension
      of the Web is that I *can* implement it RESTfully, if I so choose. Web
      Sockets precludes REST, which should be an architectural red flag where
      the Web is concerned. If you know where to look, the rationale behind
      the dissertation's development of an idealized model for the Web is
      steeped in the fundamentals of the Internet. You can disagree with
      REST, but it's hard to dismiss the logic of 2.3 (which says nothing
      about improving application performance by stripping out protocol
      headers, particularly at the expense of caching, btw):

      "
      The performance of a network-based application is bound first by the
      application requirements, then by the chosen interaction style,
      followed by the realized architecture, and finally by the
      implementation of each component. In other words, software cannot
      avoid the basic cost of achieving the application needs; e.g., if the
      application requires that data be located on system A and processed on
      system B, then the software cannot avoid moving that data from A to B.
      Likewise, an architecture cannot be any more efficient than its
      interaction style allows; e.g., the cost of multiple interactions to
      move the data from A to B cannot be any less than that of a single
      interaction from A to B. Finally, regardless of the quality of an
      architecture, no interaction can take place faster than a component
      implementation can produce data and its recipient can consume data.

      ...

      An interesting observation about network-based applications is that
      the best application performance is obtained by not using the network.
      This essentially means that the most efficient architectural styles for
      a network-based application are those that can effectively minimize use
      of the network when it is possible to do so, through reuse of prior
      interactions (caching), reduction of the frequency of network
      interactions in relation to user actions (replicated data and
      disconnected operation), or by removing the need for some interactions
      by moving the processing of data closer to the source of the data
      (mobile code).
      "

      This issue goes beyond REST, to the architecture of the Web and of the
      Internet itself. Apparently HTTP is incapable of supporting modern Web
      systems which desire to use push. Apparently, push requires all aspects
      of good protocol design to be chucked out the window. Late binding?
      Useless -- who needs compression anyway? These are the assumptions
      seemingly underlying Web Sockets. But where's the rationale behind
      those assumptions? What architectural precepts are guiding the design,
      how does the protocol meet those precepts, and do the results solve the
      problems as rationalized? Why is HTTP being treated as obsolete?

      It appears to me, that Web Sockets is not only being made up as it goes
      along (heh, just like SOA), but represents an outright rejection of
      architecture itself (heh, also just like SOA). REST and Web
      architecture are based on an object model -- each object (resource) has
      properties and methods. In OOP, messaging between objects is part of
      the language; on the Web, this messaging is HTTP. In Web Sockets,
      payloads have no relation to objects -- no properties or methods are
      exposed. I realize that stripped-down packets of data are the goal,
      but *why* is that remotely a good idea when it goes against every peer-
      reviewed and ubiquitous protocol design to ever succeed on the Internet,
      willfully disregarding features that allowed the Web to thrive -- like
      caching, or filtering/negotiating on data type?

      Unlike Web architecture, there is no way to restrict a browser from
      rendering a PDF, except by blocking Web Sockets communication outright.
      Unlike Web architecture, content is sent without indicating length
      or being chunked, and one message isn't even delimited from the
      next by a 1:1 request/response ratio. Unlike Web architecture, caching is
      impossible because the protocol is stateful. Unlike Web architecture,
      the user has no control (via browser settings) over what content should
      be handled in what way. All of these features of the Web evolved
      through consensus and working code, guided by solid architectural
      rationale (even before REST), and were essential in the success of the
      Web -- apparently all this is completely irrelevant if we want to do
      push!

      Hogwash. If Web Sockets were to be accepted as an RFC, Jon Postel
      would roll over in his grave. Jon thought it was important that any
      application protocol be a well-behaved citizen of the Net. His
      influence is why RFCs are written the way they're written, to this day,
      except for Web Sockets (which recently introduced three SHOULDs, but
      everything else is MUST/MUST NOT, resting on an assumption that all
      implementations will be fully compliant good Net citizens and therefore
      graceful degradation isn't needed, presumably).

      http://www.ics.uci.edu/~rohit/IEEE-L7-Jon-NNTP.html
      http://tools.ietf.org/html/rfc2468

      Dr. Postel's leadership is responsible for the Internet architecture
      being what it is. Aside from ICMP, every protocol he wrote or
      influenced, push or pull, shares the request/response idiom. IRC, FTP,
      SMTP, NNTP, HTTP and every other client-server application protocol I
      can think of (except Gopher) sends a _response code_ after receiving a
      request. Web Sockets is off in its own little world of completely
      untried and untested architecture astronuttery which goes against the
      very nature of Internet messaging -- once a connection is established
      with a single request, multiple responses are sent until the connection
      is closed. This is not the tried-and-true architecture of the Internet,
      it's a greenfield experiment with no foundation in what's known to work.

      If you're going to propose an extension to the Web architecture that
      defies the Internet itself, I'm gonna need to see your rationale as to
      exactly what problem it is you're trying to solve, why it can't be
      solved in a Web-native or even Internet-native fashion, and what design
      constraints you expect will result in a protocol meeting those needs.
      Lacking that, I just can't be expected to approve of winging it on a
      blank sheet of paper and making up a spec as it moves along. Like SOA,
      Web Sockets is an example of the null architecture, i.e. no constraints.

      While there are smart folks involved, doing their best to make sure
      there are no obvious security holes in the protocol, I can't help but
      think that hackers will be having a field day with it -- any new,
      untried and untested pattern can't be considered to have the same
      security considerations as request/response messaging, meaning it's all
      just guesswork. You can secure against known attack vectors, but you
      can't secure against attack vectors you don't know you're creating,
      which you're likely doing by ignoring all that has come before.

      Despite the efforts of a minority, the WG doesn't seem to think it's
      a big deal that their protocol as currently written won't
      interoperate with the deployed infrastructure, or that requiring
      such infrastructure to be updated -- to avoid deadlock conditions
      between existing load balancers and the servers they farm, when
      encountering Web Sockets -- is a real problem. If that's the case,
      then why not add a push method to HTTP? I'll get to that...

      First, though, how to do RESTful push given the current reality. Is
      there some requirement that long polling results in a 200 response?
      Better to assign a sub-resource to handle long polling, and have it
      send a redirect to the updated resource. Instead of sending a new
      representation to every client polling, just a URI is sent, allowing
      all those clients to take advantage of caching of the main resource.
      Not an ideal solution, but an improvement on common practice. The
      problem is how to make one resource capable of both pull and push...
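      A minimal sketch of that sub-resource pattern, using Python's
      standard library (the /updates and /resource paths and the
      stock-quote payload are illustrative, not part of any spec):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

changed = threading.Event()          # set by whatever updates the resource
body = b"stock: ACME 42.00\n"        # illustrative payload

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/updates":
            # Long-poll sub-resource: hold the request until the main
            # resource changes, then redirect instead of pushing a fresh
            # representation to every waiting client individually.
            if changed.wait(timeout=30):
                changed.clear()
                self.send_response(303)                 # See Other
                self.send_header("Location", "/resource")
                self.end_headers()
            else:
                self.send_response(304)                 # no change; poll again
                self.end_headers()
        elif self.path == "/resource":
            # The main resource stays ordinary, cacheable HTTP, so all
            # redirected clients can share one cached representation.
            self.send_response(200)
            self.send_header("ETag", '"v1"')
            self.send_header("Cache-Control", "max-age=60")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To run standalone:
#   HTTPServer(("localhost", 8000), Handler).serve_forever()
```

      Only the URI travels down the long-poll channel; the payload itself
      remains subject to normal HTTP caching.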

      http://tools.ietf.org/html/rfc2177

      So why not define HTTP IDLE, if the solution is going to require all
      intermediaries be upgraded in order to work, anyway? IDLE would be
      almost exactly like GET, except that instead of a 304 the connection
      stays open. Caches could pool IDLE requests from multiple clients,
      reducing load on origin servers. Caching solves the problem of
      reducing the bandwidth required to service push requests -- by
      several orders of magnitude at Internet scale -- compared to using
      a protocol that's essentially an uncacheable, raw TCP connection,
      built on the false assumption that network or user-perceived
      performance is meaningfully impaired by the overhead of HTTP
      headers (OK, it is, a little, but it's a tradeoff worth making --
      an un-protocol isn't the solution).
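      IDLE is hypothetical, of course, but its proposed semantics -- a
      conditional GET that parks the connection where a 304 would
      otherwise be sent -- can be prototyped with stdlib Python, since
      BaseHTTPRequestHandler dispatches any verb to a matching
      do_<VERB> method (all names and values below are illustrative):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

changed = threading.Event()
state = {"etag": '"v1"', "body": b"ACME 42.00\n"}   # illustrative resource

class IdleHandler(BaseHTTPRequestHandler):
    # BaseHTTPRequestHandler routes the made-up IDLE verb here.
    def do_IDLE(self):
        if self.headers.get("If-None-Match") == state["etag"]:
            # Client is current: where GET would answer 304, IDLE holds
            # the connection open until the resource actually changes.
            changed.wait()
            changed.clear()
        # From here on, behave exactly like GET.
        self.send_response(200)
        self.send_header("ETag", state["etag"])
        self.send_header("Content-Length", str(len(state["body"])))
        self.end_headers()
        self.wfile.write(state["body"])

# To run standalone:
#   HTTPServer(("localhost", 8001), IdleHandler).serve_forever()
```

      A cache sitting in front of this could hold one IDLE connection
      upstream on behalf of many waiting clients, which is the whole
      point of keeping push inside HTTP.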

      Wouldn't it be better, in the commonly-cited use case of a stock ticker,
      if that exactly-the-same data could be shared instead of having to be
      delivered separately to every browser interested in the resource -- at
      the same time, no less? The Web Sockets solution, i.e. reducing
      protocol overhead by eliminating headers entirely, throws this baby out
      with the bathwater. Surely a better solution is warranted?

      Unless Web Sockets is committed to being compatible with HTTP's
      Upgrade facility (instead of requiring an upgrade of the deployed
      infrastructure), just what problem is it solving that wouldn't be
      better, more easily and more securely addressed by extending HTTP
      rather than declaring it obsolete? Even if this problem is recognized
      and solved, is this protocol really an HTTP "upgrade" or, rather, a
      fundamentally opposed one that violates basic Web security
      by using HTTP to tunnel through any firewalls, even as a temporary
      stopgap until ws:// and wss:// are approved? Using Upgrade to launch
      HTTP 1.2, rHTTP or Waka makes sense; Web Sockets, not so much.

      Surely *any* solution that's compatible with RESTful implementation, is
      by default aligned with both Web and Internet architecture? I fail to
      understand why REST is a toxic concept to the browser vendors. It
      seems to me like it's in their best interests, unless of course you're
      Google and your goal is not to improve the Web, but to try to corrupt
      it into being a replacement for an OS for the purpose of taking market
      share away from Apple and Microsoft... Without any technical basis for
      Web Sockets, I'm left to ponder the political considerations of those
      pushing hardest against using HTTP for Web messaging (as if it were
      obsolete).

      So I'm calling "ITAS" (see post title) on Web Sockets -- this isn't
      REST, Web or Internet architecture; in fact, it isn't architecture at
      all. As such, it ought to be killed in favor of an architecture-
      oriented solution. Apologies to those working on it, I have no issues
      with y'all trying to make the best of a situation being foisted on us
      by the runaway HTML 5 project. But my opinion is that it's DOA, and
      given that, I'd just as soon it not see the light of day so I won't be
      forced to deal with it for the rest of my career even if I choose _not_
      to implement it in my own projects. Kinda like Flash.

      -Eric
    • Nathan
      Message 2 of 15, Oct 15, 2010
        Eric J. Bowman wrote:
        > Web Sockets Protocol

        Architecture is fine IMHO, all we need to do is stick an HTTP server on
        the "client side".

        We've been using the pattern for years on the "server side" and it works
        wonders for RESTful async messaging w/ HTTP.

        In fact, the architecture of the Web gets exponentially more
        interesting when you put an HTTP Server, Client and Cache on each
        machine - RESTful-p2p I guess.

        Anyway, nice post, good points. Web sockets is a bit of a bag-o-shite
        but it's better than long-poll HTTP, or polling -- out of interest,
        have you looked at sending HTTP messages over WebSockets? If you
        could, then there would be nothing to stop you creating an HTTP
        Server in the browser and kicking the web into almost-async-p2p
        mode using HTTP and RESTful patterns whilst waiting on proper
        support, and giving the opportunity to explore all the many
        challenges of coupling it to the presentation tier.

        I'm rambling now!

        Nathan
      • izuzak
        Message 3 of 15, Oct 15, 2010
          Hi Nathan,

          A p2p REST style, and consequently a p2p Web, is definitely not rambling -- what it *is* is the focus of my ph.d. research :). Having a p2p network at the HTTP level would not only solve a lot of existing communication problems of the Web, but also increase the number of application-level functionalities exposed on the Web. Think of all the functionality that is "trapped" on the client side, disconnected from the Web, even though it originated from the Web by means of navigating to a Web application.

          I should shut up before someone publishes these ideas in a paper before I do. :)

          + If you haven't already, you should check out Justin Erenkrantz's dissertation on CREST - http://www.erenkrantz.com/CREST/ which is an "evolution" of REST founded on *very* similar ideas. Guess who Justin's advisor was... :)

          Ivan


        • Bob Haugen
          Message 4 of 15, Oct 15, 2010
            On Fri, Oct 15, 2010 at 12:37 PM, izuzak <izuzak@...> wrote:
            > A p2p REST style, and consequently a p2p Web, is definitely not rambling -- what it *is* is the focus of my ph.d. research :). Having a p2p network on the HTTP level would not only solve a lot of existing communication problems of the Web, but also increase the number application-level functionalities exposed on the Web. Think of all the functionalities that are "trapped" on the client side, disconnected from the Web, although they originated from the Web by the means of navigating to a Web application.
            >

            Is that the same as, or different than, the various attempts to put a
            server in your browser, like the deceased KnowNow or the apparently
            still-living Opera Unite?
          • Mike Kelly
            Message 5 of 15, Oct 15, 2010
              Do WebHooks make for a p2p web?

              If so; I guess a (registered?!) media type and/or some link relations
              would be required to make it RESTful?

              Cheers,
              Mike


            • Jan Algermissen
              Message 6 of 15, Oct 15, 2010
                On Oct 15, 2010, at 9:01 PM, Mike Kelly wrote:

                > Do WebHooks make for a p2p web?
                >
                > If so; I guess a (registered?!) media type and/or some link relations
                > would be required to make it RESTful?


                Beware though that all these pubsubby[1] approaches make the system much more difficult to understand and much less easy to evolve.

                I'd personally go a very long way trying to get by with polling.

                Jan

                [1] Been there, done that :-) http://search.cpan.org/~alger/Apache-MONITOR-0.02/


              • rcobbwork
                Message 7 of 15, Oct 15, 2010
                  Well, one point about Mr. Postel -- he largely worked in an Internet where all machines were reachable via the Internet Protocol, and security was managed on a protocol-endpoint (port) basis. Most of the protocols he worked on were end-to-end, and the connection could be established in either direction.

                  That Internet is long dead; NAT, HTTP, and RFC1918 killed it. The Web established a network that has big well-named servers that clients must bow in supplication to connect to -- and anonymous clients that can't be reached without them establishing and holding a connection of some sort.

                  There *are* legitimate applications for push. Not everything is request/response: P2P and publish/subscribe are legitimate communication patterns. That's not to say they're REST, but if "it's the architecture, stupid", you do have to look at the application communication pattern and find a way to deal with it.

                  HTTP, essentially the only important protocol in the context of the current Internet, makes it very hard to do a good job on P2P or pub/sub. Roy's postings about the economics of scale of these communication patterns are sensible (though Facebook seems to have been able to monetize pub/sub pretty well), but people are going to need to implement them.

                  Now, this isn't to defend websockets -- but to say that if you're going to accept a non-addressable Internet, people will need to invent things like it.

                  At KnowNow (thanks to Rohit Khare and Adam Rifkin), we built a tiny web server in Javascript. The resource handlers were (roughly) Javascript functions; the dominant media type was application/x-www-form-urlencoded. As we got better at writing this server, it got more RESTful. But the connection itself was always a tunnel; there was no alternative. Whether we implemented that with long-poll or just a big GET with function callbacks, it was certainly more RESTful than the websocket approach -- but it's not like somebody could easily add an HTTP security system on those tunnels.

                  I'm perfectly willing to admit that systems that use P2P or publish/subscribe communication patterns aren't REST, but it's not like anybody out there is generally opening their networks to XMPP, BEEP, AMQP.... Nor are they providing mechanisms (well, other than email addresses, hi, Mr. Spam) for addressing real endpoints so you don't have to hold request/response HTTP connections open in order to implement them.

                  The HTTP IDLE idea is kind of cool, too; thanks for getting me thinking in that direction.
                • izuzak
                  Message 8 of 15, Oct 16, 2010
                    Hey Mike,

                    IMO, I wouldn't say they do. Webhooks are a way of doing callbacks between components that are accessible on the Web (have an HTTP URI), which are server components. However, the Web is still a client-server model where clients are not exposed on the Web.

                    Using the PubSubHubbub protocol as an example, the entity subscribing to a Hub must pass a URI that the Hub will use to notify it of new posts. Entities on the Web that have URIs are server components, so PSHB itself can't be used to push notifications to client components, only to servers, which must then transfer the notification to the client (somehow). If the Web were p2p, a client could be a PSHB subscriber, since it would (be able to) have an (HTTP) URI.
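                    To make that concrete, here is a sketch of a minimal PSHB-style subscriber callback in Python -- note that it is necessarily an HTTP *server*. The hub.challenge verification and POST delivery follow the PubSubHubbub spec as I understand it; treat the details as assumptions, not normative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

received = []   # notifications pushed to us by the hub land here

class Callback(BaseHTTPRequestHandler):
    def do_GET(self):
        # Subscription verification: the hub calls our callback URI and
        # expects the hub.challenge value echoed back in the body.
        qs = parse_qs(urlparse(self.path).query)
        challenge = qs.get("hub.challenge", [""])[0]
        self.send_response(200)
        self.send_header("Content-Length", str(len(challenge)))
        self.end_headers()
        self.wfile.write(challenge.encode())

    def do_POST(self):
        # Content delivery: the hub POSTs new entries to our URI.
        length = int(self.headers.get("Content-Length", 0))
        received.append(self.rfile.read(length))
        self.send_response(204)
        self.end_headers()

# To run standalone:
#   HTTPServer(("localhost", 8002), Callback).serve_forever()
```

                    A browser-bound client can't host this endpoint today, which is exactly why PSHB subscribers end up being servers.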

                    Does this make sense?

                    Ivan

                    --- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote:
                    >
                    > Do WebHooks make for a p2p web?
                    >
                    > If so; I guess a (registered?!) media type and/or some link relations
                    > would be required to make it RESTful?
                    >
                    > Cheers,
                    > Mike
                    >
                    >
                    > On Fri, Oct 15, 2010 at 6:37 PM, izuzak <izuzak@...> wrote:
                    > > Hi Nathan,
                    > >
                    > > A p2p REST style, and consequently a p2p Web, is definitely not rambling -- what it *is* is the focus of my ph.d. research :). Having a p2p network on the HTTP level would not only solve a lot of existing communication problems of the Web, but also increase the number application-level functionalities exposed on the Web. Think of all the functionalities that are "trapped" on the client side, disconnected from the Web, although they originated from the Web by the means of navigating to a Web application.
                    > >
                    > > I should shut up before someone publishes these ideas in a paper before I do. :)
                    > >
                    > > + If you already haven't, you should check out Justin Erenkrantz's dissertation on CREST - http://www.erenkrantz.com/CREST/ which is an "evolution" of REST founded on *very* similar ideas. Guess who Justin's advisor was... :)
                    > >
                    > > Ivan
                    > >
                    > >
                    > > --- In rest-discuss@yahoogroups.com, Nathan <nathan@> wrote:
                    > >>
                    > >> Eric J. Bowman wrote:
                    > >> > Web Sockets Protocol
                    > >>
                    > >> Architecture is fine IMHO, all we need do to is stick an HTTP server on
                    > >> the "client side".
                    > >>
                    > >> We've been using the pattern for years on the "server side" and it works
                    > >> wonders for RESTful async messaging w/ HTTP.
                    > >>
                    > >> In fact, the architecture of the Web get's exponentially more
                    > >> interesting when you put an HTTP Server, Client and Cache on each
                    > >> machine - RESTful-p2p I guess.
                    > >>
                    > >> Anyway, nice post, good points, web sockets is a bit of a bag-o-shite
                    > >> but it's better than long-poll HTTP, or polling - out of interest, have
                    > >> you looked at sending HTTP messages over WebSockets? if you could then there
                    > >> would be nothing to stop you creating an HTTP Server in the browser and
                    > >> kicking the web in to almost-async-p2p mode using HTTP and RESTful
                    > >> patterns whilst waiting on proper support, and giving the opportunity to
                    > >> explore all the many challenges faced in coupling it to the presentation tier.
                    > >>
                    > >> I'm rambling now!
                    > >>
                    > >> Nathan
                    > >>
                    > >
                    > >
                    > >
                    > >
                    >
                  • Mike Kelly
                    Message 9 of 15 , Oct 16, 2010
                      Would there be much of a distinction between clients and servers on a
                      p2p web? What prevents "clients" having URIs now?

                      Someone's already mentioned Opera Unite - wouldn't exposing webhooks
                      out of the browser fit the bill?

                      Cheers,
                      Mike


                      On Sat, Oct 16, 2010 at 3:18 PM, izuzak <izuzak@...> wrote:
                      > Hey Mike,
                      >
                      > IMO, I wouldn't say they do. Webhooks are a way of doing callbacks between components that are accessible on the Web (have an HTTP URI), which are server components. However, the Web is still a client-server model where clients are not exposed on the Web.
                      >
                      > Using the PubSubHubBub protocol as an example, the entity subscribing to a Hub must pass a URI that will be used by the Hub to notify it of new posts. Entities on the Web having URIs are server components, so PSHB itself can't be used to push notifications to client components, but only to servers which then must transfer the notification to the client (somehow). If the Web was p2p, a client could be a PSHB subscriber as it would (be able to) have a (HTTP) URI.
                      >
                      > Does this make sense?
                      >
                      > Ivan
                      >
                      > --- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote:
                      >>
                      >> Do WebHooks make for a p2p web?
                      >>
                      >> If so; I guess a (registered?!) media type and/or some link relations
                      >> would be required to make it RESTful?
                      >>
                      >> Cheers,
                      >> Mike
                      >>
                      >>
                      >> On Fri, Oct 15, 2010 at 6:37 PM, izuzak <izuzak@...> wrote:
                      >> > Hi Nathan,
                      >> >
                      >> > A p2p REST style, and consequently a p2p Web, is definitely not rambling -- what it *is* is the focus of my Ph.D. research :). Having a p2p network on the HTTP level would not only solve a lot of existing communication problems of the Web, but also increase the number of application-level functionalities exposed on the Web. Think of all the functionalities that are "trapped" on the client side, disconnected from the Web, although they originated from the Web by means of navigating to a Web application.
                      >> >
                      >> > I should shut up before someone publishes these ideas in a paper before I do. :)
                      >> >
                      >> > + If you haven't already, you should check out Justin Erenkrantz's dissertation on CREST - http://www.erenkrantz.com/CREST/ which is an "evolution" of REST founded on *very* similar ideas. Guess who Justin's advisor was... :)
                      >> >
                      >> > Ivan
                      >> >
                      >> >
                      >> > --- In rest-discuss@yahoogroups.com, Nathan <nathan@> wrote:
                      >> >>
                      >> >> Eric J. Bowman wrote:
                      >> >> > Web Sockets Protocol
                      >> >>
                      >> >> Architecture is fine IMHO, all we need to do is stick an HTTP server on
                      >> >> the "client side".
                      >> >>
                      >> >> We've been using the pattern for years on the "server side" and it works
                      >> >> wonders for RESTful async messaging w/ HTTP.
                      >> >>
                      >> >> In fact, the architecture of the Web gets exponentially more
                      >> >> interesting when you put an HTTP Server, Client and Cache on each
                      >> >> machine - RESTful-p2p I guess.
                      >> >>
                      >> >> Anyway, nice post, good points, web sockets is a bit of a bag-o-shite
                      >> >> but it's better than long-poll HTTP, or polling - out of interest, have
                      >> >> you looked at sending HTTP messages over WebSockets? if you could then there
                      >> >> would be nothing to stop you creating an HTTP Server in the browser and
                      >> >> kicking the web in to almost-async-p2p mode using HTTP and RESTful
                      >> >> patterns whilst waiting on proper support, and giving the opportunity to
                      >> >> explore all the many challenges faced in coupling it to the presentation tier.
                      >> >>
                      >> >> I'm rambling now!
                      >> >>
                      >> >> Nathan
                      >> >>
                      >> >
                      >> >
                      >> >
                      >> >
                      >>
                      >
                      >
                      >
                      >
                    • Eric J. Bowman
                      Message 10 of 15 , Oct 16, 2010
                        Rick Cobb wrote:
                        >
                        > Well, one point about Mr. Postel -- he largely worked in an Internet
                        > where all machines were reachable via the Internet Protocol, and
                        > security was managed on a protocol-endpoint (port) basis. Most of
                        > the protocols he worked on were end-to-end, and the connection could
                        > be established in either direction.
                        >

                        It's also interesting to note that back in the day, bandwidth, CPU, RAM,
                        HD etc. were precious; now they're commodities. The point being, if it
                        were a good idea to create a universal application protocol essentially
                        as raw TCP access, instead of protocols with "header overhead", things
                        would have been done that way long ago.

                        http://tools.ietf.org/html/rfc1958
                        http://tools.ietf.org/html/rfc3439
                        http://tools.ietf.org/html/rfc3724

                        It seems to me that the end-to-end principle has evolved since Dr.
                        Postel's time, but still holds. Web Sockets ignores this principle.
                        Is the Web such a failure that it's time to "raze the city and rebuild
                        it" rather than repaving the streets? If the future of HTTP is binding
                        to SCTP instead of TCP, does it make sense to couple Web Sockets to TCP?
                        Isn't this exactly the "vertical coupling" described in RFC 3439?

                        >
                        > That Internet is long dead; NAT, HTTP, and RFC1918 killed it. The
                        > Web established a network that has big well-named servers that
                        > clients must bow in supplication to connect to -- and anonymous
                        > clients that can't be reached without them establishing and holding a
                        > connection of some sort.
                        >

                        I predict that Internet will come back to life in the form of IPv6, but
                        for political rather than technical reasons. The 2010 Postel Award
                        winner is Jianping Wu:

                        http://en.wikipedia.org/wiki/Dr._Jianping_Wu

                        China, due to the political need for censorship and control, requires
                        that each client node have a routable address. Politically, I prefer
                        NAT. Technologically, I prefer IPv6. I agree with you, though -- IPv4
                        begat RFC 1918, begat long-polling.

                        >
                        > There *are* legitimate applications for push. Not everything is
                        > request/response: P2P and publish/subscribe are legitimate
                        > communication patterns. That's not to say they're REST, but if "it's
                        > the architecture, stupid", you do have to look at the application
                        > communication pattern and find a way to deal with it.
                        >

                        I have to disagree. My view is that the application's goals must be
                        realized within the prevailing architecture, and the communication
                        pattern designed accordingly. RESTful pub/sub is possible using
                        request/response HTTP. RESTful P2P? On the one hand, REST has that
                        client-server constraint. OTOH, Roy has stated that Waka is a P2P
                        protocol, in the Q&A at the end of this session:

                        http://streaming.linux-magazin.de/events/apacheconfree/archive/rfielding/frames-java.htm

                        I don't see why push needs to break the request/response model (Waka
                        has the MONITOR method). Each message is still going over a network, so
                        there needs to be some sort of response code indicating success/fail.
                        All that's different is that the user-agent acts as server, and the
                        origin server acts as client. Using rHTTP, this can be just as RESTful
                        as pull.
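
                        The reversed flow above can be sketched in a few lines. This is only
                        an illustration of the idea, not any published rHTTP or Waka spec;
                        the framing and names are hypothetical:

```python
# Sketch of the role reversal in "reverse HTTP": after the user-agent has
# dialed out, the origin server sends HTTP *requests* down that connection
# and the user-agent answers each push with a status line, so every message
# has a visible success/fail response code. Framing is deliberately minimal.

def handle_push(raw: bytes) -> bytes:
    """Parse one pushed HTTP request; return the response to send back."""
    try:
        head, _, body = raw.partition(b"\r\n\r\n")
        request_line = head.split(b"\r\n")[0].decode("ascii")
        method, target, version = request_line.split(" ")
    except ValueError:
        return b"HTTP/1.1 400 Bad Request\r\nContent-Length: 0\r\n\r\n"
    if version != "HTTP/1.1":
        return b"HTTP/1.1 505 HTTP Version Not Supported\r\nContent-Length: 0\r\n\r\n"
    # A real user-agent would dispatch on (method, target) here; the point
    # is that acknowledging receipt keeps pushes verifiable end-to-end.
    return b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
```

                        The acknowledgement is the whole point: it's what Web Sockets
                        gives up by dropping response codes.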

                        If a stock-ticker app is implemented using Web Sockets, how do I know
                        I'm not missing anything due to dropped packets? Can I verify the
                        integrity of the data received, even if I get all the packets? TCP is
                        fine for this at the transmission layer, but not the application layer.
                        These seem to me like problems inherent to breaking the request/response
                        model, rather than problems specific to Web Sockets; thus, ITAS...

                        "
                        A specific case is that any network, however carefully designed, will
                        be subject to failures of transmission at some statistically determined
                        rate. The best way to cope with this is to accept it, and give
                        responsibility for the integrity of communication to the end systems.
                        "

                        http://tools.ietf.org/html/rfc1958

                        Whereas with BitTorrent, request/response doesn't matter because the
                        end result (a file of size=x and checksum=x) is known -- it's still end-
                        to-end. With Web push, no a priori knowledge of the parameters of the
                        transfer exists unless presented as protocol headers (like rHTTP,
                        unlike Web Sockets).
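
                        For comparison, the end-to-end check BitTorrent relies on fits in
                        one function. A sketch only, with a single SHA-1 standing in for
                        the protocol's per-piece hashing:

```python
# When size and checksum are known a priori, the receiving endpoint can
# verify integrity itself (RFC 1958's advice), regardless of how the
# bytes arrived or in what order the pieces were requested.
import hashlib

def verify_transfer(data: bytes, expected_size: int, expected_sha1: str) -> bool:
    """True iff the received bytes match the announced size and checksum."""
    return (len(data) == expected_size
            and hashlib.sha1(data).hexdigest() == expected_sha1)
```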

                        >
                        > HTTP, essentially the only important protocol in the context of the
                        > current Internet, makes it very hard to do a good job on P2P or
                        > pub/sub. (...)
                        >

                        Very hard, yes, but not impossible (except P2P, HTTP isn't a P2P
                        protocol by any stretch). Which is why I object to the Web Sockets
                        notion that HTTP must be replaced in order to do push, particularly if
                        the alleged benefit is illogical, and the potential consequences severe.
                        As you point out, the requirement of a hanging connection is a
                        limitation imposed not by HTTP but by RFC 1918, so replacing HTTP isn't
                        the answer (without thoroughly documenting rationale, first).

                        http://tech.groups.yahoo.com/group/rest-discuss/message/8314
                        http://www.dehora.net/journal/2007/07/earned_value.html

                        Just some interesting posts about working with the Web instead of
                        against it.

                        >
                        > Roy's postings about the economics of scale of these communication
                        > patterns are sensible (though Facebook seems to have been able to
                        > monetize pub/sub pretty well), but people are going to need to
                        > implement them.
                        >

                        (Not to jump all over your example, I was just looking for any excuse
                        to bring up Fb...)

                        I wouldn't hold Facebook up as an example; there's more to REST than
                        scaling, which Fb doesn't actually do very well -- judging from their
                        reputation for flaky service, and the fact that it's standard practice
                        at Fb (and most other Web 2.0 sites) to disable features during peak
                        usage. I don't even know that Fb is monetized, vs. being a VC funding
                        pit... In fact, Facebook wins my inaugural ITAS Award -- to be granted
                        intermittently based on (de)merit:

                        http://blogs.wsj.com/digits/2010/09/24/what-caused-facebooks-worst-outage-in-four-years/

                        There's a reason HTTP has a 500 error, and why the purported benefit of
                        not exposing errors to *some* users is a logical fallacy. Is total
                        system failure the automatic penalty for coding typos in Web Sockets,
                        due to the lack of *any* response codes, let alone for error handling?
                        From REST, 2.3.7:

                        "
                        Reliability, within the perspective of application architectures, can
                        be viewed as the degree to which an architecture is susceptible to
                        failure at the system level in the presence of partial failures within
                        components, connectors, or data. Styles can improve reliability by
                        avoiding single points of failure, enabling redundancy, allowing
                        monitoring, or reducing the scope of failure to a recoverable action.
                        "

                        I don't even have to look at Facebook, the failure analysis is enough
                        basis for me to wave my magic guru wand and declare NOT REST. RESTful
                        systems don't DDoS themselves! Internet architecture allows for
                        monitoring. Web Sockets doesn't, nor does it "reduce the scope of
                        failure to recoverable actions" due to its cross-layer coupling (RFC
                        3439).

                        >
                        > Now, this isn't to defend websockets -- but to say that if you're
                        > going to accept a non-addressable Internet, people will need to
                        > invent things like it.
                        >

                        Well, sure. But the issue is what problem is Web Sockets trying to
                        solve? There's no workaround to hanging connections, all that can be
                        done about them is make them scale better -- which Web Sockets doesn't
                        do. I think rHTTP is as fine a solution to this problem as is
                        possible, short of IPv6 becoming ubiquitous and allowing pub/sub via
                        server-stored IP addresses.

                        >
                        > What we did at KnowNow (remember Rohit Khare and Adam Rifkin?) is
                        > build a tiny web server in Javascript. The implementation of
                        > resource handlers were (roughly) Javascript functions; the dominant
                        > media type was form/x-www-urlencoded. As we got better at writing
                        > this server, it got more RESTful. But the connection itself was
                        > always a tunnel; there was no alternative. Whether we implemented
                        > that with long-poll or just a big GET with function callbacks, it was
                        > certainly more RESTful than the websocket approach -- but it's not
                        > like somebody could easily add an HTTP security system on those
                        > tunnels.
                        >

                        Yes, actually I just came across a KnowNow reference as I was typing
                        this response:

                        http://lists.w3.org/Archives/Public/www-tag/2002Apr/0242.html

                        That thread discusses the Web architecture as being one in which URIs
                        are used to address resources. In Web Sockets, one URI starts sending
                        multiple, unrelated messages -- each of which seems like a different
                        resource to me, and should therefore be addressable via separate URIs.
                        Nebulous transmissions aren't bookmarkable, or even distinguishable
                        from one another.

                        Can't HTTP security be added to rHTTP, or am I missing something?

                        >
                        > I'm perfectly willing to admit that systems that use P2P or
                        > publish/subscribe communication patterns aren't REST, but it's not
                        > like anybody out there is generally opening their networks to XMPP,
                        > BEEP, AMQP.... Nor are they providing mechanisms (well, other than
                        > email addresses, hi, Mr. Spam) for addressing real endpoints so you
                        > don't have to hold request/response HTTP connections open in order to
                        > implement them.
                        >

                        Well, I'm not willing to say that pub/sub can't be RESTful, just that
                        I've yet to see it done that way (using redirection). RESTful P2P, I
                        don't know... But my post wasn't limited to REST (nowhere else is
                        appropriate for general Internet architecture discussion). The other
                        protocols you mention all represent architectural styles which at least
                        conform to the fundamentals of the Internet, rather than being in
                        active denial of them, like Web Sockets.

                        -Eric
                      • Eric J. Bowman
                        Message 11 of 15 , Oct 16, 2010
                          Nathan wrote:
                          >
                          > In fact, the architecture of the Web get's exponentially more
                          > interesting when you put an HTTP Server, Client and Cache on each
                          > machine - RESTful-p2p I guess.
                          >

                          I don't see how that's P2P, or what P2P has to do with Web Sockets...

                          >
                          > Anyway, nice post, good points, web sockets is a bit of a bag-o-shite
                          > but it's better than long-poll HTTP, or polling - out of interest
                          > have looked at sending HTTP messages over WebSockets?
                          >

                          It still is long-polling, AFAIC. The WG has identified using Web
                          Sockets to transfer HTTP frames as a security issue. Unfortunately,
                          the solution being discussed is to hash the payload. Solves the
                          problem, but represents a fundamental break with Internet architecture
                          in that it requires developers to use a library to develop/debug their
                          APIs (not seen as a problem except by a minority of participants). I
                          hope I don't have to explain my problem with that, to this audience...

                          I don't see what advantage a browser-based httpd has over rHTTP.

                          -Eric
                        • Eric J. Bowman
                          Message 12 of 15 , Oct 16, 2010
                            Mike Kelly wrote:
                            >
                            > Would there be much of a distinction between clients and servers on a
                            > p2p web? What prevents "clients" having URIs now?
                            >

                            RFC 1918. What's your client IP address? Can I route to it? Will it
                            be the same five minutes from now? In many cases, the answer is, "I
                            don't know, no, and probably not." There are plenty of websites out
                            there which echo a visitor's IP address; most of 'em get my dedicated
                            IP address wrong because of how the NAT at my ISP is configured.
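
                            A quick way to see the problem: RFC 1918 space is trivial to
                            test for, and any client sitting in it can't be the authority
                            behind a stable URI. A sketch:

```python
# RFC 1918 private ranges: addresses here aren't globally routable, so a
# client behind NAT can't anchor a URI that the rest of the Web can
# dereference.
import ipaddress

RFC1918_NETS = [ipaddress.ip_network(n)
                for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls in RFC 1918 private address space."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918_NETS)
```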

                            -Eric
                          • Eric J. Bowman
                            Message 13 of 15 , Oct 16, 2010
                              Mike Kelly wrote:
                              >
                              > Do WebHooks make for a p2p web?
                              >

                              I don't see the relation between pub/sub and P2P.

                              >
                              > If so; I guess a (registered?!) media type and/or some link relations
                              > would be required to make it RESTful?
                              >

                              "RESTful Webhooks" is a fine HTTP API, but not a REST API. To be
                              RESTful would require a rewrite from scratch. Roy had the money quote
                              on this, too, but I couldn't find it. Something about marketingspeak,
                              IIRC.

                              -Eric
                            • Eric J. Bowman
                              Message 14 of 15 , Oct 16, 2010
                                Jan Algermissen wrote:
                                >
                                > Beware though that all these pubsubby[1] approaches make the system
                                > much more difficult to understand and much less easy to evolve.
                                >

                                IOW, violates the principles of simplicity, reliability, visibility,
                                reusability and scalability, in addition to evolvability.

                                >
                                > I'd personally go a very long way trying to get by with polling.
                                >

                                +1

                                Using redirection such that payloads are cacheable, when using long-
                                polling, impacts scalability to a much lesser extent than sending 200
                                OK or using Web Sockets. Whether or not the simplicity tradeoff is
                                appropriate for the system under development, is a decision for the
                                developer of the system.
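
                                The redirection pattern can be sketched as follows; the
                                /updates/<n> naming and the 204-for-nothing-yet shortcut
                                are illustrative assumptions, not anyone's actual API:

```python
# Sketch of long-polling with redirection: the poll URI never carries the
# payload itself. Once an update exists, it answers 303 with a versioned
# Location, and *that* representation is cacheable by intermediaries.

def poll_response(latest_version):
    """Return (status, headers) for a long-poll request.

    latest_version is None while nothing new has arrived; a real server
    would hold the connection open instead of answering 204 immediately.
    """
    if latest_version is None:
        return 204, {}
    # The redirect target names an immutable representation, so caches
    # can serve it to every other client polling for the same update.
    return 303, {"Location": "/updates/%d" % latest_version}
```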

                                >
                                > [1] Been there, done that :-)
                                > http://search.cpan.org/~alger/Apache-MONITOR-0.02/
                                >

                                Has the MONITOR method ever been documented? I've vaguely heard of it,
                                but my inability to link to a definition is why I used IDLE as an
                                example.

                                -Eric
                              • Eric J. Bowman
                                Message 15 of 15 , Oct 17, 2010
                                  Mike Kelly wrote:
                                  >
                                  > Would there be much of a distinction between clients and servers on a
                                  > p2p web?
                                  >

                                  Even in a P2P protocol like BitTorrent, each discrete transfer still
                                  has a client and a server, just like how an intermediary cache is
                                  either a client or a server depending on what it's doing. I think the
                                  right question is whether there's still a distinction between user-
                                  agent and origin server. I think the answer to that is yes -- the
                                  user-agent makes a request from the origin server (tracker). When
                                  serving files, it's acting as an intermediary in the transactions
                                  between some other user-agents and the origin server (tracker).

                                  (Or perhaps there are multiple origin servers, i.e. the tracker and
                                  whatever systems are seeding the actual file. It would be interesting
                                  to use the approach in Roy's thesis to describe a BitTorrent
                                  architectural style. Which is exactly what the starting point should
                                  have been before embarking on Web Sockets -- even if you're not a fan
                                  of REST-the-style, there's a methodology there for the disciplined
                                  development of new Web protocols. It's possible to add other styles
                                  besides what's in Chapter 3, as appropriate, to introduce constraints
                                  which aren't in REST, based on the desirable properties of prior art.)

                                  Or at least this is the explanation I come up with, to square Roy's
                                  statement that Waka is P2P with the client-server constraint. There's
                                  still independent evolvability of components, and separation of user
                                  interface (selecting a torrent from a tracker) from data storage, which
                                  are the purposes of the constraint. If the purpose of the constraint
                                  is met, claiming P2P is a violation would be nitpicking semantics.

                                  In HTTP long-polling, the origin server is essentially a tracker, in
                                  that it knows a bunch of clients are after the same data. In a P2P
                                  protocol, this could work much like BitTorrent, where the tracker
                                  orchestrates the clients to distribute the response amongst themselves
                                  after seeding a few power-users. But I don't think any amount of
                                  scripted tunneling can make HTTP work like that in existing browsers,
                                  Web Sockets or otherwise. It would be interesting to be proved wrong on
                                  that, however.

                                  -Eric