Re: [decentralization] NAT2NAT & TCP -- useful for P2P?
On Wed, Feb 28, 2001 at 03:08:44PM -0600, Tony Kimball wrote:
> Quoth Justin Chapweske on Wednesday, 28 February:
> : Tony, I luv ya, but I think you're smoking crack. The way I see it the
> : ordering is a simple by-product of TCP's reliability combined with the
> : congestion control. If your protocol is to have congestion control
> : (which is provided by the sliding window) as
> : well as guaranteed delivery then I doubt very much that it'd be any more
> : efficient than TCP.
>
> I wouldn't hazard how *much* more wire-efficient it would be without
> trying it, but it should be more wire-efficient, because you don't
> have to ack anything until you can predict resource exhaustion.

For messages requiring reliable delivery, but not requiring an
application-level ACK of any kind, TCP is just as efficient or more
efficient than UDP.

For messages requiring an application-level ACK, such as a
search request, the answer and the ACK can be guaranteed to be combined
into one in UDP, whereas they may (and probably will) be separate in
TCP. Also, TCP will ACK even if no reply is required, as for a search
request for which you have no matching index entries.

That's where the talk of 'all those extra acks' comes from.

This means that UDP has some possible benefits even in a
non-massively interconnected P2P network.
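The combined answer-plus-ACK pattern described above can be sketched in a few lines. This is a toy illustration, not anything from the thread: the message format, the "search" semantics, and the loopback exchange are all invented.

```python
# Sketch of the UDP pattern above: the search reply itself acts as the
# acknowledgement, so no separate ACK packet is ever sent.  A real peer
# would retransmit the query if no reply arrived before the timeout.
import socket
import threading

def serve_one(sock):
    """Answer a single search request; the reply doubles as the ACK."""
    data, addr = sock.recvfrom(1024)
    query = data.decode()
    # One packet carries both the answer and the implicit acknowledgement.
    sock.sendto(f"RESULT for {query}: 3 hits".encode(), addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # ephemeral port
threading.Thread(target=serve_one, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)
client.sendto(b"QUERY foo", server.getsockname())
reply, _ = client.recvfrom(1024)       # receiving this IS the ACK
print(reply.decode())                  # RESULT for QUERY foo: 3 hits
```

Under TCP, the same exchange would additionally generate transport-level ACK segments in both directions, which is where the "extra acks" go.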

> I wouldn't use the windowing mechanism to implement retransmit queues
> for a non-stream protocol. That involves an abstraction layer which I
> would eschew for a datagram protocol.

The swarmcast model would make a windowing mechanism superfluous
for large files. I always thought windowing was stupid for file
transfers anyway. Why maintain a separate buffer containing a chunk of
your file when you can perfectly well grab any re-requested chunks off
of your disk anyway? Let the VM subsystem handle what should be in
memory at any given time. That's what it's good at.
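A minimal sketch of that buffer-free retransmit idea, assuming a chunked file protocol (the chunk size and helper name here are invented): a re-requested chunk is simply re-read at its offset, and the kernel's page cache decides what stays in memory.

```python
# Sketch of the "no retransmit buffer" idea above: a re-requested chunk
# is re-read from the file at the requested offset rather than held in a
# userspace buffer; the VM subsystem keeps hot pages cached.
import os
import tempfile

CHUNK_SIZE = 4096

def read_chunk(fd, index):
    """Fetch chunk `index` straight from disk -- works the same whether
    it is the first request or a retransmit."""
    return os.pread(fd, CHUNK_SIZE, index * CHUNK_SIZE)

# Build a small demo file of three distinct chunks.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"a" * CHUNK_SIZE + b"b" * CHUNK_SIZE + b"c" * CHUNK_SIZE)
    path = f.name

fd = os.open(path, os.O_RDONLY)
first = read_chunk(fd, 1)     # original request for chunk 1
again = read_chunk(fd, 1)     # "retransmit" -- no buffer was kept
assert first == again == b"b" * CHUNK_SIZE
os.close(fd)
os.unlink(path)
```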

> Wire-efficiency isn't my main interest in unordered datagrams. That
> interest lies in avoiding kernel resource demands of stream semantics.
> When those are gone, you have the freedom to build a totally connected
> (1) network without connection (2) latencies.

I still think there is a lot of value in peers maintaining some
shared state about each other beyond IP address and port. For example,
in the "Really, I'm connected to absolutely everybody else in the entire
world!" model, it would take at least n - 1 packets (where n is the total
number of hosts) transmitted to a node before everyone found out it was dead.
If you ever put yourself on such a network, your computer may still be
getting packets hours or even days after you've closed down your peer.
A network that has fewer interconnections makes it easier to
localize and efficiently distribute information about a particular host.
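To make that n - 1 figure concrete, here is a toy calculation (the function and the numbers are invented for illustration): with full connectivity, every other host sends at least one doomed probe, while a sparsely connected network wastes only as many packets as the dead node had neighbors.

```python
# Toy arithmetic for the point above: packets aimed at one dead node
# before every peer tracking it has noticed.  Assumes each peer probes
# once; real protocols would retry, widening the gap further.
def wasted_probes(total_hosts, neighbors_per_host):
    """Packets sent to one dead node, given how many peers track it."""
    return min(neighbors_per_host, total_hosts - 1)

fully_connected = wasted_probes(1_000_000, 999_999)  # everyone knows everyone
sparse = wasted_probes(1_000_000, 20)                # 20 neighbors each
print(fully_connected, sparse)  # 999999 20
```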

Have fun (if at all possible),

The best we can hope for concerning the people at large is that they
be properly armed. -- Alexander Hamilton

-- Eric Hopper (hopper@... http://www.omnifarious.org/~hopper) --
Justin Chapweske wrote:
> interesting techniques used in the (I think) TCP-Reno implementation where
> it can guestimate congestion w/o packet loss based off of latencies.

Some of this happens automatically in that new packet transmissions are
triggered by the receipt of acks, which will be delayed if queues are growing
in the network.
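That "triggered by the receipt of acks" behaviour, often called ACK clocking or self-clocking, can be sketched as a toy model (all parameters are invented, and this is not how any real TCP stack is structured): each packet beyond the initial window is sent only when an earlier packet's ACK returns, so longer queueing delay automatically stretches the send rate with no explicit congestion signal.

```python
# Toy model of TCP's self-clocking, per the reply above: a sender with a
# fixed window transmits a new packet only when an ACK comes back, so
# growing queue delay in the network stretches the sending schedule.
def send_times(num_packets, window, ack_delay):
    """Return the time each packet is sent.  The first `window` packets
    go out at t=0; every later packet waits for the ACK of the packet
    one window earlier, which arrives `ack_delay` after it was sent."""
    times = []
    for i in range(num_packets):
        if i < window:
            times.append(0.0)
        else:
            times.append(times[i - window] + ack_delay)
    return times

fast = send_times(8, window=2, ack_delay=1.0)   # uncongested path
slow = send_times(8, window=2, ack_delay=3.0)   # queues building up
print(fast)  # [0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 3.0, 3.0]
print(slow)  # [0.0, 0.0, 3.0, 3.0, 6.0, 6.0, 9.0, 9.0]
```

Tripling the ACK delay triples the time to drain the same eight packets, which is the "automatic" congestion response the reply is pointing at.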