RE: p2p working group/standards
- In my annoying tradition of responding to digests monolithically:
: How decentralized? Are DNS NAPTR and SRV records not enough?
: UDDI? JINI? I'd like to hear about other
: people's requirements. Here are mine:
: 1. ability to autonomously publish a named resource location
: 2. ability to reliably locate redundant instances of a named resource
: Hmm, so far DNS SRV and NAPTR records seem to do the job just fine.
: It's ubiquitous and cheap and here today.
: 3. ability to autonomously publish metadata describing that resource
: All this requires is a standardized location relative to the result of the SRV lookup.
: 4. ability to efficiently search (i.e. as locally as possible)
: for resources matching my metadata constraints
: This is more ambitious, probably premature to standardize:
: Requires too many layers of agreement. Implementation first, standardization later.
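To pin down what requirement 2 buys us: RFC 2782 has SRV clients sort candidates by ascending priority and then pick within a priority band by weight, which is what gives you redundant instances nearly for free. A rough Python sketch of that client-side selection (record values invented for illustration):

```python
import random

def order_srv_targets(records, rng=random):
    """Order SRV records per RFC 2782: ascending priority, then
    weighted-random selection within each priority group.
    records: list of (priority, weight, target, port) tuples."""
    by_priority = {}
    for rec in records:
        by_priority.setdefault(rec[0], []).append(rec)
    ordered = []
    for priority in sorted(by_priority):
        group = list(by_priority[priority])
        while group:
            total = sum(r[1] for r in group) or 1
            pick = rng.randrange(total)
            running = 0
            for r in group:
                running += r[1]
                if pick < running:
                    chosen = r
                    break
            else:
                chosen = group[0]  # all weights zero: take any
            ordered.append((chosen[2], chosen[3]))
            group.remove(chosen)
    return ordered
```

Lower-numbered priorities are always tried first; weights only shuffle the order within a band.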
Lucas Gonze declaimed:
: The lookup service itself has to be decentralized. There's no point in locking in to
: a centralized directory. So:
: ability to find a decentralized service in a decentralized manner.
and Clay (you know who) uttered this dewdrop:
: But Lucas, there's no point in _not_ locking into a centralized
: directory, if that's helpful.
While I see a root-less DNS as feasible, I'm not so sure it's
desirable: The current system works really well (despite protocol
design bugs and the overwhelming prevalence of an *awful*
implementation -- Vixie BIND) and is decentralized in most senses. It
is not decentralized in the sense that *control* is centralized, which
is a distinct liability, especially within the value system of the p2p
community, but, hey, it is ubiquitous, which counts for a lot. I
don't see political brokenness as being bad enough to justify the
distraction of (potentially quixotic) effort required to fix that.
: Before implementations, I vote for a list of requirements, which is what we are doing
Great, please add to/adapt the list!
There devolved from the mind of Jacobsen@... this semantic exchange:
: > 1. WBXML tokenized compression
: > Anybody know of a library to do this?
: The following might help:
Yeah, XMill is cool. Good design. Technically superior to the
alternatives. However. It's not WBXML, which has the advantage of
being a de jure standard. And it lacks the ubiquity of gzip, whose
universal deployment makes gzip the de facto standard.
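To make the ubiquity point concrete, here's a toy zlib sketch (the XML is invented) showing that plain deflate-style compression already does well on repetitive XML, which is exactly the case WBXML targets:

```python
import zlib

# Toy, highly repetitive XML document: 50 near-identical elements.
xml = b"<playlist>" + b"".join(
    b'<track id="%d"><title>Song %d</title></track>' % (i, i)
    for i in range(50)
) + b"</playlist>"

# Level 9 = maximum compression; repetitive markup collapses well.
compressed = zlib.compress(xml, 9)
print(len(xml), len(compressed))  # compressed is a small fraction of the raw size
```

No tokenization dictionary to agree on, no schema awareness: just the codec everyone already ships.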
Wesley Felter observed:
: >: I think peer discovery and directories (i.e. name->IP mappings, where
: >: name may be user@host style (Jabber, SIP), or free-form, or a public
: >: key) are something that could definitely be shared among systems.
My interest is in addressing content. Hence I want to use hashes
as identifiers. But yeah, for some things there is an appropriate
emphasis on naturally named resources. In those cases we need two-way
lookups. In the absence of canonicalization of names (the
impossibility (in the general case) of which is the motivation for
the use of hashes), a name<->hash mapping is required. Doing this in
a way that satisfies the requirements of diverse applications is
a *hard* problem.
E.g.: For an application which utilizes distinctly named local
instances of a resource, I need different mappings depending on the
context of my operation. In one case I will want a local name. In
another context, I will want a remote name (under environmental
constraints, e.g. location constraints). In a subcase of that
context, I will want an authoritative name.
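Here's a toy sketch (Python; all names invented) of the two-way lookup shape I mean: content keyed by its hash, with names kept as a separate, many-to-one index, so a local name and a remote name can resolve to the same bytes:

```python
import hashlib

class ContentStore:
    """Content is keyed by its SHA-1 digest; human names are a
    separate, many-to-one index onto those digests."""

    def __init__(self):
        self.blobs = {}   # hex digest -> bytes
        self.names = {}   # name -> hex digest

    def put(self, data: bytes) -> str:
        """Store content under its own hash; returns the digest."""
        digest = hashlib.sha1(data).hexdigest()
        self.blobs[digest] = data
        return digest

    def bind(self, name: str, digest: str) -> None:
        """Bind a context-dependent name to a content hash."""
        self.names[name] = digest

    def resolve(self, name: str) -> bytes:
        """name -> hash -> content."""
        return self.blobs[self.names[name]]

    def names_for(self, digest: str):
        """Reverse lookup: all names bound to one piece of content."""
        return [n for n, d in self.names.items() if d == digest]
```

The hard part I'm pointing at is not this data structure; it's deciding, per context, *which* of `names_for()` to hand back.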
It was impolitic of me to include this in my response to M. Felter:
: > I hate name@host. Because I do not live inside a box. I am not
: > @host. I am @large.
Let not my hyperbole scandalize, please: I acknowledge the utility
of names with respect to host location in many instances.
: >: 3. Common crypto framework
I'm not even sure this is a good idea. Justification?
I'm inclined to think that keeping crypto channels confined to
a single application avoids a lot of nasty problems with trust.
If you mean PKI-type things, then I withdraw my complaint.
But then SSL isn't the ticket anyhow.
: If it goes through
: the IETF, then anyone who uses a different P2P routing protocol will
: likely be accused of not supporting standards...
Only where interoperability is a concern, however. And BXXP is
transport, which brings us to...
coderman@..., who pronounced:
: It is a datagram transport running atop UDP/IP. It
: provides virtual connection multiplexing, and supports NAT, and reliable
: transfer, although the default is unreliable.
Firewall transit is best via http. In the context of standardizing
common infrastructure, whenever transport dependence is required, I
have to think that http has got to be a given. But large parts of
common infrastructure will be transport-independent.
: This is a messaging
: protocol, not a transport protocol (although, it can be used for
: transport between two NAT peers as the only option)
Do you claim to have solved the NAT-2-NAT problem in a sufficiently
general way to be of wide interest?
: This is also intended to be a persistent connection protocol, with
: support for reactivating connections after an IP/port change (due to
: dial-up, NAT, etc).
: Feel free to let me know of any questions you may have.
I always thought that TCP/UDP (which is just like TCP/IP, but with an
extra 8 octets of header per packet, and has portable, free, user-space
implementations in the wild) would be the way to use UDP NAT transit for
transport, because it has a well-developed theory and longstanding
practice with respect to things like congestion control, et filia.
If you please, I'd like to ask that you consider this suggestion
against DTCP, and comment to the list, on the basis of your unique
expertise with DTCP.
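For scale, the overhead I mean is tiny. This is NOT TCP/UDP's actual wire format, which I don't have in front of me; it's an assumed 8-octet sequence/acknowledgment header, purely to illustrate framing a reliable stream inside UDP datagrams:

```python
import struct

# Hypothetical framing: 32-bit sequence number + 32-bit ack number,
# i.e. 8 extra octets per packet, prepended to the UDP payload.
HEADER = struct.Struct("!II")  # network byte order

def frame(seq: int, ack: int, payload: bytes) -> bytes:
    """Build one datagram: header followed by payload."""
    return HEADER.pack(seq, ack) + payload

def unframe(datagram: bytes):
    """Split a datagram back into (seq, ack, payload)."""
    seq, ack = HEADER.unpack_from(datagram)
    return seq, ack, datagram[HEADER.size:]
```

Everything interesting (retransmission timers, congestion windows) sits above this, which is exactly why a well-theorized design matters more than the header layout.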
: Jacobsen wrote:
: > Are there any LGPLed or similarly licensed P2P C/C++ libraries in existence ?
I suggest that an existing well-established open-source
general-purpose, cross-platform C++ library should be the nexus for
such a standardization effort. I have in mind the Bayonne project's
CommonC++. It would provide many infrastructural elements not
specific to P2P, thus expanding the standards base API, with
beneficial convergence for all. If, of course, the Bayonne telephony
project organizers are amenable...
: > Furthermore we would love to have the possibility to use napster-like caches
: > too.
With more detailed requirements, I could judge whether code I have
written is amenable to use as a basis for such a thing.
: ... by allowing the agents to move freely but to co-ordinate
: communication in a central message board, the agents' performance, and
: particularly the evolution of their performance, worked better than
: when they were purely decentralized.
The example as presented is very weak as argument for centralized
services, Clay. Group coordination by SMTP (a peer protocol) is
pandemic. Decentralized message boards are eminently feasible, albeit
they lack a driving motivation.
Chris Cummer, fellow openColan and all-around hail-fellow-well-met, wrote:
: It appears that the discussion is becoming centralized peer discovery vs.
: completely decentralized discovery. Neither is optimal. If they were we'd
: all be using one or the other. The question is: in what circumstances is one
: preferable to the other?
When you can't get the centralized service to cooperate? Again, I
like DNS, and will propose, in order to make it cooperative:
Let the community establish a domain, say peer.net (I haven't checked),
for use as a common namespace for service advertisements, open to all
comers on a charter-enforced first-come/first-served basis. (Probably
requires a usage agreement to the effect that this is a private
namespace, and all parties publishing to it agree so, in order to
avoid the whole WIPO rathole, but a click-through license mechanism
would make this effectively transparent).
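A sketch of what an advertisement owner name could look like under such a zone (Python; peer.net is the hypothetical community domain from above, and the label rule is a simplified version of DNS hostname syntax):

```python
import re

SHARED_ZONE = "peer.net"  # hypothetical first-come/first-served community zone

# Simplified DNS label: lowercase letters/digits/hyphens, <= 63 octets,
# no leading or trailing hyphen.
LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

def advertisement_name(service: str, proto: str, holder: str) -> str:
    """Build an SRV-style owner name for a service advertisement
    published by `holder` under the shared zone."""
    for label in (service, proto, holder):
        if not LABEL.match(label):
            raise ValueError("bad DNS label: %r" % label)
    return "_%s._%s.%s.%s" % (service, proto, holder, SHARED_ZONE)
```

Whoever registers `alice` first owns everything under `alice.peer.net`; the charter, not the registry software, is what keeps that fair.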
Dan Moniz, an otherwise brilliant man, and fellow openColan put forth:
: I'm hearing is that it's far too early to proceed standardizing on things,
: and I agree with that.
I'm hearing otherwise. I'm hearing that we need to define
requirements in order to find out where sufficient true consensus
exists for such an effort.
Justin Chapweske, also a fellow openColan, with a trick or two up his sleeve:
: Personally DNS will work just fine for me but
: a decentralized approach would be very interesting from a technical standpoint.
I think the way to do it is by reputation and peer review. Volunteer
nodes will root different parts of the network, redundantly, and
polling trusted peers will give a bunch of folks' interpretation of
root. This is statistically hardened against spoofing by requiring
good service in order to gain reputation. Yeah, it's hierarchy, but
it is decentralized, and every node is equally able to do root
service, so that all peers are created equal -- what they do with
that equality is up to them.
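A toy sketch of the polling step (Python; the threshold and answer shapes are invented): accept a claimed root only on clear majority among your trusted peers, so a lone spoofer among honest nodes can't swing the result:

```python
from collections import Counter

def poll_root(peer_answers, min_agreement=0.5):
    """Ask trusted peers who roots a namespace partition.
    peer_answers: list of root identifiers, one per peer polled.
    Returns the majority answer, or None if no clear majority."""
    if not peer_answers:
        return None
    votes = Counter(peer_answers)
    root, count = votes.most_common(1)[0]
    if count / len(peer_answers) > min_agreement:
        return root
    return None
```

The statistical hardening is the threshold: the more trusted peers you poll, the more reputations an attacker must have built to win a majority.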
: Jini....too bad its not open source though.
Ugh. Local, not global. And it is a bizatch to use from C/C++.
dave@..., with typical flourish, sang out:
: I think P2P infrastructure, if there ever is such a thing, should be open
: and documented with lots of choice.
: No one religion, no one right way to do it, no one who is holier than all
We need enough focus to standardize, if we want to leverage common
efforts. I'm pretty sick of reinventing wheels, each one of which
messes with my time-to-market.
: The knee-jerk that everything open source is better than everything else is
: a reality distortion field.
But it is the best way to get these common infrastructural issues dealt with
in a mutually beneficial way. Best practice, if you will.
(Now I will break down and switch decentralization digestion off, so
that y'all won't face any more monoliths such as this from my
direction.)
Quidquid cognoscitur, cognoscitur per modem cognoscentis.
- Justin Chapweske said:
> ... switch to SHA-1.
Probably going to SHA-1 isn't too big of a problem. I'll bring it up with
those that I know.
> The biggest group that I haven't yet talked to about this is the Gnutella
> guys, but I'm sure they'd be into it as well. Any Gnutellians on the
> list?
Interestingly, there are ways to add file hashes within the existing
protocol specifications - it should even be backwards compatible.