Re: [decentralization] Re: Common P2P Identifiers
- On Sat, Jun 08, 2002 at 10:03:26AM -0400, Gary Lawrence Murphy wrote:
> >>>>> "Z" == Zane Thomas <zane@...> writes:
> Z> Besides that, by then I expect we'll have 256 bit or 512 bit hashes.
> >> For SHA-1 (160 bits) it would take 2^80 random documents to
> >> have even a 50% chance of collision between the hashes of any
> >> of them.
> Z> If I got a collision under those conditions I'd frame it.
> Seems to me I've heard this argument before. DNS, XML elements ...
> What if, say in 2125, the galactic Internet started showing up more
> collisions than we can frame? Couldn't we just introduce name spaces?
> It seems counter productive to exhaustively plan for even the remote
> contingency; we can drive off that bridge when we get to it.
With a hash that size, it starts getting to the point where you'd have
to have one document for every electron in the known universe before
you'd have even a tiny chance of a collision.
I'm being a bit crazily paranoid, but my ideal hash length for this
purpose would be 1024 bits. I'd feel safe with a truly universal
document reference mechanism with a 1024 bit hash. As soon as a
well-tested 1024 bit one-way hash appears, I'll use it.
If I'm not mistaken, neither SHA-1 nor MD5 is particularly cheap to
compute, though their compression functions actually descend from MD4's
design rather than from DES. Does anybody know of any good one-way
hashes that are cheap to compute?
Have fun (if at all possible),
"It does me no injury for my neighbor to say there are twenty gods or no God.
It neither picks my pocket nor breaks my leg." --- Thomas Jefferson
"Go to Heaven for the climate, Hell for the company." -- Mark Twain
-- Eric Hopper (hopper@... http://www.omnifarious.org/~hopper) --
- blanu said:
> An authority is necessary for both the algorithm and the text used. The
> security of this scheme is based on the security of the signatures used
> to verify the algorithm and text which is distributed.
Actually, the dictionary is only needed for creation of names. I believe
that v.01 of the algo required the input text to be carried around,
because names mapped to numbers via their array index, but the .02 version
does mod 256 against a hash to get the corresponding number.
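The encode/decode asymmetry described above can be sketched as follows. This is my reconstruction of the .02-style scheme, assuming SHA-1 as the underlying hash; the function names and the word list are hypothetical, not from blanu's actual code:

```python
import hashlib

def number_for(word: str) -> int:
    """Decode: mod 256 against a hash of the word, as in the .02
    scheme. Needs no dictionary, so it can run on an embedded device.
    (The last digest byte equals the digest value mod 256.)"""
    return hashlib.sha1(word.encode()).digest()[-1]

def word_for(number: int, dictionary: list[str]) -> str:
    """Encode: scan the dictionary for a word that decodes to the
    given number. This is the expensive direction, and the only one
    that needs the full word list."""
    for word in dictionary:
        if number_for(word) == number:
            return word
    raise ValueError("no word in this dictionary maps to that number")
```

This makes the cost split concrete: decoding is one hash per word, while encoding is a dictionary search, which is exactly why the input text is only needed at name-creation time.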
I've noticed the same pattern with other schemes. That is, generating new
names is _much_ more expensive than decoding them, and the input text is
only needed at encoding time. Any mapping algorithm that has this feature
is a hell of a lot more useful than otherwise, because a decoder should be
able to work in embedded devices.