Re: [syndication] RFC: myPublicFeeds.opml
Oct 14, 2003
Oh please, take the high road, Dave.
The idea sounds cool at first blush, but with even the /tiniest/ bit of
examination it comes up lacking. What you've gotten here is useful feedback
from /very/ informed individuals. But apparently, just as you've done
*countless* times before, you're not at all interested in actually listening to
what people have to say. You're just coming up with a half-baked idea,
pretending to discuss it, completely ignoring feedback, insulting those who try,
and then running off and implementing it anyway (usually just to jump the gun on
a worthwhile effort).
On the elementary school playground this would be called "taking your ball and
going home". This as opposed to the usual "how can I get this guy fired" and
other bullying tactics you've tried using before. Some of us are just wise to
your abuse and not willing to let it slide.
The idea of good site-wide indexes backed up with solid best practices for
discovery and retrieval is something we can all get behind and support. There
are a few little details worth hashing out first.
----- Original Message -----
From: "Dave Winer" <dave@...>
Sent: Tuesday, October 14, 2003 12:37 PM
Subject: Re: [syndication] RFC: myPublicFeeds.opml
> You are one sick dude Bill.
> ----- Original Message -----
> From: "Bill Kearney" <ml_yahoo@...>
> To: <email@example.com>
> Sent: Tuesday, October 14, 2003 12:19 PM
> Subject: Re: [syndication] RFC: myPublicFeeds.opml
> > > inventing names and persuading the world to use them isn't going to
> > > scale,
> > Not to mention being needlessly anglo-centric with the naming conventions.
> > > I fear this'll get us into a situation where I no longer get to choose
> > > how to manage my
> > > own Web namespaces. Eg. I might decide to map /sitemap to a page about
> > > sitemaps, only to discover that in 2005 the sitemapping community
> > > declare this to be the 'discovery page' for XML sitemap formats. Ditto
> > > /mp3 or whatever.
> > Good point; why harm future choice by imposing this sort of restriction?
> > > robots.txt was the wrong way to do it. If folk _really_ want to go this
> > > route I'd suggest using the bit of the namespace already grabbed by
> > > robots.txt, ie. robots.feeds.txt etc., to keep things in the same
> > > "area". But it's still pretty gross.
> > Heh, robots.xml and then set up structures within it. Gadzooks, reinventing
> > RSD and RDF inside robots.txt.
> > > But we defer decisions about where/how to put this data to the sites
> > > hosting, rather than impose a decision from above. We don't need a
> > > single location, just some conventions for discovering those locations.
> > Indeed, setting the good example and not forcing restrictions is the only
> > way to approach it. I know it's just soooo tempting to whack together a
> > URL but the downsides really make it seem like a bad idea.
> > -Bill Kearney
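[Editor's note: the "conventions for discovering those locations" Bill favours here are essentially what HTML feed autodiscovery became, where a page advertises its own feeds via `<link rel="alternate">` tags instead of relying on a fixed well-known URL. A minimal sketch in Python's standard library follows; the sample page and the specific MIME types checked are illustrative assumptions, not taken from this thread.]

```python
from html.parser import HTMLParser


class FeedLinkFinder(HTMLParser):
    """Collect feed URLs a page advertises via <link rel="alternate"> tags."""

    # Common feed MIME types used by autodiscovery (illustrative set)
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        # The site, not a central registry, decides where its feeds live;
        # clients just follow the advertised href.
        if (a.get("rel", "").lower() == "alternate"
                and a.get("type", "").lower() in self.FEED_TYPES
                and a.get("href")):
            self.feeds.append(a["href"])


# Hypothetical page fragment for illustration
page = """<html><head>
<link rel="alternate" type="application/rss+xml" href="/index.rss">
<link rel="stylesheet" href="/style.css">
</head><body>...</body></html>"""

finder = FeedLinkFinder()
finder.feed(page)
print(finder.feeds)  # -> ['/index.rss']
```

The point of the sketch is the one Bill argues for: the feed can live anywhere in the site's namespace, and discovery works by convention in the page itself rather than by a location imposed from above.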