
Re: [syndication] Re: Thoughts, questions....

  Eric Bohlman
  Message 1 of 2, Aug 16, 2000
      On Wed, 16 Aug 2000 paul@... wrote:

      > 1) To enable the average developer to cope, a syndication format must
      > be simple to create and be easily read by a human. The rdf approach
      > requires too much studying and background knowledge to easily pick up
      > and is too hard for humans to read and create manually.

      I think we ought to try to at least roughly quantify the learning curve
      here based on some actual evidence. In particular, we should determine
      just how much RDF one needs to learn to use it for RSS, rather than
      falling into a trap analogous to saying that DOS/Unix/etc. is "hard to
      learn" based on the difficulty of memorizing a whole bunch of commands
      that the average user would never need to issue (e.g. assuming that
      learning DOS requires learning to use debug.com).

      > If the RDF approach is to be widely accepted and adopted then 1) and
      > 2) require solutions. Not all of them may be technical, but better
      > software tools support is part of a solution which does not require
      > the simple syntax required by the "expanded core". This software
      > tools support should span *all* of the environments which people need
      > to use... and we shouldn't sneer at people who try to parse this
      > stuff in Perl, VB or even, shock horror, Macromedia Flash.

      One way to handle this would be to create a tool that would take RDF/RSS
      data and write it out to standard output in a dead-easy-to-parse form,
      such as an extension of PYX (http://www.pyxie.org). That would require
      only one copy of the tool for each OS rather than one for each development
      environment, and would work in any development environment that allows one
      program to read the output of another. Obviously this wouldn't be very
      efficient for highly dynamic processing of huge numbers of feeds, but I
      think we can reasonably assume that the people working on such are
      experienced programmers working in environments that support standard
      parsers (zac's "C or Java" people).

      > Could it be possible to start some kind of co-ordinated open
      > source/community program to share and distribute the tools which are
      > available now... and which will be created? Currently there is no

      One suggestion, which may not be very popular, for this effort: offer the
      tools with the option of a non-copylefting license. There are going to be
      some uses of syndication where the people writing the code have to deal
      with providers of some information in non-open formats that have to be
      read using non-redistributable code furnished by the provider. Obviously
      GPL-only code can't be used in such an environment, and if the feed
      decisions are being made by managers, they'll choose proprietary code and
      keeping the proprietary feeds over free-as-in-speech code and dropping
      those feeds. Allowing the code to be used under the Artistic License or the MPL
      should suffice.

      > Control
      > ========
      > One of the enormous benefits of a namespaced version of the RSS
      > standard is that nobody has to agree to eg: iSyndicates' views. The
      > community will decide either by using or not using those particular
      > attributes, expressed as that particular namespace.

      I think this is the biggest advantage of the namespace-modularized
      approach over the extensible-core approach. The problem with the latter
      is that practically *any* amount of control will become a bottleneck,
      given the rapid pace of development in syndication, and if the process of
      core maintenance becomes anarchic in the sense of "no rules" rather than
      "no rulers" (i.e., if the only way to keep it "open" is to
      automatically add any proposed extension), we'll wind up with a core that's
      accumulated rather than designed: multiple slightly different ways of
      accomplishing the same task (because two people worked on the same
      problem, came up with slightly different solutions, and submitted them to
      the maintainer at roughly the same time); lots of remnants of experiments
      that never took off; and lots of constructs that deal with the
      requirements of only a tiny minority of users. The result will be a
      complicated, hard-to-understand and hard-to-parse core. In other words,
      something like what HTML is now.

      The namespaces approach offers an alternative: potentially useful elements
      get a chance to prove themselves in a "private" environment, and the core
      maintainers have the luxury of allowing a period of deliberation before
      deciding whether or not those elements should be promoted into the
      core. If two different developers come up with slightly different ways of
      doing the same thing, they can work out (with the help of others) a
      synthesis if there's a consensus that those things are general enough to
      belong in the core. The result will be a core consisting of things that
      everyone agrees are useful, rather than a thousand-bladed Swiss Army
      knife. In short, this approach provides for more control of the good kind
      (e.g. Linus doesn't have to let just any code into the Linux kernel) while
      reducing the potential for control of the bad kind (if somebody really
      objects to the core maintainers' Way Of Doing It, they can always
      implement their own way and let the marketplace of ideas decide the
      outcome).
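To make the mechanism concrete, here is a hypothetical item using the namespace URIs from the RSS 1.0 proposal; the `acme` namespace and its `priority` element are invented for illustration. A provider's extension rides along in its own namespace, and aggregators that don't recognize that namespace simply ignore it — no core change, and no maintainer's approval, is needed:

```xml
<item xmlns="http://purl.org/rss/1.0/"
      xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
      xmlns:acme="http://example.com/rss/acme/1.0/"
      rdf:about="http://example.com/article">
  <title>Example headline</title>
  <link>http://example.com/article</link>
  <!-- Provider-specific extension, in its own namespace -->
  <acme:priority>high</acme:priority>
</item>
```

If `acme:priority` proves broadly useful, the core maintainers can later promote an agreed-upon equivalent into the core; if not, it harms nobody who ignores it.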