Software Keeps On Ticking - sounds like a great subproject of the Clock
of the Long Now and its Library (http://www.longnow.org/about/about.htm),
and a nice discussion topic.
> I think that you'd know my motives from what I posted in previous
> messages. What I would like as a "work product" would be a
> manifesto, outlining our vision of how software should interact (and
> I don't mean just UI) with its stakeholders.
> We're getting to a point where software is becoming more
> probabilistic in behavior as an ever more complex environment
> confronts it with situations its designers never designed it for -
> we need to know whether it's OK for software to say "I don't know,
> let me go ask my boss". We need to move our focus from micro-
> behavior (e.g., does this software leak memory?) to macro-behavior
> (can this software ever get into a state where it loses more than
> $5000 on a trade?). We need languages and infrastructure that
> support direct expression of these rules and processes, and a
> framework to put it all under so that it all makes sense.
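
To make that concrete, here is a rough, purely illustrative sketch of
what "direct expression" of one of those macro-behavior rules might
look like. The names, the 0.9 threshold, and the Python are my own
invention - nothing anyone here has built - just one way the $5000
rule and the "let me go ask my boss" escalation could be stated in one
place instead of being scattered through the code:

# Illustrative only: a macro-behavior rule and an escalation path,
# stated directly rather than implied by low-level code.
MAX_LOSS_PER_TRADE = 5_000  # "never lose more than $5000 on a trade"

class EscalateToHuman(Exception):
    """Raised when the software knows that it doesn't know."""

def execute_trade(trade, estimated_loss, confidence):
    # Micro-behavior (memory, bounds checks) is assumed to be handled
    # elsewhere; this function only expresses the macro-level rules.
    if confidence < 0.9:
        # A situation the designers didn't design for.
        raise EscalateToHuman(f"Not sure what to do with {trade}; asking my boss.")
    if estimated_loss > MAX_LOSS_PER_TRADE:
        raise EscalateToHuman(
            f"{trade} could lose ${estimated_loss}, over the ${MAX_LOSS_PER_TRADE} limit.")
    # ...place the trade...
    return "executed"

The point isn't these particular checks; it's that the rule and the
escalation live in one visible place where they can be read, argued
about, and changed.
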
> When I think of this, I also get a bit queasy. My ideal is a
> statement like "I want to know how to design a program that will last
> forever and continue to learn and do useful work in a changing
> environment. Oh yeah, and I want it by next Tuesday." The trouble
> is that (1) I don't think we're anywhere near solving this problem,
> and (2) if we did solve it, we probably wouldn't have the right to
> evaluate whether it was doing "useful work".
> So I'm ready to limit my goal - How do we design a program that will
> be useful for the next 500 years? How do we make it know what we
> want it to do? How do we make it so we don't have to patch it up
> every three months, but we still can, if we want to?
> In the end, I can't shake the queasy feeling that I'm asking for a
> reset and re-examination of "hard AI", but there you have it. In the
> final analysis, software is our partner in this world. We depend on
> it too much for it to stay stupid. We can't afford to keep holding
> its hand, but we also can't afford to kill it entirely. In some
> sense, we have a responsibility to the bits to make them smart. We
> fear making them smart, and we know that we've failed over and over
> before in trying to make them smart.
> But I think the future involves confronting our fears and admitting
> that we never should have stopped the research twenty-five-odd years
> ago, when the industry didn't get the immediate payoff it wanted. We
> were responsible, too, because we took off in other directions when
> funding got tight and times got tough, even though we knew that
> *this was important*. Maybe we need to turn back the hands of time
> those twenty-five years, at least. Maybe fifty-plus years, back to
> Turing and Shannon.
> But what's twenty-five or fifty years in an eternity that we
> collectively have available to us and our heirs? Maybe all we've
> really lost is the dream - the one that's been around since the dawn
> of computing - of creating new sentience; of adding another
> intelligent species to the universe. Maybe we've gotten a bit older
> and a bit wiser and a bit more patient. Maybe we're willing to
> embark on a project that will take hundreds - maybe thousands - of
> years to complete - to do something that's larger than all of us, but
> worthy of all who'll come after us.