Presumably, not all problems are solved because
of an emotional need. In philosophy, for instance, I
can imagine that some problems have been dealt
with although initially one had no feelings for
them, merely because they were intellectually
"stimulating". Which does sound like emotion, of
course, and I agree that humans have a motivation
system, without which they would not be inclined
to do anything.

In fact, interestingly, it seems that the older
parts of the brain have *more* command. However,
everyone has felt such a conflict: when you wanted
something, yet knew it to be wrong, and didn't do
it, right? Or the reverse, when you forced yourself
through reasoning to do something that you hated.

So, that's a bit like how an unemotional AGI could
work in conjunction with a human. It might tell us
a series of things that would constantly break our
hearts :)

Joking of course, but if you need an autonomous
AGI, which I think is a bad idea, it could be a
goal following agent, and then getting closer to
the goal would motivate it.

Best,

On Sun, May 1, 2011 at 9:52 AM, jgkjcasey <jgkjcasey@...> wrote:
We wouldn't bother to solve any problems without an
emotional need to do so. Having the ability to do
something will not result in you doing it if you
have no interest in it.
I get the feeling that these AGI programs may well
amplify our academic intelligence but by themselves
they lack the mechanisms of motivation that allow
them to select at any given time what is worth doing.
They might extract all sorts of interesting (to us)
stuff from the raw data but for what purpose?
Without purpose, a goal, how can you select what to
learn and what to do with what you have learnt?
> On Sat, Apr 30, 2011 at 10:24 PM, Eray Ozkural <erayo@...> wrote:
--- In firstname.lastname@example.org, boboniboni boboniboni <boboniboni@...> wrote:
> Just think about how people are addicted to solving complex abstract
> problems aka math. It's like a delicious chocolate cake for mathematicians.
> While such pleasure is derived from basic dopaminergic systems and
> other mesolimbic pathways, I think the neocortex may hide some qualities
> inherent to the "feelings" of solving abstract problems.
> > let's ask the neuroscience boffins.
> > feelings seem to be more focused in the older parts of the brain (limbic
> > system etc.). are there feelings in the neocortex?
> > On Wed, Apr 27, 2011 at 11:24 PM, rscan60 <rscan60@...> wrote:
> >> In my opinion, a retrieved feeling is brought about by the activation of
> >> a group of neurons that were originally involved with the feeling. The
> >> brain is designed by the genome to avoid danger, so the most prominent
> >> feelings are those involved with bad outcomes and physical damage. More
> >> pleasant feelings are a gift, an accompaniment of good outcomes.
> >> ray
> > Eray Ozkural, PhD candidate. Comp. Sci. Dept., Bilkent University, Ankara
> > http://groups.yahoo.com/group/ai-philosophy
> > http://myspace.com/arizanesil http://myspace.com/malfunct
Eray Ozkural, PhD candidate. Comp. Sci. Dept., Bilkent University, Ankara
--- In email@example.com, Eray Ozkural <erayo@...> wrote:
> ... you forced yourself through reasoning to do something
> that you hated.

Because the reward for doing it was greater than the
punishment for doing it.
An unemotional AGI would have no reason to do anything.
It would be like an electronic calculator. Very smart
but dependent on humans to set up the problems for
solutions that the humans desire.
> ... if you need an autonomous AGI, which I think is a bad idea,

In which case it wouldn't be intelligent in the sense we are.
We use clever programs, and many who use them may not be
as smart as the program they use, but the program itself
lacks goals or reasons for what it does. Its purpose
resides in the programmer. It continues to carry out that
purpose, say a visual program that identifies faces,
without any reason of its own for doing so. We have a
purpose behind recognizing faces and the machine does not,
even if it ever became better at it than us. It would
continue to recognize faces even if humans had all died
out and the purpose behind its actions was no longer
there, because no matter how smart it was at the job it
was doing, it was really dumb about why it was doing it.
> it could be a goal following agent, and then getting closer
> to the goal would motivate it.

And now you are coming back to the mechanisms that are part
of how real brains work. Intelligence is a means to an end
in real brains, not an end in itself. Intelligence enables
us to achieve our goals (maximize our rewards) such as
getting food, a sexual partner, social recognition and so
on, all innate in us because those goals gave their owners
an advantage in reproductive success.
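
To make that concrete: a toy sketch of such a goal
following agent might look like the Python below. All the
names here are hypothetical, invented for illustration,
not anyone's actual system. Its only "motivation" is a
reward signal that grows as its distance to a fixed goal
shrinks.

# A toy "goal following agent". Everything here is a made-up
# illustration; the only "motivation" is a reward signal for
# reducing the distance to a fixed goal.

def distance(state, goal):
    # 1-D distance between the agent's state and its goal.
    return abs(goal - state)

def step(state, goal):
    # Greedily pick the action (-1, 0, or +1) with the highest
    # reward, where reward = reduction in distance to the goal.
    best_action, best_reward = 0, float("-inf")
    for action in (-1, 0, 1):
        reward = distance(state, goal) - distance(state + action, goal)
        if reward > best_reward:
            best_action, best_reward = action, reward
    return state + best_action

state, goal = 0, 5
while state != goal:
    state = step(state, goal)
    print("state:", state)  # prints 1, 2, 3, 4, 5

The point being that the "motivation" is just a number the
programmer chose; the loop pursues the goal, but the reason
for the goal still lives outside the program.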
We can program computers to do anything we know how to do
ourselves at the high symbolic level and that may be new
things we learn to do like statistical extraction of
constraints in raw data, but without any need on the part
of the program for such knowledge to achieve some goal
I don't see it behaving intelligently (behaving with a
purpose) no matter how clever it is at what it does.
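
For example, here is a little Python program, again a
made-up illustration rather than any particular system,
that does exactly that kind of statistical extraction:
it pulls a constraint out of raw data and then stops,
with no goal of its own for the knowledge it produced.

# A made-up illustration: a program that "extracts a constraint"
# from raw data -- a correlation between two variables -- without
# any goal of its own for the result.

import random

# Fabricated raw data: y is roughly 2*x plus noise, a regularity
# hiding in the numbers.
xs = [random.uniform(0, 10) for _ in range(1000)]
ys = [2 * x + random.gauss(0, 1) for x in xs]

def correlation(a, b):
    # Pearson correlation coefficient, computed from scratch.
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / n
    var_b = sum((y - mean_b) ** 2 for y in b) / n
    return cov / (var_a ** 0.5 * var_b ** 0.5)

# The program dutifully reports the regularity (close to 1.0) and
# stops. It has no use for the knowledge it just produced; any
# purpose it serves lives in whoever reads the output.
print("correlation:", correlation(xs, ys))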