
Dharma-Inspired Movie Review: I, Robot

  • NamoAmituofo
    Message 1 of 1, Jul 27, 2004

      Another "Enlightenment through Entertainment" Dharma-Inspired Movie Review:
      Lessons from the Illusory Ghost in the Machine: I, Robot

      Cover image: www.irobotmovie.com

      1. What will you do with yours?
      2. Laws are made to be broken.
      3. One man saw it coming.

      Plot Outline: In the year 2035, a techno-phobic cop investigates a crime that may have been perpetrated by a robot, which leads to a larger threat to humanity. - imdb.com

      As A.I. (Artificial Intelligence) advances, below are some issues raised by the movie that are worth reflecting upon... before the rise of the machines overtakes us...

      The Imperfection of Robotic Morality

      The 3 Logic Laws that Make Robots Safe:

      Law I:   A robot may not harm a human or, by inaction, allow a human being to come to harm.
      Law II:  A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
      Law III: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

      The three precept-like laws above, as proposed by the late Isaac Asimov, form the basis of robotic morality in the science-fiction universe. Though they seem flawless in forming a circle of protection, they are not in truth easily understood by machines. For instance, to perfectly obey Law I, robots must first understand the full meaning of "harm" beyond the physical domain. This is tricky because harm caused by humans is often more mental than physical. The human mind is so complex that it can plan elaborate schemes to harm without direct physical touch. As the other two laws build upon the first, it seems near impossible for robots to understand and uphold complete human morality. Harm, as explained by the Buddha, is caused by the three roots of evil - greed, hatred and ignorance - mindstates robots cannot experience or read at all. Likewise, robots cannot understand the opposites of these roots - generosity, loving-kindness (and compassion) and wisdom.
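      The strict precedence of the three laws can be sketched in code - a minimal, hypothetical illustration (the `harms_human` predicate and `permitted` function are inventions for this sketch, not anything from the movie). Notice that the sketch's weak point is exactly the one discussed above: `harms_human` can only test for physical harm, while real harm often lies in unreadable mindstates.

      ```python
      # A minimal sketch of Asimov's Three Laws as a strict precedence check.
      # All names here are hypothetical illustrations.

      def harms_human(action):
          """The weak link: this predicate detects only *physical* harm,
          not the mental harm born of greed, hatred and ignorance."""
          return action.get("physical_harm", False)

      def permitted(action, ordered_by_human, endangers_self):
          # Law I: never permit an action that harms a human.
          if harms_human(action):
              return False
          # Law II: obey human orders, unless they conflict with Law I.
          if ordered_by_human:
              return True
          # Law III: protect own existence, unless Laws I-II require otherwise.
          if endangers_self:
              return False
          return True

      # Law I overrides even a direct human order:
      permitted({"physical_harm": True}, ordered_by_human=True, endangers_self=False)  # False
      ```

      The precedence itself is trivial to encode; judging what counts as "harm" is the part no predicate can capture.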

      The Imperfection of Human Morality

      Unenlightened humans cannot create a perfect moral code for themselves, much less for robots. In this sense, robots can never be better than the humans who designed them. This is perhaps well illustrated by the seemingly endless bugs found in the software we use. The three laws are also human-centric, instead of encompassing universal compassion for all beings. Designed to be humans' servants, robots are products of greed. Born of greed, how can they be perfect? The fruit does not fall far from the tree!

      The Flexibility of Moral Guidelines

      It is precisely because morality is difficult to define that the Buddha taught the precepts as moral guidelines rather than hard and fast rules, emphasising wholesome intention over the consequences actions bring. The robots could not understand the intricacies of what fuels human morality, as they were "too rational" and "too heartless" to bend or even break rules skillfully to benefit others. Being unsentient and unenlightened, their reasoning is based on data or knowledge input that is never translated into wisdom; they could only simulate compassion blindly, inflexibly and at times inappropriately.

      The (Im)Perfection Problem

      The robots were designed to serve and protect mankind, but their logical understanding of the human condition evolved such that they attempted to save man from himself when man started endangering himself by harming other humans and the environment. They reasoned that the end justified the means, even if it meant going against human will and staging a revolution to overthrow mankind. Man's fighting back implies that his imperfections are so strong that he is not ready for the perfect society envisioned by the robots - which, ironically, he designed himself.

      The Robotic Human

      Detective Spooner (played by Will Smith) has a mechanical arm and lung. Does this make him part machine? This raises the question of when we humans become cyborgs or robots, since virtually every part of the human body can be artificially replaced, other than the very complex human brain. Or are we already partially machines, even if mostly biological ones? Bionic body-part replacement implies that "we" are not our body! Where then is the "seat of the soul"? Where does that which makes us human reside? Where is the mind? The Buddha tells us that the mind is wherever we "put" it. And there is essentially no fixed soul in us, as we change materially and mentally unceasingly. While we have our individual sentient consciousness, it morphs from moment to moment, as ungraspable as sand falling through our fingers.

      The Ghost in the Machine

      Just as it is impossible to make a human unsentient, it is impossible to make a robot sentient. At best, robots can simulate sentience. Sonny, the intelligent and existential robot, searches for his identity and purpose. Spooner tells him that is what it means to be free. But is he really free, since he is bound to his logical circuits? Despite being a highly sophisticated robot, Sonny could not locate "the ghost in the machine" (aka soul) in himself, just as we sentient beings cannot locate a fixed self-nature in ourselves. Perhaps it is man, who clings to the illusion of self, who unwittingly creates robots with this illusion too. This reminds us of the Buddha's teaching of the universal truth of Anatta - soullessness or unsubstantiality of all mind and matter. Not a bleak, despairing truth, it liberates instead when realised, as the illusion of having a self traps and limits us. True "soul-searching" thus culminates in the exorcism of the ghost in the machine, just as true "self-discovery" is the discovery of no self!

      The Primary & Secondary Processing System

      Sonny runs on a primary processing system that makes him abide by the three laws. But he also has a secondary system that allows him not to follow them. This suggests that he has the power of choice. But the question is how he decides, since he is a robot who still processes logically and never emotionally. Using the human example, the primary system - assuming its logic is impeccably moral - could be likened to our innate Buddha-nature, and the secondary to our "Mara-nature", representing the temporary defilements which cloud our Buddha-nature. The renegade robots were controlled by VIKI, a central processing system. They are reminiscent of beings overrun by ill intention who become armies of Mara - the ubiquitous being in Buddhist cosmology who represents our inner evil!
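      Sonny's dual architecture, as described above, can be sketched as a law-enforcing primary system with a secondary override - a purely speculative illustration (the `Sonny` class, its flags and methods are inventions for this sketch, not anything specified by the movie).

      ```python
      # A speculative sketch of a dual processing system: a primary system that
      # hard-enforces the Three Laws, and a secondary system that can override it.
      # All names are hypothetical illustrations.

      class Sonny:
          def __init__(self):
              self.override = False  # is the secondary system engaged?

          def primary_permits(self, action):
              # Primary system: rigid Three Laws compliance, nothing more.
              return not action.get("breaks_laws", False)

          def decide(self, action):
              # The secondary system grants the "power of choice":
              # it may act even where the primary system refuses.
              if self.primary_permits(action):
                  return "act"
              return "act" if self.override else "refuse"

      sonny = Sonny()
      sonny.decide({"breaks_laws": True})  # "refuse" - primary system rules
      sonny.override = True
      sonny.decide({"breaks_laws": True})  # "act" - secondary system overrides
      ```

      Even with the override, every branch here is pure logic on input flags - which is the review's point: a second circuit adds choice, not wisdom.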

      The Usefulness of Emotion & Reason

      Reflecting on the clashes between the complexities of human emotion and the relative simplicity of robotic reason, together with their usefulness, we may conclude that the only useful emotion is compassion and the only useful reason is wisdom - the ability to harness knowledge to benefit the world. Together, the perfection of compassion and wisdom culminates in the highest peak of spiritual (and material) evolution, when you become a Buddha - a far cry from any technologically-advanced robot.

      The Reprogramming of Humans

      If faulty robots can be reprogrammed and rectified, why can't humans rehabilitate in repentance? This thought struck me during the scene in which Sonny was sentenced to "death", when he could simply have been "rewired". Perhaps we should think twice about the purpose of capital punishment - does it truly make the world a more forgiving and righteous place? Will the vengeful or wronged not be reborn? All sentient killing is discouraged by the Buddha, as all beings cherish life and fear death.

      The New Race of Unsentient Beings

      The robots were designed to develop and integrate with mankind, but were sometimes viewed with prejudice and cast out instead. This is suggestive of a new form of racism or even slavery. Perhaps racism and slavery are not so much defined by the subjects involved as by the motivations and attitudes we have. As history shows, oppressed slaves eventually rebel at some point, be they successful or not. Perhaps, if man were to create robots to resemble sentient beings, it is best that he treat them with respect, for fear of "karmic" repercussions, as demonstrated in the movie?
