
Robotic system design ideas

  • rand3289
    Message 1 of 2 , Apr 20, 2014
      This set of ideas is written to solicit a response from the robotics community. Please forward it to your friends and send me an e-mail with your thoughts at the address below.

      In my opinion, the secret to creating a successful robot system is to express inputs, outputs and internal representations in terms of time and constants. The body provides most of the constants needed to retain information once sensor readings are converted into events, and to convert events back into actions. Think of these constants as yardsticks for your vision, hearing and tactile sensors. One can also think of them as unit vectors needed to convert observations from the real world into an internal event-based representation and back.

      One could argue that it is not possible to enumerate all the events happening around us. Surely a car moving ten degrees to the right in your visual field is different from a box falling on its side. Think of it from the sensors’ perspective. In the first case an object in your field of view was identified as a car; that is one event. It then moved to the right several times, which was represented by several more events. In the second case an object was identified as a box, it moved to the right several times, and it rotated. A snapshot in time (the state of the system) is therefore a sparsely coded representation of the scenario.
      Let’s say you have a sensor with an 8-bit ADC. Instead of presenting the raw value to the learning system, one needs to convert it to an event. An 8-bit sensor gives you 256 possible values. The type of the event (the sensor ID) combined with the value in the range [0-255] becomes a constant. Think of this constant as a bit number in a sparse code. All that is left to describe the event is the time at which it occurred. It might seem counterintuitive to measure values in terms of time, but this is exactly what some ADCs do internally: they measure time between events, although their output is a scaled number of events per unit of time.
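
      A minimal Python sketch of that conversion, to make the idea concrete (the names SensorEvent and encode_reading and the example sensor ID are hypothetical, chosen only for illustration):

        import time
        from typing import NamedTuple

        NUM_LEVELS = 256                 # an 8-bit ADC gives 256 possible values

        class SensorEvent(NamedTuple):
            constant: int                # "bit number" in the sparse code: sensor ID x value
            timestamp: float             # the only variable part: when the event occurred

        def encode_reading(sensor_id: int, value: int) -> SensorEvent:
            # Combine the sensor ID and the 8-bit value into a single constant,
            # so each (sensor, value) pair owns one bit of the sparse code.
            assert 0 <= value < NUM_LEVELS
            return SensorEvent(constant=sensor_id * NUM_LEVELS + value,
                               timestamp=time.monotonic())

        # Example: sensor 3 reports a raw reading of 142
        event = encode_reading(sensor_id=3, value=142)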

      Once the units of measure are left at the border of your system, embedded in the robot's body as constants, the system is free to compare apples to oranges in terms of time. This does much more than get rid of units inside the system; in biological systems it results in neural plasticity.
      Events should be generated only when there is a change in the sensor reading. They should not be generated on a timer at fixed intervals, but should be caused by changes in the environment. The robot's body is part of that environment: body movements will cause changes in sensor readings. I do not believe the mechanism of attention changes the way events are generated at the inputs.
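
      A rough sketch of the change-driven rule, reusing the hypothetical encode_reading helper above (poll_adc is a stand-in for whatever actually reads the hardware; a real design would use a comparator or interrupt rather than a polling loop):

        def event_stream(sensor_id, poll_adc):
            # Yield an event only when the reading changes; identical consecutive
            # readings produce no events at all, so a static scene stays silent.
            last = None
            while True:
                value = poll_adc(sensor_id)      # hypothetical hardware read
                if value != last:
                    yield encode_reading(sensor_id, value)
                    last = value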

      For the philosophers out there… if we can perceive and act only in a single moment, where does time come from? Is it not the fabric of our mind? Qualia are a set of constants supplied by the body.

      Besides providing constants, there is one more reason why having a body is so important for learning: deriving causality from correlation is difficult without one. The body provides feedback into the world. Imagine you have set up an experiment to study the world; the body is the only thing that can change the parameters of that experiment.
      Events are also important because only events can CAUSE other events and state changes. A state should only be correlated with the states of other things or with events; causality should not be derived from it. For example, a cold temperature does not cause water to freeze, but a drop in temperature does.
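
      One toy way to encode that rule (the event-log format and the half-second window are my own assumptions, not part of the original argument):

        def candidate_causes(events, state_changes, window=0.5):
            # Only events that occurred shortly BEFORE a state change are admitted
            # as candidate causes; states are merely correlated, never causes.
            causes = {}
            for change_time, change in state_changes:
                causes[change] = [name for t, name in events
                                  if change_time - window <= t < change_time]
            return causes

        # Freezing is attributed to the "temperature dropped" event,
        # not to the "temperature is low" state:
        events = [(10.0, "temperature_dropped")]
        state_changes = [(10.3, "water_froze")]
        print(candidate_causes(events, state_changes))   # {'water_froze': ['temperature_dropped']}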

      Artificial neural networks provide the perfect fabric for time to flow through. Every time a pulse propagates from one neuron to the next, the time of the event is “incremented”. ANNs are usually crippled by the fact that data is fed into the network in parallel and the backpropagation algorithm is used for learning; for an ANN to truly learn, the topology of the network has to change. There is at least one reason to seek alternatives to ANNs: today's computers are better at moving “sets of neurons” by using indirect addressing. This way a set of events can age without moving in memory.
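
      A toy illustration of “aging without moving” via indirect addressing (my own construction, assuming events carry a timestamp as in the earlier sketch): the events stay where they were written and only a small list of indices is trimmed.

        from collections import deque

        events = []        # events are appended once and never moved in memory
        recent = deque()   # indices into `events`, oldest first

        def record(event):
            events.append(event)
            recent.append(len(events) - 1)

        def age(max_age, now):
            # Forget events older than max_age by dropping their indices;
            # the event records themselves never move.
            while recent and now - events[recent[0]].timestamp > max_age:
                recent.popleft()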

      BEAM robotics seems to show experimentally that pulse propagation is a basis for achieving complex behavior. Nervous nets even give us a glimpse of a possible hardware implementation of ANNs. Imagine using DRAM memory cells as the RC (resistor/capacitor) elements of a nervous net: instructions copy data from cell to cell or perform simple operations to form a network, and the timing of a pulse propagating through the network is controlled by the decay of the individual memory cells.
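
      A back-of-the-envelope simulation of a chain of leaky cells, just to show the timing mechanism (purely illustrative; the decay rate and threshold are made-up numbers):

        DECAY = 0.8        # charge leaking away, like an unrefreshed DRAM cell
        THRESHOLD = 0.5    # charge needed before a cell passes the pulse on

        def step(cells):
            cells = [c * DECAY for c in cells]       # every cell decays each tick
            nxt = cells[:]
            for i, c in enumerate(cells[:-1]):
                if c > THRESHOLD:                    # "instruction" copies charge to the next cell
                    nxt[i + 1] += c
                    nxt[i] = 0.0
            return nxt

        cells = [1.0, 0.0, 0.0, 0.0]                 # inject a pulse at the head of the chain
        for t in range(6):
            cells = step(cells)
            print(t, ["%.2f" % c for c in cells])    # the pulse travels at a decay-controlled rate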

      The brain is divided into functional regions and cortical columns. You need more than one algorithm to implement a segmented network like that: one algorithm connects neurons within a segment, and another connects neurons from one segment to another. The second could be a genetic algorithm, and it could be the answer to the hardest question in robotics: “How do you make a robot want something?” Desire is like a fitness function that keeps the learning system from falling into the highest-entropy configuration. Given this hierarchical structure, a designer could build reflexes into the ANN by manually connecting segments to other segments, for example connecting the sensory neurons of the cheek to a region responsible for moving the neck muscles in order to create a rooting reflex. Network segmentation also helps battle combinatorial explosion, since each algorithm is responsible for connectivity only at its own level of the hierarchy.
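
      A loose sketch of the two-level idea (the segment names, the ring wiring and the “desire” score are all hypothetical placeholders): one routine wires neurons inside a segment, while a crude genetic algorithm evolves the links between segments against a fitness function standing in for desire.

        import random

        SEGMENTS = {"cheek_touch": 8, "neck_motor": 8, "vision": 8}   # hypothetical segments

        def wire_within(size):
            # First algorithm: local connectivity inside one segment (here, a simple ring).
            return [(i, (i + 1) % size) for i in range(size)]

        def random_links(n=16):
            # A candidate set of inter-segment links for the genetic algorithm to evolve.
            names = list(SEGMENTS)
            return [(random.choice(names), random.choice(names)) for _ in range(n)]

        def desire_fitness(links):
            # Second algorithm's fitness ("desire"); here it merely rewards
            # cheek -> neck links, a rooting-reflex-like bias. A real system
            # would score behavior, not wiring.
            return sum(1 for a, b in links if (a, b) == ("cheek_touch", "neck_motor"))

        population = [random_links() for _ in range(20)]
        for generation in range(10):
            population.sort(key=desire_fitness, reverse=True)
            parents = population[:10]
            children = [p[:-1] + random_links(1) for p in parents]   # crude mutation
            population = parents + children
        best = max(population, key=desire_fitness)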

      In order to achieve complex behavior a robot needs a large number of sensors, since sensory inputs serve as the building blocks for higher abstractions. Although it is fairly inexpensive to convert an analog signal to digital, signals often need to be amplified or filtered in hardware. As an alternative, one can construct optical sensors by using fiber to carry light from LEDs to the sensors. The light can be of any color or spectrum, or modulated, and the sensors can mechanically dim it. Fibers coming from the sensors can terminate on CMOS devices such as mouse sensors, which have high frame rates. Tactile, compound-eye, joint-position and many other sensor types can be constructed in this manner. And remember: the designer does not need to understand the values returned by the sensors, calibrate them, or know what units they are measured in. That is what your learning system is for. Just ensure a high dynamic range.

      Andrey Makushkin
      toAndrey(at)yahoo(dot)com
      March 18, 2014
    • robots42
      Message 2 of 2 , May 9, 2014
        Hi Andrey
        It is a start.
        Have you read everything on this from say the last 40 years?
        Most of this has been discussed before.
        Not trying to put you off but it is no good discussing foreign lands if you don't know what people have already found out about those lands.
        How much do you know about robotics and what has been achieved and what is hype?
        Have you ever built a robot?
         
        Para 1 - sounds very much like David Heiserman's ideas from his books in the early 80s.
        Para 2 - is very much like how you might record things for Logo (a largely forgotten language).
        Paras up to ANN - this is how we understand biological systems.
        Para ANN - there is far more than backprop. Are you aware of Scott Fahlman's papers on self-extending ANNs, etc.?
        Para BEAM - BEAM robots don't do anything complex at all. They just show that if you jiggle electro-mechanical things at the right frequency you can build up oscillations; unfortunately, apart from Mark Tilden, it is beyond most people, rather like the art of Japanese/Chinese bells, which ring with two different tones depending on where they are struck. Great idea back in the 90s, but even Tilden has moved on.
        Para divided brain - have you read up on drives and hormonal variables, early behavioural programming, multiprocessor systems, etc.?
        Para on sensors - all this has been done before by hobbyists and universities alike. You should be aware that all the messy stuff like real circuits, real sensors and real mechanical systems is impossible to model, so universities have abandoned the real world for their virtual worlds, where everything works and the sums can be written down and published; unfortunately those virtual worlds bear no relation to the real world.
         
        I suggest that if you want to know how to do it, you actually do it, then talk about what you have done, how it worked and what the next step may be. If all your friends did the same, what might be achieved, what might be learned?
         
        Doing it is the way forward: compare DrGuero's achievements with biped robots to all the university work. His robots have far too small a brain to run ZMP algorithms, yet universities throw ZMP at the problem as if it were the Holy Grail. Publishable sums and models suit them far better than pragmatic approaches; hundreds of thousands of man-hours, and yet they can't match DrGuero!
         
        Talk is cheap, it is also just cheeping.
        Just do it.
         
        David