
Re: [beam] Re: Wire Computing? A Theory

  • Martin McKee
    Jul 24 9:54 AM
      @David -- "...despise any interaction with the real world..."  That couldn't be Haskell, could it?

      But, yes.  Programming language matters. I think that it matters, more from the standpoint of what it makes easy and intuitive than from any efficiency standpoint.  As stated so clearly by David, the idea that direct assembly language programming must necessarily be much more efficient is a ( very tenacious ) fallacy.  There is some small improvement to be had in some cases, but it is really negligible for the bulk of a program.  As such, the combination of a high-level language with inline assembly can be exceptionally productive.
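      As a concrete illustration of that combination, a minimal sketch, assuming avr-gcc/avr-libc and an ATmega328P-class part ( the part, the register names, and the function are assumptions for illustration only, not anything from this thread ): the bulk of the program stays in C, and the one instruction someone might insist on hand-writing is dropped in as inline assembly.

        /* Sketch only: C with a single hand-written instruction, assuming
           avr-gcc and <avr/io.h> from avr-libc for an ATmega328P. */
        #include <avr/io.h>

        static inline void led_on(void)
        {
            /* "sbi" sets one I/O bit in a single cycle -- exactly what the
               optimizer would have emitted for PORTB |= _BV(PB5) anyway,
               which is rather the point. */
            __asm__ __volatile__("sbi %0, %1"
                                 :
                                 : "I"(_SFR_IO_ADDR(PORTB)), "I"(PB5));
        }

        int main(void)
        {
            DDRB |= _BV(PB5);   /* plain C for everything uninteresting */
            led_on();
            for (;;)
                ;               /* idle forever */
        }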

      It could be interesting to implement a high-level language for the AVR ( I have considered it myself with the focus on robotic systems ), and it would be fairly easy to do so if the compiler were written to produce GCC-compatible assembly files.  That way, the GNU assembler, linker and object utilities could be used to process the code and prepare it for the microcontroller.  However, the chances of it ending up more efficient than simply programming in C are astronomically small.  The number of man-hours invested in GCC's optimizer is staggering.  Also, the very nature of high-level languages ( abstraction ) leads to situations that are exceptionally difficult to optimize.  In many ways, C is not a high-level language.  It is rather more like a very advanced macro assembler.  That allows it to be much more optimizable than most languages ( both by machine and by the programmer ).  Having said that, however, good compilers are still able to get most languages within a few percent these days.
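      To make the "emit GCC assembly, let the GNU tools do the rest" idea concrete, a toy sketch ( everything in it -- the function add1, the output scheme, the command lines -- is hypothetical, just to show the shape of the approach ): a "compiler" that only prints an AVR assembly file, which avr-gcc will then assemble and link.

        /* Toy back end: print a GCC-style AVR assembly file to stdout.
           Hypothetical example only.  It could then be built with, e.g.:
               ./emit > add1.S
               avr-gcc -mmcu=attiny85 -c add1.S -o add1.o            */
        #include <stdio.h>

        int main(void)
        {
            puts(".text");
            puts(".global add1");
            puts("add1:");              /* int add1(int x): x arrives in r25:r24 */
            puts("    subi r24, -1");   /* 16-bit increment ( AVR has no addi )  */
            puts("    sbci r25, -1");
            puts("    ret");            /* result returned in r25:r24            */
            return 0;
        }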

      The biggest argument, though, for using something like C on a microcontroller is rather boring.  It is good to use it because that is the standard.  It is good to use the standard because other people will understand it ( and be comfortable with it ) and because, as such, it will be more widely accepted.  For a hobby project, it is easy to view this as a non-issue, but I would hate to see further developments of BEAM concepts simply fall by the wayside because they require too much of others to understand.  Using a lingua franca minimizes that possibility and maximizes the potential for the ideas to spread.

      There are a number of things that have kept me from using traditional BEAM in my projects.  First, though it can be quite fault tolerant, it almost always requires substantial trimming.  In the realm of production, things just need to work.  That is, often, easier with a digital system.  But even for myself, I have not wanted to try fiddling with fifty resistor/capacitor combinations to get a more advanced walker functional.  Beyond that, BEAM is, traditionally, inflexible.  As much as it allows for emergent behaviors, that emergence is tightly bounded by the layout of the control system.  Again, a main point in this thread ( and in several over the past decade ) is that BEAM must evolve some ability to learn if it is going to grow beyond such constrained behaviors, and such learning is much easier to implement in a digital environment.

      The fact that it is easier ( or more practical ) needn't be sufficient reason to remain wedded to traditional microcontroller/microprocessor based systems.  But it is a powerful enticement.  I think it is clear that nothing short of revolutionary advantages will make most people willing to make a change.  A few percent here or there will have no impact.

      Martin Jay McKee


      On Wed, Jul 24, 2013 at 10:19 AM, David Buckley <david@...> wrote:

      'Everyone knows...run significantly more efficiently...' - that is really a
      myth put about by nerds still stuck in the 80s or even earlier.
      The quests for super efficiency and speed for experimental prototypes are
      also off in the same make-believe land.

      The problem as I see it is that nobody has a clue how to use Beam technology
      to do more than jiggle things round a bit. MT - with those shovel sized
      hands of his (he is a big guy) - created beautiful and expertly crafted
      critters which were for the time quite amazing and the jiggling was finely
      tuned by a perceptive mind, but still all they did was jiggle around a bit.
      And that really is where Beam is at - things jiggle around a bit.
      Now one, if not the main, driving force for Beam (besides that MT could see
      how to do it) was the fact that digital computers are not fault tolerant,
      get a bit error and the program crashes. Beam circuits are fault tolerant
      and that was MT's argument which I think got him funded.
      However today's digital technology is a lot more stable and bit errors are
      so rare that for all practical experimental purposes they can be ignored.

      So where can Beam go? Choosing different coloured paint or programs or chips
      or .... for Beam Robots isn't going to progress things.
      Although Beam is an excellent introduction to getting things to jiggle about
      a bit, it will only advance IF AND ONLY IF people build working Beam
      critters that actually do something comparable to what is achievable with
      non Beam technology.

      Only by building and trying out more complex architectures will the way
      ahead become clearer because, as has been demonstrated time after time, the
      world isn't really like what the people who theorise think it is.

      How can such architectures be built?
      A table sized breadboard with hundreds of amplifiers and Schmitt triggers
      and inverters and resistors and capacitors and ... - I think not, too many
      wires to come loose.
      A mass of components soldered on prototype boards - been there done that in
      the 1980s - too difficult to change things.
      A specially designed GateArray - a bit pointless if you have no experience
      of slightly simpler architectures.
      A software model of a brain - sounds more like it, far easier to implement.

      Anyway, for what it is worth, the latter is the route I am taking. Each of
      my robots has a model of a brain constructed in software, and that brain
      processes messages from sensors and controls actuators depending on the
      current behaviour - that is, the current BeHave model, the current
      instantiation of variables that the brain Has (Have) which tell it how to
      Be.
      How to Behave (Be Have) - choose from available behaviours. For my small
      robots the behaviours are instantiations from the classes bold, timid,
      fast, slow, like-light, like-dark.
      On top of that are instructions I give to the robot which may be embedded
      routines, remembered routines, or immediate commands over an IR or radio
      link. But since those commands are processed by the brain under its current
      BeHave mode the robot is fault tolerant.
      Also the remembered routines can be modified by commands or other routines
      so the robot can learn.
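      As a rough sketch of that idea in C ( the names, fields and values below are purely illustrative assumptions, not the actual code behind these robots ), the "current instantiation of variables that tell the brain how to Be" might look something like this:

        /* Illustrative sketch only: one way a "current BeHave instantiation"
           might be represented.  All names and values are hypothetical. */
        #include <stdint.h>

        typedef struct {            /* the variables the brain Has, telling it how to Be */
            uint8_t boldness;       /* 0 = timid .. 255 = bold   */
            uint8_t speed;          /* 0 = slow  .. 255 = fast   */
            int8_t  light_seeking;  /* +1 like-light, -1 like-dark */
        } Behaviour;

        static const Behaviour BOLD_FAST_LIKE_LIGHT = { 220, 200, +1 };
        static const Behaviour TIMID_SLOW_LIKE_DARK = {  40,  60, -1 };

        static Behaviour current;   /* the brain's current instantiation */

        void behave(const Behaviour *b) { current = *b; }  /* choose a behaviour */

        /* Sensor messages are interpreted through the current behaviour, so an
           odd or garbled command degrades the response instead of crashing a
           fixed sequence. */
        int steer_from_light(uint8_t left_light, uint8_t right_light)
        {
            int bias = (int)right_light - (int)left_light;
            return current.light_seeking * bias * (int)current.speed / 256;
        }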

      Does it matter what language or language implementation is used?
      Actually yes because lots won't run on small processors.
      Others are 'smack your hands if you don't abide by the rules' languages
      which make it especially difficult to build state machines which don't have
      a fixed sequence - the sequences need to be controlled totally by external
      data ( see the illustrative sketch below ). When was the last time you saw
      a brain which only has fixed sequences, or even only sets of fixed
      sequences?
      Others won't allow convenient data storage for embedded or remembered
      routines.
      Others seem to be written by people who despise any interaction with the
      real world at all.
      And yet more whose authors think I have all day to wait while their
      compiler/loaders do their job or who think I really love typing in command
      strings.
      And finally no, using a Pi or Beagle XXX or even a Harduino with stacks of
      shields just so I can type in more command strings and have access to a
      filing system isn't an option when they take more power than the actuators.
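      A minimal sketch of that kind of externally-driven state machine ( the states, events and names are illustrative assumptions, not code from any of these robots ): the sequencing lives in a data table indexed by incoming events, so it can be replaced, remembered, or modified at run time rather than being frozen into the program.

        /* Illustrative only: a state machine whose sequencing is driven
           entirely by external data (sensor/command events), not by a
           fixed program order.  States, events and the table are examples. */
        #include <stdint.h>

        enum state { REST, SEEK_LIGHT, BACK_OFF, N_STATES };
        enum event { EV_TICK, EV_BUMP, EV_BRIGHT, EV_DARK, N_EVENTS };

        /* next[state][event]: the "sequence" is data, so it can be swapped
           out, stored as a remembered routine, or learned. */
        static uint8_t next[N_STATES][N_EVENTS] = {
            /*              TICK         BUMP      BRIGHT      DARK */
            [REST]       = { REST,       BACK_OFF, SEEK_LIGHT, REST },
            [SEEK_LIGHT] = { SEEK_LIGHT, BACK_OFF, SEEK_LIGHT, REST },
            [BACK_OFF]   = { REST,       BACK_OFF, SEEK_LIGHT, REST },
        };

        static uint8_t state = REST;

        void brain_step(enum event ev)   /* called for each incoming message */
        {
            state = next[state][ev];
            /* ...set actuator outputs for the new state here... */
        }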



      David

      ----- Original Message -----
      From: connor_ramsey@...
      To: beam@yahoogroups.com
      Sent: Wednesday, July 24, 2013 6:05 AM
      Subject: [beam] Re: Wire Computing? A Theory

      Does anyone have a suggestion as to how to access an AVR's underlying
      hardware levels directly? I want to compile the program on my laptop and use
      a compiler to write it to the micro's flash directly in machine code format,
      because I view that as an optimal use of the micro's resources, particularly
      smaller ones with limited memory like ATtinys or PICmicros.
      Everyone knows that machine coded programs run significantly more
      efficiently than compiled high level programs, and they use less memory
      because many functions and operators can be represented by merely a few bits
      in machine code, as well as machine code having no need for the syntax and
      idiosyncrasies that high-level code presents. While machine coding is very
      difficult and slow to do, my computer can do it for me, and I'm free to
      write the code in whatever language I choose. Personally, I like Lisp,
      although I'm barely familiar with it. I could also do Java, Lua, C, etc. So
      if there are any tips for that, I could use some. Thanks, Connor.
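      A minimal sketch of that workflow, assuming the avr-gcc / avr-libc / avrdude toolchain and an ATtiny85 ( all assumptions for illustration ): the C below is compiled on a laptop into AVR machine code, avrdude then writes it to the micro's flash, and the register writes touch the hardware directly.

        /* blink.c -- direct register access from C, compiled on the host and
           flashed as machine code.  Hypothetical build/flash commands:
               avr-gcc -mmcu=attiny85 -Os -DF_CPU=1000000UL blink.c -o blink.elf
               avr-objcopy -O ihex blink.elf blink.hex
               avrdude -c usbasp -p t85 -U flash:w:blink.hex                  */
        #ifndef F_CPU
        #define F_CPU 1000000UL
        #endif
        #include <avr/io.h>
        #include <util/delay.h>

        int main(void)
        {
            DDRB |= _BV(PB0);            /* PB0 as output: a direct register write */
            for (;;) {
                PORTB ^= _BV(PB0);       /* toggle the pin */
                _delay_ms(500);
            }
        }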

      P.S. I also learned about something called a ZISC (Zero Instruction Set
      Computer) architecture. It's basically like a synchronous digital nervous
      network, versus the asynchronous digital nerve nets used in BEAM. It only
      contains a handful of "neurons", not comparable to Nvs, I don't think, but
      still, a ZISC computer's only purpose is pattern recognition and response,
      and they also tend to use Content Addressable Memory (CAM), so basically
      it's a synchronous uber-complex BEAM circuit, and it's actually been around
      almost as long as BEAM; the first one appeared in 1993, I believe. What if
      we combined the two, a synchronous digital neural network with immense
      flexibility and learning capacity, with an asynchronous digital nerve net,
      traditional BEAM, as a lower level interface?
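      The core of that ZISC / CAM idea -- present an input, let every stored pattern measure its distance in parallel, and take the closest match -- can be sketched in a few lines of C ( the sizes, the L1 distance, and all names here are illustrative, not the behaviour of any particular ZISC chip ):

        /* Rough sketch of content-addressable, nearest-match recognition:
           each stored prototype reports its distance to the input and the
           closest one supplies the response.  Illustrative only. */
        #include <stdint.h>
        #include <stdlib.h>

        #define N_NEURONS   8
        #define PATTERN_LEN 16

        typedef struct {
            uint8_t prototype[PATTERN_LEN];
            uint8_t category;            /* response associated with this pattern */
        } Neuron;

        static Neuron net[N_NEURONS];

        /* Real CAM hardware compares all prototypes in parallel; here we
           just loop and keep the smallest L1 distance. */
        uint8_t classify(const uint8_t input[PATTERN_LEN])
        {
            unsigned best = ~0u;
            uint8_t category = 0;
            for (int n = 0; n < N_NEURONS; ++n) {
                unsigned d = 0;
                for (int i = 0; i < PATTERN_LEN; ++i)
                    d += abs((int)input[i] - (int)net[n].prototype[i]);
                if (d < best) { best = d; category = net[n].category; }
            }
            return category;
        }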

