
Re: [beam] Re: Wire Computing? A Theory

  • Martin McKee
    Message 1 of 60, Jul 24, 2013
      @David -- "...despise any interaction with the real world..." That couldn't be Haskell, could it?

      But, yes.  Programming language matters. I think that it matters more from the standpoint of what it makes easy and intuitive than from any efficiency standpoint.  As stated so clearly by David, the idea that direct assembly language programming must necessarily be much more efficient is a ( very tenacious ) fallacy.  There is some small improvement to be had in some cases but it is really negligible for the bulk of a program.  As such, the combination of a high-level language with inline assembly can be exceptionally productive.
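
      As a rough sketch of what that combination looks like with avr-gcc ( the pin and the single nop here are purely illustrative, not from any particular project ):

      #include <avr/io.h>

      /* Ordinary C around one hand-placed instruction: the compiler still
         optimizes everything outside the asm statement.  Build with the
         usual -mmcu=... flag for the target part. */
      static inline void pulse_pb0(void)
      {
          PORTB |= (1 << PB0);           /* plain C: set the pin        */
          __asm__ __volatile__("nop");   /* one hand-chosen delay cycle */
          PORTB &= ~(1 << PB0);          /* plain C: clear the pin      */
      }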

      It could be interesting to implement a high-level language for the AVR ( I have considered it myself with the focus on robotic systems ), and it would be fairly easy to do so if the compiler were written to produce GCC assembly files.  That way, the GCC assembler, linker and object utilities could be used to process the code and prepare it for the microcontroller.  However, the chances of it ending up more efficient than simply programming in C are astronomically small.  The number of man-hours invested in GCC's optimization passes is staggering.  Also, the very nature of high-level languages ( abstraction ) leads to situations that are exceptionally difficult to optimize.  In many ways, C is not a high-level language.  It is rather more like a very advanced macro assembler.  That allows it to be much more optimizable than most languages ( both by machine and by the programmer ).  Having said that, however, good compilers are still able to get most languages within a few percent of hand-written code these days.

      The biggest argument, though, for using something like C on a microcontroller is rather boring.  It is good to use it, because that is the standard.  It is good to use the standard because other people will understand it ( and be comfortable with it ) and because, as such, it will be more widely accepted.  For a hobby project, it is easy to view this as a non-issue, but I would hate to see further development of BEAM concepts fall by the wayside simply because it requires too much of others to understand.  Using a lingua franca minimizes that possibility and maximizes the potential for the ideas to spread.

      There are a number of things that have kept me from using traditional BEAM in my projects.  First, though it can be quite fault tolerant, it almost always requires substantial trimming.  In the realm of production, things just need to work.  That is, often, easier with a digital system.  But even for myself, I have not wanted to try fiddling with fifty resistor/capacitor combinations to get a more advanced walker functional.  Beyond that, BEAM is, traditionally, inflexible.  As much as it allows for emergent behaviors, that emergence is tightly bounded by the layout of the control system.  Again, a main point in this thread ( and in several over the past decade ) is that BEAM must evolve some ability to learn if it is going to grow beyond such constrained behaviors, but such learning is much easier to implement in a digital environment.

      The fact that it is easier ( or more practical ) needn't be sufficient reason to remain wedded to traditional microcontroller/microprocessor based systems.  But it is a powerful enticement.  I think it is clear that nothing short of revolutionary advantages will make most people willing to make a change.  A few percent here or there will have no impact.

      Martin Jay McKee


      On Wed, Jul 24, 2013 at 10:19 AM, David Buckley <david@...> wrote:
       

      'Everyone knows...run significantly more efficiently...' - that is really a
      myth put about by nerds still stuck in the 80s or even earlier.
      The quests for super efficiency and speed for experimental prototypes are
      also off in the same make-believe land.

      The problem as I see it is that nobody has a clue how to use Beam technology
      to do more than jiggle things round a bit. MT - with those shovel sized
      hands of his (he is a big guy) - created beautiful and expertly crafted
      critters which were for the time quite amazing and the jiggling was finely
      tuned by a perceptive mind, but still all they did was jiggle around a bit.
      And that really is where Beam is at - things jiggle around a bit.
      Now one, if not the main, driving force for Beam (besides that MT could see
      how to do it) was the fact that digital computers are not fault tolerant,
      get a bit error and the program crashes. Beam circuits are fault tolerant
      and that was MT's argument which I think got him funded.
      However today's digital technology is a lot more stable and bit errors are
      so rare that for all practical experimental purposes they can be ignored.

      So where can Beam go? Choosing different coloured paint or programs or chips
      or .... for Beam Robots isn't going to progress things.
      Although Beam is an excellent introduction to getting things to jiggle about
      a bit, it will only advance IF AND ONLY IF people build working Beam
      critters that actually do something comparable to what is achievable with
      non Beam technology.

      Only by building and trying out more complex architectures will the way
      ahead become clearer because, as has been demonstrated time after time, the
      world isn't really like what the people who theorise think it is.

      How can such architectures be built?
      A table sized breadboard with hundreds of amplifiers and Schmitt triggers
      and inverters and resistors and capacitors and ... - I think not, too many
      wires to come loose.
      A mass of components soldered on prototype boards - been there done that in
      the 1980s - too difficult to change things.
      A specially designed GateArray - a bit pointless if you have no experience
      of slightly simpler architectures.
      A software model of a brain - sounds more like it, far easier to implement.

      Anyway, for what it is worth, the latter is the route I am taking: each of my
      robots has a model of a brain constructed in software, and that brain
      processes messages from sensors and controls actuators depending on the
      current behaviour, that is the current BEHAVE model, that is the current
      instantiation of variables that the brain Has (Have) which tell it how to
      Be.
      How to Behave (Be Have) - choose from available behaviours. For my small
      robots the behaviours are instantiations from the classes bold, timid, fast,
      slow, like-light, like-dark.
      On top of that are instructions I give to the robot which may be embedded
      routines, remembered routines, or immediate commands over an IR or radio
      link. But since those commands are processed by the brain under its current
      BeHave mode the robot is fault tolerant.
      Also the remembered routines can be modified by commands or other routines
      so the robot can learn.
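
      Purely as an illustration of the general shape of such a brain loop ( the
      field names, message format and light-seeking rule below are invented for
      the sketch, not taken from any actual robot code ):

      #include <stdio.h>

      typedef struct {
          int speed;        /* fast vs slow       */
          int bold;         /* bold vs timid      */
          int likes_light;  /* like-light vs dark */
      } Behaviour;

      typedef struct { int sensor; int value; } Message;

      static Behaviour current = { 1, 1, 1 };     /* the current "Have" */

      static void drive(int demand)               /* stand-in actuator  */
      {
          printf("motor demand %d\n", demand);
      }

      /* Every sensor message and every command passes through the current
         behaviour, so a garbled or missing command degrades gracefully. */
      static void brain_step(Message m)
      {
          int toward = current.likes_light ? m.value : -m.value;
          drive(toward * current.speed * (current.bold ? 2 : 1));
      }

      int main(void)
      {
          Message light_reading = { 0, 5 };
          brain_step(light_reading);
          return 0;
      }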

      Does it matter what language or language implementation is used?
      Actually yes because lots won't run on small processors.
      Others are 'smack your hands if you don't abide by the rules' languages
      which make it especially difficult to build state machines which don't have
      a fixed sequence, where the sequencing is controlled totally by external
      data (see the sketch below) - when was the last time you saw a brain which
      only has fixed sequences, or even only sets of fixed sequences?
      Others won't allow convenient data storage for embedded or remembered
      routines.
      Others seem to be written by people who despise any interaction with the
      real world at all.
      And yet more whose authors think I have all day to wait while their
      compiler/loaders do their job or who think I really love typing in command
      strings.
      And finally no, using a Pi or Beagle XXX or even a Harduino with stacks of
      shields just so I can type in more command strings and have access to a
      filing system isn't an option when they take more power than the actuators.
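
      As a minimal sketch of what is meant by sequences controlled by external
      data ( the table contents and event values are arbitrary ):

      #include <stdio.h>

      enum { N_STATES = 3, N_EVENTS = 2 };

      /* next_state[state][event]: edit this table - even at run time - and
         the "sequence" changes without touching any code. */
      static unsigned char next_state[N_STATES][N_EVENTS] = {
          /* ev0  ev1 */
          {   1,   2 },   /* state 0 */
          {   2,   0 },   /* state 1 */
          {   0,   1 },   /* state 2 */
      };

      int main(void)
      {
          unsigned char state = 0;
          int events[] = { 0, 1, 1, 0 };          /* stand-in sensor data */
          for (int i = 0; i < 4; ++i) {
              state = next_state[state][events[i]];
              printf("event %d -> state %d\n", events[i], state);
          }
          return 0;
      }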



      David

      ----- Original Message -----
      From: connor_ramsey@...
      To: beam@yahoogroups.com
      Sent: Wednesday, July 24, 2013 6:05 AM
      Subject: [beam] Re: Wire Computing? A Theory

      Does anyone have a suggestion as to how to access an AVR's underlying
      hardware levels directly? I want to compile the program on my laptop and use
      a compiler to write it to the micro's flash directly in machine code format.
      Because I view it as an optimal utilization of the micro's resources,
      particularly smaller ones with limited resources like ATtinys or PICmicros.
      Everyone knows that machine coded programs run significantly more
      efficiently than compiled high level programs, and they use less memory
      because many functions and operators can be represented by merely a few bits
      in machine code, as well as machine code having no need for the syntax and
      idiosyncrasies that high level code presents. While machine coding is very
      difficult and slow to do, my computer can do it for me, and I'm free to
      write the code in whatever language I choose. Personally, I like Lisp,
      although I'm barely familiar with it. I could also do Java, Lua, C, etc. So
      if there are any tips for that, I could use some. Thanks, Connor.
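
      For reference, the usual route on the AVR side is the avr-gcc toolchain
      plus avrdude; the sketch below is only a minimal example, and the ATtiny85
      part and usbtiny programmer named in it are placeholders:

      /* Minimal AVR C program; the usual build and flash commands are in
         this comment:
       *
       *   avr-gcc -mmcu=attiny85 -Os -o main.elf main.c
       *   avr-objcopy -O ihex -R .eeprom main.elf main.hex
       *   avrdude -c usbtiny -p t85 -U flash:w:main.hex
       *
       * The compiler already emits raw machine code for the flash; no hand
       * assembly is needed to get there.
       */
      #define F_CPU 1000000UL
      #include <avr/io.h>
      #include <util/delay.h>

      int main(void)
      {
          DDRB |= (1 << PB0);            /* PB0 as output  */
          for (;;) {
              PORTB ^= (1 << PB0);       /* toggle the pin */
              _delay_ms(500);
          }
      }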

      P.S. I also learned about something called a ZISC (Zero Instruction Set
      Computer) architecture. It's basically like a synchronous digital nervous
      network, versus the asynchronous digital nerve nets used in BEAM. It only
      contains a handful of "neurons", not comparable to Nvs, I don't think, but
      still, a ZISC computer's only purpose is pattern recognition and response,
      and they also tend to use Content Addressable Memory (CAM), so basically
      it's a synchronous uber-complex BEAM circuit. It's actually been around
      almost as long as BEAM; the first one appeared in 1993, I believe. What if
      we combined the two, a synchronous digital neural network with immense
      flexibility and learning capacity, with an asynchronous digital nerve net,
      traditional BEAM, as a lower level interface?
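
      As a very rough sketch of the "closest stored pattern wins" idea behind
      such CAM/ZISC chips ( the sizes and sample data here are invented ):

      #include <stdio.h>
      #include <stdlib.h>

      #define N_NEURONS 4
      #define WIDTH     8

      /* stored prototype patterns; contents are arbitrary examples */
      static unsigned char proto[N_NEURONS][WIDTH] = {
          {   0,   0,   0,   0, 255, 255, 255, 255 },
          { 255, 255, 255, 255,   0,   0,   0,   0 },
          {   0, 255,   0, 255,   0, 255,   0, 255 },
          { 128, 128, 128, 128, 128, 128, 128, 128 },
      };

      /* on the real silicon every "neuron" measures its distance to the
         input in parallel; here we simply loop */
      static int classify(const unsigned char *in)
      {
          int best = 0, best_d = 1 << 30;
          for (int n = 0; n < N_NEURONS; ++n) {
              int d = 0;
              for (int i = 0; i < WIDTH; ++i)
                  d += abs((int)in[i] - (int)proto[n][i]);
              if (d < best_d) { best_d = d; best = n; }
          }
          return best;
      }

      int main(void)
      {
          unsigned char sample[WIDTH] = { 10, 5, 0, 3, 250, 240, 255, 251 };
          printf("closest stored pattern: %d\n", classify(sample));
          return 0;
      }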


    • connor_ramsey@ymail.com
      Message 60 of 60, Aug 15, 2013
        Yeah, the usability bit is a primary focus of mine. Just for fun, really, I've taken a very traditional approach, basically using a set of counters in place of an actual processing unit. At its simplest, it lacks the hardware to perform Boolean logic operations outside of 1's and 2's complement, but these can still be used to simulate logic functions in a few cycles. It can also simulate bit shifting easily enough by multiplying or dividing by 2. It also places quotients and remainders into different registers for easy handling of remainders. Not to mention floating point math isn't difficult, either. It could even perform <, =, > comparisons between values. As a matter of fact, I can't really say that any electronic computer has ever been built in this fashion. I'm pretty much basing the design entirely on DigiComp2, a mechanical 4-bit binary computer distributed as an educational toy from 1968-1976.
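
        As a quick check, in ordinary C, that add, subtract, multiply and divide really are enough to fake the usual bit operations ( the values are arbitrary ):

        #include <stdio.h>

        int main(void)
        {
            unsigned x = 0xB6, y = 0x5C;

            unsigned not_x = 0xFF - x;         /* 1's complement via subtraction */
            unsigned shl   = (x * 2) % 256;    /* left shift = multiply by 2     */
            unsigned shr   = x / 2;            /* right shift = divide by 2      */

            /* bit-serial AND built only from /2 and %2 (the remainder register) */
            unsigned and_xy = 0, bit = 1, a = x, b = y;
            while (a && b) {
                if ((a % 2) + (b % 2) == 2)
                    and_xy += bit;
                a /= 2; b /= 2; bit *= 2;
            }

            printf("~x=%02X  x<<1=%02X  x>>1=%02X  x&y=%02X\n",
                   not_x, shl, shr, and_xy);
            return 0;
        }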
        Yes, the 1-bit processor array concept is in fact cellular automata, which is why I refer to each unit as a "cell". I don't entirely understand bandwidth, yet. But the idea doesn't really focus on that. It's about robustness of the system, as well as massively parallel processing without most of the usability problems. I would also think it much more flexible, because a key construct is that each cell can alter its connectivity with its neighbors. It would take several orders of magnitude more component failures to trash the system than with traditional hardware, so it could also be incredibly fault tolerant, and I'm thinking along the lines that the entire system would be programmed as a whole, so that determining how each cell should connect can be left up to the OS shell. Also, even if bandwidth restricts how quickly information is processed, another perk of the idea is that a very large amount of data could be processed at once.
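
        As a toy sketch of the cell idea, with the wiring kept as per-cell data so a cell can change who it listens to ( the majority rule and the sizes are arbitrary choices ):

        #include <stdio.h>

        #define N 16

        static unsigned char state[N];
        static unsigned char listen_left[N], listen_right[N];  /* per-cell wiring */

        static void step(void)
        {
            unsigned char next[N];
            for (int i = 0; i < N; ++i) {
                int votes = state[i];
                int inputs = 1;
                if (listen_left[i])  { votes += state[(i + N - 1) % N]; inputs++; }
                if (listen_right[i]) { votes += state[(i + 1) % N];     inputs++; }
                next[i] = (unsigned char)(2 * votes > inputs);   /* majority rule */
            }
            for (int i = 0; i < N; ++i) state[i] = next[i];
        }

        int main(void)
        {
            for (int i = 0; i < N; ++i) {
                state[i]        = (i % 3 == 0);
                listen_left[i]  = 1;           /* the wiring is data, so a  */
                listen_right[i] = (i % 2);     /* cell can be rewired later */
            }
            step();
            for (int i = 0; i < N; ++i) printf("%d", state[i]);
            printf("\n");
            return 0;
        }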
        On a side note, I once came up with an idea for a machine that was mostly electronic, but stored data temporarily as photon states (like, particle for 0 and wave for 1), and would be able to take advantage of the fact that photons, being 4-dimensional objects, can move in more directions than we can perceive, and thus allow the machine to literally do everything at once. What I mean is that each new cycle would take place in the same time frame as the last cycle, so that it could register an infinite amount of data in about a billionth of a second or so. It would only ever have to go forward in time if it needed to write a result back to main memory or update I/O, because the way it works, the events that occurred in previous steps literally would have never happened, and so the electronic memory wouldn't be able to remember such a result, and the outside world could only observe the final state of the program, if there was one. Fundamentally it is a photon-based delay line with a negative delay. As in, instead of the delay propagating forward in time, it "rewinds" time slightly. So the potential would be literally instant computation, a stack of infinite size could be fed into the computer and processed in less than a billionth of a second, and an entire program run could be accomplished in the same amount of time. Branches and subroutines would be included. Only writing data back to memory or porting to the I/Os would really take any time at all. Only the program's final result could be observed from outside, as each step in between would never have happened in our timeline. Also, the program counter would have to be photon-based, somehow, since if it was electronic, it wouldn't be able to remember what program line to go to next after time was rewritten again. The only thing I can see being interpreted as dangerous with this is that it does, indeed, rewrite time. But it only rewrites about a billionth of a second each time, and it doesn't affect outside events whatsoever. It has absolutely no way to affect reality.

        --- In beam@yahoogroups.com, Martin McKee wrote:
        >
        > For myself, life is catching up with me. Come Monday, I'll be starting a
        > new degree ( one not even tangentially related to my first ), so I've been
        > rushing around trying to get all that in order -- no time for seriously
        > thinking about robotics at all.
        >
        > I've only got a minute or two now, but, some few comments. The massively
        > parallel 1-bit processors sound a bit like a cellular automaton type
        > system. I remember having seen once ( but can I find it now? of course not!
        > ) a computer system that was being developed in that vein, compiler and
        > all. There is certainly potential for quite a bit of performance, but for
        > maximum performance, the bottleneck is often memory bandwidth, and not,
        > strictly, computational. A large number of processors with a handful of
        > neighbors and a 1-bit interconnect is not going to help in that line.
        >
        > To be honest, much of the architecture design lately has been targeted at
        > increasing performance ( adding parallel instruction sets, vectorizability,
        > hyperthreads, etc. ) but because of memory access issues and programming
        > concurrency issues, simple small instructions and a minimal set of fully
        > atomic instructions have seemed to have the best balance of usability and
        > performance. No one has really been able to demonstrate an architecture
        > that is both highly performant and efficient in the face of concurrency (
        > and many parallel computational units ) while remaining easy to program. I
        > think what can be said about "traditional" architectures, is that they are
        > easy to understand and they work "well enough."
        >
        > Back to work...
        >
        > Martin Jay McKee