'Everyone knows...run significantly more efficiently...' - that is really a
myth put about by nerds still stuck in the 80s or even earlier.
The quests for super efficiency and speed for experimental prototypes are
also off in the same make-believe land.
The problem as I see it is that nobody has a clue how to use Beam technology
to do more than jiggle things around a bit. MT - with those shovel-sized
hands of his (he is a big guy) - created beautiful and expertly crafted
critters which were for the time quite amazing, and the jiggling was finely
tuned by a perceptive mind, but still all they did was jiggle around a bit.
And that really is where Beam is at - things jiggle around a bit.
Now one, if not the main, driving force for Beam (besides that MT could see
how to do it) was the fact that digital computers are not fault-tolerant:
get a bit error and the program crashes. Beam circuits are fault-tolerant,
and that was MT's argument, which I think got him funded.
However, today's digital technology is a lot more stable, and bit errors are
so rare that for all practical experimental purposes they can be ignored.
So where can Beam go? Choosing different coloured paint or programs or chips
or .... for Beam robots isn't going to progress things.
Although Beam is an excellent introduction to getting things to jiggle about
a bit, it will advance IF AND ONLY IF people build working Beam critters
that actually do something comparable to what is achievable with non-Beam
technology.
Only by building and trying out more complex architectures will the way
ahead become clearer because, as has been demonstrated time after time, the
world isn't really the way the people who theorise think it is.
How can such architectures be built?
A table-sized breadboard with hundreds of amplifiers and Schmitt triggers
and inverters and resistors and capacitors and ... - I think not, too many
wires to come loose.
A mass of components soldered on prototype boards - been there, done that in
the 1980s - too difficult to change things.
A specially designed gate array - a bit pointless if you have no experience
of slightly simpler architectures.
A software model of a brain - sounds more like it, far easier to implement.
Anyway, for what it is worth, the latter is the route I am taking. Each of
my robots has a model of a brain constructed in software, and that brain
processes messages from sensors and controls actuators depending on the
current behaviour - that is, the current BEHAVE model, the current
instantiation of variables that the brain Has (Have) which tell it how to
Behave (Be Have), i.e. choose from available behaviours. For my small
robots the behaviours are instantiations from the classes bold, timid, fast,
slow, like-light, like-dark.
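To make that concrete, here is roughly the shape of it in C - a minimal
sketch only, with all the names and numbers invented for illustration, not a
lift from my actual code:

#include <stdint.h>

typedef struct {
    uint8_t speed;        /* 0 = slow ... 255 = fast                     */
    int8_t  light_gain;   /* positive = like-light, negative = like-dark */
    uint8_t timidity;     /* how hard a bump backs it off; 0 = bold      */
} Behaviour;

static const Behaviour bold_fast_light = { 200,  40,  0 };
static const Behaviour timid_slow_dark = {  60, -40, 30 };

static const Behaviour *current = &bold_fast_light;

/* Sensor messages are always interpreted through the current behaviour. */
static void brain_step(uint8_t left_light, uint8_t right_light,
                       int16_t *left_motor, int16_t *right_motor)
{
    int16_t diff = (int16_t)left_light - (int16_t)right_light;
    /* steer toward (like-light) or away from (like-dark) the brighter side */
    *left_motor  = current->speed - (diff * current->light_gain) / 128;
    *right_motor = current->speed + (diff * current->light_gain) / 128;
}

Switching behaviour is then just repointing 'current' at a different
instantiation; none of the sensor or actuator code changes.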
On top of that are instructions I give to the robot, which may be embedded
routines, remembered routines, or immediate commands over an IR or radio
link. But since those commands are processed by the brain under its current
BeHave mode, the robot is fault-tolerant.
Also the remembered routines can be modified by commands or other routines,
so the robot can learn.
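Again only a sketch with invented names, but it shows the mechanism:
commands go through the brain rather than straight to the motors, and
remembered routines are plain data that commands or other routines are
allowed to rewrite.

#include <stdint.h>

#define ROUTINE_LEN 16
enum { OP_STOP, OP_FORWARD, OP_TURN };

typedef struct { uint8_t op; uint8_t arg; } Step;
static Step remembered[ROUTINE_LEN];     /* a remembered routine */

static uint8_t behave_max_speed = 60;    /* set by the current behaviour */

static void execute(uint8_t op, uint8_t arg) { (void)op; (void)arg; /* drive actuators */ }

/* Commands over IR/radio are vetted by the current BeHave mode, so a
 * corrupted or silly command cannot push the robot past its behaviour -
 * that is where the fault tolerance comes from. */
void on_command(uint8_t op, uint8_t arg)
{
    if (op == OP_FORWARD && arg > behave_max_speed)
        arg = behave_max_speed;          /* timid/slow caps the request */
    execute(op, arg);
}

/* Learning: a command, or another routine, rewrites a remembered step. */
void learn_step(uint8_t index, uint8_t op, uint8_t arg)
{
    if (index < ROUTINE_LEN) {
        remembered[index].op  = op;
        remembered[index].arg = arg;
    }
}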
Does it matter what language or language implementation is used?
Actually yes, because lots won't run on small processors.
Others are 'smack your hands if you don't abide by the rules' languages
which make it especially difficult to build state machines which don't have
a fixed sequence - the sequences need to be controlled totally by external
data (a sketch of what I mean follows below). When was the last time you saw
a brain which only has fixed sequences, or even only sets of fixed
sequences?
Others won't allow convenient data storage for embedded or remembered
routines.
Others seem to be written by people who despise any interaction with the
real world at all.
And yet more whose authors think I have all day to wait while their
compiler/loaders do their job, or who think I really love typing in command
strings.
And finally no, using a Pi or Beagle XXX or even an Arduino with stacks of
shields just so I can type in more command strings and have access to a
filing system isn't an option when they take more power than the actuators.
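For anyone wondering what I mean by sequences controlled totally by external
data, here is a toy sketch in C (everything invented for illustration): the
code contains no fixed sequence at all - the transition table, which could
itself live in RAM and be rewritten, decides everything.

#include <stdint.h>

enum { WANDER, SEEK_LIGHT, HIDE, N_STATES };
enum { EV_DARK, EV_BRIGHT, EV_BUMP, N_EVENTS };

/* next_state[state][event]: pure data, modifiable at run time */
static uint8_t next_state[N_STATES][N_EVENTS] = {
    /*               EV_DARK     EV_BRIGHT  EV_BUMP */
    /* WANDER     */ { SEEK_LIGHT, WANDER,    HIDE },
    /* SEEK_LIGHT */ { SEEK_LIGHT, WANDER,    HIDE },
    /* HIDE       */ { HIDE,       WANDER,    HIDE },
};

static uint8_t state = WANDER;

void on_event(uint8_t event)
{
    state = next_state[state][event];    /* data, not code, decides */
}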
----- Original Message -----
Sent: Wednesday, July 24, 2013 6:05 AM
Subject: [beam] Re: Wire Computing? A Theory
Does anyone have a suggestion as to how to access an AVR's underlying
hardware levels directly? I want to compile the program on my laptop and use
a compiler to write it to the micro's flash directly in machine code format,
because I view it as an optimal utilization of the micro's resources,
particularly smaller ones with limited resources like ATtinys or PICmicros.
Everyone knows that machine-coded programs run significantly more
efficiently than compiled high-level programs, and they use less memory
because many functions and operators can be represented by merely a few bits
in machine code, as well as machine code having no need for the syntax and
idiosyncrasies that high-level code presents. While machine coding is very
difficult and slow to do, my computer can do it for me, and I'm free to
write the code in whatever language I choose. Personally, I like Lisp,
although I'm barely familiar with it. I could also do Java, Lua, C, etc. So
if there are any tips for that, I could use some. Thanks, Connor.
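The usual flow being asked about looks like this: avr-gcc compiles straight
to machine code, avr-objcopy converts it to a hex image, and avrdude writes
the micro's flash. A minimal sketch, assuming an ATtiny85 and a USBtinyISP
programmer, neither of which the question specifies:

/* blink.c
 * compile: avr-gcc -mmcu=attiny85 -DF_CPU=1000000UL -Os -o blink.elf blink.c
 * convert: avr-objcopy -O ihex blink.elf blink.hex
 * flash:   avrdude -c usbtiny -p attiny85 -U flash:w:blink.hex
 */
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= _BV(PB0);            /* make PB0 an output */
    for (;;) {
        PORTB ^= _BV(PB0);       /* toggle the pin     */
        _delay_ms(500);
    }
}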
P.S. I also learned about something called a ZISC (Zero Instruction Set
Computer) architecture. It's basically like a synchronous digital nervous
network, versus the asynchronous digital nerve nets used in BEAM. It only
contains a handful of "neurons" - not comparable to Nvs, I don't think - but
still, a ZISC computer's only purpose is pattern recognition and response,
and they also tend to use Content Addressable Memory (CAM), so basically
it's a synchronous uber-complex BEAM circuit. It's actually been around
almost as long as BEAM; the first one appeared in 1993, I believe. What if
we combined the two - a synchronous digital neural network with immense
flexibility and learning capacity, with an asynchronous digital nerve net,
traditional BEAM, as a lower-level interface?