Re: Wire Computing? A Theory
- Good to see some activity here, since it's been quiet. BEAM certainly has its place in the evolution of AI, and I hope it won't go silent. For me personally it has been a great low-voltage segue into Arduino.
It's a great, accessible way to get started in robotics - everyone in the group has always been supportive and helpful.
There are some things still unique to BEAM which I hope to see continued... here and in peripheral areas:
- aesthetic inspiration taken from nature, which is inherently simple in similar ways
- physical durability; more complexity means more room for failure
- it fosters some very creative solutions for low power consumption
One can practically build a working cockroach primarily from BEAM concepts (runs from light, uses antennae to stay near walls, etc.). And obviously the cockroach isn't extinct (much to many folks' chagrin) - it's likely to be the only thing left at some point, too. So I think there are just some applications that fit BEAM better than others, and I hope it will continue to encourage people to learn and build. Using SMD parts with some of the older projects, there could be entirely new applications due to the size alone, I'm sure. New types of sensors, perhaps? RFID applications? Wearable computing? Smart grid applications?
I'd love to see some projects here that combine those best-of-BEAM principles with microcontroller intelligence. An uberBEAM step in evolution is definitely needed.
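Just to make that combination concrete, here's a rough sketch (mine, in Python, with made-up names) of the kind of thing a microcontroller-plus-Nv hybrid would model: a ring of Nv-style one-shots where the RC time constant is just a variable, so the "hardware" can be retuned on the fly from sensor data. A sketch only, not anyone's actual design:

```python
# Toy software model of an Nv (nervous neuron) ring. Each neuron is a
# one-shot whose active time stands in for the hardware R*C constant;
# because tau is just a variable, a supervising program could retune it
# at runtime. All class/function names here are hypothetical.

class NvNeuron:
    def __init__(self, tau):
        self.tau = tau        # software stand-in for the R*C time constant
        self.timer = 0.0      # time remaining in the active state

    @property
    def active(self):
        return self.timer > 0

    def trigger(self):
        self.timer = self.tau

    def tick(self, dt):
        """Advance time; return True on the falling edge (pulse hand-off)."""
        if self.timer <= 0:
            return False
        self.timer -= dt
        return self.timer <= 0

def run_ring(neurons, steps, dt=0.1):
    """Propagate a single process pulse around the ring; log active indices."""
    log = []
    for _ in range(steps):
        for i, n in enumerate(neurons):
            if n.tick(dt):                               # neuron timed out...
                neurons[(i + 1) % len(neurons)].trigger()  # ...pass pulse on
        log.append([i for i, n in enumerate(neurons) if n.active])
    return log

ring = [NvNeuron(tau=0.3), NvNeuron(tau=0.3), NvNeuron(tau=0.3)]
ring[0].trigger()
print(run_ring(ring, steps=6))   # [[0], [0], [1], [1], [2], [2]]
```

The pulse walks around the ring exactly like a hardware Nv loop driving motor phases, but nothing stops a supervising routine from rewriting `tau` between steps.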
--- In email@example.com, "Amit" <amitjones101@...> wrote:
> I used to think as you did, and have come around to what David is telling you now. Don't think of it as discouragement, just as pointing in a more promising direction. I find myself building these last few robots just for fun, not as a serious research project.
> Even though the spider replica stalled here, I continued designing and fooling around with some of the circuits. The end result was an A1-sized sheet of paper which was quite populated, and a single working leg. I realised that was just a bit ridiculous for the precise control of 10 motors. Like David said, if I were to build the circuits, the boards would be larger than the original spider's planform.
> As for computer control of those circuits, in one case it was simply connecting the uP outputs through diodes to the Nv bias points; in the broader case, all the A-net parameter resistors are selected through 4096 switches and shift registers via the uP. So no hard work there; the secret's out of the bag.
> OTOH, I have managed to get a neural-network-based robot with 8 motors, using models similar to bicores etc., to run on an old Pocket PC, with plenty of resources left to spare.
> We've come a long way since the '80s and '90s.
> For my minor this semester, I get to do a thematic study on the electrical engineering of autonomous robots. We get to play with FPGAs and the like. I'll have the dedicated time to build something which requires lots of sensors and several motors. I'll see how that goes.
> --- In firstname.lastname@example.org, "David Buckley" <david@> wrote:
> > Connor
> > You made a start, but I don't think you yet know enough about microcontrollers/computers, what they can do, and how to interface them to the real world.
> > You can work with analog values in programs just as easily as you can with straight logic levels.
> > What you should not confuse is writing a conventional program to control a robot (say) with building a brain in software (which may be the equivalent of some connected NNs), then supplying inputs to that brain just as you do with a NN, and having that brain control the robot.
> > There is a limit to the complexity of NN hardware you can build. Although that limit has not even been approached, people hit a practical limit to what they can actually construct, and that limit is shown by the pretty simple published BEAM designs.
> > If BEAM in hardware is going anywhere, then by now there ought to be designs to control complex hexapods or humanoids with twenty-plus motors and, say, fifty-odd sensors for a start. But there aren't any!
> > Even if you spent the time building such a design using conventional circuit techniques, it is going to be bigger than the robot. Of course you could implement most of it in an FPGA or something, but then you have a problem in selecting the connections and housing the mass of resistors and capacitors needed - and remember, you need to be able to alter their values to tune the operation of the NNs. Have you ever tried soldering/unsoldering tiny SMT resistors and capacitors?
> > If you build a model of a NN in software, you can fit many such models in common microcontrollers. You can change the 'resistor' and 'capacitor' values by editing the source code in an IDE.
> > The talk of NNs taking the load off the computers is nonsense. Your smartphone could run models of all the NNs required to control the hexapod or humanoid I mentioned before while you were watching a movie on it, and you wouldn't even notice.
> > The main problem is that there is a human in the design loop and it is quite obvious that the techniques of nearly two decades ago are inappropriate and are the stumbling blocks to exploring what BEAM circuits can do.
> > If you look at RoboSapien from over a decade ago - where are all the resistors and capacitors that make it work? They are values inside the CPU!
> > MT had moved on along with the technology.
> > David
> > ----- Original Message -----
> > From: connor_ramsey@
> > To: email@example.com
> > Sent: Friday, July 12, 2013 5:57 PM
> > Subject: [beam] Re: Wire Computing? A Theory
> > --- In firstname.lastname@example.org, "David Buckley" wrote:
> > >
> > > Connor
> > > I think you will find that in the "ROM units were constructed entirely
> > from wires" the wires passed through ferrite toroids and read the
> > direction in which the toroids were magnetised. Writing such a ROM (sic)
> > involved remagnetising the toroids one way or the other to signify 0 or
> > 1.
> > > As you recently found out, microcontrollers can now do anything an
> > analog BEAM circuit can do; there may be exceptions, but they are
> > probably too difficult to understand to be usable. Times have changed a
> > lot since MT invented Nervous Nets and coined BEAM. Back then a cheap
> > microcontroller development board would cost maybe $300 (not $300 in
> > today's money, $300 in the money of the day!). Today a much more powerful
> > board can be bought for $20, and for $40 you can buy a board which is
> > orders of magnitude more powerful than the desktop PCs of the era when
> > BEAM was started.
> > > Back then a programming-language compiler or an interpreted language
> > would cost more than the board itself; now good ones are free or are
> > included in the price of the board or chip.
> > > So, compared to then, the sorts of microcontroller systems that are
> > easily available now are (at least) 100 times more powerful, at one
> > hundredth of the cost.
> > > That is why MT used analog circuitry then, and why today you should be
> > using a microcontroller to implement the sort of ideas developed in
> > BEAM. You can build many BEAM circuits in software and have them
> > controlled by other software, and since it is software you can alter the
> > equivalents of the resistor and capacitor values on the fly depending
> > on, say, the sensor values.
> > > Building BEAM circuits from resistors and capacitors and gates is good
> > for getting a basic understanding of circuits, but it is never going to
> > result in a robot which can do anything useful; you would need too much
> > circuitry.
> > > David
> > >
> > >
> > >
> > > ----- Original Message -----
> > > From: connor_ramsey@
> > > To: email@example.com
> > > Sent: Thursday, July 11, 2013 9:04 PM
> > > Subject: [beam] Wire Computing? A Theory
> > True, true, but even Mark himself stated: "Choosing analog or digital
> > control is like saying you'll live on food or water." Today's
> > microcontrollers may be able to handle every aspect of a robot's
> > behavior on their own, but I like to hold firm to the ultimately
> > intended application of BEAM: to take a majority of that load off the
> > computer. I believe that by installing the microcontroller on top of a
> > hardwired Nv network, the BEAM circuit achieves its optimal
> > implementation, and the computer, likewise, achieves its optimal
> > potential.
> > The microcontroller is still able to manipulate the weights and sensor
> > connections of the underlying core loops just as easily as if it were
> > doing all the work itself, except it doesn't have to. You're basically
> > achieving the same outward performance a microcontroller alone could
> > yield, but with much higher virtual efficiency; and you're achieving
> > the same robustness the BEAM circuit alone would yield, but with far
> > greater control over the core loops' behavior than otherwise possible.
> > So I trust Mark's opinion about what his own invention is purposed
> > toward.
> > I'll just leave off with a simple way to think of it: If BEAM provides
> > the nervous system, then the computer is the equivalent of an endocrine
> > system; both must be present in order for both to function in full.
> > Enjoy, Connor.
- Yeah, the usability bit is a primary focus of mine. Just for fun, really, I've taken an approach in a very traditional style, basically using a set of counters in place of an actual processing unit. At its simplest, it lacks the hardware to perform Boolean logic operations beyond one's and two's complement, but these can still be used to simulate logic functions in a few cycles. It can also simulate bit shifting easily enough by multiplying or dividing by 2, and it places quotients and remainders into different registers for easy handling of remainders. Floating point math isn't difficult, either, and it can even perform <, =, > comparisons between values. As a matter of fact, I can't really say that any electronic computer has ever been built in this fashion. I'm pretty much basing the design entirely on the DigiComp2, a mechanical 4-bit binary computer distributed as an educational toy from 1968-1976.
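Those counter tricks are easy to sketch in Python (helper names are mine, and I'm assuming an 8-bit word purely for illustration):

```python
# Sketch of the arithmetic-only counter-machine primitives described above.
# All function names are hypothetical; an 8-bit word is assumed.

BITS = 8
MASK = (1 << BITS) - 1          # 255 for an 8-bit word

def ones_complement(x):
    """Bitwise NOT done the counter way: (2^n - 1) - x."""
    return (MASK - x) & MASK

def twos_complement(x):
    """Arithmetic negation: one's complement plus one."""
    return (ones_complement(x) + 1) & MASK

def shift_left(x):
    """Bit shift left, simulated by multiplying by 2."""
    return (x * 2) & MASK

def shift_right(x):
    """Bit shift right, simulated by integer division by 2."""
    return x // 2

def divide(x, y):
    """Division dropping quotient and remainder into separate 'registers'."""
    return x // y, x % y        # (quotient register, remainder register)

def compare(x, y):
    """<, =, > decided from the sign of a subtraction."""
    d = x - y
    return '<' if d < 0 else ('=' if d == 0 else '>')

print(ones_complement(0b1010))                   # 245, i.e. 0b11110101
print(shift_left(0b0011), shift_right(0b0110))   # 6 3
print(divide(17, 5))                             # (3, 2)
print(compare(4, 9))                             # <
```

Everything here reduces to counting, adding, and subtracting, which is the point: logic and shifts fall out of complement arithmetic in a few cycles rather than needing dedicated gates.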
Yes, the 1-bit processor array concept is in fact cellular automata, which is why I refer to each unit as a "cell". I don't entirely understand bandwidth yet, but the idea doesn't really focus on that; it's about robustness of the system, as well as massively parallel processing without most of the usability problems. I would also think it much more flexible, because a key construct is that each cell can alter its connectivity with its neighbors. It would take several orders of magnitude more component failures to trash the system than with traditional hardware, so it could also be incredibly fault tolerant. I'm thinking along the lines that the entire system would be programmed as a whole, so that determining how each cell should connect can be left up to the OS shell. Also, even if bandwidth restricts how quickly information is processed, another perk of the idea is that a very large amount of data could be processed at once.
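A toy model of that cell idea, just to show the mechanics (every name here is hypothetical, and the update rule is an arbitrary stand-in): connectivity is plain data, so a cell can be rewired at runtime and its neighbors can route around a failed unit.

```python
# Toy model of a 1-bit "cell" array with reconfigurable connectivity.
# Each cell holds one bit, a mutable list of neighbor indices, and an
# alive flag so we can simulate component failure.

class Cell:
    def __init__(self, state=0, neighbors=None):
        self.state = state
        self.neighbors = list(neighbors or [])   # reconfigurable at runtime
        self.alive = True

    def next_state(self, cells):
        # OR of live neighbor states, standing in for any 1-bit update rule.
        inputs = [cells[i].state for i in self.neighbors if cells[i].alive]
        return 1 if any(inputs) else 0

def step(cells):
    # Compute all next states first, then commit, so updates are synchronous.
    new = [c.next_state(cells) if c.alive else 0 for c in cells]
    for c, s in zip(cells, new):
        if c.alive:
            c.state = s

# A ring of 5 cells, each initially listening to both ring neighbors.
cells = [Cell(state=i % 2, neighbors=[(i - 1) % 5, (i + 1) % 5])
         for i in range(5)]
cells[2].alive = False          # simulate a component failure
cells[1].neighbors = [0, 3]     # the survivors rewire around the dead cell
cells[3].neighbors = [1, 4]
step(cells)
print([c.state for c in cells])  # [1, 1, 0, 1, 1]
```

The dead cell drops out, but because the neighbor lists are just data, the remaining cells keep propagating state around it; that is the fault-tolerance argument in miniature.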
On a side note, I once came up with an idea for a machine that was mostly electronic but stored data temporarily as photon states (say, particle for 0 and wave for 1), and would take advantage of the notion that photons, being 4-dimensional objects, can move in more directions than we can perceive, and thus allow the machine to literally do everything at once. What I mean is that each new cycle would take place in the same time frame as the last cycle, so that it could register an infinite amount of data in about a billionth of a second or so. It would only ever have to go forward in time if it needed to write a result back to main memory or update I/O, because the way it works, the events that occurred in previous steps literally would have never happened; the electronic memory wouldn't be able to remember such a result, and the outside world could only observe the final state of the program, if there was one. Fundamentally it is a photon-based delay line with a negative delay: instead of the delay propagating forward in time, it "rewinds" time slightly. So the potential would be literally instant computation - a stack of infinite size could be fed into the computer and processed in less than a billionth of a second, and an entire program run, branches and subroutines included, could be accomplished in the same amount of time. Only writing data back to memory or porting to the I/Os would really take any time at all. The program counter would have to be photon-based somehow, since if it were electronic, it wouldn't be able to remember which program line to go to next after time was rewritten again. The only thing I can see being interpreted as dangerous with this is that it does, indeed, rewrite time.
But it only rewrites about a billionth of a second each time, and it doesn't affect outside events whatsoever. It has absolutely no way to affect reality.
> For myself, life is catching up with me. Come Monday, I'll be starting a
> new degree ( one not even tangentially related to my first ), so I've been
> rushing around trying to get all that in order -- no time for seriously
> thinking about robotics at all.
> I've only got a minute or two now, but, some few comments. The massively
> parallel 1-bit processors sound a bit like a cellular automaton type
> system. I remember having seen once ( but can I find it now? of course not!
> ) a computer system that was being developed in that vein, compiler and
> all. There is certainly potential for quite a bit of performance, but for
> maximum performance the bottleneck is often memory bandwidth, and not,
> strictly, computation. A large number of processors, each with a handful
> of neighbors and a 1-bit interconnect, is not going to help in that line.
> To be honest, much of the architecture design lately has been targeted at
> increasing performance ( adding parallel instruction sets, vectorizability,
> hyperthreads, etc. ), but because of memory access issues and programming
> concurrency issues, simple small instructions and a minimal set of fully
> atomic instructions have seemed to offer the best balance of usability and
> performance. No one has really been able to demonstrate an architecture
> that is both highly performant and efficient in the face of concurrency (
> and many parallel computational units ) while remaining easy to program. I
> think what can be said about "traditional" architectures is that they are
> easy to understand and they work "well enough."
> Back to work...
> Martin Jay McKee