Re: Wire Computing? A Theory
--- In firstname.lastname@example.org, "cweubanks" wrote:
> Yes, I understand the problems with trying to implement high-level BEAM circuits. Unless someone had both the resources and the motivation to integrate BEAM circuits on a microscopic level, BEAM has nearly been stretched to its limits. And I have indeed considered what you're saying before; I just like applying BEAM wherever I can. But at least consider that the application of BEAM, both in physical control and in power control, could be an effective method of significantly increasing the energy efficiency of the robot. That's something BEAM is universally recognized for: being adapted to consume as little power as possible while doing as much with it as possible.
> Let me just illustrate: picture a typical quadruped, with a microcontroller and probably between 2 and 5 motors/servos. Now picture it having one or more solar engines, perhaps with a back-up nocturnal SE or, even better, two nocturnal SEs modified to detect each other's charge level, thus pulsing intermittently, plus an on-board rechargeable battery in case no light is present. So far this robot, while under complete digital control, is relying on BEAM just for its share of bread.
> Now envision that it uses an op-amp bicore as its clock, which can tune itself between, say, 0 and 20 MHz. It sums analog signals from the solar engines, from sensor outputs, and from the DAC pin on the microcontroller through droidmakr's summation neuron, and controls how fast the computer is running, and thus how much power it's consuming. And assume that the microcontroller is running code that allows it to control this dynamic clock based on how much information is being processed or how strong the sensor signals are (although low power will always override these, being on a more heavily weighted input).
> Now say that there is an Nv core between the computer and the motors. This may only shave a few hundred bytes off the program, and save even less RAM, but a bicore can be integrated onto the same chip that drives the motors, without decreasing driving power or drawing excess power (if designed carefully), so the computer can slow down significantly without the robot slowing down in turn. Heck, the system may be set up so that the computer only even turns on when sensor signals are detected, saving loads of power.
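[Aside on the dynamic-clock idea quoted above: here's a rough software model of a summation neuron acting as a CPU throttle. All the weights, the 0-20 MHz range, and the input values are made-up illustration numbers, not a tested design, and droidmakr's actual summation neuron is an analog circuit, not code. In C:]

    /* Toy model: a weighted sum of normalized inputs steers a clock target. */
    #include <stdio.h>

    #define F_MAX_HZ 20000000.0  /* hypothetical 20 MHz ceiling */

    /* Each input is normalized to 0.0..1.0; low solar charge dominates,
       matching the "low power always overrides" rule described above. */
    static double summation_neuron(double solar, double sensor, double dac)
    {
        double sum = 0.6 * solar + 0.25 * sensor + 0.15 * dac;
        if (sum < 0.0) sum = 0.0;
        if (sum > 1.0) sum = 1.0;
        return sum;
    }

    int main(void)
    {
        /* half-charged solar engine, strong sensor hit, idle CPU request */
        double a = summation_neuron(0.5, 0.9, 0.1);
        printf("clock target: %.2f MHz\n", a * F_MAX_HZ / 1e6);
        return 0;
    }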
> Good to see some activity here, since it's been quiet. BEAM certainly has its place in the evolution of AI and I hope it won't go silent. It has been a great low-voltage segue to Arduino for me personally.
> It's a great, accessible way to get started in robotics - everyone in the group has always been supportive and helpful.
> There are some things still unique to BEAM which I hope to see continued... here and in peripheral areas:
> - aesthetic inspiration taken from nature, which is also inherently simple in similar ways
> - it's physically durable; more complexity = more room for failure
> - fosters some very creative solutions for low power consumption
> One can practically build a working cockroach primarily from BEAM concepts (runs from light, uses antennae to stay near walls, etc.). And obviously the cockroach isn't extinct (much to many folks' chagrin) - it's likely to be the only thing left at some point, too. So I think there are just some applications that fit BEAM better than others, and I hope it will continue to encourage people to learn and build. Using SMD with some of the older projects, there could be entirely new applications due to the size alone, I'm sure. New types of sensors, perhaps? RFID applications? Wearable computing? Smart grid applications?
> I'd love to see some projects here that combine those best-of-BEAM principles with microcontroller intelligence. An uberBEAM step in evolution is definitely needed.
> --- In email@example.com, "Amit" amitjones101@ wrote:
> > Connor,
> > I used to think as you did, and have come around to what David is telling you now. Don't think of it as discouragement, just as pointing in a more cardinal direction. I find myself building these last few robots just for fun, not as serious research projects.
> > Even though the spider replica stalled here, I continued designing and fooling around with some of the circuits. The end result was an A1-sized sheet of paper, quite densely populated, and a single working leg. I realised that was just a bit ridiculous for the precise control of 10 motors. Like David said, if I were to build the circuits, the boards would be larger than the original spyder planform.
> > As for computer control of those circuits, in one case it was simply connecting the uP outputs through diodes to the Nv bias points, and in the broader case, all the A-net parameter resistors are selected through 4096 switches and shift registers via the uP. So no hard work there; the secret's out of the bag.
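[Aside on the shift-register trick Amit mentions: the uP just clocks a bit pattern out to the register chain, whose outputs close analog switches across the parameter resistors. A minimal sketch, assuming a generic 74HC595-style register; the pin helpers here are stand-in stubs that just print, since the real ones would toggle GPIOs:]

    #include <stdio.h>
    #include <stdint.h>

    static void set_data(int v)   { printf("DATA=%d ", v); }
    static void pulse_clock(void) { printf("CLK "); }
    static void pulse_latch(void) { printf("LATCH\n"); }

    /* Shift one byte out MSB-first, then latch it to the outputs. */
    static void shift_out(uint8_t pattern)
    {
        for (int bit = 7; bit >= 0; --bit) {
            set_data((pattern >> bit) & 1);
            pulse_clock();
        }
        pulse_latch();
    }

    int main(void)
    {
        shift_out(0x09);  /* e.g. close switches 0 and 3 to pick two resistors */
        return 0;
    }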
> > OTOH, I have managed to get a neural-network-based robot with 8 motors, using models similar to bicores etc., to run on an old pocket PC, with plenty of resources left to spare.
> > We've come a long way since the 80's and 90's.
> > For my minor this semester, I get to do a thematic study on the electrical engineering of autonomous robots. We get to play with FPGAs and the like. I'll have the dedicated time to build something which requires lots of sensors and several motors. I'll see how that goes.
> > --- In firstname.lastname@example.org, "David Buckley" wrote:
> > >
> > > Connor
> > > You made a start, but I don't think you yet know enough about microcontrollers/computers, what they can do, and how to interface them to the real world.
> > > You can work with analog values in programs just as easily as you can with straight logic levels.
> > > What you should not confuse is writing a conventional program to control a robot (say) with building a brain in software (which may be the equivalent of some connected NNs), supplying inputs to that brain just as you do with a hardware NN, and having the brain control the robot.
> > > There is a limit to the complexity of NN hardware you can build, and although that theoretical limit has not even been approached, people find a limit to what they can actually construct; that practical limit is shown by the pretty simple published BEAM designs.
> > > If BEAM in hardware is going anywhere then by now there ought to be designs to control complex hexapods or humanoids with twenty plus motors and say fifty odd sensors for a start. But there aren't any!
> > > Even if you spent the time building such a design using conventional circuit techniques, it is going to be bigger than the robot. Of course, you could implement most of it in an FPGA or something, but then you have a problem in selecting the connections and housing the mass of resistors and capacitors needed; and remember, you need to be able to alter their values to tune the operation of the NNs. Have you ever tried soldering/unsoldering tiny SMT resistors and capacitors?
> > > If you build a model of a NN in software, you can fit many such models in common microcontrollers. You can change the 'resistor' and 'capacitor' values by editing the source code in an IDE.
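[To make David's point concrete, here is one possible way to model a single Nv neuron in software: an RC discharge feeding an inverter threshold, stepped numerically. The R, C, and threshold values here are illustrative, not taken from any published BEAM design; retuning means editing two #defines instead of reaching for the soldering iron.]

    #include <stdio.h>

    #define R   1.0e6        /* ohms   (hypothetical) */
    #define C   0.22e-6      /* farads (hypothetical) */
    #define VCC 5.0
    #define VTH (VCC / 2.0)  /* inverter switching threshold */
    #define DT  0.001        /* simulation step, seconds */

    int main(void)
    {
        double v = VCC;  /* cap voltage right after an input edge */
        double t = 0.0;

        /* The output stays "fired" while the cap discharges through R. */
        while (v > VTH) {
            v -= (v / (R * C)) * DT;  /* Euler step of dv/dt = -v/RC */
            t += DT;
        }
        printf("Nv pulse width ~ %.3f s (RC = %.3f s)\n", t, R * C);
        return 0;
    }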
> > > The talk of NNs taking the load off the computer is nonsense. Your smartphone could run models of all the NNs required to control the hexapod or humanoid I mentioned before while you were watching a movie on it, and you wouldn't even notice.
> > > The main problem is that there is a human in the design loop, and it is quite obvious that the techniques of nearly two decades ago are inappropriate; they are the stumbling blocks to exploring what BEAM circuits can do.
> > > If you look at RoboSapien from over a decade ago - where are all the resistors and capacitors that make it work? They are values inside the CPU!
> > > MT had moved on along with the technology.
> > > David
I guess what I'm trying to convey is that, true, BEAM can no longer live up to its original function, but it has adapted over the past 20 years, becoming suited to a new niche instead of dying off, one that allows it to live on alongside digital circuitry. Don't get me wrong, I completely understand what you mean. Microcontrollers have basically outdone BEAM in every way; I can't disagree with any of you on that. But I do feel that having some discrete BEAM circuitry on-board can help save a lot of power. If we recognize this and go the extra mile, then perhaps our bots, in turn, will walk, swim, and crawl the extra mile. Enjoy, Connor.
P.S. I like the way cweubanks brought it out.
P.P.S. I need to look up what a "pocket PC" is.
Yeah, the usability bit is a primary focus of mine. Just for fun, really, I've taken a stab at a very traditional style, basically using a set of counters in place of an actual processing unit. At its simplest, it lacks the hardware to perform Boolean logic operations beyond one's and two's complement, but those can still be used to simulate logic functions in a few cycles. It can also simulate bit shifting easily enough by multiplying or dividing by 2, and it places quotients and remainders into different registers for easy handling of remainders. Floating point math isn't difficult, either, and it can even perform <, =, > comparisons between values. As a matter of fact, I can't really say that any electronic computer has ever been built in this fashion. I'm pretty much basing the design entirely on DigiComp2, a mechanical 4-bit binary computer distributed as an educational toy from 1968-1976.
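To show what I mean about faking bit operations on a counter machine, here's a little C demo of the identities involved. These are generic arithmetic tricks, not the actual DigiComp2 mechanism:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t a = 0x5A;

        uint8_t not_a = (uint8_t)~a;        /* one's complement = logical NOT */
        uint8_t neg_a = (uint8_t)(~a + 1);  /* two's complement = negate      */
        uint8_t shl   = (uint8_t)(a * 2);   /* shift left  = multiply by 2    */
        uint8_t shr   = a / 2;              /* shift right = divide by 2      */

        /* quotient and remainder land in separate "registers" */
        uint8_t q = a / 10, r = a % 10;

        printf("a=%02X NOT=%02X NEG=%02X <<1=%02X >>1=%02X q=%d r=%d\n",
               a, not_a, neg_a, shl, shr, q, r);
        return 0;
    }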
Yes, the 1-bit processor array concept is in fact a cellular automaton, which is why I refer to each unit as a "cell". I don't entirely understand bandwidth yet, but the idea doesn't really focus on that; it's about robustness of the system, as well as massively parallel processing without most of the usability problems. I would also think it much more flexible, because a key construct is that each cell can alter its connectivity with its neighbors. It would take several orders of magnitude more component failures to trash the system than with traditional hardware, so it could be incredibly fault tolerant. I'm also thinking along the lines that the entire system would be programmed as a whole, so that determining how each cell should connect can be left up to the OS shell. And even if bandwidth restricts how quickly information is processed, another perk of the idea is that a very large amount of data could be processed at once.
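To make the "cell" idea a bit more concrete, here's a toy in C: a ring of 1-bit cells, each carrying flags for which neighbors it currently listens to. The XOR update rule and the one rewired cell are arbitrary choices, just to show the reconfigurable-connectivity part:

    #include <stdio.h>

    #define N 16

    int main(void)
    {
        int state[N] = {0}, next[N];
        int use_left[N], use_right[N];

        state[N / 2] = 1;  /* seed a single live cell */
        for (int i = 0; i < N; ++i) { use_left[i] = 1; use_right[i] = 1; }
        use_right[3] = 0;  /* cell 3 has rewired itself to ignore one neighbor */

        for (int step = 0; step < 8; ++step) {
            for (int i = 0; i < N; ++i) {
                int l = use_left[i]  ? state[(i + N - 1) % N] : 0;
                int r = use_right[i] ? state[(i + 1) % N]     : 0;
                next[i] = l ^ r;  /* each cell is literally one bit of logic */
            }
            for (int i = 0; i < N; ++i) {
                state[i] = next[i];
                putchar(state[i] ? '#' : '.');
            }
            putchar('\n');
        }
        return 0;
    }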
On a side note, I once came up with an idea for a machine that was mostly electronic, but stored data temporarily as photon states (like, particle for 0 and wave for 1), and would be able to take advantage of the fact that photons, being 4-dimensional objects, can move in more directions than we can perceive, and thus allow the machine to literally do everything at once. What I mean is that each new cycle would take place in the same time frame as the last cycle, so that it could register an infinite amount of data in about a billionth of a second or so. It would only ever have to go forward in time if it needed to write a result back to main memory or update I/O, because the way it works, the events that occurred in previous steps literally would have never happened, so the electronic memory wouldn't be able to remember such a result, and the outside world could only observe the final state of the program, if there was one.
Fundamentally it is a photon-based delay line with a negative delay. As in, instead of the delay propagating forward in time, it "rewinds" time slightly. So the potential would be literally instant computation: a stack of infinite size could be fed into the computer and processed in less than a billionth of a second, and an entire program run, branches and subroutines included, could be accomplished in the same amount of time. Only writing data back to memory or porting to the I/Os would really take any time at all. Also, the program counter would have to be photon-based, somehow, since if it were electronic, it wouldn't be able to remember which program line to go to next after time was rewritten again.
The only thing I can see being interpreted as dangerous with this is that it does, indeed, rewrite time. But it only rewrites about a billionth of a second each time, and it doesn't affect outside events whatsoever. It has absolutely no way to affect reality.
> For myself, life is catching up with me. Come Monday, I'll be starting a
> new degree ( one not even tangentially related to my first ), so I've been
> rushing around trying to get all that in order -- no time for seriously
> thinking about robotics at all.
> I've only got a minute or two now, but, a few comments. The massively
> parallel 1-bit processors sound a bit like a cellular automaton type
> system. I remember having seen once ( but can I find it now? of course not!
> ) a computer system that was being developed in that vein, compiler and
> all. There is certainly potential for quite a bit of performance, but for
> maximum performance, the bottleneck is often memory bandwidth, and not,
> strictly, computational. A large number of processors with a handful of
> neighbors and a 1-bit interconnect is not going to help in that line.
> To be honest, much of the architecture design lately has been targeted at
> increasing performance ( adding parallel instruction sets, vectorizability,
> hyperthreads, etc. ) but because of memory access issues and programming
> concurrency issues, simple small instructions and a minimal set of fully
> atomic instructions have seemed to offer the best balance of usability and
> performance. No one has really been able to demonstrate an architecture
> that is both highly performant and efficient in the face of concurrency (
> and many parallel computational units ) while remaining easy to program. I
> think what can be said about "traditional" architectures, is that they are
> easy to understand and they work "well enough."
> Back to work...
> Martin Jay McKee