Re: [beam] Re: Wire Computing? A Theory
- I've been working on the design of an op-amp based servo pulse generator circuit that I will then couple to a leg pattern generator. The ultimate platform is targeted as a 9-motor, 4-leg walker with an independent head. Overall, the system should be able to walk and survive as a typical BEAM walker, but it will contain a tightly coupled microcontroller wired to the different bias points in the control system. The processor I'm looking at is probably an AtMega328. That way I could make the board fully Arduino compatible: just plug it into the computer and it will act like any other Arduino. Not precisely what I would do if it were just for myself, but it seems to make sense if I am going to try to integrate this into the robot club that already works with Arduinos.

Honestly, I love to see many different approaches to the issue. Although digital control has taken over the bulk of the robotics market, analog control has definite advantages in certain areas. For one, it can be lower power if properly designed; that has been discussed sufficiently at this point, I think. Analog is also "instant": it works at the speed of electricity, while digital control has latency limits that can only be overcome by adding a faster processor ( or, sometimes, programming smarter ). Analog also makes it easy to "sum" many different signals from different sources. The availability of bias points in analog control circuits allows for reflexes from sensors or control from above, and the systems can remain completely decoupled. In a digital setting, that modularity can be harder to come by.

I've thought along some of the same lines as well. I've got a whole pile of four-quadrant analog multipliers in my junk box. Along with op-amps, there are no reasonable arithmetic operations that cannot be implemented. Combine that with some simple waveform generators, and I think you have the beginnings of an extremely powerful and flexible control system.
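The "summing" advantage mentioned above has a neat closed form: an ideal inverting op-amp summer weights each input by its own resistor, so a sensor reflex and control-from-above bias can share one node without knowing about each other. Here's a quick numerical sketch; the component values are purely illustrative, not from any actual circuit in this thread.

```python
# Sketch of the analog "summing" point above: an inverting op-amp summing
# junction combines any number of bias/sensor voltages, each weighted by
# its own input resistor. Component values below are illustrative only.

def summing_amp_out(inputs, rf):
    """Ideal inverting summer: Vout = -Rf * sum(Vi / Ri)."""
    return -rf * sum(v / r for v, r in inputs)

# A sensor reflex ( 0.5 V through 10k ) plus a "control from above" bias
# ( 1.0 V through 20k ), with a 10k feedback resistor:
vout = summing_amp_out([(0.5, 10e3), (1.0, 20e3)], 10e3)   # -> -1.0 V
```

Each source sees only its own resistor, which is exactly the decoupling argument: adding another signal is just another entry in the list.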
I just have never gotten around to doing more than think about it in passing. But I do think it is possible to do so efficiently ( though not with the components I have at the moment; they expect +/- 10v rails! ). It seems like a logical progression to move in both directions. I have been thinking along the lines of making BEAM even more analog by using op-amps and continuous values, while adding a digital control unit. I am still up in the air about whether it is better to use pulse width modulation or a programmable resistor for control voltages, though. PWM has two distinct advantages: 1) it needs no external components and 2) it can be easily disconnected ( simply make the pin an input ).

All my thinking has been along the lines of a five- to eight-motor walker ( not twelve, for cost reasons ). I agree that a two-motor walker is unlikely to place much of a "walking" load on a processor. Although I don't think it would make the program much more complicated, I do think that the combination of BEAM and a microcontroller would allow the processor to be decoupled from the time constraints of walking, and that could simplify the structure of the program ( if not greatly reduce its size ). In the end, I have always looked at the combination in a layered manner, somewhat akin to the subsumption architecture developed at MIT in the '80s. There would be a low-level analog ( BEAM ) control system closely coupled with a microcontroller that deals only with "emergency" situations and basic control. There would then be a mid-layer "general learning" system that dealt with optimizing the robot's basic behaviors. At the very top would be a "planning logic" module that deals with big-picture, long-term planning. At each of the lower levels, it seems fully reasonable to combine BEAM and digital; at the very top level... I'm not sure.
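On the PWM-versus-programmable-resistor question: with an RC low-pass filter on the pin, a PWM output behaves like a settable bias voltage whose mean is just Vcc times the duty cycle, at the cost of some ripple. A small sketch of the trade-off, using the standard small-ripple approximation for an RC filter; the supply, filter values, and 490 Hz Arduino-style PWM frequency are assumptions for illustration.

```python
# Sketch: why PWM can stand in for a programmable resistor when setting a
# bias voltage. An RC low-pass filter averages the pulse train; the mean
# output is Vcc * duty, and the ripple shrinks as f_pwm * R * C grows.

def pwm_bias_voltage(vcc, duty, r_ohms, c_farads, f_pwm):
    """Mean bias voltage and approximate peak-to-peak ripple of RC-filtered PWM.

    Small-ripple approximation: Vpp ~= Vcc * duty * (1 - duty) / (f_pwm * R * C)
    """
    v_mean = vcc * duty
    v_ripple = vcc * duty * (1.0 - duty) / (f_pwm * r_ohms * c_farads)
    return v_mean, v_ripple

# 5 V supply, 50% duty, 10k / 1 uF filter, 490 Hz Arduino-style PWM:
v, ripple = pwm_bias_voltage(5.0, 0.5, 10e3, 1e-6, 490.0)
```

The "easily disconnected" advantage still holds: tri-state the pin and the filter cap simply floats at whatever the analog network pulls it to.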
Martin Jay McKee

On Wed, Jul 17, 2013 at 2:24 PM, connor_ramsey@... <connor_ramsey@...> wrote:
The energy advantages of conjugating BEAM and microcontrollers in a robot are certainly obvious, but I really do wonder about how many resources it actually saves the program. In a simple two-motor walker, the only obvious software performance advantage is that the program doesn't have to remember how to make the robot walk; the body already knows how, it just needs to know where to take it. This sounds great, but in practice David's right: the difference in the program's size is barely noticeable compared to what the mc has to offer, and you could just write a single chunk of code that remembers how to walk and use it like a function throughout the rest of the code.

But imagine building a more complicated body, like a 12-motor walker. You can still do the same thing, but the chunk of code that specifies how to control the motors is going to be far larger than before. Applying BEAM to this situation will make a significant dent in resource availability, again with added energy benefits. Now imagine an even more anatomically derived chassis, a biped that has to balance itself to walk. This type of chassis would probably need a much larger chunk of code to control it, since it has to work even to stand still, let alone move in any direction, and it would constantly have to run fast enough to simulate a continuous feedback loop on almost all of the motors. The code block you would have to write for this would be enormous ( at least by microcontroller standards ), so a much more efficient way to implement control in this design would be to use easily influenced OISM circuits to balance the legs automatically, so that the programmer can write a program just as simple as in the two-motor walker if he/she chooses. Although the program could specifically position the body, the underlying circuitry always makes sure it's as balanced as possible. And it also allows the computer to get some beauty sleep.
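The "walking as a function" point above can be made concrete with a phase-offset gait generator: the same routine serves a two-motor or a 12-motor body, and only the list of outputs grows. The motor counts, period, and amplitude here are illustrative assumptions, not from anyone's actual walker.

```python
# Sketch of "walking" wrapped up as one reusable chunk of code: a
# phase-offset gait generator, the software analog of a BEAM pattern
# generator. Parameters are illustrative, not from an actual robot.
import math

def gait_positions(t, n_motors, period=1.0, amplitude=30.0):
    """Servo offsets ( degrees ) for each motor at time t.

    Each motor runs the same oscillation, phase-shifted so the legs
    step in sequence around the body.
    """
    return [amplitude * math.sin(2 * math.pi * (t / period + i / n_motors))
            for i in range(n_motors)]

# A two-motor walker and a 12-motor walker call the exact same function;
# only the number of outputs changes.
two = gait_positions(0.25, 2)
twelve = gait_positions(0.25, 12)
```

What this simple version can't do, of course, is close the loop: the balancing biped needs feedback per motor per cycle, which is exactly where the resource argument above starts to bite.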
You know, I've looked into programmable BEAM circuits before. I started with simple BEAM circuits with tunable parts, but the concept went all the way to representing data as points in a wave function. I never really got around to how to store this wave statically using a simple circuit, but the idea was that the wave would travel in a loop. A sensor signal would tune a timer to close the output line at a specific point in the wave. This signal would bias the robot's core, and if the action was successful, the point would remain unchanged, but failure would alter it in some way until it was successful. Basically the goal was to make an uber improvement on Bruce's learning circuit: it would be able to respond uniquely to a multitude of different situations rather than just one.

Theoretically the wave could be stored in a res-cap network, with analog switches linking the caps to ground to lock a charge on each one when powered off. I figured the gaps between points would fill themselves in, so that if the robot encountered a scenario it didn't know how to respond to, it could try something similar to how it responded in the most similar situation it recognized. The larger the network, the higher the resolution at which the wave could be retained. I figured it might not work that well, though.

I just thought of this as I'm writing: instead of storing the wave itself, create a circuit that can perform an equation that represents a wave that represents points of data. Believe it or not, I've actually had a schematic for an analog arithmetic unit lying around here, and I think I could use electronically tuned resistors as memory devices to program this AU to perform a polynomial algebraic sequence on an input variable to yield the desired data point on an emulated grid chart. This data point would bias a neuron in the control loop, and if it fails, then the equation is altered.
I'm not sure if it's plausible, and it's definitely not practical with the advent of today's processors, but no evil mad genius super-scientist like me could possibly resist the challenge. XD. Enjoy, evil mad genius super-scientist Connor.
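The keep-on-success / alter-on-failure loop Connor describes can be sketched in a few lines of software. The success window, the fixed perturbation step, and the failure condition below are all stand-in assumptions ( a real circuit would likely alter the stored point far less predictably ), just to show the shape of the idea.

```python
# Minimal sketch of the learning rule above: a stored response point is
# left alone when the action works and altered when it fails, until it
# lands somewhere that works. All parameters are illustrative.

def learn(success_low, success_high, start=0.0, step=0.5, max_tries=100):
    value = start
    for tries in range(1, max_tries + 1):
        if success_low <= value <= success_high:   # success: keep the point
            return value, tries
        value += step                              # failure: alter the point
    raise RuntimeError("no successful response found")

value, tries = learn(3.0, 4.0)   # settles at 3.0 on the 7th attempt
```

The interesting part of the original idea isn't this loop itself but the storage medium, whether a res-cap network holding the wave or tuned resistors holding the polynomial's coefficients.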
--- In email@example.com, Martin McKee wrote:
> The AtTiny85 ( and '25, and '45 ) is, indeed, a very nice chip. In the old
> days I had fun with the '13 but the '25/'45/'85 series are a very nice
> upgrade. I also like the '24/'44/'84 series. It's a 14-pin package
> instead of the 8-pin, but it actually uses slightly less power. I did a
> quick "learn to solder" project with the AtTiny24 and was quite impressed
> with its capabilities vs. price. Another interesting series ( using even
> less power ) are the AtTiny '261/'461/'861 series. They actually include a
> 16-bit timer ( unusual for the Tiny series ) and a high-speed PWM generator
> also. At low-voltages, though, the AtMega328P ( used in the Arduino ) uses
> roughly the same power as the Tinys but it has lots more peripherals. It
> does cost more, however, and a pdip-28 package isn't all that small...
> What I hope, soon ( i.e. when they become available ), to play with are the
> NXP LPC800 series. Much higher running power than the AVRs ( 1mA @ 12MHz
> 3.3v vs. 200uA @ 1MHz 1.8v ) but in sleep they can drop down to the same
> range as the AVRs, ~1uA with a clock still running. And they are 32-bit
> with ( compared to an AVR ) lots of RAM memory ( 4K ). They look ideal for
> something like a learning system that needs to keep track of ( relatively
> speaking ) large amounts of information -- they are also available ( or
> will be ) in a dip-8 package!
> So much for talk of microcontrollers. I've still got designs for easily
> influenceable BEAM networks running through my mind. But, I must admit, the
> use of servomotors is proving to be quite a complicating factor... holding
> the network timings in tight bounds with widely varying voltages is proving
> to be something of an annoyance.
> Martin Jay McKee
- Yeah, the usability bit is a primary focus of mine. Just for fun, really, I've taken an approach in a very traditional style, basically using a set of counters in place of an actual processing unit. At its simplest, it lacks the hardware to perform Boolean logic operations outside of 1's and 2's complement, but these can still be used to simulate logic functions in a few cycles. It can also simulate bit shifting easily enough by multiplying or dividing by 2, and it places quotients and remainders into different registers for easy handling of remainders. Not to mention floating point math isn't difficult, either. It could even perform <, =, > comparisons between values. As a matter of fact, I can't really say that any electronic computer has ever been built in this fashion. I'm pretty much basing the design entirely on DigiComp2, a mechanical 4-bit binary computer distributed as an educational toy from 1968-1976.
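Those counter-machine tricks are easy to check arithmetically: one's complement is subtraction from all-ones, two's complement is negation, shifts are multiply/divide by 2, and division naturally yields a quotient register and a remainder register. A quick sketch, assuming an 8-bit word ( the word size is my assumption, not part of the design above ):

```python
# Sketch of the counter-machine tricks described above, using only
# add/subtract/multiply/divide-style arithmetic. 8-bit word assumed.
WORD = 8
MASK = (1 << WORD) - 1           # 0xFF, the all-ones word

def ones_complement(x):          # bitwise NOT via subtraction from all-ones
    return MASK - x

def twos_complement(x):          # arithmetic negation: NOT x, then + 1
    return (MASK - x + 1) & MASK

def shift_left(x):               # bit shift via multiply by 2
    return (x * 2) & MASK

def shift_right(x):              # bit shift via divide by 2
    return x // 2

def divide(n, d):                # quotient and remainder in separate "registers"
    return n // d, n % d
```

With NOT available this way, the remaining logic functions can be composed over a few cycles, which matches the "simulate logic in a few cycles" claim.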
Yes, the 1-bit processor array concept is in fact cellular automata, which is why I refer to each unit as a "cell". I don't entirely understand bandwidth yet, but the idea doesn't really focus on that. It regards robustness of the system, as well as massive parallel processing without most of the usability problems. I would also think it much more flexible, because a key construct is that each cell can alter its connectivity with its neighbors. It would take several orders of magnitude more component failures to trash the system than with traditional hardware, so it could also be incredibly fault tolerant. I'm thinking along the lines that the entire system would be programmed as a whole, so that determining how each cell should connect can be left up to the OS shell. Also, even if bandwidth restricts how quickly information is processed, another perk of the idea is that a very large amount of data could be processed at once.
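A toy rendering of the 1-bit cell array: each cell holds one bit plus a list of which neighbors it listens to, so the connectivity itself is data the "OS shell" could rewrite. The majority-vote update rule here is purely an illustrative assumption, not part of the proposal above.

```python
# Toy 1-bit cell array with programmable connectivity. Each cell i holds
# bits[i] and listens to the cells in links[i]; the update rule ( majority
# vote of connected neighbors, else keep state ) is an assumption.

def step(bits, links):
    """One synchronous update of every cell."""
    new = []
    for i, neighbors in enumerate(links):
        votes = sum(bits[j] for j in neighbors)
        new.append(1 if 2 * votes > len(neighbors) else bits[i])
    return new

bits  = [1, 0, 1, 0]
links = [[1, 2], [0, 2], [1, 3], [0, 2]]   # each cell chooses its own neighbors
bits2 = step(bits, links)
```

Rewiring a cell is just editing its entry in `links`, and losing a cell only degrades the neighbors that reference it, which is the fault-tolerance argument in miniature.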
On a side note, I once came up with an idea for a machine that was mostly electronic but stored data temporarily as photon states ( like, particle for 0 and wave for 1 ), and would be able to take advantage of the fact that photons, being 4-dimensional objects, can move in more directions than we can perceive, and thus allow the machine to literally do everything at once. What I mean is that each new cycle would take place in the same time frame as the last cycle, so that it could register an infinite amount of data in about a billionth of a second or so. It would only ever have to go forward in time if it needed to write a result back to main memory or update I/O, because the way it works, the events that occurred in previous steps literally would have never happened, and so the electronic memory wouldn't be able to remember such a result, and the outside world could only observe the final state of the program, if there was one. Fundamentally it is a photon-based delay line with a negative delay: instead of the delay propagating forward in time, it "rewinds" time slightly. So the potential would be literally instant computation; a stack of infinite size could be fed into the computer and processed in less than a billionth of a second, and an entire program run, branches and subroutines included, could be accomplished in the same amount of time. Only writing data back to memory or porting to the I/Os would really take any time at all. Also, the program counter would have to be photon based somehow, since if it were electronic, it wouldn't be able to remember what program line to go to next after time was rewritten again. The only thing I can see being interpreted as dangerous with this is that it does, indeed, rewrite time.

But it only rewrites about a billionth of a second each time, and it doesn't affect outside events whatsoever. It has absolutely no way to affect reality.
> For myself, life is catching up with me. Come Monday, I'll be starting a
> new degree ( one not even tangentially related to my first ), so I've been
> rushing around trying to get all that in order -- no time for seriously
> thinking about robotics at all.
> I've only got a minute or two now, but, some few comments. The massively
> parallel 1-bit processors sounds a bit like a cellular automaton type
> system. I remember having seen once ( but can I find it now? of course not!
> ) a computer system that was being developed in that vein, compiler and
> all. There is certainly potential for quite a bit of performance, but for
> maximum performance, the bottleneck is often memory bandwidth, and not,
> strictly, computational. A large number of processors with a handful of
> neighbors and a 1-bit interconnect is not going to help in that line.
> To be honest, much of the architecture design lately has been targeted at
> increasing performance ( adding parallel instruction sets, vectorizability,
> hyperthreads, etc. ) but because of memory access issues and programming
> concurrency issues, simple small instructions and a minimal set of fully
> atomic instructions have seemed to have the best balance of usability and
> performance. No one has really been able to demonstrate an architecture
> that is both highly performant and efficient in the face of concurrency (
> and many parallel computational units ) while remaining easy to program. I
> think what can be said about "traditional" architectures, is that they are
> easy to understand and they work "well enough."
> Back to work...
> Martin Jay McKee