## Re: Wire Computing? A Theory

Message 1 of 60, Jul 18 9:06 PM
Recently, I've been thinking very heavily about PWM-based processing systems, and it seems to me that a PWM system would be almost synonymous with a unary computer: every place value in a number is exactly equivalent to every other, basically a tally-mark system. Zero is simply undefined. If you think about it, a PWM signal can be conceived as a seamless stream of 1s. I've been having a little difficulty implementing it proficiently, though. Addition, subtraction, and multiplication all work fine, but I had to develop a complicated algorithm to find quotients, and it's still not quite finished because I haven't found a way to deal with remainders yet. The problem is that the processor doesn't know the value of the signal it's receiving until the signal reaches its outer boundary. Splitting the signal into smaller segments works fine for the other three operations, but it presents resolution issues with division: each partial quotient carries a small error, and over time these errors accumulate quickly enough to produce an off result unless complex measures are taken. At least in my experience.
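To make the division problem concrete, here is a minimal software model (a sketch, not the actual PWM hardware): a unary value is just a pulse count, and division is repeated subtraction, which happens to yield the remainder for free once the whole value is known.

```python
# Toy model of unary (tally) arithmetic: a value is just a count of
# pulses, so "zero" is simply the absence of pulses.  Division is
# repeated subtraction, and whatever pulses are left over when a full
# subtraction is no longer possible are the remainder -- but note this
# only works once the ENTIRE value has been received, which is exactly
# the streaming problem described above.

def unary_div(dividend, divisor):
    """Divide by repeatedly subtracting the divisor."""
    quotient = 0
    while dividend >= divisor:
        dividend -= divisor
        quotient += 1
    return quotient, dividend  # (quotient, remainder)

print(unary_div(17, 5))  # (3, 2)
```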
Much more recently I've decided to start looking at FM-based computing systems, and this seems much easier to me. An FM system still isn't required to read the entire value before it starts operating on it, but the division-error issue disappears because all signals are exactly the same length and are measured by how many pulses occur within that set time frame. The computer knows exactly where the outer boundary is, so multiplication and division become just as simple as addition and subtraction. Registers in a unary computer can basically just be capacitors storing a unary value in analog format. I also figure the hardware would be simpler to construct, because the FM system processes the data in a more dynamic fashion than the PWM system.

I theorize that mass memory could be accomplished with a single capacitor by introducing separate data at different voltages, with each value represented by charge. Each variable could then be reconstructed by draining the capacitor until the voltage falls to the threshold of the original voltage minus the voltage of the variable being extracted, with the actual value determined by the charge remaining in the capacitor. This way the voltage profile acts as the address for a variable, and the variable itself is encoded within the same signal as the charge profile. So a single super-capacitor, or a parallel bank of them, could store every piece of data the computer will ever need to remember, or even that plus program space; this might be the equivalent of gigabytes or even terabytes of binary data. Also, running in the kHz range, an FM computer could directly modulate analog servos and process analog FM radio signals.

The whole problem everyone has had with the concept of unary code (outside of trying to comprehend how it only consists of one value; I really don't see how it's that difficult) is that the memory would be exponentially larger than binary.
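The fixed-window idea can be sketched in software too (again a toy model, with hypothetical names, not real FM hardware): a value is the number of pulses counted inside one fixed window, so once the window closes the full value is in hand and all four operations reduce to ordinary counter arithmetic.

```python
# Toy model of the FM scheme: every signal occupies the same fixed
# time window, and its value is the number of pulses counted inside
# that window.  Because the window boundary is known in advance, the
# full value is available the moment the window closes, and division
# is no harder than addition.

WINDOW = 1.0  # window length in seconds (arbitrary choice)

def fm_pulses(freq_hz):
    """Pulse timestamps for a constant-frequency burst."""
    return [i / freq_hz for i in range(int(freq_hz * WINDOW))]

def count_pulses(pulse_times):
    """Value of an FM signal = pulses falling inside the window."""
    return sum(1 for t in pulse_times if 0.0 <= t < WINDOW)

a = count_pulses(fm_pulses(12))  # 12 pulses in the window
b = count_pulses(fm_pulses(4))   # 4 pulses
print(a + b, a - b, a * b, divmod(a, b))  # 16 8 48 (3, 0)
```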
But it can easily be converted to an analog format, because unary signals are almost interchangeable with PWM or FM signals, and can be formatted as AM just as easily. In fact, using the same scheme as the mass memory, I could send a virtually unlimited number of different input signals through a single pin on the chip, and likewise with the output pin! I think this type of technology could go from breadboard to IC pretty quickly if it can prove itself. So who's ready for a 4-pin low-voltage IC that's more powerful than your PC or laptop?
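For what it's worth, the voltage-stacked memory scheme can be modeled abstractly (class and method names are all hypothetical, and real capacitors would add leakage, dielectric absorption, and noise). One property the model makes visible: because reading means draining down to a level, layers written above that level are destroyed, so the scheme behaves like a LIFO stack rather than random-access memory.

```python
# Toy model of the "voltage-stacked capacitor" memory idea: each
# variable is deposited at its own voltage level, and reading a
# variable means draining the capacitor down to that level.  Draining
# destroys every layer written above the one being read, so this is
# effectively a stack, not random-access storage.

class VoltageStackMemory:
    def __init__(self):
        self._layers = []  # (voltage_level, value) pairs, bottom first

    def write(self, voltage, value):
        """Deposit a variable at a new, higher voltage level."""
        if self._layers and voltage <= self._layers[-1][0]:
            raise ValueError("each layer must sit above the last")
        self._layers.append((voltage, value))

    def read(self, voltage):
        """Drain down to the requested level; layers above are lost."""
        while self._layers:
            level, value = self._layers.pop()
            if level == voltage:
                return value
        raise KeyError("no layer at that voltage")

mem = VoltageStackMemory()
mem.write(1.0, 42)
mem.write(2.0, 7)
print(mem.read(2.0))  # 7
print(mem.read(1.0))  # 42
```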
Enjoy, Connor.
P.S. I also noticed we lost about 280 members since this topic began. Not sure how to interpret that.

--- In beam@yahoogroups.com, Martin McKee wrote:
>
> I've thought along some of the same lines as well. I've got a whole pile of
> four quadrant analog multipliers in my junk box. Along with op-amps, there
> are no reasonable arithmetic operations that cannot be implemented.
> Combine that with some simple wave form generators, and I think you have
> the beginnings of an extremely powerful and flexible control system. I
> just have never gotten around to doing more than think about it in
> passing. But, I do think it is possible to do so efficiently ( though not
> with the components I have at the moment; they expect +/- 10V rails! ).
> It seems like a logical progression to move in both directions. I have
> been thinking along the lines of making BEAM even more analog by using
> op-amps and continuous values, while adding a digital control unit. I am
> still up in the air about whether it is better to use pulse width modulation or
> a programmable resistor for control voltages though. PWM has two distinct
> advantages: 1) it needs no external components and, 2) it can be easily
> disconnected ( simply make the pin an input ).
>
> All my thinking has been along the lines of a five to eight motor walker (
> not twelve for cost reasons ). I agree that a two motor walker is unlikely
> to place much of a "walking" load on a processor. Although I don't think
> it would make the program much more complicated, I do think that the
> combination of BEAM and a microcontroller would allow the processor to be
> decoupled from the time constraints of walking and that could simplify the
> structure of the program ( if not greatly reduce the size ). In the end, I
> have always looked at the combination in a leveled manner, somewhat akin to
> the Subsumption Architecture developed at MIT in the '80s. There would be
> a low-level analog ( BEAM ) control system that is closely coupled with a
> microcontroller that deals only with "emergency" situations and basic
> control. There would then be a mid-layer "general learning" system that
> dealt with optimizing the robot's basic behaviors. At the very top would
> be a "planning logic" module that deals with big picture, long term,
> planning. At each of the lower levels, it seems fully reasonable to
> combine BEAM and digital, at the very top level... I'm not sure.
>
> Honestly, I love to see many different approaches to the issue. Although
> digital control has taken over the bulk of the robotics market, analog-like
> control has definite advantages in certain areas. For one, it can be lower
> power if properly designed. That has been discussed sufficiently at this
> point though I think. Analog is also "instant." It works at the speed of
> electricity. On the other hand, digital control has latency limits that
> can only be overcome by adding a faster processor ( or, sometimes,
> programming smarter ). Analog also makes it easy to "sum" many different
> signals from different sources. The availability of bias points in analog
> control circuits allows for reflexes from sensors or control from above and
> the systems can remain completely decoupled. In a digital setting, that
> modularity can be harder to come by.
>
> I've been working on the design of an op-amp based servo pulse generator
> circuit that I will, then, couple to a leg pattern generator. The ultimate
> platform is targeted as being a 9-motor, 4-leg walker with an independent
> head. Over all, the system should be able to walk and survive as a typical
> BEAM walker but it will contain a tightly coupled microcontroller wired to
> the different bias points in the control system. The processor I'm looking
> at is probably an AtMega328. That way I could make the board fully Arduino
> compatible. Just plug it into the computer and it will act like any other
> Arduino. Not precisely what I would do if it were just for myself, but it
> seems to make sense if I am going to try to integrate this into the robot
> club that already works with Arduinos.
>
> Martin Jay McKee
Message 60 of 60, Aug 15, 2013
Yeah, the usability bit is a primary focus of mine. Just for fun, really, I've taken an approach in a very traditional style, basically using a set of counters in place of an actual processing unit. At its simplest, it lacks the hardware to perform Boolean logic operations beyond one's and two's complement, but those can still be used to simulate logic functions in a few cycles. It can also simulate bit shifting easily enough by multiplying or dividing by 2, and it places quotients and remainders into different registers for easy handling of remainders. Floating-point math isn't difficult either, and it can even perform <, =, > comparisons between values. As a matter of fact, I can't really say that any electronic computer has ever been built in this fashion. I'm pretty much basing the design entirely on DigiComp2, a mechanical 4-bit binary computer distributed as an educational toy from 1968 to 1976.
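A rough sketch of what such a counter-style ALU would compute (a software model only; function names are mine, not part of any real design): shifts become multiply or divide by 2, division fills separate quotient and remainder registers, comparison falls out of subtraction, and even bitwise AND can be simulated with nothing but arithmetic.

```python
# Counter-style ALU sketch: no Boolean logic hardware, only
# increment/decrement-flavored arithmetic.

def shift_left(x):
    return x * 2               # equivalent to x << 1

def shift_right(x):
    return divmod(x, 2)[0]     # equivalent to x >> 1

def divide(a, b):
    """Quotient and remainder land in separate 'registers'."""
    return divmod(a, b)

def compare(a, b):
    """<, =, > comparison using only subtraction."""
    d = a - b
    if d < 0:
        return '<'
    if d > 0:
        return '>'
    return '='

def bitwise_and(a, b):
    """Simulate AND arithmetically: peel off low bits with
    divmod(x, 2) and rebuild the result bit by bit."""
    result, place = 0, 1
    while a and b:
        a, abit = divmod(a, 2)
        b, bbit = divmod(b, 2)
        if abit and bbit:
            result += place
        place *= 2
    return result

print(divide(17, 5))        # (3, 2)
print(bitwise_and(12, 10))  # 8
```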
Yes, the 1-bit processor array concept is in fact a cellular automaton, which is why I refer to each unit as a "cell". I don't entirely understand bandwidth yet, but the idea doesn't really focus on that; it's about robustness of the system, and about massively parallel processing without most of the usability problems. I would also expect it to be much more flexible, because a key construct is that each cell can alter its connectivity with its neighbors. It would take several orders of magnitude more component failures to trash the system than with traditional hardware, so it could be incredibly fault tolerant, and I'm thinking along the lines that the entire system would be programmed as a whole, so that determining how each cell should connect can be left up to the OS shell. Even if bandwidth restricts how quickly information is processed, another perk of the idea is that a very large amount of data could be processed at once.
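The reconfigurable-connectivity idea can be illustrated with a tiny simulation (the XOR update rule and the ring wiring below are placeholder choices of mine, not anything from the actual proposal): each cell holds one bit, carries its own list of neighbor links that can be rewired at any time, and all cells update in lockstep.

```python
# Toy 1-bit cell array: each cell holds one bit and its own
# (reconfigurable) list of neighbor indices.  All cells update
# synchronously; here each cell becomes the XOR of its neighbors.

class Cell:
    def __init__(self):
        self.state = 0
        self.links = []  # indices of neighbor cells; rewire freely

def step(cells):
    """One synchronous update: read all states, then write all."""
    snapshot = [c.state for c in cells]
    for c in cells:
        c.state = 0
        for i in c.links:
            c.state ^= snapshot[i]

cells = [Cell() for _ in range(4)]
for i, c in enumerate(cells):
    c.links = [(i - 1) % 4, (i + 1) % 4]  # ring wiring, can be changed
cells[0].state = 1
step(cells)
print([c.state for c in cells])  # [0, 1, 0, 1]
```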
On a side note, I once came up with an idea for a machine that was mostly electronic, but stored data temporarily as photon states (say, particle for 0 and wave for 1). It would take advantage of the notion that photons, being 4-dimensional objects, can move in more directions than we can perceive, and thus allow the machine to literally do everything at once. What I mean is that each new cycle would take place in the same time frame as the last cycle, so that it could register an unlimited amount of data in about a billionth of a second. It would only ever have to go forward in time if it needed to write a result back to main memory or update I/O, because the way it works, the events that occurred in previous steps would literally never have happened: the electronic memory wouldn't be able to remember such a result, and the outside world could only observe the final state of the program, if there was one. Fundamentally it is a photon-based delay line with a negative delay; instead of the delay propagating forward in time, it "rewinds" time slightly. So the potential would be literally instant computation: a stack of unlimited size could be fed into the computer and processed, branches and subroutines included, in less than a billionth of a second. Only writing data back to memory or porting to the I/Os would take any time at all, and only the program's final result could be observed from outside, since each step in between would never have happened in our timeline. The program counter would also have to be photon-based somehow, since an electronic one wouldn't remember which program line to go to next after time was rewritten. The only thing I can see being interpreted as dangerous here is that it does, indeed, rewrite time.

But it only rewrites about a billionth of a second each time, and it doesn't affect outside events whatsoever. It has absolutely no way to affect reality.

--- In beam@yahoogroups.com, Martin McKee wrote:
>
> For myself, life is catching up with me. Come Monday, I'll be starting a
> new degree ( one not even tangentially related to my first ), so I've been
> rushing around trying to get all that in order -- no time for seriously
> thinking about robotics at all.
>
> I've only got a minute or two now, but, some few comments. The massively
> parallel 1-bit processors sounds a bit like a cellular automaton type
> system. I remember having seen once ( but can I find it now? of course not!
> ) a computer system that was being developed in that vein, compiler and
> all. There is certainly potential for quite a bit of performance, but for
> maximum performance, the bottleneck is often memory bandwidth, and not,
> strictly, computational. A large number of processors with a handful of
> neighbors and a 1-bit interconnect is not going to help in that line.
>
> To be honest, much of the architecture design lately has been targeted at
> increasing performance ( adding parallel instruction sets, vectorizability,
> hyperthreads, etc. ) but because of memory access issues and programming
> concurrency issues, simple small instructions and a minimal set of fully
> atomic instructions have seemed to have the best balance of usability and
> performance. No one has really been able to demonstrate an architecture
> that is both highly performant and efficient in the face of concurrency (
> and many parallel computational units ) while remaining easy to program. I
> think what can be said about "traditional" architectures, is that they are
> easy to understand and they work "well enough."
>
> Back to work...
>
> Martin Jay McKee