
Re: [beam] Re: Wire Computing? A Theory

  • Martin McKee
    Message 1 of 60 , Jul 30, 2013
      No conversation killers for me.  I've had a very busy few days, actually ( which may, sadly, continue ).  And I'm still processing the structure of what you are proposing ( just as it takes some work for you to think digitally, I do not find analog immediately obvious ).  At the same time, it seems to me that there are a number of possibilities that analog presents that are difficult ( though never impossible  ) with digital.  Analog circuitry has an immediacy that programmed devices just cannot match.  The only way to come close is with judicious use of hardware peripherals and interrupts -- and that has its own problems.  It's hard to beat an op-amp if you want an arithmetic result NOW.

      In the cracks of my schedule, I've been able to do some very minor work on the simulator for the little "neural net chip" I'm working on, but I've nothing interesting to report yet.  Much of the work, honestly, has been aimed at getting the framework to support a basic world simulation on top of the neural network for testing purposes.

      I do wonder, however, about the finiteness of digital and analog.  They are very similar in practice -- where they vary most is in the theory.  In theory, analog is infinite; but, of course, the noise level of a circuit ( the signal to noise ratio ) imposes an equivalent signal resolution.  Taking an example from what I know, older analog R/C servos had no digital control yet were, effectively, 8-bit.  Any higher resolution in the pulse signals had little or no effect on the servo because of dead-band and noise effects.  Some of the newer servos ( both analog and digital ) are doing better than that, 10-bit, perhaps.  It is also fiendishly difficult to lower the noise level enough to make anything like a 24-bit ADC work at near one LSB.  It's very easy to calculate at 32-bit resolution, however, even on an 8-bit micro.  I would, actually, argue that here digital has the advantage.  Although the issues with getting signals into and out of the processing system are the same, the processing system itself can work at what is essentially a zero noise level.  Or, which I find even more interesting, a noise level chosen by the designer.

      When I worked on a neural network on a chip years back ( a spiking neural network but sans learning algorithm ), I was able to design oscillators that, like the suspended bi-core, shouldn't have worked.  The way things were connected, it should have sat at a fixed point.  When the system was noiseless, in fact, it did.  But I had written the code such that I had control over the noise level ( in the threshold ) of the neuron membrane potential.  No noise and everything stopped, slight noise created stable oscillations, large noise and everything became chaotic.
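      Just to make the threshold-noise idea concrete, here is a minimal Python sketch ( not the actual chip code -- the constants and update rule are made up for illustration ) of two symmetric, mutually inhibiting integrate-and-fire neurons where the only randomness is in the firing threshold.  With zero noise the pair settles just below threshold and never fires; a little noise lets one side fire first, the symmetry breaks, and the spiking starts; a lot of noise makes the pattern irregular.

import random

def simulate(noise_level, steps=200, seed=1):
    """Two mutually inhibiting integrate-and-fire neurons with a noisy threshold."""
    random.seed(seed)
    v = [0.0, 0.0]                    # membrane potentials
    spikes = []                       # spike times of neuron 0
    for t in range(steps):
        # simultaneous update: leak, constant drive, inhibition from the partner
        v = [max(0.9 * v[i] + 0.39 - 0.3 * v[1 - i], 0.0) for i in (0, 1)]
        for i in (0, 1):
            # the designer-chosen noise lives in the threshold
            threshold = 1.0 + random.uniform(-noise_level, noise_level)
            if v[i] >= threshold:
                v[i] = 0.0            # fire and reset
                if i == 0:
                    spikes.append(t)
    return spikes

for noise in (0.0, 0.05, 0.5):
    print(noise, simulate(noise)[:10])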

      The amount of actual state that my little network will deal with is going to be somewhat mind-boggling as well.  There will be four outputs, and each output is driven by, say, fifteen hidden neurons.  In the synapse between neurons, the weight is 8 bits.  That is, the actual state being learned in the second half of the network is 4 * 15 * 8 bits, or 480 bits ( roughly 2^9 ); the clustering section contains 15 * 8 * 8 bits of prototypes, or 960 bits ( roughly 2^10 ), which are also learned.  There are some other small pieces of state that interact with these two major blocks.  While the network does not use a dense encoding of the data, it would be unreasonable to say that it will be limited due to lack of state space.  Rather, the failure will be in my programming, my choice of algorithms, my choice of feedback values, or something else.
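      For anyone who wants to tally it, the learned state comes out like this ( Python, with the sizes taken straight from the paragraph above -- the breakdown is only as exact as that description ):

output_weights = 4 * 15 * 8      # 480 bits, roughly 2^9
prototypes     = 15 * 8 * 8      # 960 bits, roughly 2^10
print(output_weights, prototypes, output_weights + prototypes)   # 480 960 1440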

      So many options...

      Martin Jay McKee


      On Tue, Jul 30, 2013 at 11:48 AM, connor_ramsey@... <connor_ramsey@...> wrote:
       

      Did I kill the conversation with my night-blogging? Sorry about that. :-B Sometimes I just don't know where to stop. But I'm still intrigued by the concept I laid out under the influence of an entire 2 liter bottle of Cherry Coke and insomnia. A digital circuit could run on this concept as well, but the problem is that a finite state machine has finite resolution, so the phase space in the memory register wouldn't be very large -- in other words, a very rigid, finite virtual universe. A 64-bit state machine could address up to 2^64 virtual locations; with a 64-bit word at each, that's 2^64 * 8 bytes, or 128 exbibytes ( 128 * 1024^6 bytes ) -- millions of times the amount of information stored in all the books ever written, and a good fraction of the size of the entire internet. While that's more than anyone would ever use, an analog construct of the same device could, in principle, store the detail of every atomic particle in the universe, and then some. While it's still technically finite, it's so incomprehensibly vast that we can't practically distinguish it from infinite.
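      A quick back-of-the-envelope check of that figure, in Python, assuming 2^64 addressable locations each holding one 64-bit word:

addresses   = 2 ** 64
word_bytes  = 64 // 8
total_bytes = addresses * word_bytes
print(total_bytes)                 # 147573952589676412928
print(total_bytes / 1024 ** 6)     # 128.0 exbibytes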
      But don't get all excited about it. Even though I'm working on a BEAM based universal state machine, it's still a simple state machine, and the problem is that it still has a sort of informational bottleneck. It can process information very quickly, but only a little at a time. While each piece of information can be ambiguously vast, it's still only a piece, out of an ambiguous number of pieces that can only be processed one at a time. I suspect the reason our brains are so large is that they are hugely parallel processors, handling hundreds of millions of ambiguously large pieces of data simultaneously, whereas the machine I'm designing is strictly serial, and thus needs billions of times less circuitry. The exception is the reflex response system, where traditional BEAM comes in handy, since even the simplest Nv loops tend to process different stimuli both in parallel and very quickly. Nervous systems seem to be exceptional at parallel real-time processing, even compared to today's parallel processors.
      Some of you may wonder why I'm so partial toward analog electronics, and it's for the same reason that everyone else is biased toward digital science: my skill set is more developed in analog and neuromimetic sciences than in digital science. I have some fun designing my own hardware systems and instruction sets, but neural networking just comes naturally to me, whereas computer science isn't as integral a part of my personality. I enjoy digital science because it poses a thought challenge, but I enjoy analog science because I more deeply understand the logic behind it, and thus I can make my designs progress faster. Besides, I can build a BEAM circuit faster than I can write a program, because I have to construct a program, whereas I can simply let a schematic flow onto the paper. But most important is that I can design circuits that produce abstract results, while I can't write a program that does the same thing. Since I'm autistic, I rely on my design being able to self-organize its own abstract behavior based on the specifics that I lay out. I find it more difficult to program because high level languages in themselves tend to be abstract, which is why I gravitate toward low level languages -- and those then make it even harder to describe the abstract machine that I have enough trouble understanding myself, because the whole idea of abstraction doesn't settle well with me. I can't even visualize a Turing machine without visualizing it as a specific, physical model.
      So I hope that clears up a bit of my apparent stubbornness about moving away from BEAM. And I hope someone has more ideas to paste onto my growing project. Enjoy, Connor


    • connor_ramsey@ymail.com
      Message 60 of 60 , Aug 15, 2013
        Yeah, the usability bit is a primary focus of mine. Just for fun, really, I've taken an approach in a very traditional style, basically using a set of counters in place of an actual processing unit. At its simplest, it lacks the hardware to perform Boolean logic operations outside of one's and two's complement, but these can still be used to simulate logic functions in a few cycles. It can also simulate bit shifting easily enough by multiplying or dividing by 2, and it places quotients and remainders into different registers for easy handling of remainders. Floating point math isn't difficult, either, and it can even perform <, =, > comparisons between values. As a matter of fact, I can't really say that any electronic computer has ever been built in this fashion. I'm pretty much basing the design entirely on the DigiComp2, a mechanical 4-bit binary computer distributed as an educational toy from 1968-1976.
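        To illustrate how counters plus complements can stand in for missing logic hardware, here is a small Python sketch ( my own guess at the flavour of the thing, not the actual design ): one's complement is just subtraction from all-ones, shifts are multiplication and division by 2, and a bitwise AND falls out of repeatedly halving both operands and counting where the remainders coincide.

WIDTH = 8
MODULUS = 2 ** WIDTH               # registers wrap at 2^WIDTH

def ones_complement(a):
    return (MODULUS - 1) - a       # invert every bit without a NOT gate

def twos_complement(a):
    return (ones_complement(a) + 1) % MODULUS   # negate

def shift_left(a):
    return (a * 2) % MODULUS       # shift left = multiply by 2

def shift_right(a):
    return a // 2                  # shift right = divide by 2 ( the remainder is the low bit )

def bitwise_and(a, b):
    """AND simulated with halving, remainder tests, and counting only."""
    result, weight = 0, 1
    for _ in range(WIDTH):
        if a % 2 + b % 2 == 2:     # both low bits set
            result += weight
        a, b = a // 2, b // 2      # shift both operands right
        weight *= 2
    return result

assert ones_complement(0b00001111) == 0b11110000
assert shift_left(0b0101) == 0b1010
assert bitwise_and(0b1100, 0b1010) == 0b1000

        Comparisons come out the same way: add the two's complement ( i.e. subtract ) and test the sign and zero conditions, which is all the <, =, > above really needs.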
        Yes, the 1-bit processor array concept is in fact cellular automata, which is why I refer to each unit as a "cell". I don't entirely understand bandwidth yet, but the idea doesn't really focus on that. It's about robustness of the system, as well as massively parallel processing without most of the usability problems. I would also think it much more flexible, because a key construct is that each cell can alter its connectivity with its neighbors. It would take several orders of magnitude more component failures to trash the system than with traditional hardware, so it could also be incredibly fault tolerant, and I'm thinking along the lines that the entire system would be programmed as a whole, so that determining how each cell should connect can be left up to the OS shell. Also, even if bandwidth restricts how quickly information is processed, another perk of the idea is that a very large amount of data could be processed at once.
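        As a toy picture of that, here is a minimal Python sketch ( the update rule and grid topology are my own assumptions, purely to illustrate the cell-plus-rewritable-links idea ): each cell holds one bit and a per-neighbor link mask, the step function combines only the connected neighbors, and an outer "shell" could rewrite the link masks to rewire the array.

import random

SIZE = 8
NEIGHBORS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # north, south, west, east

random.seed(0)
state = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
# links[y][x][k] == 1 means cell (x, y) listens to neighbor k; a shell may rewrite these
links = [[[1, 1, 1, 1] for _ in range(SIZE)] for _ in range(SIZE)]

def step(state, links):
    new = [[0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            acc = 0
            for k, (dy, dx) in enumerate(NEIGHBORS):
                if links[y][x][k]:
                    acc ^= state[(y + dy) % SIZE][(x + dx) % SIZE]
            new[y][x] = acc        # example rule: parity of the connected neighbors
    return new

for _ in range(4):
    state = step(state, links)
print("\n".join("".join(str(b) for b in row) for row in state))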
        On a side note, I once came up with an idea for a machine that was mostly electronic, but stored data temporarily as photon states ( like, particle for 0 and wave for 1 ), and would be able to take advantage of the fact that photons, being 4-dimensional objects, can move in more directions than we can perceive, and thus allow the machine to literally do everything at once. What I mean is that each new cycle would take place in the same time frame as the last cycle, so that it could register an infinite amount of data in about a billionth of a second or so. It would only ever have to go forward in time if it needed to write a result back to main memory or update I/O, because the way it works, the events that occurred in previous steps literally would never have happened, and so the electronic memory wouldn't be able to remember such a result, and the outside world could only observe the final state of the program, if there was one. Fundamentally it is a photon based delay line with a negative delay. As in, instead of the delay propagating forward in time, it "rewinds" time slightly. So the potential would be literally instant computation: a stack of infinite size could be fed into the computer and processed in less than a billionth of a second, and an entire program run could be accomplished in the same amount of time, branches and subroutines included. Only writing data back to memory or porting to the I/Os would really take any time at all. Only the program's final result could be observed from outside, as each step in between would never have happened in our timeline. Also, the program counter would have to be photon based, somehow, since if it were electronic, it wouldn't be able to remember what program line to go to next after time was rewritten again. The only thing I can see being interpreted as dangerous with this is that it does, indeed, rewrite time. But it only rewrites about a billionth of a second each time, and it doesn't affect outside events whatsoever. It has absolutely no way to affect reality.

        --- In beam@yahoogroups.com, Martin McKee wrote:
        >
        > For myself, life is catching up with me. Come Monday, I'll be starting a
        > new degree ( one not even tangentially related to my first ), so I've been
        > rushing around trying to get all that in order -- no time for seriously
        > thinking about robotics at all.
        >
        > I've only got a minute or two now, but, some few comments. The massively
        > parallel 1-bit processors sound a bit like a cellular automaton type
        > system. I remember having seen once ( but can I find it now? of course not!
        > ) a computer system that was being developed in that vein, compiler and
        > all. There is certainly potential for quite a bit of performance, but for
        > maximum performance, the bottleneck is often memory bandwidth, and not,
        > strictly, computational. A large number of processors with a handful of
        > neighbors and a 1-bit interconnect is not going to help in that line.
        >
        > To be honest, much of the architecture design lately has been targeted at
        > increasing performance ( adding parallel instruction sets, vectorizability,
        > hyperthreads, etc. ) but because of memory access issues and programming
        > concurrency issues, simple small instructions and a minimal set of fully
        > atomic instructions have seemed to have the best balance of usability and
        > performance. No one has really been able to demonstrate an architecture
        > that is both highly performant and efficient in the face of concurrency (
        > and many parallel computational units ) while remaining easy to program. I
        > think what can be said about "traditional" architectures, is that they are
        > easy to understand and they work "well enough."
        >
        > Back to work...
        >
        > Martin Jay McKee