
Re: [beam] Re: Wire Computing? A Theory

  • Martin McKee
    Message 1 of 60, Jul 20, 2013
      One thing that can, indeed, help, is to surround the positive pin by a guard ring driven by a buffer.  Then, when you disconnect the outside world ( through hopefully a single switch ) it is basically just the capacitor's self-discharge that is draining it as opposed to external loads, or board ( circuit ) resistances, etc.  There will still need to be a high impedance buffer for the voltage "read," but there are plenty of options for that as well.

      While some few of my ideas might simplify things slightly, I usually do so because that means I can add more complexity elsewhere without feeling as though I'm overdesigning!

      I certainly think that there are many cases where analog can match or surpass the performance of digital.  Indeed, the issue is, typically, that digital is simply "easier" ( for some definition thereof ).  Digital may be more repeatable, or less expensive; it may use less power or simply be more flexible.  In the end, though, top-of-the-line oscilloscope front ends remain analog right up to the ( optimized ) ADC.  Scaling is done there, bandwidth limiting is done there.  There are simply things that digital cannot touch.  That area may be larger ( from an absolute performance standpoint ) than analog is currently given credit for, and I think it will basically take enough people forcing analog back into the light to make that clear.

      It is a wonderful thing that there are so many options, it keeps the search interesting.  And I think it is admirable to push as much as possible toward analog.  I must admit though, that is just not a place that my brain plays as well as the digital domain.  I keep trying though!

      Martin Jay McKee


      On Sat, Jul 20, 2013 at 3:44 PM, connor_ramsey@... <connor_ramsey@...> wrote:
       

      Analog switches might help to minimize drainage on the capacitor; perhaps containing the cap in an ionized atmosphere might as well. And it's not really supposed to be simpler. That's actually the point, in fact: that an analog circuit might be able to compete with digital performance.



      --- In beam@yahoogroups.com, Martin McKee wrote:
      >
      > The PWM and FM based computing sound interesting. I have really only
      > looked at PWM as a digital to analog connection mechanism. At that point (
      > as it will be sent through a low-pass filter ), it loses its unary signal
      > characteristics and appears as a simple analog voltage. Another way that
      > it could appear is if the PWM pin were set up as open-source or open-drain
      > and, then, instead of an analog voltage, the result could be ( effectively
      > ) an analog current. In either case, it allows for a simple interface to
      > BEAM circuits without many external components.
      >
      > Using capacitors for analog memory is a powerful concept, but the difficulty
      > remains with keeping the value refreshed. At the very least, the signal
      > needs to be regenerated periodically. I'm not sure, though, that it would
      > end up much simpler than using the memory in a microcontroller. Indeed, a
      > micro with several DAC outputs could ape the analog storage of a similar
      > number of capacitors. Actually, a quick search of Digikey came up with an
      > interesting option for just such an application, a dsPIC33FJ16GS502. It's
      > a 16-bit microcontroller, in a 28-pin package, with 16kB RAM, 16 channel
      > ADC and 4 channel DAC. It would be a simple program indeed, to program it
      > to read in four analog values ( when a pin is pulled low, for instance )
      > and send out the signal as a voltage from one of the four DAC channels. It
      > would be only slightly more difficult to use some other analog inputs as
      > controls for the four "memory elements." One input channel could, perhaps,
      > control how quickly the inputs were "forgotten," another act as a general
      > modulation value ( multiplied with the stored value ), etc. And, for all
      > of this, the processor could be in sleep most of the time. Sadly, the
      > DSPIC is not what one could classify as a low-power processor. The same
      > results could be achieved with an external DAC and any run-of-the-mill
      > micro though.
      >
      > Martin Jay McKee
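
      As a rough illustration of the "simple program" described in the quoted message, here is a minimal C sketch for a generic micro.  The names adc_read(), dac_write(), pin_is_low() and sleep_until_tick() are placeholders rather than any real vendor API, and the decay and modulation channels follow the scheme suggested above; on a part without DACs, the dac_write() calls could just as well set PWM duty cycles feeding low-pass filters, turning the PWM pins into the analog outputs discussed at the top of the quote.

      /* Sketch: four "analog memory" channels on a generic micro.  The
       * hardware hooks below are placeholders to be provided for the
       * actual part in use. */
      extern float adc_read(int channel);            /* returns 0.0 .. 1.0 */
      extern void  dac_write(int channel, float v);  /* accepts 0.0 .. 1.0 */
      extern int   pin_is_low(int pin);
      extern void  sleep_until_tick(void);

      #define CHANNELS   4
      #define STROBE_PIN 0

      static float stored[CHANNELS];                 /* the four "capacitors" */

      void memory_loop(void)
      {
          for (;;) {
              /* Sample new values while the strobe pin is pulled low. */
              if (pin_is_low(STROBE_PIN)) {
                  for (int i = 0; i < CHANNELS; ++i)
                      stored[i] = adc_read(i);
              }

              /* Two extra analog inputs act as controls: one sets how fast
               * values are "forgotten", the other modulates the outputs. */
              float decay = adc_read(4);             /* 0 = hold, 1 = fast leak */
              float mod   = adc_read(5);             /* general gain            */

              for (int i = 0; i < CHANNELS; ++i) {
                  stored[i] *= (1.0f - 0.01f * decay);   /* slow exponential leak */
                  dac_write(i, stored[i] * mod);         /* regenerate the output */
              }

              sleep_until_tick();                    /* sleep between refreshes */
          }
      }

      With a slow refresh tick, the processor could indeed spend most of its time asleep.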


    • connor_ramsey@ymail.com
      Message 60 of 60, Aug 15, 2013
        Yeah, the usability bit is a primary focus of mine. Just for fun, really, I've taken an approach in a very traditional style, basically using a set of counters in place of an actual processing unit. At its simplest, it lacks the hardware to perform Boolean logic operations outside of one's and two's complement, but these can still be used to simulate logic functions in a few cycles. It can also simulate bit shifting easily enough by multiplying or dividing by 2. It also places quotients and remainders into different registers for easy handling of remainders. Not to mention floating point math isn't difficult, either. It could even perform <, =, > comparisons between values. As a matter of fact, I can't really say that any electronic computer has ever been built in this fashion. I'm pretty much basing the design entirely on DigiComp2, a mechanical 4-bit binary computer distributed as an educational toy from 1968-1976.
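
        Purely as an illustration of those arithmetic tricks ( not of the counter hardware itself ), a few of them look like this in C: shifting by multiplying or dividing by two, logical NOT from one's complement, negation from two's complement, and <, =, > comparison by subtraction.

        #include <stdint.h>

        /* Bit operations simulated with counter-style arithmetic only. */
        uint8_t shift_left (uint8_t x) { return (uint8_t)(x * 2u); }  /* x << 1 */
        uint8_t shift_right(uint8_t x) { return (uint8_t)(x / 2u); }  /* x >> 1 */

        /* One's complement: subtracting from all-ones flips every bit. */
        uint8_t not8(uint8_t x) { return (uint8_t)(255u - x); }       /* ~x */

        /* Two's complement ( negation ): one's complement plus one. */
        uint8_t neg8(uint8_t x) { return (uint8_t)(not8(x) + 1u); }   /* -x mod 256 */

        /* <, =, > by subtraction: returns -1, 0 or +1. */
        int compare(int a, int b) { int d = a - b; return (d > 0) - (d < 0); }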
        Yes, the 1-bit processor array concept is in fact a cellular automaton, which is why I refer to each unit as a "cell". I don't entirely understand bandwidth yet, but the idea doesn't really focus on that. It regards robustness of the system, as well as massive parallel processing without most of the usability problems. I would also think it much more flexible, because a key construct is that each cell can alter its connectivity with its neighbors. It would take several orders of magnitude more component failures to trash the system than your traditional hardware, and it could also be incredibly fault tolerant. I'm thinking along the lines that the entire system would be programmed as a whole, so that determining how each cell should connect can be left up to the OS shell. Also, even if bandwidth restricts how quickly information is processed, another perk of the idea is that a very large amount of data could be processed at once.
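
        A toy model of the cell idea, just to make the connectivity point concrete.  The four-neighbor layout and the XOR update rule below are assumptions made for the sketch; the part that matters is the per-cell mask that lets a cell alter which neighbors it listens to, and lets the rest of the grid route around a cell that has failed.

        #include <stdint.h>
        #include <string.h>

        #define W 16
        #define H 16

        /* One bit of state per cell, plus a mask saying which of its four
         * neighbors ( N, E, S, W ) it currently listens to.  A failed cell
         * can simply be masked out by everything around it. */
        typedef struct { uint8_t state, mask; } cell_t;

        static cell_t grid[H][W], next_grid[H][W];

        /* Example rule ( an assumption for this sketch ): a cell's next state
         * is the XOR of whichever neighbors it is connected to. */
        void step(void)
        {
            for (int y = 0; y < H; ++y)
                for (int x = 0; x < W; ++x) {
                    uint8_t m = grid[y][x].mask, s = 0;
                    if (m & 1) s ^= grid[(y + H - 1) % H][x].state;  /* N */
                    if (m & 2) s ^= grid[y][(x + 1) % W].state;      /* E */
                    if (m & 4) s ^= grid[(y + 1) % H][x].state;      /* S */
                    if (m & 8) s ^= grid[y][(x + W - 1) % W].state;  /* W */
                    next_grid[y][x].state = s;
                    next_grid[y][x].mask  = m;  /* connectivity could be rewritten here too */
                }
            memcpy(grid, next_grid, sizeof grid);
        }

        Programming the array "as a whole" would then mostly mean loading the pattern of masks ( and whatever rule each cell runs ) across the grid in one pass, which is the sort of job an OS shell could plausibly take on.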
        On a side note, I once came up with an idea for a machine that was mostly electronic, but stored data temporarily as photon states ( like particle for 0 and wave for 1 ), and would be able to take advantage of the fact that photons, being 4-dimensional objects, can move in more directions than we can perceive, and thus allow the machine to literally do everything at once. What I mean is that each new cycle would take place in the same time frame as the last cycle, so that it could register an infinite amount of data in about a billionth of a second or so. It would only ever have to go forward in time if it needed to write a result back to main memory or update I/O, because the way it works, the events that occurred in previous steps literally would have never happened, and so the electronic memory wouldn't be able to remember such a result, and the outside world could only observe the final state of the program, if there was one. Fundamentally it is a photon-based delay line with a negative delay. As in, instead of the delay propagating forward in time, it "rewinds" time slightly. So the potential would be literally instant computation: a stack of infinite size could be fed into the computer and processed in less than a billionth of a second, and an entire program run could be accomplished in the same amount of time. Branches and subroutines would be included. Only writing data back to memory or porting to the I/Os would really take any time at all. Only the program's final result could be observed from outside, as each step in between would never have happened in our timeline. Also, the program counter would have to be photon-based, somehow, since if it were electronic, it wouldn't be able to remember what program line to go to next after time was rewritten again. The only thing I can see being interpreted as dangerous with this is that it does, indeed, rewrite time. But it only rewrites about a billionth of a second each time, and it doesn't affect outside events whatsoever. It has absolutely no way to affect reality.

        --- In beam@yahoogroups.com, Martin McKee wrote:
        >
        > For myself, life is catching up with me. Come Monday, I'll be starting a
        > new degree ( one not even tangentially related to my first ), so I've been
        > rushing around trying to get all that in order -- no time for seriously
        > thinking about robotics at all.
        >
        > I've only got a minute or two now, but, some few comments. The massively
        > parallel 1-bit processors sounds a bit like a cellular automaton type
        > system. I remember having seen once ( but can I find it now? of course not!
        > ) a computer system that was being developed in that vein, compiler and
        > all. There is certainly potential for quite a bit of performance, but for
        > maximum performance, the bottleneck is often memory bandwidth, and not,
        > strictly, computational. A large number of processors with a handful of
        > neighbors and a 1-bit interconnect is not going to help in that line.
        >
        > To be honest, much of the architecture design lately has been targeted at
        > increasing performance ( adding parallel instruction sets, vectorizability,
        > hyperthreads, etc. ) but because of memory access issues and programming
        > concurrency issues, simple small instructions and a minimal set of fully
        > atomic instructions have seemed to have the best balance of usability and
        > performance. No one has really been able to demonstrate an architecture
        > that is both highly performant and efficient in the face of concurrency (
        > and many parallel computational units ) while remaining easy to program. I
        > think what can be said about "traditional" architectures, is that they are
        > easy to understand and they work "well enough."
        >
        > Back to work...
        >
        > Martin Jay McKee