
Re: Wire Computing? A Theory

  • connor_ramsey@ymail.com
    Message 1 of 60, Aug 1, 2013
      Well, the biggest advantage of a colony of robots composed of simple Braitenberg drones and a high-level queen is that there's far less manufacturing overhead in creating the drones. Constructing a drone in the traditional way would require the resources to print the circuit patterns for each drone's computer "brain", enough digital memory to store those circuit patterns and each drone's program (unless it's hard-wired into ROM), and the equipment necessary to produce the hardware. Constructing simple Braitenberg drones, by contrast, only requires producing simple discrete components and a simple mechanature, plus the software resources to recall how to assemble them. If the queen is being used in a mining or salvage operation, then the cost of the drones should be minimal. In general, the net worth of each drone will be considerably less than that of a digital drone, because the queen doesn't have to put as much effort into its construction, and the queen itself becomes a cheaper piece of equipment. Basically, the point is to minimize resource overhead for the drones.
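      To make the drone half of that concrete, here's a rough Python sketch of the kind of Braitenberg vehicle I mean (a type 2b that steers toward a light source). The sensor model, gains, and geometry are all invented for illustration; a real drone would be a couple of transistors and motors, no code at all.

          import math

          def sensor_reading(sx, sy, light):
              # Light intensity falls off with the square of the distance.
              d2 = (light[0] - sx) ** 2 + (light[1] - sy) ** 2
              return 1.0 / (1.0 + d2)

          def step(x, y, heading, light, dt=0.1, base=0.5, gain=4.0, width=0.2):
              # Two photosensors mounted left and right of the heading.
              left = sensor_reading(x + math.cos(heading + width),
                                    y + math.sin(heading + width), light)
              right = sensor_reading(x + math.cos(heading - width),
                                     y + math.sin(heading - width), light)
              # Cross-coupled wiring: each sensor excites the opposite motor,
              # so the more brightly lit side swings the vehicle toward the light.
              left_motor = base + gain * right
              right_motor = base + gain * left
              speed = (left_motor + right_motor) / 2.0
              heading += (right_motor - left_motor) * dt   # differential drive
              return (x + speed * math.cos(heading) * dt,
                      y + speed * math.sin(heading) * dt, heading)

          x, y, heading = 0.0, 0.0, 0.0
          for _ in range(400):
              x, y, heading = step(x, y, heading, light=(5.0, 3.0))
          print("final position: (%.2f, %.2f)" % (x, y))  # wanders toward the light

      The whole behavior falls out of the wiring, which is exactly why the drone can be cheap.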
      But yes, I find simple robots to be more fun than many digital bots. Although I wish Hexbugs would come out with a better bug: one that could move in more than two directions, or had a more adept mechanature. Honestly, they're almost a step down from BEAM, despite the fact that their circuits seem to use more components. I mean, seriously, I wish they would just pump out a nice, neatly trimmed bicore walker in a plastic exoskeleton, with photodetectors and microphones and feelers and everything, and make everyone happy. Whether it be one or two motors. Heck, even a beetlebot on legs would satisfy me.
      Oh, and I've been working on a digital hardware system that encodes and decodes a data word in a single register, using arithmetic coding. It's SO complicated. The register itself is very simple, but the decoder stage just gets more and more complicated.

      First there's the multiplier, which uses the shift-and-add ("shift and multiply") algorithm. For a 16-bit register I need a 32-bit adder, but it can't just be a ripple-carry adder, because that takes too long, and it can't be a carry-save adder like is often used in multipliers, because each successive partial product needs to be analyzed individually. So I'm going with a carry-select adder, because it seems to be the simplest adder that offers a real speed advantage over ripple-carry. All the multiplier does is raise the word in the register to integer powers.

      But then there's the addressing system. The imaginary storage space in the register is accessed by content, so it emulates CAM more than RAM, because the files are actually stored by bit similarity: the earlier a file deviates from another, the farther away it is located. The problem, though, is that it would take a very long time for the machine to look through every single file (the imaginary space is laid out in files; in a 16-bit architecture, that's 65,536 8Kib files, or 512Mib total), so the address pointer has to predict where a file matching the search word will be located, and then somehow skip the multiplier ahead by the number of bits the pointer wishes to skip. In theory it's not very complicated, but in terms of designing the circuitry gate by gate, the decoder is absolutely monumental, especially because it has to achieve high performance in order to keep up with a decent processor speed. And I have a feeling that the encoder stage won't be much better. So now I'm very convinced in my stance on the analog variant.
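      For anyone who hasn't met a carry-select adder before, here's a little Python model of the multiplier stage. The 8-bit block size and the bit widths are just my own choices for illustration, not the actual design: the trick is that each upper block is summed twice in parallel, once per possible carry-in, and the real carry out of the block below just selects a result instead of rippling through the whole word.

          def ripple_add(a_bits, b_bits, carry):
              # Plain ripple-carry adder over bit lists, LSB first.
              out = []
              for a, b in zip(a_bits, b_bits):
                  out.append(a ^ b ^ carry)
                  carry = (a & b) | (carry & (a ^ b))
              return out, carry

          def carry_select_add(a_bits, b_bits, block=8):
              out, carry = [], 0
              for i in range(0, len(a_bits), block):
                  a, b = a_bits[i:i + block], b_bits[i:i + block]
                  sum0, c0 = ripple_add(a, b, 0)   # candidate for carry-in 0
                  sum1, c1 = ripple_add(a, b, 1)   # candidate for carry-in 1
                  out += sum1 if carry else sum0   # the selecting mux
                  carry = c1 if carry else c0
              return out, carry

          def to_bits(n, width):
              return [(n >> i) & 1 for i in range(width)]

          def from_bits(bits):
              return sum(b << i for i, b in enumerate(bits))

          def multiply(x, y, width=16):
              # Shift-and-add: one 32-bit addition per set multiplier bit.
              acc = to_bits(0, 2 * width)
              for i in range(width):
                  if (y >> i) & 1:
                      acc, _ = carry_select_add(acc, to_bits(x << i, 2 * width))
              return from_bits(acc)

          assert multiply(51234, 60001) == 51234 * 60001

      In hardware, the two candidate sums per block exist as real parallel adders, so the critical path is roughly one block plus a chain of muxes instead of a full 32-bit ripple.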
      Enjoy, Connor

      --- In beam@yahoogroups.com, Martin McKee wrote:
      >
      > I agree whole-heartedly that BEAM still has great potential, even if it is
      > just for fun. It is the same everywhere... why duplicate simple CPU
      > architectures in discrete logic or FPGAs? There is no "advantage." Why
      > create yet another imperative programming language? But, more than just
      > for the play aspect, I think it is important to consider the fact that
      > people learn most effectively when they are playing. We do it all the time
      > as children, and it's okay. As we get older, play starts to become a bad
      > word, but the way our brains are wired doesn't change. Sometimes, doing
      > "simple" things just for fun is the most efficacious way to move forward.
      > It can get one's brain out of a rut and lead you to new discoveries.
      >
      > I have become quite aware, over the past few years, how powerful
      > non-traditional use of sensors can be. By rearranging them one can force
      > particular behavior or avoid weaknesses ( blind spots ). A combination of
      > flexible control circuitry, careful sensor layout, and careful tweaking
      > certainly does show its potential in the spider bot. The behavior of the
      > robot is, as you say, quite complex and impressive. Sadly, it wouldn't be
      > able to handle much from a mechanical standpoint. Even the "obstacle" test
      > ( with chopsticks ) is fairly simple to achieve. I would be surprised if
      > it were not almost immediately stuck when placed in a real natural
      > environment. But that's not the point. The mechanical design could be
      > expanded and the same concepts used to control it. The result would be a
      > more capable robot. The question becomes one of how much work is required
      > and what the perceived benefit is. If the benefit is that one has fun in
      > the process, anything is justifiable. And anything can add to
      > understanding.
      >
      > Martin Jay McKee
    • connor_ramsey@ymail.com
      Message 60 of 60, Aug 15, 2013
        Yeah, the usability bit is a primary focus of mine. Just for fun, really, I've taken an approach in a very traditional style, basically using a set of counters in place of an actual processing unit. At its simplest, it lacks the hardware to perform Boolean logic operations beyond 1's and 2's complement, but those can still be used to simulate logic functions in a few cycles. It can also simulate bit shifting easily enough by multiplying or dividing by 2. It also places quotients and remainders into different registers for easy handling of remainders. Not to mention floating-point math isn't difficult, either. It could even perform <, =, > comparisons between values. As a matter of fact, I can't really say that any electronic computer has ever been built in this fashion. I'm pretty much basing the design entirely on the Digi-Comp II, a mechanical 4-bit binary computer distributed as an educational toy from 1968-1976.
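        To show what I mean by simulating logic with nothing but complements, adds, and divide-by-2, here's a little Python sketch. The "instruction set" here is my own guess at what such a counter machine would offer, so don't take the cycle counts literally:

            WIDTH = 16
            MASK = (1 << WIDTH) - 1

            def ones_complement(a):
                return a ^ MASK                        # the machine's native NOT

            def twos_complement(a):
                return (ones_complement(a) + 1) & MASK # negate = NOT, then +1

            def shift_left(a):
                return (a + a) & MASK                  # shift = multiply by 2

            def bitwise_and(a, b):
                # No AND gate: peel the low bit off each operand with a
                # divide-by-2 (quotient and remainder land in separate
                # registers), and add that bit's weight back in whenever both
                # remainders are 1. One pass of WIDTH cycles.
                result, weight = 0, 1
                for _ in range(WIDTH):
                    a, ra = a // 2, a % 2
                    b, rb = b // 2, b % 2
                    if ra and rb:
                        result += weight
                    weight += weight                   # weight *= 2, as an add
                return result

            def less_than(a, b):
                # a - b computed as a + NOT(b) + 1; for unsigned operands, a
                # missing carry out of the top bit means a < b.
                diff = a + ones_complement(b) + 1
                return diff & (1 << WIDTH) == 0

            assert bitwise_and(0b1100, 0b1010) == 0b1000
            assert shift_left(3) == 6
            assert less_than(4, 9) and not less_than(9, 4)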
        Yes, the 1-bit processor array concept is in fact cellular automata, which is why I refer to each unit as a "cell". I don't entirely understand bandwidth yet, but the idea doesn't really focus on that. It's about robustness of the system, as well as massively parallel processing without most of the usability problems. I would also think it much more flexible, because a key construct is that each cell can alter its connectivity with its neighbors. It would take several orders of magnitude more component failures to trash the system than it would with traditional hardware, so it could be incredibly fault tolerant. I'm also thinking along the lines that the entire system would be programmed as a whole, so that determining how each cell should connect can be left up to the OS shell. And even if bandwidth restricts how quickly information is processed, another perk of the idea is that a very large amount of data could be processed at once.
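        Here's a toy Python model of what I'm picturing. The XOR rule and the ring topology are placeholders I made up; the point is just that connectivity is data, so the shell can rewire the array and route around dead cells:

            import random

            class Cell:
                def __init__(self):
                    self.state = 0     # the single bit this cell holds
                    self.links = []    # indices of neighbors this cell reads
                    self.alive = True  # dead cells are simply routed around

            def step(cells):
                # Every cell reads its linked neighbors' current bits, then
                # all cells latch their new state at once (synchronous update).
                nxt = []
                for c in cells:
                    bit = c.state
                    if c.alive:
                        bit = 0
                        for i in c.links:
                            if cells[i].alive:
                                bit ^= cells[i].state  # parity of live inputs
                    nxt.append(bit)
                for c, s in zip(cells, nxt):
                    c.state = s

            # A ring of 64 cells, each wired to its two immediate neighbors.
            cells = [Cell() for _ in range(64)]
            for i, c in enumerate(cells):
                c.links = [(i - 1) % 64, (i + 1) % 64]
            cells[0].state = 1                         # seed a single bit

            for _ in range(10):
                step(cells)
            print(sum(c.state for c in cells), "bits set after 10 steps")

            # Knock out a few cells at random; the rest keep computing.
            for i in random.sample(range(64), 5):
                cells[i].alive = False
            step(cells)

        Reprogramming the machine is then just rewriting the links lists, which is the sort of thing the shell could do for the whole array at once.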
        On a side note, I once came up with an idea for a machine that was mostly electronic, but stored data temporarily as photon states (like, particle for 0 and wave for 1), and would be able to take advantage of the fact that photons, being 4-dimensional objects, can move in more directions than we can perceive, and thus allow the machine to literally do everything at once. What I mean is that each new cycle would take place in the same time frame as the last cycle, so that it could register an infinite amount of data in about a billionth of a second or so. It would only ever have to go forward in time if it needed to write a result back to main memory or update I/O, because the way it works, the events that occurred in previous steps literally would never have happened, so the electronic memory wouldn't be able to remember such a result, and the outside world could only observe the final state of the program, if there was one.

        Fundamentally it is a photon-based delay line with a negative delay. As in, instead of the delay propagating forward in time, it "rewinds" time slightly. So the potential would be literally instant computation: a stack of infinite size could be fed into the computer and processed in less than a billionth of a second, and an entire program run, branches and subroutines included, could be accomplished in the same amount of time. Only writing data back to memory or porting to the I/Os would really take any time at all, and only the program's final result could be observed from outside, as each step in between would never have happened in our timeline. Also, the program counter would have to be photon-based somehow, since if it were electronic, it wouldn't be able to remember which program line to go to next after time was rewritten again. The only thing I can see being interpreted as dangerous here is that it does, indeed, rewrite time. But it only rewrites about a billionth of a second each time, and it doesn't affect outside events whatsoever. It has absolutely no way to affect reality.

        --- In beam@yahoogroups.com, Martin McKee wrote:
        >
        > For myself, life is catching up with me. Come Monday, I'll be starting a
        > new degree ( one not even tangentially related to my first ), so I've been
        > rushing around trying to get all that in order -- no time for seriously
        > thinking about robotics at all.
        >
        > I've only got a minute or two now, but, a few comments. The massively
        > parallel 1-bit processors sound a bit like a cellular automaton type
        > system. I remember having seen once ( but can I find it now? of course not!
        > ) a computer system that was being developed in that vein, compiler and
        > all. There is certainly potential for quite a bit of performance, but for
        > maximum performance, the bottleneck is often memory bandwidth, and not,
        > strictly, computational. A large number of processors with a handful of
        > neighbors and a 1-bit interconnect is not going to help in that line.
        >
        > To be honest, much of the architecture design lately has been targeted at
        > increasing performance ( adding parallel instruction sets, vectorizability,
        > hyperthreads, etc. ) but because of memory access issues and programming
        > concurrency issues, simple small instructions and a minimal set of fully
        > atomic instructions have seemed to have the best balance of usability and
        > performance. No one has really been able to demonstrate an architecture
        > that is both highly performant and efficient in the face of concurrency (
        > and many parallel computational units ) while remaining easy to program. I
        > think what can be said about "traditional" architectures, is that they are
        > easy to understand and they work "well enough."
        >
        > Back to work...
        >
        > Martin Jay McKee