Re: Wire Computing? A Theory

  • connor_ramsey@ymail.com
    Message 1 of 60, Jul 16, 2013
      I strongly favor the ATtiny85V, as it provides 8K of program flash for about the same price. Although it's not as fast as its higher-voltage twin (roughly half the max speed), its power ratings are ideal for a robot that runs on tiny solar cells. Its running amperage and voltage are both about twice what a master-slave bicore would need while in control, so if the 74HC240s are the only other ICs drawing power from the SE(s) synchronously, that's still around a 50% increase in SE/battery life, excluding the MCU's wake cycles as needed. This rate goes up to nearly 75% if you're using the regular variety (which is 2x faster), although the wake cycles on these would shorten the life at a nearly proportional rate.
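      To put rough numbers on that kind of trade-off, here is a back-of-envelope duty-cycle calculation in C. Every figure in it is an illustrative placeholder, not a datasheet value, so substitute measured currents before trusting the result:

          /* Rough battery-life estimate for a duty-cycled ATtiny85V.
             All figures are placeholder guesses, not datasheet values;
             substitute measured numbers for a real power budget. */
          #include <stdio.h>

          int main(void) {
              double cap_mah    = 40.0;   /* assumed cell capacity, mAh  */
              double i_sleep_ma = 0.005;  /* assumed power-down draw, mA */
              double i_awake_ma = 1.0;    /* assumed active draw, mA     */
              double duty_awake = 0.01;   /* awake 1% of the time        */

              /* time-weighted average draw over one duty cycle */
              double i_avg = i_awake_ma * duty_awake
                           + i_sleep_ma * (1.0 - duty_awake);
              printf("average draw %.4f mA -> runtime %.0f h\n",
                     i_avg, cap_mah / i_avg);
              return 0;
          }

      With these placeholder numbers the average draw comes out near 0.015 mA, which is why keeping the wake fraction small dominates every other optimization.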
      Also, something else I need second and third opinions on: I have an old HT6542 keyboard processor lying on my work desk, and although it only has a 4-bit MCU and lacks internal program memory, since it's here, I need some ideas for using it, because Lord knows it's going to happen regardless. Enjoy, Connor.

      --- In beam@yahoogroups.com, Martin McKee wrote:
      > Almost as soon as I started working with AVRs ( nigh on ten winters now, I
      > should think ), I was considering a type-3 SE as a project... never have
      > gotten around to it. Yes, let us know how things turn out.
      > Here's the thing with sleep modes ( as I see it anyway ). Processors are
      > getting down to the leakage current limit for many discrete components even
      > with a clock of some form running. That means that the load in sleep has
      > become negligible for just about any BEAM application ( even solar ), so
      > long as the processor can be kept sleeping most of the time. And therein
      > lies the problem. On the AVRs, for instance, the watchdog timer might be
      > the best way to do a low-power timed interrupt, but it can only be slowed
      > down so far. The chip will still wake up every ten seconds or so, six
      > times a minute. While it is running ( perhaps just to check something
      > quickly and go back to sleep ), it is using an order of magnitude ( or
      > two ) more power than when it is sleeping, no matter what is being done.
      > So there are two parts to the challenge: 1) keeping the processor asleep
      > as much as possible and 2) making sure every wake cycle counts for as
      > much as possible ( a minimal sketch of this pattern follows this quoted
      > message ). Of course, if motors ( or even LEDs ) are running at the same
      > time, the current may not matter, as the newer processors have <5mA
      > running current consumption.
      > But, there is another issue to keep in mind -- gotchas. Each processor
      > seems to have them. When trying to design something for minimum power
      > consumption in sleep ( or active mode for that matter ), things like
      > pullup resistors and floating inputs become major sources of current
      > drain. Unused peripherals in the microcontroller also have to be shut
      > down or they will produce a constant drain. With power and flexibility
      > comes added complexity. There also comes a point of diminishing returns.
      > If there are a half dozen LEDs running at 1mA and the processor runs at
      > 1mA in idle, the processor is only 1/7 of the total load; put the
      > processor to sleep and the best that can be hoped for is that the battery
      > lasts about 17% longer. With a motor or more LEDs the percentages only
      > drop. On the other hand, with something like a solar engine and a solar
      > panel indoors that, in low light, only puts out 1mA, putting the
      > processor into a deeper sleep mode could be the difference between a dead
      > device and an almost continuous charge.
      > Out of curiosity, what ATtinys are people looking to use ( so many
      > choices! )?
      > Over the years my thoughts have turned much more to how I can use BEAM
      > techniques and ideas in other media rather than how to directly apply the
      > traditional circuits. I am no electronics genius. I am a programmer. But
      > it makes sense to me to make things reactive ( or, using the software
      > engineering term, event-driven ). It also makes sense to me to dedicate
      > circuitry and even small processors to simple tasks. If one breaks down it
      > is decoupled enough from others that the whole can continue to limp along (
      > if properly designed ) and, regardless, it makes it easier to handle
      > problems NOW. If I don't like the idea of having to wait for the right
      > point in a loop to check a current limit and I don't want to complicate
      > code with lots of interrupts, I can simply put a dedicated circuit ( or
      > processor ) on it. A couple of dollars, a bit more work, and it takes care
      > of itself.
      > These days, however, I am moving away from the AVRs as I'm finding that
      > the ARM chips ( I like the NXP LPC series ) provide more bang for the
      > buck without being particularly difficult to use and without (
      > surprisingly, actually ) using additional power. The world is going to
      > go 32-bit soon enough, it seems....
      > Martin Jay McKee
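      Here is the minimal sleep/watchdog sketch promised above, assuming an ATtiny85 and avr-gcc with avr-libc. The ~8 s period is the longest the WDT prescaler allows, which is the "can only be slowed down so far" limit Martin mentions, and the pull-up and peripheral handling addresses his gotchas:

          /* Sleep in power-down, wake on the watchdog interrupt ( ~8 s ).
             Assumes an ATtiny85 and avr-gcc/avr-libc. */
          #include <avr/io.h>
          #include <avr/sleep.h>
          #include <avr/interrupt.h>

          ISR(WDT_vect) {
              WDTCR |= (1 << WDIE);    /* re-arm interrupt mode; hardware
                                          clears WDIE after each wake     */
          }

          int main(void) {
              DDRB  = 0x00;            /* all pins inputs...              */
              PORTB = 0xFF;            /* ...with pull-ups, none floating */
              ADCSRA &= ~(1 << ADEN);  /* ADC off before PRR gates it     */
              PRR = (1 << PRTIM1) | (1 << PRTIM0)
                  | (1 << PRUSI)  | (1 << PRADC); /* unused blocks off    */

              /* timed sequence: unlock, then interrupt mode at ~8 s */
              WDTCR = (1 << WDCE) | (1 << WDE);
              WDTCR = (1 << WDIE) | (1 << WDP3) | (1 << WDP0);

              set_sleep_mode(SLEEP_MODE_PWR_DOWN);
              sei();
              for (;;) {
                  sleep_enable();
                  sleep_cpu();         /* micro-amp territory until wake  */
                  sleep_disable();
                  /* awake: do the quick check, then back to sleep */
              }
          }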
    • connor_ramsey@ymail.com
      Message 60 of 60, Aug 15, 2013
        Yeah, the usability bit is a primary focus of mine. Just for fun, really, I've taken a stab at a very traditional style, basically using a set of counters in place of an actual processing unit. At its simplest it lacks the hardware to perform Boolean logic operations beyond 1's and 2's complement, but those can still be used to simulate logic functions in a few cycles. It can also simulate bit shifting easily enough by multiplying or dividing by 2, and it places quotients and remainders into separate registers, which makes remainder handling easy. Floating point math isn't difficult, either, and it can even perform <, =, > comparisons between values. As a matter of fact, I can't really say that any electronic computer has ever been built in this fashion. I'm pretty much basing the design entirely on DigiComp2, a mechanical 4-bit binary computer distributed as an educational toy from 1968-1976.
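        The arithmetic identities a counter machine like that would lean on are easy to show in C. This is just an illustration of the identities, not the machine itself, and the 8-bit word width is my own assumption:

            /* Logic, shifts, and comparison built from counter-style
               arithmetic alone. The 8-bit width is assumed for
               illustration; it is not part of the design above. */
            #include <stdint.h>
            #include <stdio.h>

            int main(void) {
                uint8_t x = 0xA5, y = 0x0F;

                uint8_t not_x = 0xFF - x;         /* 1's complement by subtraction */
                uint8_t neg_x = (uint8_t)(0 - x); /* 2's complement by subtraction */
                uint8_t shl   = (uint8_t)(x * 2); /* shift left = multiply by 2;
                                                     the top bit falls off on wrap */
                uint8_t shr   = x / 2;            /* shift right = divide by 2     */
                uint8_t quot  = x / y;            /* quotient register...          */
                uint8_t rem   = x % y;            /* ...remainder register         */
                int cmp = (x < y) ? -1 : (x > y); /* <, =, > from subtraction sign */

                printf("~x=%02X -x=%02X shl=%02X shr=%02X q=%u r=%u cmp=%d\n",
                       not_x, neg_x, shl, shr, quot, rem, cmp);
                return 0;
            }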
        Yes, the 1-bit processor array concept is in fact cellular automata, which is why I refer to each unit as a "cell". I don't entirely understand bandwidth yet, but the idea doesn't really focus on that; it's about robustness of the system, as well as massively parallel processing without most of the usability problems. I would also think it much more flexible, because a key construct is that each cell can alter its connectivity with its neighbors. It would take several orders of magnitude more component failures to trash the system than with traditional hardware, so it could also be incredibly fault tolerant, and I'm thinking along the lines that the entire system would be programmed as a whole, so that determining how each cell should connect can be left up to the OS shell. Also, even if bandwidth restricts how quickly information is processed, another perk of the idea is that a very large amount of data could be processed at once.
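        A toy model of that cell array in C might look like the sketch below. Each cell carries a 1-bit state plus a mask saying which neighbors it currently listens to; the majority-vote update rule and the 8x8 torus are placeholders of mine, standing in for whatever rule the OS shell would actually program into the cells:

            /* Toy 1-bit cell array with per-cell reconfigurable links. */
            #include <stdint.h>
            #include <stdio.h>

            #define W 8
            #define H 8

            typedef struct {
                uint8_t state; /* the cell's single bit                   */
                uint8_t mask;  /* bit i set = listen to neighbor i (NESW) */
            } Cell;

            static Cell grid[H][W], next[H][W];

            static void step(void) {
                const int dy[4] = { -1, 0, 1, 0 };  /* N, E, S, W */
                const int dx[4] = { 0, 1, 0, -1 };
                for (int y = 0; y < H; y++)
                    for (int x = 0; x < W; x++) {
                        int votes = 0, links = 0;
                        for (int i = 0; i < 4; i++) {
                            if (!(grid[y][x].mask & (1 << i)))
                                continue;           /* link switched off */
                            int ny = (y + dy[i] + H) % H;
                            int nx = (x + dx[i] + W) % W;
                            votes += grid[ny][nx].state;
                            links++;
                        }
                        next[y][x] = grid[y][x];
                        if (links)                  /* majority vote of
                                                       connected neighbors */
                            next[y][x].state = (2 * votes > links);
                    }
                for (int y = 0; y < H; y++)
                    for (int x = 0; x < W; x++)
                        grid[y][x] = next[y][x];
            }

            int main(void) {
                for (int y = 0; y < H; y++)
                    for (int x = 0; x < W; x++)
                        grid[y][x].mask = 0x0F;     /* start fully linked */
                grid[3][3].state = 1;
                step();
                printf("cell(3,4) after one step: %u\n", grid[3][4].state);
                return 0;
            }

        Losing a cell just means its neighbors stop listening to it ( clear one mask bit ), which is the fault-tolerance angle in miniature.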
        On a side note, I once came up with an idea for a machine that was mostly electronic, but stored data temporarily as photon states (like, particle for 0 and wave for 1), and would be able to take advantage of the fact that photons, being 4-dimensional objects, can move in more directions than we can perceive, and thus allow the machine to literally do everything at once. What I mean is that each new cycle would take place in the same time frame as the last cycle, so that it could register an infinite amount of data in about a billionth of a second or so. It would only ever have to go forward in time if it needed to write a result back to main memory or update I/O, because the way it works, the events that occurred in previous steps literally would never have happened, so the electronic memory wouldn't be able to remember such a result, and the outside world could only observe the final state of the program, if there was one. Fundamentally it is a photon-based delay line with a negative delay: instead of the delay propagating forward in time, it "rewinds" time slightly. So the potential would be literally instant computation; a stack of infinite size could be fed into the computer and processed in less than a billionth of a second, and an entire program run, branches and subroutines included, could be accomplished in the same amount of time. Only writing data back to memory or porting to the I/Os would really take any time at all, and only the program's final result could be observed from outside, as each step in between would never have happened in our timeline. Also, the program counter would have to be photon-based somehow, since if it were electronic, it wouldn't be able to remember which program line to go to next after time was rewritten again. The only thing I can see being interpreted as dangerous with this is that it does, indeed, rewrite time. But it only rewrites about a billionth of a second each time, and it doesn't affect outside events whatsoever. It has absolutely no way to affect reality.

        --- In beam@yahoogroups.com, Martin McKee wrote:
        > For myself, life is catching up with me. Come Monday, I'll be starting a
        > new degree ( one not even tangentially related to my first ), so I've been
        > rushing around trying to get all that in order -- no time for seriously
        > thinking about robotics at all.
        > I've only got a minute or two now, but, some few comments. The massively
        > parallel 1-bit processor array sounds a bit like a cellular automaton
        > type system. I remember having seen once ( but can I find it now? of
        > course not! ) a computer system that was being developed in that vein,
        > compiler and
        > all. There is certainly potential for quite a bit of performance, but for
        > maximum performance, the bottleneck is often memory bandwidth, and not,
        > strictly, computational. A large number of processors with a handful of
        > neighbors and a 1-bit interconnect is not going to help in that line.
        > To be honest, much of the architecture design lately has been targeted at
        > increasing performance ( adding parallel instruction sets, vectorizability,
        > hyperthreads, etc. ), but because of memory access issues and programming
        > concurrency issues, simple small instructions and a minimal set of fully
        > atomic instructions have seemed to have the best balance of usability and
        > performance. No one has really been able to demonstrate an architecture
        > that is both highly performant and efficient in the face of concurrency (
        > and many parallel computational units ) while remaining easy to program. I
        > think what can be said about "traditional" architectures, is that they are
        > easy to understand and they work "well enough."
        > Back to work...
        > Martin Jay McKee