## Whoa, Breakthrough!

• Oct 1, 2012

Hey, back again, finally. I think I just figured out how neural networks actually compute input factors when the need arises.

I started thinking about a PWM circuit that measures the length of each input pulse, and a longer pulse triggers a pulse injection into a proportionately larger clock loop. Each loop is longer than the last by a particular time constant. An output pulse is triggered instantly as the rising edge of the input data arrives. When all clock loops send a pulse simultaneously, the combined signal marks the end of the output pulse. The output pulse represents the product of the equation.
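The clock-loop idea above can be sketched as a toy discrete-time simulation. One caveat worth noting: two loops firing in step coincide at the least common multiple of their periods, which equals the product only when the two pulse widths share no common factor. The tick-based framing and the `max_ticks` bound are assumptions of this sketch, not part of the original circuit.

```python
def multiply_via_clock_loops(width_a, width_b, max_ticks=10_000):
    """Toy simulation of the clock-loop multiplier: each input pulse
    width (in discrete ticks) seeds a clock loop with that period.
    The output pulse opens at tick 0 (the input's rising edge) and
    closes at the first tick where both loops fire simultaneously --
    which is lcm(width_a, width_b)."""
    for t in range(1, max_ticks + 1):
        if t % width_a == 0 and t % width_b == 0:
            return t  # both loops pulse together: end of output pulse
    return None  # no coincidence within the simulated window

# For coprime widths the coincidence point is the exact product:
print(multiply_via_clock_loops(3, 7))   # -> 21
# For widths with a common factor it is the lcm, not the product:
print(multiply_via_clock_loops(4, 6))   # -> 12
```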

Adding isn't a problem in analog circuitry: just combine the signal values. Subtraction can be done the same way, with an inverted spike representing a negative number. Therefore only a single neuron is really required to compute these types of equations. While represented externally as PWM signals, the neuron actually represents the equation internally using membrane potential values, and outputs it again as PWM.
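A minimal sketch of that single-neuron picture: signed spike amplitudes (an inverted spike is just a negative value) sum into a membrane potential, which is re-emitted as a PWM duty cycle. The `full_scale` normalization is an assumption of this sketch.

```python
def neuron_combine(spikes, full_scale=10.0):
    """Single-neuron sketch: incoming spikes (signed amplitudes; an
    inverted spike is negative) superpose into a membrane potential,
    which is then re-emitted as a PWM duty cycle clipped to [0, 1]."""
    potential = sum(spikes)                          # analog superposition
    duty = max(0.0, min(1.0, potential / full_scale))
    return potential, duty

print(neuron_combine([5.0, 3.0]))    # addition: (8.0, 0.8)
print(neuron_combine([5.0, -3.0]))   # subtraction via inverted spike: (2.0, 0.2)
```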

As far as division goes, I suspect it relies on the inverse principle of the multiplier circuit. Basically it's taking two input pulses and merging them into an output pulse that's shorter than the input pulses by the LCM of both pulses. The table of operations is clear to me; the tough part is envisioning a neural architecture that can execute it. Perhaps the circuit would measure a certain time frame that the output pulse should fit into, this time frame being the capacitive limit of the circuit, and invert the output so that the clock output signals the BEGINNING of the output pulse, rather than the end. The output pulse is then cut off when the allotted time frame is exceeded. This way the output pulse is inversely shorter than the input data rather than longer, but by the same factor.
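One way to sketch that inversion: the clock-loop coincidence (the lcm of the two widths) marks the START of the output pulse, and a fixed window, standing in for the circuit's capacitive limit, cuts it off, so longer inputs yield shorter outputs. The specific `window` value is an assumption of this sketch.

```python
from math import lcm  # Python 3.9+

def divide_via_window(width_a, width_b, window):
    """Toy inversion of the clock-loop multiplier: the coincidence
    point lcm(width_a, width_b) opens the output pulse, and the fixed
    window (the 'capacitive limit') closes it. Output width is the
    window minus the coincidence time, so it shrinks as the inputs
    (and their coincidence point) grow."""
    start = lcm(width_a, width_b)
    if start >= window:
        return 0  # coincidence never fits inside the allotted window
    return window - start

print(divide_via_window(3, 7, 100))   # lcm=21 -> output width 79
print(divide_via_window(4, 6, 100))   # lcm=12 -> output width 88
```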

Feel absolutely free to add to this thought model, I seriously didn't start thinking about it until about an hour before I posted this. And I didn't even start applying the model to neurology until 20 minutes ago. So I think I've covered the basic area, but it could use plenty of expanding. That's what this list is for anyway. Besides, I'm probably not the one to figure out where this model would apply in BEAM to begin with, but the individual units of each model strongly resemble Nu neurons and such. And everyone knows a good robot can at least put 2 and 2 together.

There's nothing wrong with the simple nerve nets everyone's familiar with. They serve their purpose effectively. But hands-on interaction only goes so far. In the animal kingdom, almost every mobile creature, particularly the more intricate ones, must instinctively calculate its environment before acting upon it. Math is an integral component of the way animals behave. They must compute factors like extenuating conditions, immediate conditions, the distance, direction, sensory range and potential interpretation of stimuli, the velocity and behavior of predator and/or prey, and oftentimes multiple or all of these factors.

What I'm saying is: we've built machines that can interact with their world through reflex, now let's build machines that can truly perceive their environment and interact with it in full.

If you have any more ideas, just say so. I'd like to hear back on this soon.

Enjoy, Connor
