The only benefit I can think of is the reduced RAM requirement for
network state, if you code it right. Double-precision floats are 8
bytes and single-precision floats are 4, so there's perhaps an argument
for squeezing connection weights into 2 bytes to halve the total RAM
again, especially if you're dealing with very large networks and want
to run them on a GPU (as one example).
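To make the savings concrete, here's a minimal sketch using NumPy, where `float16` plays the role of the 2-byte weight format. The array size is hypothetical, just chosen to make the byte counts easy to read:

```python
import numpy as np

# Illustrative network size (hypothetical): one million connection weights.
n_weights = 1_000_000

# Bytes needed to store the weight array at each precision.
bytes_per_precision = {
    dtype.__name__: np.zeros(n_weights, dtype=dtype).nbytes
    for dtype in (np.float64, np.float32, np.float16)
}
print(bytes_per_precision)
# float64 -> 8 MB, float32 -> 4 MB, float16 -> 2 MB
```

So each halving of precision halves the weight storage, which is where the GPU-memory argument comes from.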
Functionally, I can't think of a benefit.
On 18 January 2013 06:57, Ken <kstanley@...> wrote:
> I'm not sure about the motivation for doing that either. Could you point
> out where Soltoggio says he does that? (Then I could ask him.)
> --- In firstname.lastname@example.org, Oliver Coleman wrote:
> > Hi all,
> > I've noticed a few authors (some using NEAT and others not, e.g.
> > Soltoggio) use a precision or granularity parameter for weight
> > values, i.e. each weight may only adopt a value from a (fairly
> > large) discrete set of values determined by the
> > granularity/precision parameter.
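For what it's worth, the granularity scheme described in the quoted message can be sketched as snapping each weight to the nearest multiple of a step size. The step value and weights below are purely illustrative, not taken from Soltoggio's papers:

```python
import numpy as np

# Hypothetical granularity step; a smaller step gives a larger discrete set.
granularity = 0.1

weights = np.array([0.234, -1.07, 0.58])

# Snap each weight to the nearest multiple of the granularity step.
quantized = np.round(weights / granularity) * granularity
print(quantized)  # nearest multiples of 0.1
```

The weights still live in ordinary floats; only the set of values they may take on is restricted.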