I've got too much code for such a lame problem to post (I'd have to dig a bunch of other stuff out too). But, just to make sure, I'm printing out each factor of the equation.

Basically, what happens is I'm extracting numbers from a log, then either adding or subtracting based on some criteria. The numbers never go deeper than the hundredths place. There is some multiplication, but it's always an integer x 0.xx, so the product should never go below the hundredths place either. However, halfway through the script, I get a number carried out to 15 or so decimal places.

As before, I print out each part of the equation. Preceding where the messiness starts, it looks like:

111.49 + 1.12 = 112.61
112.61 + 0.48 = 113.09

Then for the next line I get:

113.09 + -16 = 97.0900000000002

I'm looping through, using += on the total each time, so every subsequent number is carried out this far.

Is there something I missed in the integer chapters, or should I be using printf for numbers in general?
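For what it's worth, this isn't an integer issue: most decimal fractions (like 113.09) have no exact binary floating-point representation, so repeated += accumulates tiny errors that eventually show up in the printed total. A minimal sketch in Python (the original script's language isn't shown, and the values below are the ones from the log above) of the symptom and two common fixes, formatting on output and exact decimal arithmetic:

```python
from decimal import Decimal

# Accumulate the same values as in the log; tiny binary-representation
# errors can surface once enough terms are added.
total = 0.0
for delta in (111.49, 1.12, 0.48, -16):
    total += delta
    # Fix 1: format on output rather than printing the raw float.
    print(f"{total:.2f}")

# Classic demonstration that binary floats are inexact:
print(0.1 + 0.2 == 0.3)   # False

# Fix 2: if exact decimal arithmetic matters, use a decimal type.
# Construct from strings so no binary rounding ever happens.
exact = Decimal("113.09") + Decimal("-16")
print(exact)              # 97.09
```

Formatting (printf-style or otherwise) hides the noise at display time; a decimal type removes it entirely, at some cost in speed.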

Tx