
RE: [atlas_craftsman] Re: new article: An Experimental Measurement System

  • Rick Sparber
    Message 1 of 32, Sep 22, 2013

      JT,

       

      Lots of observations so let me respond in line. I certainly do appreciate the time you have spent here.

       

      Rick

       

      From: atlas_craftsman@yahoogroups.com [mailto:atlas_craftsman@yahoogroups.com] On Behalf Of jerdal@...
      Sent: Sunday, September 22, 2013 6:55 PM
      To: atlas_craftsman@yahoogroups.com
      Subject: Re: [atlas_craftsman] Re: new article: An Experimental Measurement System

       

      




Well, I think it is a bit optimistic, actually.  It likely works in selected cases, but I am by no means sure it will work with any/all of the individual HF caliper units of that model.

      >>> You are right. There is no way to prove that it works for all HF calipers. I only reported on my tiny sample of one.

       

       

Generally, while I am by no means a specialist in metrology, the approach seems solid, as I understand it.   Take what is basically a repeatable device, calibrate it at several points, and apply corrections.  Interpolate between cal points.

      >>> Yes, that is what I’m doing in the calibration part of the program.
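>>> A minimal sketch of that calibrate-and-interpolate scheme might look like the following. The gauge-block sizes and raw readings are invented example values, not the article's actual calibration data.

```python
import bisect

def make_corrector(cal_points):
    """cal_points: list of (raw_reading, true_size) pairs, sorted by raw reading."""
    raws = [r for r, _ in cal_points]
    trues = [t for _, t in cal_points]

    def correct(raw):
        # Clamp to the calibrated range rather than extrapolate beyond it.
        if raw <= raws[0]:
            return trues[0]
        if raw >= raws[-1]:
            return trues[-1]
        i = bisect.bisect_right(raws, raw)
        # Linear interpolation between the two bracketing cal points.
        frac = (raw - raws[i - 1]) / (raws[i] - raws[i - 1])
        return trues[i - 1] + frac * (trues[i] - trues[i - 1])

    return correct

# Example: this (hypothetical) caliper reads a half thou low at 1"
# and a half thou high at 2".
correct = make_corrector([(0.0000, 0.0000),
                          (0.9995, 1.0000),
                          (2.0005, 2.0000)])
print(f"{correct(0.9995):.4f}")  # corrected reading at the 1" cal point
```

The table of cal points plays the role of the gage-block map; anything between two blocks is assumed to lie on the straight line connecting them, which is exactly the "locally linear" assumption discussed below.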

       

       

      The assumptions are that:

       

      1)     The tool is repeatable (more about this later)

>>> The test results in the appendix demonstrate (though do not prove) that it is.

       

       

      2)     the errors are such that they are "locally linear", that they are more-or-less on a straight line between calibration points.

      >>> This is the thinnest ice I step out on but, again, the test in the appendix and on page 7 show it was true for my sample of one. I am assuming that since the etched metal foil was generated by photolithography, errors will be due to the transfer of a very accurate master to the foil.

       

       

      3)     That the basic measuring tool justifies the final accuracy.  Another way of asking that it exhibit repeatability, really, but perhaps with a slight difference (more later).

      >>>> the appendix test does demonstrate the increase in accuracy (not proof, just a demonstration).

       

       

OK... keeping it short, but not in order.

       

      *  per item 2)    Local linearity.   You can demonstrate this, by increasing the number of cal points, and observing whether you get non-linear errors.  If not, then the assumption is that the device is good. 

>>> The appendix test was done in steps of 0.001” over a range of 0.010” with 10 readings per step. I found nothing to disprove the possibility.
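>>> One way to put a number on that linearity check: take a block partway between two cal points and see whether the interpolated value disagrees with the block by more than the caliper's display step. The cal pairs and check block below are invented for illustration.

```python
def interp(raw, p0, p1):
    """Linear interpolation between cal points p0=(raw0, true0), p1=(raw1, true1)."""
    (r0, t0), (r1, t1) = p0, p1
    return t0 + (raw - r0) * (t1 - t0) / (r1 - r0)

RESOLUTION = 0.0005  # the caliper's display step, inches

# Two cal points ten thou apart, and a check block halfway between them
# (all values hypothetical).
p0, p1 = (0.5000, 0.5000), (0.5100, 0.5102)
mid_raw, mid_true = 0.5050, 0.5051

residual = interp(mid_raw, p0, p1) - mid_true
# A residual within the display step gives no evidence of non-linearity.
print(abs(residual) <= RESOLUTION)
```

If the residuals start exceeding the display step as the check blocks get finer, that is the sign the straight-line assumption between cal points is breaking down.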

       

      To be fussier, it might be wise to check small increments of the range to a fine scale, in case there is something about the sensor that isn't right, or has periodic errors.  It probably helps if you know the nature of the measurement device so you can devise a worst-case test.  There is no defined end to this suspicion and re-checking, however, and it may not be worth it past a certain point.

      >>> testing to smaller than 0.001” steps would be hard to judge since the caliper only displays in steps of half a thou. What did you have in mind?

       

       

      Items 1) and 3)  Repeatability.  If you do the same measurement many times  in the same way, do you get the same result?

>>> Do you mean like in the appendix? Are there other tests you are suggesting?

       

         If yes, within some acceptable range of error, then it is "repeatable" within that range of error.

Likely most calipers are repeatable in that sense.  If they were not, it would suggest that the scale was unstable, or gears (dial type units) were loose, etc., generally that there was some feature of the measuring device that had a problem.

       

      *  Per item 1, you check the actual scale. Likely there is little wrong with that. The scales have a known resolving ability, and so there is little to question.  

       

      There IS an issue in that the resolution is given as 0.0005".   So no smaller increments than 0.0005" can be "noticed" and reported.  If that is true, then there is an automatic inability to determine the actual distance to any finer range than 0.0005", because all points within that range are detected by the reading device as being the same. 

>>> I agree. In fact, when I round the diameter to display radius, I round to the nearest half thou. This actually generates an error of +/- 0.00025” but it doesn’t seem right to show a resolution better than half a thou.
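>>> That half-thou rounding rule is a one-liner; a sketch (the sample reading is made up):

```python
STEP = 0.0005  # display resolution, inches

def round_half_thou(inches):
    """Round a value to the nearest 0.0005" for display, since showing
    finer digits would imply resolution the caliper doesn't have."""
    return round(inches / STEP) * STEP

# e.g. displaying the radius from a (hypothetical) diameter reading:
diameter = 1.23267
radius = round_half_thou(diameter / 2.0)
print(f"radius: {radius:.4f}")
```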

       

       

      Then also, there is a problem with the display, which has some error in the ability to display the exact number, because the display must be either one number, or the next.   Any error in identifying the exact point of that change adds a +- 1 digit error in the least significant digit.  And there IS some error.

      >>> I see the uncertainty in the half thou digit, not the thou digit.

       

       

Combining these, you can't be sure what the instrument will read out when faced with a measurement which is truly 2.00025". It could read out a number anywhere from 2.0000 to 2.001.   And this readout error is essentially uncalibrateable..... because it is an inherently random effect occurring at the limit of resolution of the measuring device.

      >>> My claim is +/- 0.0005” worst case. I wish I could get more accuracy from it but the digital interface for this particular caliper gives no better resolution. Older calipers did give me a tenth which could be averaged.

       

       

      So, I don't think that in principle the readout can be trusted to better than the 0.001" which HF claims (and I think correctly claims). 

      >>> If I narrow the discussion to just my caliper, I saw better than that.

       

       

*  Per item 3, you look at the entire instrument.     Immediately you find that the caliper violates several conditions for accuracy, beginning with Abbe's law.  Per that, the "standard" (the scale inside the caliper) is clearly NOT in line with the measurement.... the caliper has a substantial lever arm, the part is measured at some distance from the axis of the scale, and is therefore sensitive to measuring force.  Close the calipers a little harder, and you can get a slightly smaller reading.

      >>> I attempt to address this problem by monitoring the velocity of the jaws just as they reach local minimum or maximum. If that velocity is greater than “Limit” (set at 0.1” per second), I display a warning. The hope is that the novice will slow down and not crash the jaws into the workpiece next time. Of course, when I was running my test, I was as gentle as possible so as not to increase error.

       


That error I would somewhat arbitrarily put at another thou, at least, and it is another random and uncalibrateable effect, partly dependent on the user, so no compensation is effective against it.  Maybe a series of measurements averaged is better, but you cannot be sure.

       

       

      So my claim is that you really cannot depend on the calipers closer than a range of 0.002", and even that is somewhat dependent on the user, and random errors associated with how the caliper is manipulated in taking the measurement.

      >>> as with any hand measurement instrument, some skill is needed. Part of that skill is not crashing the jaws into the workpiece. Another part is consistency. Since the program monitors and warns against excessive closure speed plus takes the local min and max, I can only hope this helps.

       

       

      The user issues may add enough variability to bring the real repeatable accuracy to 0.005", which is often claimed as the "real" accuracy of calipers.   I think that a bit of care can reduce that to 0.002" or so.

>>> I know this is more of a demonstration of what is possible, but I was able to get an average error of +0 -0.0002”. I wasn’t trying to screw things up and see how bad it could be with abuse.

       

       

      I suspect  the "lowest number detector" is likely to find the "too much force" measurement every time.   How is it to know that the caliper was closed with too much force?  It just gives the lowest number it sees.

      >>> Explanation can be found on page 6 under the heading “Velocity Check”. I basically time stamp each reading and then calculate the velocity upon impact.
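>>> The time-stamp-and-velocity idea can be sketched roughly like this. The 0.1 in/s limit matches the figure quoted above; the sample readings and timings are invented.

```python
LIMIT = 0.1  # inches per second, the "Limit" threshold from the text

def impact_velocity(samples):
    """samples: list of (time_seconds, reading_inches) pairs in order taken.
    Estimates jaw speed from the last two readings before the turnaround."""
    (t0, x0), (t1, x1) = samples[-2], samples[-1]
    return abs(x1 - x0) / (t1 - t0)

def check(samples):
    """Warn if the jaws were still moving too fast at the reading."""
    if impact_velocity(samples) > LIMIT:
        return "WARNING: closed too fast"
    return "ok"

# Gentle closure: 0.002" of travel over 0.05 s = 0.04 in/s.
print(check([(0.00, 1.010), (0.05, 1.008)]))
# Crash: 0.020" over 0.05 s = 0.4 in/s.
print(check([(0.00, 1.030), (0.05, 1.010)]))
```

The warning cannot undo a too-hard closure, of course; as described above, it only nudges the user to be gentler on the next try.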

       

      Bottom line is that in my opinion, the idea is good, but that it is not going to dig the actual measurement out of the noise reliably to the 0.0005" level.

      >>> I can agree with you that there is no proof that it will always “dig the actual measurement out of the noise reliably” but I was able to do it in my tests so at least some of the time, it does work.

       

       

      I believe you can get the probable overall resolution down to 0.001" by incorporating a multi-reading averaging system into it.  Set it up so the user can take 10 measurements, and then do the average of the 8 "middle" measurements, for instance. 

      >>> That would be very easy to do and is what I did do in the appendix.
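>>> For reference, the take-ten-drop-the-extremes average is a trimmed mean; a sketch, with ten invented readings of a nominal 1.0000” block:

```python
def trimmed_mean(readings, drop=1):
    """Average after dropping the `drop` lowest and highest readings,
    e.g. ten readings -> average of the middle eight with drop=1."""
    if len(readings) <= 2 * drop:
        raise ValueError("not enough readings to trim")
    s = sorted(readings)
    kept = s[drop:len(s) - drop]
    return sum(kept) / len(kept)

# One high and one low outlier get discarded before averaging.
readings = [1.0000, 1.0005, 0.9995, 1.0000, 1.0000,
            1.0005, 1.0000, 0.9990, 1.0010, 1.0000]
print(f"{trimmed_mean(readings):.5f}")
```

Dropping the extremes makes the average robust against a single too-hard closure or a misread, which is exactly the failure mode being discussed here.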

       

       

      That might be the best feature of all.... because most folks won't get the calculator and do 10 measurements plus the math.

       

      >>> It would be a great option that can be turned on and off. I would then have the calibration/interpolation option, the go/no-go option, and the highest accuracy option.

       

       

      JT

       

       

       

      ----- Original Message -----

      Sent: Sunday, September 22, 2013 11:43 AM

      Subject: RE: [atlas_craftsman] Re: new article: An Experimental Measurement System

       

      JT,

       

      I must be in big trouble to be called “Mr.” ;-)

       

      Yes, dial calipers do have additional problems. I forgot about that.

       

With digital calipers there is something that can be done to make them more accurate provided they are repeatable. That is the crux of the experimental measurement system I have been developing. It maps a known set of gage blocks to a caliper’s readout and then can interpolate between the block values. I get an accuracy of +/- 0.0005” over the range of 0 to 5.5” (and could get to 6” with a software change).

       

      You make a lot of sense about mics and standards. I have a good 0-1” mic with no standard but my other mics are third hand and had no standards with them. So you are saying that when the mic’s minimum value is X”, then X” is the size of the standard. It is equivalent to no standard when the minimum is zero.

       

      Also very good point about checking at other than full revolutions of the thimble. I’ve seen that problem on an abused mic that I bought.

       

       

      With your understanding of metrology, I do hope you review my article and can see if I am kidding myself with the approach:

       

      http://rick.sparber.org/CBOO.pdf

       

      Thanks,

       

      Rick

       

      From: atlas_craftsman@yahoogroups.com [mailto:atlas_craftsman@yahoogroups.com] On Behalf Of jerdal@...
      Sent: Sunday, September 22, 2013 9:28 AM
      To: atlas_craftsman@yahoogroups.com
      Subject: Re: [atlas_craftsman] Re: new article: An Experimental Measurement System

       

      



      Mr. Sparber, I am in fact aware of that, which is why I wrote what I wrote....  that there is no point to a setting standard for a caliper.   

       

      You left out the dial caliper, on which you can slip the gear on the rack to move the zero point of the dial pointer to a readable place in the top half of the dial.

       

      Digital calipers have their own set of problems, essentially none of which anybody can do a thing about.  If it is wrong, is zero-set correctly, and a battery does not fix it, it should be binned before it fouls you up in a big way.

       

      Mics commonly do NOT have standards if 0-1".  The zero is the setting standard.  A 1-2" would have a 1", a 2-3" mic would have a 2" standard, etc.   Each of these is the "effective zero" of the mic. 

       

      Of course, nothing stops you from using the 1" standard to check the 1" mic....  But I would suggest that you instead use something which is not an even multiple of 0.025", in order to possibly detect any repeating error of the thread.  One turn is 0.025", so any even inch, for instance, will be on the same exact angular position of the thread, presumably with the same error, which is then not detected.

       

      JT




    • Rick Sparber
      Message 32 of 32, Sep 25, 2013

        JT,

         

Although I wish I could rebel against the fact, I know you are right about the 0.0001” reading mic being good only for 0.001”. That doesn’t mean I don’t push that limit once in a while. It just seems like such a waste ;-)

         

        In a previous email you mentioned the procedure of taking 10 readings, throwing away the min and max, and averaging the rest. I learned this procedure from a metrologist a few years ago. Is your issue with the number of places shown in the result? Would rounding to the nearest half thou be correct?

         

        I do not display any result on the LCD with a resolution better than that of the slider’s display. In all cases, the LCD shows numbers rounded to the nearest half thou or 0. I only walked out on thin ice with that average shown in the appendix.

         

        Maybe one point of disagreement here is generalized theory with its unknowns versus direct testing on a sample of one.

         

        Since the repeatability of these calipers is not specified by the manufacturer, it is impossible to rely on it in any rigorous analysis. As such, I would never go into high volume production with this design and all that I don’t know.

         

However, my sample-of-one setup has given me results that are within 0.0005” of the value stamped on every spacer block I measure. In the majority of cases, it has been spot on. My intention was never to claim this constitutes a proof of the general case.

         

        I see the calibration approach as the same as using a finger DTI to measure a stack of gage blocks on a surface plate. Zero the DTI on the stack and then swing over to the unknown. If the DTI reads zero, the unknown equals the height of the gage block stack within the resolution and repeatability of the DTI.

         

        I do assume that the caliper drifts over time. When power is removed from the computer, all calibration data is erased.

         

        I do believe I am justified in saying that the “change points” in the 0 and 5 transitions do not drift. That part of the caliper is digital so these transition points are hard wired into the logic. I am assuming that these change points are located symmetrically around 0 and 5 but don’t see that as much of a stretch.

         

        I learn a lot from this kind of banter so hope you will continue to point out any wishful thinking I proclaim.

         

        Thanks!

         

        Rick

         

        From: atlas_craftsman@yahoogroups.com [mailto:atlas_craftsman@yahoogroups.com] On Behalf Of jerdal@...
        Sent: Tuesday, September 24, 2013 5:09 PM
        To: atlas_craftsman@yahoogroups.com
        Subject: Re: [atlas_craftsman] Re: new article: An Experimental Measurement System

         

        




        Short story:    GET A MICROMETER.

         

        Longer story:  GET A GOOD BOOK ON METROLOGY

         

        Metrologists will argue that a 0.0001 reading mic is good only to a real accuracy of 0.001".  Read why in the books and you may get in tune with the deal here.

         

        I suppose the first issue is the idea that one CAN in any way reduce error to +0 -0.0002" in a device that responds to a minimum detectable change of 0.0005".  As near as I can tell, you are basing this on the number you got when averaging the readings.

         

But it is NOT POSSIBLE to know that for any single measurement you have less error than you have as an uncertainty band.....  Primarily because you are trying to look below the actual resolution of the device....  While you may detect a difference in one spot, over the range of all possible measurement distances you will find that the basic resolution is the absolute limit of accuracy for any given measurement; you cannot pull out a smaller number, because the device inherently cannot produce that number.

         

        Your premise seems to be that you have an overlay.....  a true measurement, and an overlay of noise imposed on it, so that the output is the sum of the two.  IF you in fact know the character of the noise, you might be able to do what you propose. But you do not, and the instrument cannot help you.  If you assume that the average of a long enough sequence of "0" and "5" LSBs  will give you the actual, then yes, you are getting that.  But there may be consistent or changing offsets, etc, in the results.
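This point can be illustrated with a small simulation (the true size, noise level, and seed below are invented, not measured data): when there is enough noise to dither the reading back and forth across the half-thou quantization boundary, averaging does recover a sub-resolution value; with no noise, every reading is identical and averaging recovers nothing.

```python
import random

STEP = 0.0005  # display resolution, inches

def read_caliper(true_size, noise_sd, rng):
    """Simulated caliper: true size plus Gaussian noise, quantized
    to the display step."""
    raw = true_size + rng.gauss(0.0, noise_sd)
    return round(raw / STEP) * STEP

rng = random.Random(42)
true_size = 1.00020  # sits between two display steps

# Noise comparable to the step size dithers the reading across the
# boundary; the long-run average tends toward the true size.
noisy = [read_caliper(true_size, 0.0004, rng) for _ in range(10000)]
print(f"average of 10000 noisy readings: {sum(noisy)/len(noisy):.5f}")

# With zero noise every reading quantizes to the same step, so the
# average is stuck at the display resolution.
quiet = [read_caliper(true_size, 0.0, rng) for _ in range(10000)]
print(f"noise-free average: {sum(quiet)/len(quiet):.4f}")
```

Whether a real caliper's noise is well-behaved enough for this to work is exactly the open question in this exchange; the simulation only shows that the answer depends on the character of the noise, not on the averaging arithmetic.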

         

That is part of how the NSA gets data off erased HDDs, or an enhanced photo from a fuzzy one.   But many of those techniques depend at least in part on knowing things about the data ahead of time.....  a license plate consists of numbers and letters, and not generally pictographs, in the US or UK, so the fuzziness can be matched to the list of possible data elements, and what they look like when fuzzy.

         

        The assumption may be flawed.  You can measure things, but until you characterise the device under every condition, AND have confidence that it does not drift over time, you have not got enough confidence to use the averaging technique.   And you can never get to where you can take ONE measurement, and reliably detect anything past the inherent resolution, at the very most. To "dive into the noise" you must take a lot of measurements.  Claude Shannon of Bell Labs went through this exhaustively long ago.

         

        Assuming a consistent "change point" for the "0" to "5" transition is also flawed reasoning.... you have no particular justification, and every reason to believe it is a mix of fairly constant (but drifting) offsets and noise.

         

        Bottom line is that I believe you are indulging in a chase for a corrected single measurement, where in reality you can ONLY get what you want by a system of many measurements and processing.   So......... "Get a micrometer".......

         

        JT
