new article: An Experimental Measurement System
This article details a digital caliper based measuring system with an accuracy of +/- 0.0005” over a range of 0 to 5.5”. Total cost is around $45.
If you are interested, please see
Your comments and questions are welcome. All of us are smarter than any one of us.
For the full index of my articles, see rick.sparber.org.
Although I wish I could rebel against the fact, I know you are right about the 0.0001” reading mic being good only for 0.001”. That doesn’t mean I don’t push that limit once in a while. It just seems like such a waste ;-)
In a previous email you mentioned the procedure of taking 10 readings, throwing away the min and max, and averaging the rest. I learned this procedure from a metrologist a few years ago. Is your issue with the number of places shown in the result? Would rounding to the nearest half thou be correct?
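For concreteness, that trimmed-average procedure can be sketched in a few lines of Python (the readings below are hypothetical, not from my setup):

```python
def trimmed_mean(readings):
    """Average a set of readings after discarding the single lowest
    and single highest value (the procedure the metrologist described)."""
    if len(readings) < 3:
        raise ValueError("need at least 3 readings")
    kept = sorted(readings)[1:-1]  # throw away the min and the max
    return sum(kept) / len(kept)

# Ten hypothetical caliper readings, in inches:
readings = [1.2505, 1.2500, 1.2505, 1.2510, 1.2500,
            1.2505, 1.2495, 1.2505, 1.2500, 1.2515]
print(trimmed_mean(readings))
```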
I do not display any result on the LCD with a resolution better than that of the slider’s display. In all cases, the LCD shows numbers rounded to the nearest half thou or 0. I only walked out on thin ice with that average shown in the appendix.
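That display rounding amounts to snapping a value onto the nearest 0.0005” step; a minimal sketch:

```python
def round_half_thou(x):
    """Round a length in inches to the nearest 0.0005" (half thou)."""
    return round(x / 0.0005) * 0.0005

print(round_half_thou(1.25037))  # snaps up to the nearest half thou
print(round_half_thou(1.2502))   # snaps down to an even thou
```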
Maybe one point of disagreement here is generalized theory with its unknowns versus direct testing on a sample of one.
Since the repeatability of these calipers is not specified by the manufacturer, it is impossible to rely on it in any rigorous analysis. As such, I would never go into high volume production with this design and all that I don’t know.
However, my sample-of-one setup has given me results within 0.0005” of the value stamped on every spacer block I measure. In the majority of cases, it has been spot on. My intention was never to claim this constitutes a proof of the general case.
I see the calibration approach as the same as using a finger DTI to measure a stack of gage blocks on a surface plate. Zero the DTI on the stack and then swing over to the unknown. If the DTI reads zero, the unknown equals the height of the gage block stack within the resolution and repeatability of the DTI.
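Arithmetically the comparison is nothing more than an offset from the reference; a trivial sketch (the numbers are hypothetical):

```python
def comparator_measure(stack_height, dti_reading):
    """Comparison measurement: after zeroing the DTI on the gage block
    stack, the unknown equals stack height plus the DTI's deviation."""
    return stack_height + dti_reading

print(comparator_measure(2.0000, 0.0000))  # DTI reads zero: unknown = stack height
```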
I do assume that the caliper drifts over time. When power is removed from the computer, all calibration data is erased.
I do believe I am justified in saying that the “change points” in the 0 and 5 transitions do not drift. That part of the caliper is digital, so these transition points are hard-wired into the logic. I am assuming that these change points are located symmetrically around 0 and 5, but I don’t see that as much of a stretch.
I learn a lot from this kind of banter so hope you will continue to point out any wishful thinking I proclaim.
Short story: GET A MICROMETER.
Longer story: GET A GOOD BOOK ON METROLOGY
Metrologists will argue that a 0.0001 reading mic is good only to a real accuracy of 0.001". Read why in the books and you may get in tune with the deal here.
I suppose the first issue is the idea that one CAN in any way reduce error to +0 -0.0002" in a device that responds to a minimum detectable change of 0.0005". As near as I can tell, you are basing this on the number you got when averaging the readings.
But it is NOT POSSIBLE to know, for any single measurement, that you have less error than your uncertainty band..... primarily because you are trying to look below the actual resolution of the device. While you may detect a difference in one spot, over the range of all possible measurement distances you will find that the basic resolution is the absolute limit of accuracy for any given measurement. You cannot pull out a smaller number, because the device inherently cannot produce that number.
Your premise seems to be that you have an overlay..... a true measurement, and an overlay of noise imposed on it, so that the output is the sum of the two. IF you in fact know the character of the noise, you might be able to do what you propose. But you do not, and the instrument cannot help you. If you assume that the average of a long enough sequence of "0" and "5" LSBs will give you the actual, then yes, you are getting that. But there may be consistent or changing offsets, etc, in the results.
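That dependence on knowing the character of the noise can be shown with a small simulation (this models an idealized quantizer plus assumed zero-mean Gaussian noise, not the actual caliper): with no noise, averaging any number of readings just returns the same quantized value; with roughly an LSB of noise acting as dither, the average does converge toward the true value.

```python
import random

LSB = 0.0005  # display resolution, inches

def quantize(x):
    """Idealized display: snap x to the nearest LSB."""
    return round(x / LSB) * LSB

def average_readings(true_value, noise_sd, n=10000):
    """Average n quantized readings of true_value with added
    zero-mean Gaussian noise of standard deviation noise_sd."""
    total = 0.0
    for _ in range(n):
        total += quantize(true_value + random.gauss(0.0, noise_sd))
    return total / n

random.seed(1)
true_value = 1.25017  # lies between display steps
print(average_readings(true_value, 0.0))   # noiseless: stuck at one quantized value
print(average_readings(true_value, LSB))   # dithered: average approaches 1.25017
```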
That is part of how the NSA gets data off erased HDDs, or how an enhanced photo is made from a fuzzy one. But many of those techniques depend at least in part on knowing things about the data ahead of time..... a license plate consists of numbers and letters, not generally pictographs, in the US or UK. So the fuzziness can be matched to the list of possible data elements, and what they look like when fuzzy.
The assumption may be flawed. You can measure things, but until you characterise the device under every condition, AND have confidence that it does not drift over time, you have not got enough confidence to use the averaging technique. And you can never get to where you can take ONE measurement, and reliably detect anything past the inherent resolution, at the very most. To "dive into the noise" you must take a lot of measurements. Claude Shannon of Bell Labs went through this exhaustively long ago.
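Shannon aside, the arithmetic of "a lot of measurements" is the familiar square-root rule: the standard error of the mean of N independent readings shrinks only as 1/sqrt(N), so each extra digit of averaged-down uncertainty costs a hundredfold more readings. A sketch with an assumed per-reading scatter:

```python
import math

def standard_error(per_reading_sd, n):
    """Standard error of the mean of n independent readings,
    each with standard deviation per_reading_sd."""
    return per_reading_sd / math.sqrt(n)

# Assumed per-reading scatter of 0.0005":
for n in (1, 10, 100, 1000):
    print(n, standard_error(0.0005, n))
```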
Assuming a consistent "change point" for the "0" to "5" transition is also flawed reasoning.... you have no particular justification, and every reason to believe it is a mix of fairly constant (but drifting) offsets and noise.
Bottom line is that I believe you are indulging in a chase for a corrected single measurement, where in reality you can ONLY get what you want by a system of many measurements and processing. So......... "Get a micrometer".......