Calibration and traceability requirements in ISO/QS 9000 are often interpreted as only requiring a calibration sticker on the measuring equipment and the reference to a NIST test number on a calibration certificate. The presentation explores the technical background for requiring calibration and traceability and how the real calibration needs are determined.
The International Vocabulary of Basic and General Terms in Metrology (VIM) [1] defines traceability as:
"property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties."
The first thing to notice is that only the result of a measurement or the value of a standard can be traceable.
Measuring equipment cannot be traceable in and of itself. What is traceable about the equipment is the determination of its imperfections during calibration. What we mean when we talk about traceable equipment is that it is potentially able to produce traceable measuring results.
Likewise, a standard cannot be traceable, but the value assigned to it can.
ISO/QS 9000 does not use the word traceability directly, but requires calibration of inspection, measuring, and test equipment "against certified equipment having a known valid relationship to internationally or nationally recognized standards." While some pieces of the VIM definition are missing, it is clear that the intent of the requirement is that measurements be traceable.
Unfortunately, it is not clear exactly what the writers of the standard hoped to achieve with the requirement. Because the standard does not use the VIM definition verbatim, it has given rise to some questionable practices that can be interpreted as fulfilling the standard's requirement while not being technically sound.
The most common of these practices is to interpret a calibration certificate quoting a NIST test number as formal proof of traceability in the United States.
However, this approach does not account for the uncertainty in all the links in the traceability chain between whatever was calibrated at NIST and the equipment used in the actual calibration, as required in the VIM definition. It also does not demonstrate that there is a proper relationship between the uncertainties in each link.
Furthermore, it should be noted that this practice is discouraged by NIST, which regards its test numbers as an internal administrative tool. NIST also considers its tests confidential between itself and its customer, so you will not be able to get any information from NIST if you want to investigate what a given test number covers.
The requirement in the 3rd edition of QS 9000 that outside calibrations be done by laboratories working in accordance with ISO Guide 25 is a step toward cleaning up this practice, as the accreditation bodies require traceability as defined in the VIM.
But what is the problem and what is it we should be doing instead? To answer this question, we need to look at the purpose of the traceability requirement. We intuitively understand that the requirement has to do with the quality of the measuring results. The customer requiring traceability wants some assurance that the measurements are "right".
The only way to prove that measurements are right is to prove that their uncertainty is low enough to allow the desired conclusions to be drawn from the results, such as whether or not a workpiece meets its specification.
From a technical point of view this makes traceability part of substantiating the stated uncertainty for the measurement. Therefore the best way of demonstrating traceability is by starting with the uncertainty budget for the measurement.
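To make the conformance decision concrete, here is a minimal sketch in the spirit of the decision rules of ISO 14253-1: conformance is only proven when the result lies inside the specification zone reduced by the expanded uncertainty U. The function name and all numbers are illustrative assumptions, not taken from the paper.

```python
# Sketch of a conformance decision rule (cf. ISO 14253-1): conformance is
# proven only if the measured value clears both spec limits by more than U.
def proves_conformance(measured, lower_spec, upper_spec, U):
    """True only if the result lies inside the tolerance zone reduced by U."""
    return (lower_spec + U) <= measured <= (upper_spec - U)

# Illustrative numbers (not from the paper): a 25.000 +/- 0.010 mm feature,
# measured as 24.996 mm with expanded uncertainty U = 0.0045 mm.
print(proves_conformance(24.996, 24.990, 25.010, 0.0045))  # True
```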
Uncertainty budgets should be developed in accordance with the Guide to the Expression of Uncertainty in Measurement (GUM) [2]. This ISO document gives a way to express uncertainty that allows the uncertainty in a calibration to be “carried over” into the uncertainty budget for subsequent measurements with the standard or instrument. It also allows us to compare two measurements of the same item and determine whether they agree, given their uncertainty.
From the uncertainty budget we can find all the significant contributors to the measurement uncertainty (e.g., everything larger than 10-20% of the largest contributor). Some of those contributors depend on the measurement procedure and others depend on the calibrations of the equipment, sensors, standards, etc.
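A small sketch of this screening step, assuming independent contributors that combine in quadrature per the GUM; the function names are ours, and the 10% cutoff is one choice within the 10-20% rule of thumb mentioned above:

```python
import math

def combined_standard_uncertainty(contributors):
    """GUM-style combination of independent contributors: root sum of squares.
    contributors: dict mapping name -> standard (1 sigma) uncertainty."""
    return math.sqrt(sum(u**2 for u in contributors.values()))

def significant_contributors(contributors, fraction=0.10):
    """Keep contributors above a fraction of the largest one
    (10% here, per the 10-20% rule of thumb in the text)."""
    cutoff = fraction * max(contributors.values())
    return {name: u for name, u in contributors.items() if u >= cutoff}

# Hypothetical budget: the 0.01 entry falls below the 10% cutoff.
budget = {"scale": 1.7, "zero point": 1.2, "temperature": 0.01}
print(significant_contributors(budget))                  # scale and zero point
print(round(combined_standard_uncertainty(budget), 2))   # 2.08
```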
Traceability may therefore require calibration of several attributes of the measuring equipment and not all of them may be in the unit of what we are measuring. For example the uncertainty of a length measurement may be highly dependent on temperature and therefore the ability to measure temperature. Thus the traceability of the calibration of the temperature sensor becomes a significant part of the uncertainty for the length measurement.
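As an illustration of how a temperature requirement turns into a length uncertainty, the sketch below converts a temperature deviation limit into a 1 σ length contribution. The steel expansion coefficient and the rectangular distribution are assumptions for illustration; the corresponding entry in Table 1 below (0.61 µm for a 3 °C difference) evidently rests on slightly different assumptions.

```python
import math

ALPHA_STEEL = 11.5e-6  # assumed thermal expansion coefficient of steel, 1/degC

def u_length_from_temperature(length_mm, temp_limit_degC):
    """1 sigma length contribution (mm) from a +/- temperature deviation,
    assuming a rectangular distribution for the deviation."""
    half_width = ALPHA_STEEL * length_mm * temp_limit_degC  # worst-case shift
    return half_width / math.sqrt(3)

# 25 mm measurement, part up to 3 degC away from the micrometer temperature:
print(round(u_length_from_temperature(25.0, 3.0) * 1000, 2), "um")  # ~0.5 um
```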
Consequently, we can state our requirement more specifically by saying that for a measurement to be traceable, the equipment has to be traceably calibrated for the characteristics that influence the uncertainty budget to the tolerances that are assumed in the budget.
This means that traceability is a recursive condition. It shall be possible to follow the uncertainty budgets and the calibration of significant attributes of the equipment for each link in each chain all the way back to the appropriate level, e.g. a national laboratory.
Using this logic, the information needed to prove that a measurement is traceable in the technical sense is:

- the uncertainty budget for the measurement, identifying the significant contributors;
- traceable calibration results, with stated uncertainties, for the equipment attributes behind those significant contributors; and
- evidence that each of those calibrations comes from a credible source.
The requirement of credibility of the calibration source is what recursively ensures that this information is available at each link in all the chains all the way back to the national laboratory level.
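The recursion can be pictured as a walk up the chain of calibrations. The sketch below is purely illustrative (the data structure and check are ours, not the paper's): each link must carry a stated uncertainty, and the chain must terminate at a national laboratory.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CalibrationLink:
    """One link in a traceability chain (illustrative, not from the paper)."""
    laboratory: str
    stated_uncertainty_um: Optional[float]  # None means no stated uncertainty
    source: Optional["CalibrationLink"]     # next link up the chain
    is_national_lab: bool = False

def is_traceable(link: CalibrationLink) -> bool:
    """Every link must state its uncertainty, and the chain must
    terminate at a national laboratory (e.g. NIST)."""
    if link.stated_uncertainty_um is None:
        return False
    if link.is_national_lab:
        return True
    return link.source is not None and is_traceable(link.source)

# Example chain: in-house lab <- accredited lab <- NIST
nist = CalibrationLink("NIST", 0.02, None, is_national_lab=True)
accredited = CalibrationLink("Accredited lab", 0.1, nist)
in_house = CalibrationLink("In-house lab", 0.31, accredited)
print(is_traceable(in_house))  # True
```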
Accreditation is intended to provide this credibility. Accreditation is essentially a third party putting a seal of approval on the comparisons and the accompanying uncertainties that a calibration laboratory performs.
One of the requirements placed on accredited laboratories is that they may only have their standards and equipment calibrated by other accredited laboratories.
The result is that each accredited laboratory maintains one or a few links in the chain of comparisons back to the SI unit. By documenting the links they maintain and the source for the next link in the chain, the accredited laboratories ensure the continuity of the traceability chain.
Therefore all the information needed to follow the traceability trails back to the National Laboratory level is now available and traceability can be proven. This explains the QS 9000 3rd edition requirement for outside calibrations to be performed by laboratories accredited to ISO Guide 25.
Having discussed the concept of traceability, let us look at its consequences for our calibrations. Calibration is often performed without technical justification regarding the attributes that get calibrated and the resulting calibration uncertainty. Worse yet, calibrations are sometimes based solely on whatever auditors demand. This approach leaves companies paying for excessive calibration in some areas, while leaving them exposed in other areas where they "undercalibrate".
This section assumes the reader is familiar with the ISO Guide to the Expression of Uncertainty in Measurement (GUM) [2], which is the internationally recognized means of expressing measuring uncertainty. The determination of the individual uncertainty contributors and their combination is therefore treated without much detail. For more details on the concepts used in that respect, see ISO/TR 14253-2 [3], developed by ISO Technical Committee 213.
This section concentrates on exploring the conclusions that can be drawn from an uncertainty budget regarding the "metrological confirmation system" (see [3]), or calibration system, used to maintain traceability, and the capabilities assumed in various parts of uncertainty budgets. This system may have global elements controlled by NIST or equivalent agencies in other countries, as well as elements maintained within an individual company.
At a tactical company level these concepts can be used to determine what capabilities are important and of value to the company. At a strategic (National or International) level the concepts are useful in determining what capabilities are important to us as a nation or as a global society.
This example is a very simple one, concerning ordinary 0-25 mm workshop micrometers as they are used in machine shops all over the world.
If we look at the ISO standard for micrometers, ISO 3611 [4], we see a number of requirements for these micrometers (national standards of various countries have essentially the same requirements as the ISO standards). If we combine these requirements with the environmental conditions in a normal workshop, air conditioned for creature comfort (if at all), we get the following contributors to the uncertainty budget for the measurements performed with the micrometers:
Contributor | Variation limit | Equivalent influence at 1 σ level |
---|---|---|
Scale error of the micrometer | 3 µm | 1.73 µm |
Zero-point error | 2 µm | 1.15 µm |
Parallelism of anvils | 2 µm | 0.58 µm |
Average temperature | ±5 °C | 0.09 µm |
Temperature difference | 3 °C | 0.61 µm |
Repeatability | 6 σ = 2 µm | 0.33 µm |
Combined standard uncertainty, u | | 2.27 µm |
Expanded uncertainty, U (k = 2) | | 4.54 µm |
Table 1: Uncertainty budget for measurement using a 0-25 mm micrometer in a workshop environment. Assumptions: micrometer calibrated according to ISO 3611; workshop temperature 20 °C ± 5 °C; maximum difference in temperature between micrometer and workpiece 3 °C.
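The combination in Table 1 can be reproduced in a few lines. The contributions are taken directly from the table, combined in quadrature per the GUM, and expanded with the coverage factor k = 2, which is an assumption consistent with Tables 2 and 3 below.

```python
import math

# 1 sigma contributions from Table 1, in micrometers.
table1_um = {
    "scale error":            1.73,
    "zero-point error":       1.15,
    "parallelism of anvils":  0.58,
    "average temperature":    0.09,
    "temperature difference": 0.61,
    "repeatability":          0.33,
}
u = math.sqrt(sum(c**2 for c in table1_um.values()))
U = 2 * u  # assumed coverage factor k = 2, consistent with Tables 2 and 3
print(f"u = {u:.2f} um, U = {U:.2f} um")  # u = 2.27 um, U = 4.53 um
# (Table 1 doubles the rounded u, giving U = 4.54 um.)
```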
If we look closer at the contributions, we can derive several very useful pieces of information that can help us decide where to invest (or not to invest) to improve the uncertainty of our measurement.
We see that we can split our uncertainty contributors into three groups: the first three contributors are instrument attributes, the next two are due to environmental conditions, and the last one is due to the measurement process itself.
The first observation is that the major contributor is the scale error of the micrometer. This is one of the attributes we need to calibrate in order to make the measurement traceable. Assuming that we calibrate our micrometers in-house, this leads us to focus on the procedure we use to calibrate the scale of the micrometers.
The normal procedure for this calibration is to use a special set of gage blocks dimensioned to check different points along the scale, as well as pitch errors within each revolution of the spindle of the micrometer. Let us assume that we are planning our capability to calibrate the scale error of our own micrometers in-house.
The second observation is that temperature is not a main contributor. This tells us that if we had planned to move the measurement to a temperature controlled environment, we would have been wasting our money, as there are much larger contributors to the uncertainty than the temperature influence. It also shows us that the exact measurement of the temperature is not relevant for establishing traceability for the measurement. Developing an uncertainty budget up front just saved us the cost of a temperature controlled enclosure for our micrometer measurements!
Our initial assumption is that we need a good set of steel gage blocks, ISO 3650 [5] grade 0, as well as a temperature controlled environment running at 20 °C ± 1 °C in order to properly calibrate the micrometers. To verify these assumptions, we look at the uncertainty budget for the calibration of the scale error:
Contributor | Variation limit | Equivalent influence at 1 σ level |
---|---|---|
Gage block | 0.14 µm | 0.08 µm |
Digital step/repeatability | 1 µm | 0.29 µm |
Average temperature | 1 °C | 0.0017 µm |
Temperature difference | 0.5 °C | 0.1 µm |
Combined standard uncertainty, u | | 0.31 µm |
Expanded uncertainty, U (k = 2) | | 0.62 µm |
Table 2: Uncertainty budget for the calibration of the scale error of a 0-25 mm micrometer using grade 0 gage blocks in a laboratory environment. Assumptions: laboratory temperature 20 °C ± 1 °C; maximum difference in temperature between micrometer and gage block 0.5 °C.
We find from this initial analysis that the major contributor to the uncertainty in the calibration is the resolution of the micrometers themselves. That means we can redesign our measurement process for calibrating the scale error of our micrometers, while still maintaining adequate traceability.
First of all, we can use less expensive grade 1 gage blocks. We may also be able to extend the calibration interval for the gage blocks: the tolerance for grade 1 blocks is more than double that of grade 0 blocks, so it is reasonable to assume that they take longer to grow or wear out of tolerance. It is also generally less costly to have grade 1 blocks calibrated than grade 0 blocks because of the larger tolerances, so each calibration is cheaper as well.
Secondly, we see that the average temperature at which the calibration is performed has negligible influence, as long as we are using steel gage blocks to calibrate a steel gage. The temperature difference between the gage block and the micrometer is the second largest contributor, but even if it is doubled, u will only change by about 13% and still be small compared to the tolerance for the micrometer scale error. Therefore we can perform the calibration of the micrometer in the shop-floor environment, as long as we are careful to let the micrometer and gage blocks acclimate to the same temperature.
Contributor | Variation limit | Equivalent influence at 1 σ level |
---|---|---|
Gage block | 0.3 µm | 0.17 µm |
Digital step/repeatability | 1 µm | 0.29 µm |
Average temperature | 5 °C | 0.09 µm |
Temperature difference | 1 °C | 0.2 µm |
Combined standard uncertainty, u | | 0.4 µm |
Expanded uncertainty, U (k = 2) | | 0.8 µm |
Table 3: Uncertainty budget for the calibration of the scale error of a 0-25 mm micrometer using grade 1 gage blocks in a workshop environment. Assumptions: workshop temperature 20 °C ± 5 °C; maximum difference in temperature between micrometer and gage block 1 °C.
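As a check on the comparison between Tables 2 and 3 (and on the roughly 13% sensitivity claim above), the following sketch recomputes both budgets from the tabulated 1 σ contributions; the dictionary keys are our shorthand for the table rows.

```python
import math

def u_combined(contribs_um):
    """Root-sum-of-squares combination of 1 sigma contributions."""
    return math.sqrt(sum(c**2 for c in contribs_um.values()))

# 1 sigma contributions (um) from Table 2 (lab, grade 0) and Table 3 (shop, grade 1).
lab_grade0  = {"gage block": 0.08, "digital step": 0.29,
               "avg temperature": 0.0017, "temp difference": 0.1}
shop_grade1 = {"gage block": 0.17, "digital step": 0.29,
               "avg temperature": 0.09, "temp difference": 0.2}

print(f"lab, grade 0:  u = {u_combined(lab_grade0):.2f} um")   # 0.32 (0.31 in Table 2)
print(f"shop, grade 1: u = {u_combined(shop_grade1):.2f} um")  # 0.40

# Sensitivity check: doubling the temperature-difference contribution in the
# laboratory budget raises u by only ~14%, matching the text's "about 13%".
doubled = dict(lab_grade0, **{"temp difference": 0.2})
print(f"doubled dT:    u = {u_combined(doubled):.2f} um")      # 0.36
```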
The example shows that we can use uncertainty budgeting both to evaluate the actual measurement and the environment we perform it in, and to determine the calibration requirements for our gages, in terms of both the standards and the environment necessary to maintain proper traceability.
The example also demonstrates that dramatic measurement cost savings can be realized by changing the measurement procedure in areas where the influence on the uncertainty is small.
The concept of traceability requires more than just a calibration sticker. It requires an uncertainty budget for the measurement process and traceable calibration of all the instrument and environmental attributes that have a significant influence on the uncertainty.
In addition to identifying the instrument attributes that need to be calibrated to establish traceability, the uncertainty budget also allows for the optimization of measurement processes both in terms of uncertainty and dollars and cents.