Current gage recall systems connect gages and a calendar, administering when gages are calibrated. More recent systems add some result verification and links to calibration procedures in the form of "dumb" word processor documents.
The new system connects gages, calendar, procedures, standards and tolerances by using calibration procedures built of individual tasks as the focal point. The system verifies the calibration results for each task against stored tolerance limits, ensuring that no gage is mistakenly accepted or rejected.
The scope of each task is either the gage type, subtype, size or the individual gage, combining the advantages of commonality and flexibility.
The system keeps track of valid calibration standards for each task, as well as which specific standard was used for each calibration, and their calibration status.
The background for this paper is an effort under way at Cummins Engine Company to develop a corporate standard gage management system. In the development of the system, standard requirements have been analyzed, and it has been concluded that most commercially available systems have shortcomings when held up against these requirements. To overcome these shortcomings, separate, parallel systems have to be maintained, for example for document control, and even with such parallel systems there is still a potential for gaps in the coverage of the gage management system.
This paper presents an analysis of the requirements for a gage management system that can be derived from emerging ISO standards such as the ISO 14253(1,2) series. It then discusses ways to implement solutions to these requirements in a way that combines flexibility with ease of use. Finally, it discusses some features of a proposed gage management system that uses concepts running counter to what is generally considered good data base practice, but offers significant ease of use advantages.
ISO 14253-1(1) lays out a set of decision rules that apply when proving conformance or non-conformance with specification for a given workpiece feature (diameter, surface finish, roundness, etc.).
The rules basically state that a supplier has to subtract his measuring uncertainty from the tolerance to find the interval in which he can prove conformance with specification. The rules also state that a customer has to add his uncertainty to the tolerance in order to find the interval that has to be exceeded in order to prove non-conformance. The rules are discussed in (3).
The rules given in ISO 14253-1(1) raise the question: "How do we assess our measurement uncertainty?", since we have to assess it in order to add it to or subtract it from the tolerance. ISO/TR 14253-2(2) is intended to answer that question. It prescribes an iterative process based on the ISO Guide to the Expression of Uncertainty in Measurement(4) as the tool for determining the measuring uncertainty.
The basis for the determination of the uncertainty is an uncertainty budget. The uncertainty budget lists all the items that contribute to the measurement uncertainty, their magnitude and the resulting Combined Standard Uncertainty, u. For a simple micrometer measurement, an uncertainty budget may look like the following:
| Contributor | Variation limit | Equivalent influence at 1 s level |
|---|---|---|
| Scale error of the micrometer | 3 µm | 1.73 µm |
| Zero-point error | 2 µm | 1.15 µm |
| Average temperature | +/- 5 °C | 0.09 µm |
| Temperature difference | 3 °C | 0.61 µm |
| Parallelism of anvils | 2 µm | 0.58 µm |
| Repeatability | 6 sigma = 2 µm | 0.33 µm |
| Combined Standard Uncertainty, u | | 2.27 µm |
| Expanded Uncertainty, U (k = 2) | | 4.5 µm |
Table 1: Uncertainty budget for measurement using a 0-25 mm micrometer in a workshop environment. Assumptions: micrometer calibrated according to ISO 3611, workshop temperature 20 °C +/- 5 °C, maximum difference in temperature between micrometer and workpiece 3 °C.
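The combined standard uncertainty is the root sum of squares of the individual 1 s contributions, and the expanded uncertainty is obtained by multiplying by a coverage factor (assumed here to be k = 2, the common choice). A minimal sketch of the arithmetic, using the contributor values from Table 1:

```python
import math

# 1 s contributions from Table 1, in micrometers
contributions = [1.73, 1.15, 0.09, 0.61, 0.58, 0.33]

# Combined standard uncertainty: root sum of squares of the contributions
u = math.sqrt(sum(c ** 2 for c in contributions))

# Expanded uncertainty with coverage factor k = 2 (an assumption; the
# document does not state the coverage factor explicitly)
k = 2
U = k * u

print(round(u, 2))  # combined standard uncertainty, µm -> 2.27
print(round(U, 1))  # expanded uncertainty, µm -> 4.5
```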
We find in the uncertainty budget that there are 3 contributors that are linked to attributes of the micrometer itself. These are:
- the scale error of the micrometer
- the zero-point error
- the parallelism of the anvils
This directly gives us the calibration requirements for the micrometer. Using this approach, we can interpret the calibration of the micrometer as the process we use to ensure that the assumptions that we have made in the uncertainty budget regarding the micrometer are true.
In this specific example it means that we have 3 specific tasks in the calibration of our micrometer. We also know the tolerance limits, since they are given in our uncertainty budget. Normally, as in this case, the values will have been taken from either a standard or a manufacturer's specification, but that is not a given. We choose what values we put in our uncertainty budget, and consequently what tolerances we have to calibrate our gages to.
Looking at the calibration requirements, we may find that some of the items are relatively unstable, such as the zero-point setting. These items need to be calibrated fairly frequently to be kept under control. Other items, such as the parallelism of the anvils, are so stable that once they are confirmed to be within specification during the initial acceptance test, they only need to be calibrated very infrequently.
The example is trivial, but in other cases there can be substantial time and money savings in being able to have several calibration procedures for a gage, some of which are executed often and others which are executed more rarely. A typical example of such a situation is a coordinate measuring machine (CMM), where the delicate mechanisms in the probe head need to be checked often, whereas the scales of the machine are much more stable.
Based on the requirements discussed above, a design for a gage management system has been developed. The main requirement is that the system ties together 5 different entities:
- gages
- the calendar (calibration schedules)
- calibration procedures
- standards
- tolerances
The focal point for the system is the calibration procedure. A calibration procedure is made up of a number of tasks, each of which may or may not have a result associated with it. The system must be able to validate each result entered during a calibration against tolerance limits held within the system. "Clean the gage" is an example of a task that does not have a result associated with it, whereas "Check the indication using a 25 mm gageblock" is a task with a result associated with it.
Additionally, each task may or may not have one or more standards associated with it. The system must capture which standard is used during a calibration and verify it against a list of acceptable standards for that task. For example, "Check the indication using a 25 mm gageblock" has a standard associated with it (in this case the 25 mm gageblock). For such tasks it is necessary to maintain a list of valid standards, as a gage lab may have several sets of gage blocks and only some may be of an adequate accuracy class.
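The task/standard association described above can be sketched as follows. This is an illustrative model only; the class and standard names are hypothetical, not taken from the actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    # IDs of acceptable standards for this task; empty means no standard needed
    valid_standards: set = field(default_factory=set)

    def check_standard(self, standard_id):
        """Verify the standard used against the task's list of acceptable standards."""
        if not self.valid_standards:
            return True  # task needs no standard, e.g. "Clean the gage"
        return standard_id in self.valid_standards

clean = Task("Clean the gage")
check = Task("Check the indication using a 25 mm gageblock",
             valid_standards={"GB-25-A", "GB-25-B"})  # adequate-class blocks only

print(check.check_standard("GB-25-A"))  # True
print(check.check_standard("GB-25-C"))  # False: block of inadequate accuracy class
```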
In order to maintain proper traceability documentation, it is necessary to know which specific 25 mm gage block was used in each instance of the task being performed. To cater for this, the system has to keep track of which standards are actually used in each calibration of each gage. With this information in the system, it is possible to document forward and backward traceability.
Forward traceability is the ability to find out what gages have their traceability hinging on a specific standard. It is useful when a standard is found faulty or out of tolerance during its calibration and it is necessary to find all the gages that may be suspect as a result. Ultimately this should be carried on to the next level to find all the product that has been approved using the gages in question, but it is viewed as the responsibility of the process documentation system to keep track of this and thus outside the scope of the gage management system.
Backward traceability is the ability to start with a gage and follow its traceability path back through the standards used for its calibration to the point where the traceability enters the company through an outside calibration of a standard. Backward traceability is useful when the results of a measurement (conformance of a product) are questioned by a customer. It is used to prove the traceability of the calibrated properties of the gage.
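Both directions of traceability amount to traversing the stored record of which standards were used in which calibrations. A minimal sketch, with hypothetical gage and standard IDs:

```python
# Each calibration record links a gage (or a standard calibrated in-house)
# to the standards actually used; all IDs are illustrative.
calibrations = [
    {"gage": "mic-007", "standards": ["GB-25-A"]},
    {"gage": "mic-008", "standards": ["GB-25-A", "GB-50-A"]},
    {"gage": "GB-25-A", "standards": ["master-set-1"]},  # in-house calibrated standard
]

def forward_trace(standard):
    """Gages whose traceability hinges on this standard, directly or indirectly."""
    suspects = set()
    frontier = {standard}
    while frontier:
        s = frontier.pop()
        for rec in calibrations:
            if s in rec["standards"] and rec["gage"] not in suspects:
                suspects.add(rec["gage"])
                frontier.add(rec["gage"])  # a calibrated standard may itself be used
    return suspects

def backward_trace(gage):
    """Standards on the traceability path of a gage, back to the outside calibration."""
    path = set()
    frontier = {gage}
    while frontier:
        g = frontier.pop()
        for rec in calibrations:
            if rec["gage"] == g:
                for s in rec["standards"]:
                    if s not in path:
                        path.add(s)
                        frontier.add(s)
    return path

print(forward_trace("master-set-1"))  # GB-25-A and both micrometers are suspect
```

The forward query supports the out-of-tolerance recall described above; the backward query supports the customer-facing proof of traceability.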
The last piece of information necessary to show traceability is the actual readings obtained during the calibration. To accommodate this, the envisioned system captures the results of each task that has a result associated with it. To further assist in ensuring the traceability of the gage, the system validates the entered results against a set of rules that are stored in the system along with the task. This way, the operator does not have to look up the tolerances for each gage during each calibration, which would either be time consuming or not be done. The consequence of the former is lost efficiency; the consequence of the latter is that some gages get accepted that should not have been, exposing the company.
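The validation of an entered result against stored tolerance limits can be sketched as a simple check of the deviation from nominal against the tolerance band; function and field names here are illustrative, not the actual system's:

```python
def validate_result(reading, nominal, lower_tol, upper_tol):
    """Return True if the reading's deviation from nominal lies within the
    tolerance band stored with the task."""
    deviation = reading - nominal
    return lower_tol <= deviation <= upper_tol

# "Check the indication using a 25 mm gageblock": tolerance taken from the
# uncertainty budget, e.g. the +/- 3 µm scale error limit (values in mm)
print(validate_result(25.002, 25.0, -0.003, 0.003))  # True: accepted
print(validate_result(25.005, 25.0, -0.003, 0.003))  # False: gage flagged
```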
For usability purposes, a type (micrometer), sub-type (external micrometer) and optionally a size (0-25 mm external micrometer) are associated with each gage. This allows us, within each calibration procedure, to associate each task with either the type, the subtype, the size or the individual gage.
The purpose of doing this is that we can have an overall procedure applicable to all micrometers, but with varying tasks based on the attributes of the particular micrometer. For example the tasks associated with checking the spindle error will have different check points and tolerances for a 0-25 mm micrometer and a 75-100 mm micrometer. Therefore these tasks will be associated with the size attribute.
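Assembling the procedure for one gage then amounts to collecting the tasks whose scope matches that gage's attributes and presenting them in sequence. A sketch under stated assumptions (all task texts, scopes and sequence numbers are hypothetical examples):

```python
# Tasks scoped at type, subtype, size or individual-gage level
tasks = [
    {"seq": 10, "scope": ("type", "micrometer"), "text": "Clean the gage"},
    {"seq": 20, "scope": ("subtype", "external"), "text": "Check parallelism of anvils"},
    {"seq": 30, "scope": ("size", "0-25 mm"), "text": "Check indication at 25 mm"},
    {"seq": 30, "scope": ("size", "75-100 mm"), "text": "Check indication at 100 mm"},
    {"seq": 40, "scope": ("gage", "mic-007"), "text": "Check custom anvil attachment"},
]

def build_procedure(gage_type, subtype, size, gage_id):
    """Collect the tasks applicable to one gage and return them in sequence."""
    attrs = {("type", gage_type), ("subtype", subtype),
             ("size", size), ("gage", gage_id)}
    applicable = [t for t in tasks if t["scope"] in attrs]
    return [t["text"] for t in sorted(applicable, key=lambda t: t["seq"])]

print(build_procedure("micrometer", "external", "0-25 mm", "mic-007"))
```

A 75-100 mm micrometer would pick up the 100 mm check point instead of the 25 mm one, while sharing the type- and subtype-level tasks, which is exactly the commonality/flexibility trade-off described above.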
There are two traditional approaches to calibration procedures. One is to have a generic procedure for all gages of one type. The other is to have individual procedures for each set of identical gages.
The advantage of our approach over the generic procedure approach is that specific checkpoints and tolerances can be held within the procedure, without the operator having to look them up in a table or a standard at each calibration.
The advantage of our approach over the individual procedure approach is that it enables us to update or change a procedure in one place and cascade that change to all the places where it is relevant, for example when a tolerance that is valid for all micrometers changes.
Our approach combines the ease of use of the generic procedure approach (all requirements only documented in one place) with the flexibility and detailed accuracy of the individual procedure approach, giving us the best of both worlds.
Another point discussed above is that having only one calibration procedure for a gage is generally not cost effective. A system that only links the gage to a schedule, without knowing what needs to be done to the gage during calibration, cannot keep track of several schedules for a gage, nor flag which procedure needs to be performed at each calibration.
In our approach, each gage can have several calibration procedures associated with it and each procedure can run on its own individual schedule, such that a procedure for checking the head of the CMM mentioned above can be performed monthly (or even more often) while the procedure that checks the scales is only run annually.
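Determining what is due at any given date is then a per-procedure check rather than a per-gage check. A minimal sketch (procedure names, intervals and dates are illustrative only):

```python
from datetime import date, timedelta

# Each procedure for a gage runs on its own individual schedule
procedures = [
    {"name": "Probe head check", "interval_days": 30, "last_done": date(2024, 5, 1)},
    {"name": "Scale check", "interval_days": 365, "last_done": date(2024, 1, 15)},
]

def due_procedures(today):
    """Flag which of the gage's procedures are due at the given date."""
    return [p["name"] for p in procedures
            if today >= p["last_done"] + timedelta(days=p["interval_days"])]

print(due_procedures(date(2024, 6, 15)))  # only the monthly probe head check is due
```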
This saves cost, since checking the scales monthly would be a waste of time, and checking the probe head only annually would be an exposure.
It turns out that there are some difficulties associated with the task based approach. These are primarily centered around the ability to keep track of the sequence of the tasks associated with the different levels (type, sub-type, size and individual gage) within a calibration procedure, when a new task is added to that procedure.
The calibration procedure pyramid figure can help explain this difficulty. It shows the calibration procedures associated with gages of "Type 1". We see that all the procedures contain the tasks T1(1) and T1(2). There are two subtypes of Type 1, Subtype 1 and Subtype 2. Subtype 1 has the tasks S1(1) and S1(2) associated with it. Subtype 2 has task S2(1) and S2(2). The same system continues for sizes and individual gages, such that each of the 8 individual gages has an individual procedure associated with it. The sequence of the tasks is the sequence in which they are presented to the operator, and the sequence in which they are expected to be carried out.
Assume that gage 7 in the figure is a 0-25 mm external micrometer. We are looking at the procedure for this gage when we find that we need to insert a new task for all micrometers. Let us say that the new task, T1(1b), is to be performed just after task G7(1). It is obvious that we can position the task correctly in the sequence of the procedure we are looking at, but there may be some entirely different tasks present in the procedure for internal micrometers that make it impossible for the system to deduce by itself where the new task fits in the sequence of that procedure.
If we look at the example, it is intuitive that the task fits after task Z4(1) in the procedure for Size 4 (in the example: 0-25 mm external micrometers), and that it fits after task S2(1) in the procedure for Subtype 2 (in the example: external micrometers). But when we move to Size 3, we see that it is impossible to know whether the new task needs to be inserted before or after task Z3(1). This problem cascades down to the procedures for Gage 5 and Gage 6, which cannot be resolved either.
The set of rules that has been developed will ensure the correct sequencing of tasks within a procedure in maybe 90-95 % of the cases, but there are some situations like the one above, where it simply cannot be done.
The traditional data base answer to this problem is to flag the update of the procedure as failed and lock the affected records, in this case the procedures for Size 3, Gage 5 and Gage 6. We were very concerned about this approach, because one update may create a substantial number of locked records (procedures), and having to clear these before further work can be done may deter gage lab managers from improving their procedures when they see a need.
The approach we have taken is at the same time simple, novel, user friendly and unheard of in data base design. We allow the system to insert the new task at a "best guess" position in the sequence (in all 3 procedures the best guess is immediately following task S2(1)). We then mark that procedure and task as ambiguous, but do not lock them. This means that further updates can be made to the procedure without it having to be verified first, so that work can go on.
It is not until we try to use the procedure in a calibration, or as the basis for updating other calibration procedures, that the system requires us to correct and/or verify the sequence of the procedure before it will let us use it. When we have verified the procedure, the system uses the new information (the fact that the sequence of this procedure is now known to be correct) to see if it can auto-verify some of the other procedures marked as ambiguous, without operator intervention.
In the example, if we are about to calibrate Gage 5, we must verify the sequence of the procedure, before we can perform the calibration. If, at the verification, we tell the system that the correct position for the task in this procedure is just after Task Z3(1), then the system can correct the procedures for Size 3 and Gage 6 with this added piece of information.
We call it the Master Mind approach, since it resembles that game, where each piece of information is added to the others until the correct combination can be deduced. The advantage of this approach is that there is never a long list of procedures that need verifying before work can proceed. The procedures are verified on an as-needed basis and often even auto-verified, as other related procedures are verified.
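The best-guess insertion and subsequent auto-verification might be sketched as follows. This is a deliberately simplified illustration; the real resolution rules are more elaborate, and all names are hypothetical:

```python
# Two of the procedures from the example that cannot be resolved automatically
procedures = {
    "Size 3": {"tasks": ["S2(1)", "Z3(1)"], "ambiguous": None},
    "Gage 5": {"tasks": ["S2(1)", "Z3(1)", "G5(1)"], "ambiguous": None},
}

def insert_best_guess(new_task, after):
    """Insert new_task at its best-guess position and flag each procedure
    as ambiguous instead of locking it."""
    for proc in procedures.values():
        i = proc["tasks"].index(after) + 1
        proc["tasks"].insert(i, new_task)
        proc["ambiguous"] = new_task  # sequence unconfirmed until verified

def verify(new_task, confirmed_after):
    """Operator confirms the position in one procedure; the shared answer is
    reused to auto-verify sibling procedures containing the same anchor task."""
    for proc in procedures.values():
        if proc["ambiguous"] == new_task and confirmed_after in proc["tasks"]:
            proc["tasks"].remove(new_task)
            proc["tasks"].insert(proc["tasks"].index(confirmed_after) + 1, new_task)
            proc["ambiguous"] = None

insert_best_guess("T1(1b)", after="S2(1)")       # best guess in all procedures
verify("T1(1b)", confirmed_after="Z3(1)")        # answered at Gage 5's calibration
print(procedures["Size 3"]["tasks"])             # corrected without operator input
```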
Gage management systems of the future will need to contain much more information than current systems do. The requirements are already written in standards, such as ISO/QS 9000(5). It is only a question of time before auditors will start asking for this information.
The system outlined in this paper is one approach to managing this additional information. It adds value by being flexible and by helping the calibration function do its work under an entirely new paradigm, where the purpose of calibration is to ensure that our gages live up to the assumptions we make in our uncertainty budgets.
When we use uncertainty budgets to define our calibration needs, we can make clear, data driven arguments for exactly what we do and do not need to calibrate. This will help us be more cost effective and ultimately make higher quality products.
1 ISO 14253-1:1997: Geometrical Product Specifications (GPS) - Inspection by measurement of workpieces and measuring instruments - Part 1: Decision rules.
2 ISO/DTR 14253-2:1997: Geometrical Product Specifications (GPS) - Inspection by measurement of workpieces and measuring instruments - Part 2: Guide to the estimation of uncertainty in GPS measurement, in calibration of measuring instruments and in product verification.
3 Nielsen, H. S.: Uncertainty and Dimensional Tolerances. Quality. May 1992, 25
4 BIPM, IEC, IFCC, ISO, IUPAC, IUPAP, OIML: Guide to the Expression of Uncertainty in Measurement, 1993.
5 Chrysler/Ford/General Motors Supplier Quality Requirements Task Force: Quality System Requirements ISO/QS 9000, 1994