Papers

Henrik S. Nielsen
Technical Management?
We Don’t Need No Technical Management!

As presented at the 2005
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

ISO 17025 requires that laboratories “have technical management which has overall responsibility for the technical operations.” However, the standard does not specify any requirements for the background and education of those assigned to this role. This is disconcerting as they presumably are tasked with ensuring the quality of the measurement results produced by the laboratory as well as “the competence of all who operate specific equipment, perform tests and/or calibrations, evaluate results, and sign test reports and calibration certificates,” as the standard also requires. In the absence of competent technical management, the responsibility for ensuring the integrity of the measurements performed by the laboratory can move in one of two different directions: It can be left to the laboratory technicians, as suggested by the Certified Calibration Technician credentials offered by ASQ, or it can be handed off to assessors or proficiency testing providers. The problem with the first approach is that while a technician may be very good at applying a measurement or calibration procedure, he or she often does not have the necessary background in mathematics and physics to account for all the potentially significant sources of uncertainty in a measurement and properly quantify their influence on the measured value in each particular case. The problem with relying on third parties to determine a laboratory’s measurement capabilities (as opposed to merely evaluating or confirming the laboratory’s own determination of its capabilities) is that it takes away the laboratory’s ownership of and responsibility for its own measurement processes.

Dr. Henrik S. Nielsen: Taking Dimensional Metrology to the Next Level

As presented at the International Dimensional Workshop 2005

Abstract:

Traditionally, dimensional metrology has been viewed as just another area of measurement science and dimensional inspection has been considered a non-value-added quality control activity in the manufacturing community. Much effort has been focused on calibrating dimensional measurement equipment, yet very little effort has been expended on developing sound measurement processes that provide information that is useful beyond the simple acceptance and rejection of parts.

To take dimensional metrology to the next level and to unlock some of the enormous potential for savings for the manufacturing industry that such a paradigm shift holds, it is necessary to see dimensional metrology not just as an area of measurement science, but as the information exchange format for geometrical properties of parts.

Once viewed in this light, several changes to the way dimensional metrology is applied become logical. Firstly, dimensional metrology shall primarily be applied earlier in the product development cycle than is the case today. This is to ensure that the specification that is developed properly reflects the true functional requirements. Secondly, more attention shall be paid to the measurement operator (the ordered set of operations) that defines the measurement and the changes in measured values that occur when the measurement operator (which is still not the guy that makes the measurement) is changed. Thirdly, product specifications shall be more precise by defining requirements in terms of well-defined specification operators.

Henrik S. Nielsen:
Experience Gained from Offering Accredited 3rd Party Proficiency Testing

As presented at the 2004
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

HN Proficiency Testing has been offering third-party proficiency testing for about three years and has been accredited for the last two years. This paper discusses some of the experience gained and the lessons learned in the process. Accreditation bodies generally require accredited laboratories to participate in proficiency testing. Since most laboratories see this as nothing but an inconvenience and an added expense of maintaining accreditation, very few non-accredited laboratories participate. Consequently, the opportunity to analyze third-party proficiency test results provides a unique insight particularly into the state of the accredited laboratories that constitute the backbone of the US metrology infrastructure. As it turns out, technical insight into the measurement processes analyzed is at least as important for the proficiency testing provider as is a thorough understanding of the statistics behind the common proficiency testing metrics. The paper discusses some of the general trends that can be identified from this vantage point as well as some specific examples of where proficiency testing turned out to be more than just an expensive inconvenience for the participating laboratory. In accordance with the rules for accredited proficiency testing providers, the anonymity of all participating laboratories, innocent or otherwise, will be protected throughout the paper.

Dr. Henrik S. Nielsen:
Communicating Functional Requirements with GD&T
(paper and slides)

As presented at the International Dimensional Workshop 2004

Abstract:

GD&T is a language with words, syntax and grammar. However, even if you know the words, syntax and grammar, you may not be able to meaningfully explain your functional requirements with this language.

The task of the designer is to translate functional requirements into geometrical requirements by using GD&T. This is a complex task because there rarely is a one-to-one relationship between functional requirements and geometrical requirements. Further, a significant limitation in the current editions of the GD&T language is that the only function it expresses well is the function of fit for the purposes of assembly.

More complex functional requirements are usually determined based on practical experiments, but there are no provisions in the GD&T language for documenting in the requirements what settings were used on the measurement equipment (e.g. filters and point density) to arrive at the measurement data that was used to determine the requirements.

An additional problem is that most CAD systems tend to lead the designer towards using dimensional tolerancing as the primary tolerancing method and only adding GD&T as an afterthought. When tolerancing is done in this manner, it usually leads to significantly reduced tolerances and therefore more expensive parts.

The cost of bad design is substantial. For example, the automotive PPAP (Production Part Approval Process) can be viewed as recognition that the engineering drawings do not fully express the functional requirements. The incremental cost that this process adds to a product can be attributed to poorly expressed functional requirements.

The paper discusses some of the limitations in current GD&T, some good and bad practices for applying GD&T, as well as some of the extensions to the GD&T language that will be necessary to enable designers to more clearly express complex functional requirements.

Henrik S. Nielsen:
Determining Consensus Values in Interlaboratory Comparisons and Proficiency Testing

Winner of "Best Paper on Theoretical Metrology" at the 2003
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

An important part of interlaboratory comparisons and proficiency testing is the determination of the reference value of the measurand and the associated uncertainty. It is desirable to have reference values with low uncertainty, but it is crucial that these values are reliable, i.e. that they are correct within their stated uncertainty. In some cases it is possible to obtain reference values from laboratories that can reliably produce values with significantly lower uncertainty than the proficiency testing participants, but in many cases this is not possible for economic or practical reasons. In these cases a consensus value can be used as the best estimate of the measurand. A consensus value has the advantage that it often has a lower uncertainty than a value reported by a single reference laboratory. There are well known and statistically sound methods available for combining results with different uncertainties, but these methods assume that the stated uncertainty of the results is correct, which is not a given. In fact, the very purpose of proficiency testing is to establish whether the participants can measure within their claimed uncertainty. The paper explores a number of methods for determining preliminary consensus values used to determine which participant values should be deemed reliable and therefore included in the calculation of the final consensus value and its uncertainty. Some of these methods are based on impressive equations and others have curious names. The relative merits of these methods in various scenarios are discussed.
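
For orientation, the sketch below shows the uncertainty-weighted mean, one of the well known methods for combining results alluded to in the abstract. The code and example values are illustrative only and are not taken from the paper.

    import math

    def weighted_consensus(values, uncertainties):
        # Uncertainty-weighted mean of participant results, assuming every
        # stated standard uncertainty is correct - the very assumption the
        # paper questions.
        weights = [1.0 / u**2 for u in uncertainties]
        consensus = sum(w * x for w, x in zip(weights, values)) / sum(weights)
        u_consensus = math.sqrt(1.0 / sum(weights))
        return consensus, u_consensus

    # Three hypothetical participants measuring the same artifact (values in mm)
    print(weighted_consensus([10.0012, 10.0009, 10.0015], [0.0005, 0.0008, 0.0010]))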

Dr. Henrik S. Nielsen:
Some Limitations in Uncertainty Evaluation

As presented at the International Dimensional Workshop 2003

Abstract:

Uncertainty budgets are receiving increasing attention not only from accredited laboratories, but from the metrology community at large. Often uncertainties are given with three or more significant digits, indicating that the uncertainty has been determined to within one percent or better, but how accurately can uncertainty be estimated?

The paper explores the capabilities of common statistical techniques as well as some of the tools given in the Guide to the Expression of Uncertainty in Measurement. As it turns out, these fundamental building blocks impose some very significant limitations on the uncertainty analysis.
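
One concrete example of such a limitation (this particular result is taken from Annex E of the GUM, not from the paper): for n independent observations of a normally distributed quantity, the relative standard uncertainty of the experimental standard deviation s is approximately

    \[ \frac{u(s)}{s} \approx \frac{1}{\sqrt{2(n-1)}} \]

so an uncertainty contribution evaluated from 10 repeated readings is itself only known to within roughly 24 %.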

Dr. Henrik S. Nielsen:
Specifications, operators and uncertainties

Keynote paper presented at:
The 8th CIRP International Seminar on Computer Aided Tolerancing
Charlotte, NC, USA - April 2003

Abstract:

ISO/TS 17450-2:2002 "Geometrical product specifications (GPS) – General Concepts – Part 2: Basic tenets, specifications, operators and uncertainties" is a new technical specification published by ISO/TC 213 "Dimensional and geometrical product specifications and verification". This document defines a number of new concepts that have the potential to revolutionize how we think about specification and verification. It enhances the specification language by defining specifications as ordered sets of operations, a much richer language than the simplistic notion of tolerance zones. Additionally, it expands the concept of uncertainty from being something measurement-related to being the universal currency for quantifying ambiguity in requirements, specifications and verifications.

Henrik S. Nielsen:
Can Proficiency Testing Add Value?

As presented at the 2002
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

Accreditation bodies are increasingly using proficiency testing as a tool to ensure the credibility of their accreditation programs by requiring the laboratories they accredit to demonstrate that they can live up to their uncertainty claims in interlaboratory comparisons. Accredited laboratories mostly see proficiency testing as an added expense they are forced to incur, which adds little or no value. However, when used appropriately, proficiency testing can reduce a laboratory’s risk of producing incorrect measurement results. While focusing on the En (normalized error) approach, the paper explores the underlying assumptions and associated limitations in various reporting methods traditionally used in proficiency testing. It discusses the important steps that are necessary to ensure that correct conclusions are drawn from a proficiency test and the exposure and potential unnecessary cost participating laboratories are subject to if these steps are not taken. Additionally, the paper covers some personal experiences, where the author has gained valuable knowledge of measuring processes and their limitations as a participant in interlaboratory comparisons.
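
As background, a minimal sketch of how the En number is typically computed (the generic textbook form, not code from the paper; the example numbers are invented):

    import math

    def normalized_error(x_lab, U_lab, x_ref, U_ref):
        # En number: participant result x_lab with expanded uncertainty U_lab
        # (k = 2) compared to reference value x_ref with expanded uncertainty
        # U_ref. |En| <= 1 is commonly taken as satisfactory; the assumptions
        # behind that criterion are what the paper examines.
        return (x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

    # Invented example: a gage block reported as 25.00032 mm +/- 0.00015 mm
    # against a reference value of 25.00025 mm +/- 0.00008 mm
    print(normalized_error(25.00032, 0.00015, 25.00025, 0.00008))  # about 0.41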

Henrik S. Nielsen: CMMs and Proficiency Testing.

As presented at the International Dimensional Workshop 2002

Abstract:

Many factors add to the variation in CMM measurements. Some originate in the machine itself or the environment. Some come into play in every measurement and some depend on the probe configuration used, including probe articulations or changes. Others depend on the part being measured, its rigidity and thermal properties, and still others depend on the measurement strategy and point distribution chosen by the operator. Since geometrical requirements, whether specified using ANSI/ASME Y14.5 or ISO 1101, apply to a continuous surface, it is impossible to measure GD&T “in accordance with the standard” on a CMM. Therefore CMM measurements of geometry become a question of what constitutes an acceptable approximation. For these reasons, and because there is a lack of formalized ways of estimating the uncertainty of CMM measurements, proficiency testing can be a valuable reality check for how well one can measure with a CMM.

Henrik S. Nielsen:
Uncertainty²
How uncertain is your uncertainty budget?

As presented at the 2001
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

Uncertainty budgets are receiving increasing attention not only from accredited laboratories, but from the metrology community at large. Often uncertainties are given with three or more significant digits, indicating that the uncertainty has been determined to within one percent or better, but how accurately can uncertainty be estimated?

The paper explores the capabilities of common statistical techniques as well as some of the tools given in the Guide to the Expression of Uncertainty in Measurement. Based on the integrity of these fundamental uncertainty building blocks, the uncertainty of sample uncertainty budgets is estimated.
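
A small simulation (illustrative only, not part of the paper) makes the point: a Type A standard uncertainty estimated from ten repeated observations scatters by roughly 24 % from one experiment to the next, far coarser than the three significant digits often quoted.

    import random
    import statistics

    random.seed(1)
    true_sigma = 1.0
    estimates = []
    for _ in range(10000):
        # Standard deviation of n = 10 repeated observations, as in a
        # typical Type A evaluation
        sample = [random.gauss(0.0, true_sigma) for _ in range(10)]
        estimates.append(statistics.stdev(sample))

    mean_s = statistics.mean(estimates)
    print(f"relative scatter: {statistics.stdev(estimates) / mean_s:.1%}")
    # Typically prints roughly 24 %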

Henrik S. Nielsen:
Coordinate Measuring Machines and Accreditation.

As presented at the 2001
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

CMMs are very flexible and versatile machines. This makes them especially challenging in the context of accreditation, since the uncertainty of the actual measurements has to be estimated and stated in calibration certificates and test reports.

The paper looks at some of the difficulties caused by the lack of national or international standards for calibration of CMMs.

It also explores some of the practical approaches, based on performance evaluation standards and a combination of calibrations, interim testing and determination of measurement-specific contributors, that can be applied in a simplified, practical concept for estimating the uncertainty of CMM measurements.


Henrik S. Nielsen:
How long is a piece of string? - Some overlooked considerations for uncertainty estimation.

As presented at the 2000
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

When estimating measurement uncertainty, there are some contributors that are obvious, such as the equipment and the environment. However, since the uncertainty is supposed to tell us how far away from the true value our measurement result may be, it is important to consider what the definition of the true value is. The author is involved in ISO Technical Committee 213, where a draft Technical Specification is being considered that expands the concept of uncertainty to encompass not only the measurement itself, but also the definition of the measurand and how the measurement result relates to the function of the product.

Henrik S. Nielsen: CMMs and GD&T

As presented at the International Dimensional Workshop 2000

(slides only)

Henrik S. Nielsen: What Is Traceability and Why Do We Calibrate?

As presented at the 1999
ASQ Midwest Conference

Abstract:

Calibration and traceability requirements in ISO/QS 9000 are often interpreted as only requiring a calibration sticker on the measuring equipment and the reference to a NIST test number on a calibration certificate. The presentation explores the technical background for requiring calibration and traceability and how the real calibration needs are determined.

Henrik S. Nielsen:
ISO 14253-1 Decision Rules - Good or Bad?

As presented at the 1999
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

The newly released standard ISO 14253-1 for determining compliance or non-compliance with a specification breaks with traditions in most industries for how these decisions are made by requiring that measurement uncertainty be taken into account. But is this an improvement or just an unnecessary extra complication? This paper looks at how tolerances and decision rules interact and how you may be able to expand your tolerances and save money while retaining the same functionality, if you implement these new decision rules.
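
The core of the default rule can be sketched in a few lines (an illustration of the guard-banding idea only, not the full rule set of the standard; the example numbers are invented):

    def conformance_zone(lower_limit, upper_limit, U):
        # Under the default ISO 14253-1 decision rule, conformance can only
        # be proven when the measured value lies inside the specification
        # zone reduced by the expanded uncertainty U at each end.
        return lower_limit + U, upper_limit - U

    # Invented example: a 20.000 +/- 0.010 mm feature measured with U = 0.002 mm
    print(conformance_zone(19.990, 20.010, 0.002))  # (19.992, 20.008)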


Henrik S. Nielsen: The Myth of the Random Error.

As presented at the 1998
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

Man views phenomena as random when he does not understand the underlying mechanisms. The emphasis on statistical tools for uncertainty estimation and the lack of knowledge of physics drive our focus towards the apparently random properties of errors.

The paper demonstrates the arbitrary nature of the distinction between systematic and random error. It proposes that there is no reason to believe that any error is truly random. Finally, it concludes that a thorough analysis of the mechanisms that govern variations in measurements, integrated into the GUM method, can yield not only an estimate of the uncertainty, but can also help reduce it.

Henrik S. Nielsen: Using the ISO "Guide to the Expression of Uncertainty in Measurements" to determine calibration requirements.

As presented at the 1997
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

Calibration is often performed without technical justification regarding the attributes that get calibrated and the resulting calibration uncertainty. Worse yet, calibrations are sometimes based solely on whatever auditors demand. This approach leaves companies with the cost of excessive calibration in some areas, while leaving them exposed in other areas where they "undercalibrate". The paper presents an uncertainty-budget-driven technique, which determines which attributes of each standard and instrument should be calibrated and to what uncertainty. The technique provides clear, data-based documentation of the real calibration needs to management and auditors. The underlying concept is that the purpose of calibration is to limit the maximum error of those metrological characteristics of calibrated standards or instruments that enter into the uncertainty budgets of subsequent measurements.
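
A toy illustration of how a budget can set calibration requirements (a generic root-sum-of-squares sketch, not the paper's method; the function name and numbers are invented): given the combined standard uncertainty a subsequent measurement must achieve and the other known contributors, the remaining headroom is the largest uncertainty the calibration may contribute.

    import math

    def allowed_calibration_uncertainty(u_target, other_contributors):
        # Root-sum-of-squares budget: u_target is the combined standard
        # uncertainty the subsequent measurement must achieve; the list
        # holds the standard uncertainties of all other contributors.
        budget_used = sum(u**2 for u in other_contributors)
        remaining = u_target**2 - budget_used
        if remaining <= 0:
            raise ValueError("other contributors already exceed the target")
        return math.sqrt(remaining)

    # Invented example: target u = 2.0 um, temperature and repeatability
    # contribute 1.2 um and 0.8 um, so the calibration may contribute
    # at most about 1.4 um.
    print(allowed_calibration_uncertainty(2.0, [1.2, 0.8]))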

Henrik S. Nielsen: Next Generation Gage Management System.

As presented at the 1997
National Conference of Standards Laboratories
Workshop & Symposium

Abstract:

Current gage recall systems connect gages and a calendar, administering when gages are calibrated. Recent systems include some result verification and links to calibration procedures in the form of "dumb" word processor documents.

The new system connects gages, calendar, procedure, standard and tolerances by using calibration procedures built of individual tasks as the focal point. The system verifies calibration results for each task against stored tolerance limits, ensuring that no gage is mistakenly accepted or rejected.

The scope of each task is either the gage type, subtype, size or the individual gage, combining the advantages of commonality and flexibility.

The system keeps track of valid calibration standards for each task, as well as which specific standard was used for each calibration, and their calibration status.
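
The data relationships described above can be sketched roughly as follows (a hypothetical illustration; the class and field names are not taken from the actual system):

    from dataclasses import dataclass, field

    @dataclass
    class CalibrationTask:
        name: str              # e.g. "measure pitch diameter"
        scope: str             # gage type, subtype, size or individual gage
        lower_limit: float     # tolerance limits stored with the task
        upper_limit: float
        valid_standards: list = field(default_factory=list)

        def verify(self, result: float) -> bool:
            # Accept or reject a calibration result for this task
            return self.lower_limit <= result <= self.upper_limit

    @dataclass
    class CalibrationProcedure:
        gage_id: str
        tasks: list            # ordered CalibrationTask objects

        def verify_all(self, results) -> bool:
            return all(t.verify(r) for t, r in zip(self.tasks, results))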

Henrik S. Nielsen and Mark C. Malburg: Traceability and correlation in roundness measurement

Abstract:

A new method for the calibration of Flick standards for roundness measuring instruments has been developed through the use of a Form Talysurf stylus instrument. The method incorporates circular fits and the convolution of the standardized probe geometry over the fitted arcs using the "Kilroy's Cat" function. The method yields stable results, and an error budget documents the low uncertainty of the method. Major uncertainty contributors are the traceability of the Form Talysurf and the roughness of the Flick standard. It is shown that the tip radius has a significant effect on the results obtained in roundness measurement, even on such simple artifacts as the Flick standard. This underlines the importance of implementing a standardized tip radius. The influence of Gaussian versus 2RC filters is also documented.
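
For orientation, a circular fit of the kind mentioned above is often started from the algebraic (Kåsa) least-squares circle; the sketch below shows only that generic fit and is not the method of the paper, which additionally convolves the standardized probe geometry over the fitted arcs.

    import numpy as np

    def fit_circle(x, y):
        # Algebraic (Kasa) least-squares circle fit: solve
        # x^2 + y^2 + D*x + E*y + F = 0 in a least-squares sense,
        # then recover the center (xc, yc) and radius r.
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        A = np.column_stack([x, y, np.ones_like(x)])
        b = -(x**2 + y**2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        xc, yc = -D / 2.0, -E / 2.0
        r = np.sqrt(xc**2 + yc**2 - F)
        return xc, yc, r

    # Invented example: points on a circle of radius 10 centered at (1, 2)
    t = np.linspace(0, 2 * np.pi, 100)
    print(fit_circle(1 + 10 * np.cos(t), 2 + 10 * np.sin(t)))  # ~ (1.0, 2.0, 10.0)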

To obtain a copy of this paper, you can either contact me or email my co-author, Mark Malburg.