What is Measurement System Analysis?
CONTENTS
 What is Measurement System Analysis? - Overview Video
 Understanding Measurement Error
 Bias
  Linearity - Bias relative to size
  Stability - Bias over time
 Variation
  Gage R&R
   Reproducibility - Appraiser Variation
   Repeatability - Gage Variation
 The Gage R&R Study
 The Observed Process
 Finally ...
What is Measurement System Analysis? - Overview Video
Understanding Measurement Error
All measurement systems have error.
The error may be so small as to be irrelevant or it may be so large that we cannot trust our data.
Regardless, they all have error. This means that if we want to choose a gage we can trust, we need to understand the extent of this error, and we can do that through Measurement System Analysis.
Measurement System Analysis is a set of techniques that allow us to assess how much error is being introduced by the measurement system. Once we understand the extent of this measurement error, we can give answers to questions like:
 Out of these 2 gages, which one should I use on my process?
 Is the measurement error low enough to make this gage useful for classifying product as within specification, or out of specification?
 Can I be confident that this gage is correctly identifying scrap product?
 I want to improve my process - is this gage good enough to help me identify whether I have improved it?
 Is this gage good enough to enable me to run a successful Statistical Process Control program on my process?
 What is causing my gage error, and how can I reduce the amount of error?
Before we can get into Measurement System Analysis Techniques, let’s start by building a theory of measurement error.
We can imagine taking a part of known size - a reference, or standard - and measuring it multiple times. Let's assume that these measurements give us a pattern of data that is normally distributed, like the chart shown below.
Note: This is a reasonable assumption to make - for an explanation of why it's reasonable, see assumed normality on Wikipedia.
There are other advantages to modeling measurement error with a normal distribution. The measurement error can now be specified using 2 values:
 The mean - this is the center point of the distribution, also called the average, and in the chart above it is the same as the reference size.
 The standard deviation - this describes the spread of the distribution: the range of measured sizes.
The mean of the measurement error is what we will use to calculate Bias, Linearity, and Stability, and the Standard Deviation is what we will use to calculate Gage R&R, Repeatability, and Reproducibility.
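To make this concrete, here is a minimal sketch (the measurement values are invented for illustration) that computes these two values from a set of repeated measurements of a 0.500 reference:

```python
import statistics

# Invented repeated measurements of a 0.500 reference standard
measurements = [0.498, 0.502, 0.499, 0.501, 0.500, 0.503, 0.497, 0.500]

mean = statistics.mean(measurements)     # center of the distribution
spread = statistics.stdev(measurements)  # spread of the distribution

print(f"mean = {mean:.4f}, standard deviation = {spread:.4f}")
```

The mean feeds the Bias, Linearity, and Stability assessments; the standard deviation feeds Gage R&R.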
Let’s start with Bias
Bias
Bias is the difference between the mean of the measurement error and the value of a Reference Standard. This is easier to explain by looking at the chart below.
The chart shows that the mean of the distribution is a larger value than the reference size, so this gage has a positive bias - it overestimates the measurement value.
And the charts below give some numerical examples
Reference Value = 0.500  Mean of Measured Values = 0.475  Bias = - 0.025
Reference Value = 0.500  Mean of Measured Values = 0.500  Bias = 0.0
Reference Value = 0.500  Mean of Measured Values = 0.525  Bias = + 0.025
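The calculation behind these examples is simply the mean of the measured values minus the reference value. A minimal sketch, with invented measurement data for each of the three cases:

```python
import statistics

reference = 0.500  # known size of the reference standard

# Invented measured values for the three cases
cases = {
    "negative bias (underestimates)": [0.474, 0.476, 0.475],
    "no bias":                        [0.499, 0.501, 0.500],
    "positive bias (overestimates)":  [0.524, 0.526, 0.525],
}

for label, values in cases.items():
    bias = statistics.mean(values) - reference
    print(f"{label}: bias = {bias:+.3f}")
```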
Bias is a simple assessment of measurement error, but it is very limited.
First, we only measured one size, but our measuring equipment usually measures over a range of sizes.
Second, we only measured at one point in time.
So, at this stage, we have to consider the questions:
How consistent is the bias across the operating range of the gage? Would we see the same bias at smaller measurements as we do at larger measurements? The term for this is Linearity.
How consistent is the gage across time? If we use the gage in 2 weeks' time, will the bias be the same, or will it have drifted? The term for this is Stability.
Bias Relative To Size - Linearity
Linearity is the change in bias relative to size, and again this is best explained through charts.
The charts below show the location of the measurement error distribution (the blue curve) when reference standards of 0.25 and 0.5 (the red arrows) are each measured multiple times. In both cases the bias is 0, so it's reasonable to assume there's no problem with linearity across this range.
So, when a measuring gage is calibrated, it is usually measured against several reference standards which span the working range of the gage.
This assesses Linearity, and for most standard measuring tools, if you have a reasonable procedure, your calibration system is controlling linearity.
However, if linearity is a possible issue, you can design a more rigorous Linearity Study. I’ll cover this at a later date.
Note: Have a look at this micrometer calibration procedure and pay particular attention to point 5 where reference standards from the working range of the micrometer are specified.
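As a sketch of what a linearity check looks at, the snippet below (with invented calibration data) computes the bias at several reference sizes spanning the working range; a bias that grows with size is exactly the pattern a Linearity Study is designed to detect:

```python
import statistics

# Invented calibration data: repeated measurements at three
# reference sizes spanning the working range of the gage
calibration = {
    0.250: [0.251, 0.252, 0.250],
    0.500: [0.502, 0.503, 0.501],
    0.750: [0.753, 0.754, 0.752],
}

for reference, values in calibration.items():
    bias = statistics.mean(values) - reference
    print(f"reference {reference:.3f}: bias = {bias:+.4f}")

# Here the bias grows with size (+0.001, +0.002, +0.003),
# which would indicate a linearity problem.
```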
Bias over Time - Stability
Other factors may affect bias over time, depending on the gage design: changes in the environment (temperature, humidity); changes in the gage itself due to, for example, wear; and changes in gage inputs - for electrically powered gages, changes in supply voltage, or for other gages, changes in air or hydraulic pressure.
The calibration system will cover some of these problems - you can imagine that gage wear may be detected at a future calibration - but for a lot of them you are relying on the gage designer to have designed a robust gage that works as expected under a specified range of operating conditions.
If you have a reason to think that Stability is an issue, it's possible to design a Stability Study - this involves taking measurements of a reference under conditions that mimic, or better still duplicate, the actual usage of the gage. I'll cover Stability Studies at a later date.
The chart above shows how bias (represented by the difference between the vertical blue line and the red arrow) may move about over time.
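A simple stability check can be sketched as repeated measurements of the same reference taken at intervals, with the bias tracked over time (the daily readings below are invented for illustration):

```python
import statistics

reference = 0.500  # known size of the reference standard

# Invented periodic checks: 3 measurements of the reference each time
daily_checks = {
    "day 1":  [0.500, 0.501, 0.499],
    "day 8":  [0.501, 0.502, 0.500],
    "day 15": [0.503, 0.504, 0.502],
}

for day, values in daily_checks.items():
    bias = statistics.mean(values) - reference
    print(f"{day}: bias = {bias:+.4f}")

# A bias that drifts upward across the checks, as here,
# would suggest a stability problem.
```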
So, to summarize: Bias is the difference between the average of a set of measured values and a reference standard.
Linearity is an assessment of how bias changes over the operating range of the gage, and Stability is the change in bias over time.
Variation
At this point, let's revisit our original model of measurement error as a normal distribution, and reemphasize that the normal distribution can be specified by 2 values - the mean (related to bias) and the standard deviation (related to the spread, or width, of the distribution).
In the previous sections we covered how to make sure that the mean of the measurement error is 0, and in a lot of cases - for standard gages, used under normal conditions - the calibration system will take care of bias, without the need for special Linearity or Stability Studies.
This leaves us with the problem of how to assess the spread, or standard deviation of the measurement error.
Gage R&R
Reproducibility - if we consider that 2 different people may be using the gage, we could imagine that they may have slightly different techniques, which could lead to some variation. This variation between appraisers is Reproducibility.
Repeatability - on the other hand, it's feasible that if one person measures the same part 10 times, they might get some variation, purely down to the gage itself. This variation inherent in the gage, or measuring system, itself is Repeatability.
If I want to assess the spread of measurement error, I can conduct a Gage R&R Study that will give me an estimate of the magnitude of these sources of variation.
The Gage R&R Study
A Gage R&R Study is a particular type of designed experiment used to estimate:
 What is the total variation, or spread, in the measurement system - the Gage R&R?
 How much of the variation is due to differences between the Appraisers?
 How much of the variation is due to the Gage itself?
 Is there a link between the size of the parts measured and the measurement each appraiser gets - is there a Part / Appraiser Interaction?
There are a few ways to carry out a Gage R&R Study - the most widely used are the Range Method and the ANOVA Method.
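As a rough illustration of the idea - not the full AIAG Range or ANOVA calculation, which also handles interactions and uses different estimators - the sketch below splits the variation in an invented 2-appraiser, 3-part, 2-trial study into a within-cell (Repeatability) component and a between-appraiser (Reproducibility) component:

```python
import statistics

# Invented study: 2 appraisers measure 3 parts, 2 trials each.
data = {
    "appraiser_A": {"part1": [2.48, 2.50], "part2": [2.60, 2.62], "part3": [2.39, 2.41]},
    "appraiser_B": {"part1": [2.52, 2.54], "part2": [2.64, 2.66], "part3": [2.43, 2.45]},
}

# Repeatability: pooled within-cell variance (equipment variation)
within = [statistics.variance(trials)
          for parts in data.values() for trials in parts.values()]
repeatability_var = statistics.mean(within)

# Reproducibility: variance of the appraiser averages (appraiser variation)
appraiser_means = [statistics.mean([x for trials in parts.values() for x in trials])
                   for parts in data.values()]
reproducibility_var = statistics.variance(appraiser_means)

grr_var = repeatability_var + reproducibility_var
print(f"repeatability sd   = {repeatability_var ** 0.5:.4f}")
print(f"reproducibility sd = {reproducibility_var ** 0.5:.4f}")
print(f"Gage R&R sd        = {grr_var ** 0.5:.4f}")
```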
To see an ANOVA Gage R&R Analysis loaded with sample data (from the AIAG MSA 4th Edition) go to my free online Gage R&R Calculator.
Once you’ve completed the Gage R&R Study, you will have an estimate of the amount of error you can expect from the Measurement System when you use it.
This means you can calculate a metric that compares the Gage R&R variation against either the tolerance, or the variation you see from a process. Some of these metrics are named below. I’ll cover the calculations for these metrics at a later date:
 Precision To Tolerance Ratio
 Signal To Noise Ratio
 Probability of Misclassification
 Producer's Risk
 Consumer's Risk
 Number of Distinct Categories
 Gage R&R as % of Total Variation
 Gage R&R as % of Historical Process Standard Deviation
 Etc, etc
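To give a flavour of a few of these metrics, here is a sketch using invented study results; the formulas shown (the 6-sigma Precision To Tolerance convention, and ndc = 1.41 × part variation / Gage R&R) follow the common AIAG definitions:

```python
# Invented standard deviations from a completed Gage R&R study
grr_sd = 0.004     # measurement-system standard deviation
part_sd = 0.012    # part-to-part standard deviation
tolerance = 0.060  # spec width (USL - LSL)

# Total variation combines part and measurement variation
total_sd = (grr_sd**2 + part_sd**2) ** 0.5

# Gage R&R as % of Total Variation
pct_total = 100 * grr_sd / total_sd

# Precision To Tolerance ratio (6-sigma convention)
pt_ratio = 6 * grr_sd / tolerance

# Number of Distinct Categories
ndc = int(1.41 * part_sd / grr_sd)

print(f"%GRR      = {pct_total:.1f}%")
print(f"P/T ratio = {pt_ratio:.3f}")
print(f"ndc       = {ndc}")
```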
I find it quite surprising that there are so many metrics that can be used to assess a Measurement System, and I think that this proliferation of metrics causes a fair amount of confusion for people learning about Gage R&R.
But, things do become clearer if you can answer the question:
What will I be using the gage for?
For example, will the gage be used to classify product as within spec, or out of spec? Or do you want to improve a process, and need a gage that will be sensitive enough to detect small changes in the process?
The Observed Process
All this gage r&r stuff is pretty meaningless, until we relate it to both the purpose of the gage, and the process we will be measuring.
The process, and the way we measure the process are closely tied together, and to understand how they impact each other, we’ll develop another theoretical model.
For a single part produced by the process:
True Value of the size of a Part produced by the Process + Measurement Error = The observed value of the Part
For multiple parts produced by the process:
True Values of Parts from the Process + Measurement Errors = Observed Process
To help illustrate this concept, I've written an online simulator that chooses a sample of 100 parts from a process, then applies a known gage error to each part; the result is the observed process. It's an interactive simulator and you can play with it at the Process and Gage Simulator Page.
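The same model is easy to simulate in a few lines. The sketch below (process and gage parameters are invented) draws 100 "true" part sizes, adds a normally distributed gage error to each, and compares the spreads - the observed spread is wider because the variances add:

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

# Invented process and gage parameters
process_mean, process_sd = 10.0, 0.10  # true part-to-part variation
gage_bias, gage_sd = 0.0, 0.03         # measurement error distribution

# True Values of Parts + Measurement Errors = Observed Process
true_values = [random.gauss(process_mean, process_sd) for _ in range(100)]
observed = [x + random.gauss(gage_bias, gage_sd) for x in true_values]

# sd_observed ≈ sqrt(process_sd² + gage_sd²)
print(f"true sd     = {statistics.stdev(true_values):.4f}")
print(f"observed sd = {statistics.stdev(observed):.4f}")
```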
Finally...
I have put everything on this page into a single PowerPoint Package.
You can get it at the Free Downloads Page. It includes:
 Over 20 High Quality Charts
 Zooming Presentation Tool
 The 11 minute movie "What is Measurement System Analysis" embedded into the PowerPoint