Executive Snapshot
A process is only as trustworthy as the data behind it. When measurement systems aren’t qualified, teams make
decisions on noise. Charts look unstable, capability studies mislead, and operators disagree about what’s in or out of
spec. Measurement Systems Analysis (MSA) is the gatekeeper. Bias, linearity, stability, and Gage R&R tell us if the
numbers reflect reality. Without it, ‘data-driven decisions’ are built on sand.
Why it happens
MSA often gets skipped for three reasons. First, launch pressure puts throughput ahead of metrology readiness.
Second, calibration is confused with capability — a gage can be ‘in date’ yet still unreliable. Third, vendor specs are
taken at face value without checking in the actual production context. Each shortcut looks harmless. In practice, they
feed chronic firefighting.
How it shows up
On the floor, weak measurement systems show up quickly. SPC charts are noisy with false alarms. Capability
studies swing from good to bad between runs. Operators dispute pass/fail calls. Re-inspections give different
results on the same part. These aren’t process problems. They’re measurement problems disguised as process
instability.
Consequences
When data can’t be trusted, everything downstream erodes. Engineers waste hours chasing phantom shifts. Scrap
and rework pile up because of false rejects. Quality leaders defend bad metrics to customers. The factory loses
credibility — inside and outside.
The fix
The fix is straightforward: run proper MSAs on critical-to-quality (CTQ) gages. For variables, use about 10 parts ×
3 operators × 2–3 repeats. For attributes, run at least 50 randomized trials with repeats. Check %GRR, ndc, and
agreement rates. Fix obvious issues — resolution, fixturing, training, environment — then rerun until capable. MSA
isn’t paperwork. It’s the license to trust your data.
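The variables study described above (10 parts × 3 operators × repeats) can be sketched as a standard crossed-ANOVA Gage R&R. This is a minimal illustration on synthetic data, not production code; the array shapes, noise levels, and function name are assumptions for the example.

```python
import numpy as np

def gage_rr(data: np.ndarray) -> dict:
    """Crossed ANOVA Gage R&R for a (parts, operators, repeats) array."""
    p, o, r = data.shape
    grand = data.mean()
    part_means = data.mean(axis=(1, 2))
    oper_means = data.mean(axis=(0, 2))
    cell_means = data.mean(axis=2)

    # Sums of squares for the two-way crossed design
    ss_part = o * r * ((part_means - grand) ** 2).sum()
    ss_oper = p * r * ((oper_means - grand) ** 2).sum()
    ss_cell = r * ((cell_means - grand) ** 2).sum()
    ss_inter = ss_cell - ss_part - ss_oper
    ss_total = ((data - grand) ** 2).sum()
    ss_rep = ss_total - ss_cell

    # Mean squares
    ms_part = ss_part / (p - 1)
    ms_oper = ss_oper / (o - 1)
    ms_inter = ss_inter / ((p - 1) * (o - 1))
    ms_rep = ss_rep / (p * o * (r - 1))

    # Variance components (negative estimates clipped to zero)
    var_rep = ms_rep                               # repeatability (EV)
    var_inter = max(0.0, (ms_inter - ms_rep) / r)
    var_oper = max(0.0, (ms_oper - ms_inter) / (p * r))
    var_part = max(0.0, (ms_part - ms_inter) / (o * r))

    grr = var_rep + var_oper + var_inter           # gage R&R variance
    total = grr + var_part
    pct_grr = 100.0 * np.sqrt(grr / total)         # %GRR as % of total variation
    ndc = int(np.floor(1.41 * np.sqrt(var_part / grr))) if grr > 0 else 0
    return {"pct_grr": pct_grr, "ndc": ndc}

# Synthetic 10 parts x 3 operators x 3 repeats study:
rng = np.random.default_rng(0)
parts = rng.normal(0.0, 1.0, size=(10, 1, 1))      # true part-to-part spread
oper_bias = rng.normal(0.0, 0.02, size=(1, 3, 1))  # small operator effect
noise = rng.normal(0.0, 0.05, size=(10, 3, 3))     # repeatability noise
result = gage_rr(parts + oper_bias + noise)
```

With measurement noise this small relative to part spread, the study passes comfortably; degrading the `noise` term shows how %GRR climbs and ndc collapses.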
Root causes (6M+E)
- Measurement: Unqualified gages; inadequate resolution; wear; bias/linearity not checked.
- Methods: No work instruction; inconsistent part presentation; wrong part range for R&R.
- Machines: Sensor drift; unstable warm-up; PLC timing issues.
- Materials: Surface/finish effects ignored; non-representative parts chosen.
- Manpower: Operator technique varies; no training or certification.
- Environment: Temperature/humidity swings; ESD; vibration; contamination.
Diagnostics & quick checks
- Verify calibration status and confirm MSA plan exists for CTQs.
- For variables: 10 parts × 3 operators × 2–3 repeats under production conditions.
- For attributes: ≥50 randomized trials with repeats.
- Use parts spanning process variation, not just nominal pieces.
- Confirm adequate resolution — at least 1/10 of tolerance.
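The last checklist item — the 1/10 rule for resolution — reduces to a one-line check. The function name and the sample values are illustrative only.

```python
def resolution_ok(resolution: float, tolerance: float) -> bool:
    """Rule of thumb: the gage must resolve at least 1/10 of the tolerance band."""
    return resolution <= tolerance / 10.0

# A caliper reading to 0.01 mm on a 0.20 mm tolerance band:
adequate = resolution_ok(0.01, 0.20)     # 0.01 <= 0.02 -> adequate
# The same caliper on a 0.05 mm band:
inadequate = resolution_ok(0.01, 0.05)   # 0.01 >  0.005 -> inadequate
```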
Acceptance criteria
Guidance from AIAG MSA (4th ed.) interprets %GRR < 10% as acceptable, 10–30% as application-dependent, and
> 30% as unacceptable. The number of distinct categories (ndc) should be ≥ 5, preferably ≥ 10. For attribute studies,
aim for high repeatability and reproducibility, with overall % agreement and kappa in the ‘good’ range.
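As a sketch of the attribute-side criteria, overall agreement and Cohen's kappa between two appraisers can be computed directly. The data is illustrative, and the commonly cited kappa ≥ 0.75 cutoff is an assumption here; full AIAG-style studies also assess within-appraiser repeatability and agreement against a reference standard.

```python
from collections import Counter

def agreement_and_kappa(a: list, b: list) -> tuple:
    """Overall % agreement and Cohen's kappa for two raters' pass/fail calls."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each rater's marginal pass/fail rates
    ca, cb = Counter(a), Counter(b)
    p_exp = sum((ca[k] / n) * (cb[k] / n) for k in set(a) | set(b))
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return p_obs, kappa

# Two appraisers judging the same 10 parts (1 = pass, 0 = fail):
op_a = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
op_b = [1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
p_obs, kappa = agreement_and_kappa(op_a, op_b)
# 80% agreement but kappa ~0.58: below a 0.75 'good' threshold once
# chance agreement is discounted
```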
30/60/90-day playbook
0–30 days
- List CTQs and link each to a gage/fixture.
- Quarantine decisions relying on unqualified gages.
- Run MSAs on top 3 CTQs.
- Fix obvious issues — worn tips, poor fixtures, unstable environment.
31–60 days
- Train operators; create standard work for measurement methods.
- Upgrade weak systems — better sensors, fixtures, environment controls.
- Re-run MSAs to verify improvement.
- Update control plans with qualified gages and SPC links.
61–90 days
- Institutionalize MSA re-studies after changes and at intervals.
- Dashboard %GRR/ndc and attribute agreement by CTQ.
- Feed MSA results into PFMEA detection rankings and LPA checks.
Sustain & scale
Sustain the gains by tying MSA to formal triggers: new gages, process changes, operator rotation, or SPC drift
signals. Embed lessons into PFMEAs, control plans, and layered audits. When MSA is treated as a standing
discipline, not a one-off event, data becomes trustworthy — and every decision downstream improves.
References & further reading
[1] NIST/SEMATECH e-Handbook of Statistical Methods — Gauge R&R Studies
https://www.itl.nist.gov/div898/handbook/mpc/section4/mpc4.htm
[2] ISO 22514-7:2021 — Capability of measurement processes
https://www.iso.org/standard/80624.html
[3] ASQ — Gage Repeatability and Reproducibility
https://asq.org/quality-resources/gage-repeatability
[4] SPC for Excel — Acceptance Criteria for MSA
https://www.spcforexcel.com/knowledge/measurement-systems-analysis-gage-rr/acceptance-criteria-for-msa/
