Attribute Agreement Analysis Acceptance Criteria

We introduced the kappa value in the latest newsletter. It can be used to measure an examiner's agreement with a benchmark. Kappa can range from -1 to 1. A kappa value of 1 represents perfect agreement between the examiner and the benchmark. A kappa value of -1 represents perfect disagreement between the examiner and the benchmark. A kappa value of 0 indicates that the agreement is no better than what would be expected by chance alone. Therefore, kappa values close to 1 are desired. If you look at Table 1 for Bob's data, you can see that Bob rated 25 of the 30 parts the same way each time. There were five parts (parts 6, 14, 21, 22 and 26) that Bob did not rate the same way each time.
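The kappa calculation described above can be sketched in a few lines. This is a minimal illustration of Cohen's kappa between two sets of categorical ratings; the "pass"/"fail" labels are hypothetical, chosen only for the example.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between two equal-length lists of categorical ratings
    (e.g. an appraiser's calls versus a benchmark)."""
    n = len(rater_a)
    # Observed agreement: fraction of items rated identically.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    pe = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    # Note: undefined when pe == 1 (both raters use a single category).
    return (po - pe) / (1 - pe)

# Perfect agreement yields kappa = 1.0:
cohens_kappa(["pass", "fail", "pass"], ["pass", "fail", "pass"])  # → 1.0
```

A kappa near 0 from this function would mean the appraiser's agreement with the benchmark is about what coin-flipping would produce.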

This corresponds to an agreement percentage of 25/30 = 83.3%. So Bob rated each part the same way 83.3% of the time. If we repeated this study, would Bob get the same agreement percentage? We do not know, but probably not, because common causes of variation are still present. We can construct a confidence interval around this value to give us an idea of the possible variation in Bob's results. As with any measurement system, the accuracy and precision of the data must be understood before the information is used (or at least while it is being used) to make decisions. At first glance, the obvious starting point appears to be an attribute agreement analysis (or attribute Gage R&R). That may not be a very good idea. The effectiveness number in the table above is simply the % agreement for each appraiser. There are two new terms in the table: the miss rate and the false alarm rate. The manual does a poor job of defining the miss rate and the false alarm rate.
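One simple way to build the confidence interval mentioned above is the normal-approximation (Wald) interval for a proportion. This is a sketch under that assumption; more exact methods (e.g. the Wilson interval) exist and behave better for small samples.

```python
import math

def agreement_ci(agreed, total, z=1.96):
    """Approximate 95% confidence interval (z = 1.96) for an agreement
    proportion, using the normal (Wald) approximation."""
    p = agreed / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    # Clamp to the valid [0, 1] range for proportions.
    return max(0.0, p - half_width), min(1.0, p + half_width)

low, high = agreement_ci(25, 30)  # Bob's 25-of-30 agreement
print(f"{low:.3f} to {high:.3f}")  # → 0.700 to 0.967
```

So, even though Bob's observed agreement was 83.3%, a repeat of the study could plausibly land anywhere from roughly 70% to 97% agreement.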

The miss rate applies to the nonconforming parts in the study. It is the percentage of the time an appraiser failed to reject a defective part: the number of times defective parts were accepted, divided by the total number of appraisals of defective parts.

You have selected a go/no-go attribute gage to use. This gage simply tells you whether the part is within specifications. It does not tell you how "close" the result is to nominal; only that it is within specifications. The tool used for this type of analysis is called attribute Gage R&R. R&R stands for repeatability and reproducibility. Repeatability means that the same operator measuring the same part with the same measuring instrument should get the same reading each time.
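The miss rate and false alarm rate definitions above can be sketched directly. The `(truth, rating)` pair format and the "good"/"bad" labels here are hypothetical, chosen only for illustration.

```python
def miss_rate(appraisals):
    """Fraction of appraisals of truly defective parts in which the
    appraiser accepted the part.

    `appraisals` is a list of (truth, rating) pairs, each "good" or "bad".
    """
    bad_parts = [(t, r) for t, r in appraisals if t == "bad"]
    misses = sum(1 for t, r in bad_parts if r == "good")
    return misses / len(bad_parts)

def false_alarm_rate(appraisals):
    """Fraction of appraisals of truly good parts in which the
    appraiser rejected the part."""
    good_parts = [(t, r) for t, r in appraisals if t == "good"]
    false_alarms = sum(1 for t, r in good_parts if r == "bad")
    return false_alarms / len(good_parts)
```

Note the two rates are computed against different denominators: misses are only possible on defective parts, and false alarms only on good parts.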
