Negative Percent Agreement Calculation: Understanding the Basics

As a professional, you may come across the term “negative percent agreement” at some point in your work. This term is commonly used in the field of inter-rater reliability, which refers to the consistency of ratings or judgments made by different people.

To put it simply, negative percent agreement (NPA) is the percentage of cases rated negative by one rater (the comparator) in which a second rater also gives a negative rating. For example, suppose two raters are asked to examine pieces of writing and decide whether each contains spelling errors, where a “positive” rating means errors are present and a “negative” rating means there are none. A piece that both raters judge to be error-free counts toward negative agreement, while a piece that both judge to contain errors counts toward positive percent agreement (PPA).

Calculating NPA involves comparing the number of cases where both raters gave a negative rating (i.e. no spelling errors) to the number of cases the comparator rated negative. The formula for NPA is as follows:

NPA = (# of cases both raters rate negative / # of cases the comparator rates negative) x 100

For example, suppose two raters evaluated 100 pieces of writing, with rater 1 serving as the comparator. Rater 1 found no spelling errors in 80 of the pieces, and rater 2 agreed that 70 of those 80 were error-free. The NPA would be calculated as follows:

NPA = (70 / 80) x 100 = 87.5%

In other words, rater 2 agreed with the comparator’s negative ratings in 87.5% of the cases the comparator rated negative.
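
As a quick check, the same calculation can be done in a few lines of Python. The ratings below and the compute_npa helper are illustrative assumptions that mirror the worked example, not part of any standard library.

def compute_npa(comparator, other, negative_label=0):
    # Negative percent agreement of `other` relative to `comparator`.
    # Both arguments are equal-length sequences of ratings; negative_label
    # marks a negative rating (here, "no spelling errors").
    pairs = [(c, o) for c, o in zip(comparator, other) if c == negative_label]
    if not pairs:
        raise ValueError("comparator has no negative ratings; NPA is undefined")
    agreed = sum(1 for _, o in pairs if o == negative_label)
    return 100.0 * agreed / len(pairs)

# Toy ratings mirroring the example: rater 1 (the comparator) rates 80 pieces
# negative (0 = no errors) and 20 positive (1 = errors present); rater 2
# agrees on 70 of the negatives and 10 of the positives.
rater1 = [0] * 80 + [1] * 20
rater2 = [0] * 70 + [1] * 10 + [1] * 10 + [0] * 10

print(compute_npa(rater1, rater2))  # 87.5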

It is important to interpret NPA alongside positive percent agreement (PPA), which is calculated the same way but over the cases the comparator rated positive:

PPA = (# of cases both raters rate positive / # of cases the comparator rates positive) x 100

Continuing the example, rater 1 found spelling errors in the remaining 20 pieces, and rater 2 agreed on 10 of them:

PPA = (10 / 20) x 100 = 50%

Taken together, the raters agreed on 70 negative ratings and 10 positive ratings out of 100 cases, for an overall percent agreement of (70 + 10) / 100 x 100 = 80%. Reporting PPA and NPA separately shows where the disagreement lies: here the raters were far more consistent on error-free pieces (NPA of 87.5%) than on pieces containing errors (PPA of 50%).
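
The PPA and overall agreement figures can be verified in the same way. The sketch below reuses the rater1 and rater2 lists from the NPA example, and the helper names are again illustrative.

def compute_ppa(comparator, other, positive_label=1):
    # Positive percent agreement of `other` relative to `comparator`: among
    # cases the comparator rated positive, how often the other rater agreed.
    pairs = [(c, o) for c, o in zip(comparator, other) if c == positive_label]
    if not pairs:
        raise ValueError("comparator has no positive ratings; PPA is undefined")
    agreed = sum(1 for _, o in pairs if o == positive_label)
    return 100.0 * agreed / len(pairs)

def compute_opa(comparator, other):
    # Overall percent agreement: the share of all cases where the raters match.
    agreed = sum(1 for c, o in zip(comparator, other) if c == o)
    return 100.0 * agreed / len(comparator)

print(compute_ppa(rater1, rater2))  # 50.0
print(compute_opa(rater1, rater2))  # 80.0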

In conclusion, understanding negative percent agreement (NPA) is crucial for those working in fields that require inter-rater reliability assessments. By calculating NPA, raters can see how consistent their negative judgments are and where they may need to improve. It should always be interpreted alongside positive percent agreement (PPA), and ideally overall percent agreement, in order to get a complete picture of inter-rater reliability.