What could be better than sensitivity & specificity?
Probability-based (evidence-based medicine) test interpretation depends on understanding how well a test identifies those with a disease and those without.
The traditional way of describing the ability to detect disease is sensitivity (the proportion WITH disease who have a positive test). And the ability to correctly identify a well person as negative is the specificity (the proportion WITHOUT disease who have a negative test).
But, in working with Laura Scherer, a decision psychologist (we are building a game, more on this later), it seems the complement of each may be more useful.
The “Miss rate” instead of sensitivity (a sensitivity of 90% = miss rate of 10%)
The “False positive rate” instead of specificity (a specificity of 80% = false positive rate of 20%). This isn’t new: it was the wording of the original Casscells 1978 NEJM letter showing poor clinician test interpretation, but it seems to have been dropped from the EBM lexicon in the 1980s. (Testing has always seemed the stepchild of EBM and of FDA regulation, with little scrutiny of whether a test works.)
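As a quick sketch of the Casscells-style problem (as commonly quoted: prevalence 1 in 1,000, false positive rate 5%, and, implicitly, a test that never misses a case), the natural-frequency arithmetic looks like this; most clinicians in the letter answered 95%, while counting it out gives about 2%:

```python
# Natural-frequency version of the classic Casscells problem.
# Assumptions: prevalence 1/1000, false positive rate 5%, miss rate 0
# (i.e., every true case tests positive).
n = 100_000
with_disease = n // 1000                       # 100 true cases, all positive
false_positives = (n - with_disease) * 0.05    # 4,995 false positives

# Of everyone who tests positive, what fraction actually has the disease?
p_disease_given_positive = with_disease / (with_disease + false_positives)
print(round(p_disease_given_positive, 3))  # → 0.02, i.e. about 2%
```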
Why new names?
First, the names say what they are, so one doesn’t have to struggle to remember which is sensitivity and which is specificity: you miss cases in those with disease, and you have false positives in those without.
Second, if one is trying to apply natural frequencies to estimate the probability of disease in a quick fashion, it is often the miss rate or false positive rate that one wants to consider (miss rate if high pretest, false positive rate if low pretest).
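A minimal sketch of that natural-frequency shortcut (the pretest probabilities and test characteristics below are hypothetical numbers chosen for illustration):

```python
# Post-test probability via natural frequencies, working directly from the
# miss rate and false positive rate. All numbers here are hypothetical.

def prob_disease_given_positive(pretest, miss_rate, false_positive_rate, n=1000):
    """Imagine n people; count true positives vs. false positives."""
    with_disease = pretest * n
    without_disease = n - with_disease
    true_positives = with_disease * (1 - miss_rate)   # sensitivity = 1 - miss rate
    false_positives = without_disease * false_positive_rate
    return true_positives / (true_positives + false_positives)

def prob_disease_given_negative(pretest, miss_rate, false_positive_rate, n=1000):
    """Imagine n people; count missed cases vs. true negatives."""
    with_disease = pretest * n
    without_disease = n - with_disease
    missed = with_disease * miss_rate                 # false negatives
    true_negatives = without_disease * (1 - false_positive_rate)
    return missed / (missed + true_negatives)

# Low pretest (10%): the false positive rate dominates a positive result.
print(round(prob_disease_given_positive(0.10, miss_rate=0.10, false_positive_rate=0.20), 2))  # → 0.33
# High pretest (70%): the miss rate dominates a negative result.
print(round(prob_disease_given_negative(0.70, miss_rate=0.10, false_positive_rate=0.20), 2))  # → 0.23
```

With a 10% pretest probability, 90 of 1,000 true positives sit next to 180 false positives, so a positive test only gets you to 33%; with a 70% pretest, 70 missed cases sit next to 240 true negatives, so a negative test still leaves a 23% chance of disease.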
Talking recently with John Brush, who wrote an excellent book on probability in medicine, he made a similar argument for calling sensitivity the “true positive rate” and specificity the “true negative rate”. Obviously I agree sensitivity/specificity isn’t good, but I get confused among all the true and false positives and negatives, so I would advocate for “miss rate” and “false positive rate”.