We want tests that are highly sensitive and highly specific for the condition being tested, but that is not always possible. Often, we must sacrifice one for the other. Simply stated, negative results can be trusted when there is high sensitivity, and positive results can be trusted when there is high specificity. So, we have to ask: is it more important to avoid false negatives or false positives?
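For concreteness, here is a minimal sketch in Python of how the two measures are calculated from a test's results. The counts are invented for illustration only.

```python
# Hypothetical counts from evaluating a test on 1,000 patients (illustrative only)
tp = 90    # true positives: diseased patients with a positive result
fn = 10    # false negatives: diseased patients with a negative result
tn = 855   # true negatives: healthy patients with a negative result
fp = 45    # false positives: healthy patients with a positive result

sensitivity = tp / (tp + fn)   # fraction of diseased patients the test catches
specificity = tn / (tn + fp)   # fraction of healthy patients the test clears

print(f"sensitivity: {sensitivity:.0%}")   # 90%: few false negatives, so negatives can be trusted
print(f"specificity: {specificity:.0%}")   # 95%: few false positives, so positives can be trusted
```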
There is not usually a neat separation between healthy patients and patients with disease. Instead, the values a test measures form overlapping distributions for the two populations, which can be illustrated as follows:

The vertical blue line represents the cutoff between positive and negative test results. In this illustration, the cutoff is placed in a compromise position between the two populations, creating both false negatives (FN) and false positives (FP).

If a test is highly sensitive, the cutoff is shifted to the left, eliminating false negative results, but increasing the number of false positive results.

If a test is highly specific, the cutoff is shifted to the right, eliminating false positive results, but increasing the number of false negatives. As we have discussed previously, this is the situation with antigen tests for SARS-CoV-2.
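The trade-off can be made concrete with a small simulation. The sketch below is a toy example of my own, not the data behind the illustrations: it draws overlapping distributions of test values for healthy and diseased patients and counts the errors as the cutoff moves.

```python
import random

random.seed(1)

# Hypothetical test values: healthy patients cluster around 40, diseased around 60,
# with enough spread that the two distributions overlap.
healthy = [random.gauss(40, 8) for _ in range(10_000)]
diseased = [random.gauss(60, 8) for _ in range(10_000)]

def count_errors(cutoff):
    """Values at or above the cutoff are called positive."""
    false_positives = sum(value >= cutoff for value in healthy)
    false_negatives = sum(value < cutoff for value in diseased)
    return false_positives, false_negatives

# Moving the cutoff left (lower) trades false negatives for false positives;
# moving it right (higher) does the opposite.
for cutoff in (40, 50, 60):
    fp, fn = count_errors(cutoff)
    sensitivity = 1 - fn / len(diseased)
    specificity = 1 - fp / len(healthy)
    print(f"cutoff {cutoff}: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```

With these made-up numbers, the low cutoff gives near-perfect sensitivity but only about 50% specificity, the high cutoff reverses that, and the compromise cutoff in the middle lands at roughly 89% of each.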
When screening large populations for disease, it is important not to miss possible positives, so we choose a test that is highly sensitive. We do not want any false negatives. False positives can be sorted out later; this is just a screen, after all. On the other hand, it is important that confirmatory tests have high specificity. When we are confirming disease in a population selected by a screen, we want to eliminate false positives.
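As a back-of-the-envelope illustration of this two-step logic, here is a sketch with invented prevalence and test characteristics, showing how a sensitive screen followed by a specific confirmatory test behaves.

```python
# Invented numbers for illustration only.
population = 100_000
prevalence = 0.01                      # assume 1% of the population is infected
infected = int(population * prevalence)
healthy = population - infected

# Stage 1: a sensitive screening test (few false negatives, more false positives).
screen_sens, screen_spec = 0.99, 0.90
screen_tp = infected * screen_sens              # infected people caught by the screen
screen_fp = healthy * (1 - screen_spec)         # healthy people flagged by mistake
missed = infected - screen_tp                   # the false negatives we tried to minimize

# Stage 2: a specific confirmatory test applied only to screen positives.
confirm_sens, confirm_spec = 0.95, 0.999
confirm_tp = screen_tp * confirm_sens
confirm_fp = screen_fp * (1 - confirm_spec)     # false positives largely sorted out here

print(f"Missed by the screen: {missed:.0f} of {infected} infected")
print(f"Screen positives: {screen_tp + screen_fp:.0f} "
      f"({screen_fp:.0f} false positives to sort out)")
print(f"Confirmed positives: {confirm_tp + confirm_fp:.0f} "
      f"({confirm_fp:.0f} remaining false positives)")
```

With these assumed numbers, the screen misses almost no one, and the confirmatory step cleans up nearly all of the screen's false positives.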
If the goal of testing for SARS-CoV-2 is to avoid false negative results, favor sensitivity over specificity. But this trade-off is not necessary with every test system. PCR tests increase sensitivity through amplification and increase specificity with detection probes unique to the virus. The result is a wide separation between the two populations, increasing sensitivity and specificity at the same time:
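Alongside that illustration, a quick numerical sketch makes the same point; the distributions here are invented for illustration, not PCR data.

```python
import random

random.seed(1)

# Well-separated hypothetical distributions: amplification and virus-specific probes
# push the signal from true infections far away from the background.
healthy = [random.gauss(5, 2) for _ in range(10_000)]
infected = [random.gauss(40, 5) for _ in range(10_000)]

cutoff = 20  # any cutoff in the wide gap between the two populations works
sensitivity = sum(value >= cutoff for value in infected) / len(infected)
specificity = sum(value < cutoff for value in healthy) / len(healthy)
print(f"sensitivity {sensitivity:.3f}, specificity {specificity:.3f}")  # both near 1.000
```

With essentially no overlap, almost any cutoff in the gap classifies both populations correctly, so there is no longer a trade-off to make.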

Are sensitivity and specificity the only considerations when evaluating a test? No, it is more complicated, but I am sure you guessed that. We will talk about other measures of test systems and the results they produce next time.