DavidSJ | 4 months ago

The article seems to suggest the false positive rate is only 38%:

The trial followed 25,000 adults from the US and Canada over a year, with nearly one in 100 getting a positive result. For 62% of these cases, cancer was later confirmed.

(It also had a false negative rate of 1%:)

The test correctly ruled out cancer in over 99% of those who tested negative.

hn_throwaway_99 | 4 months ago

If the stats were as good as the hyperbole in the article, it would clearly state the only 2 metrics that really matter: predictive value positive (what's the actual probability that you really have cancer if you test positive) and predictive value negative (what's the actual probability that you're cancer-free if you test negative). As tptacek points out, these metrics don't just depend on the sensitivity and specificity of the test; they are highly dependent on the underlying prevalence of the disease, which is why broad-based testing for relatively rare diseases often results in horrible PVP and PVN metrics.

Based on your quoted sections, we can infer:

1. About 250 people got a positive result ("nearly one in 100")

2. Of those 250 people, 155 (62%) actually had cancer and 95 did not.

3. About 24,750 people got a negative test result.

4. Assuming that 1% of negative results were wrong (the quote says the test was correct in "over 99%" of negatives), of those 24,750 people, about 248 actually did have cancer, while about 24,502 did not.

When you write it out like that (and I know I'm making some rounding assumptions on the numbers), it means the test missed the majority of people who had cancer while subjecting over 1/3 of those who tested positive to fear and further expense.
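The arithmetic above can be sanity-checked in a few lines of Python, under the same rounding assumptions (25,000 participants, ~1% positive rate, 62% PPV, ~99% NPV, all taken from the article's figures):

```python
# Back-of-envelope check of the inferred counts from the article's numbers.
total = 25_000
positives = round(total * 0.01)        # ~250 tested positive ("nearly one in 100")
true_pos = round(positives * 0.62)     # 155 of those actually had cancer (62% PPV)
false_pos = positives - true_pos       # 95 did not
negatives = total - positives          # 24,750 tested negative
false_neg = round(negatives * 0.01)    # ~248 had cancer despite a negative result (~99% NPV)

# Sensitivity: of everyone who actually had cancer, what fraction did the test catch?
sensitivity = true_pos / (true_pos + false_neg)
print(f"true positives: {true_pos}, false positives: {false_pos}")
print(f"false negatives: {false_neg}, sensitivity: {sensitivity:.0%}")
```

This works out to a sensitivity of roughly 38% (155 of ~403 actual cancers caught), which is where "the test missed the majority of people who had cancer" comes from.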

inglor_cz | 4 months ago

"only 2 metrics that really matter"

Nope, there is another important thing that matters: some of the cancers tested are really hard to detect early by other means, and very lethal when discovered late.

I would not be surprised if out of the 155 people who got detected early, about 50 lives were saved that would otherwise be lost.

That is quite a difference in the real world. Even if the statistics stay the same, the health consequences are very different when you test for something banal vs. for pancreatic cancer.

thaumasiotes | 4 months ago

> If the stats were as good as the hyperbole in the article, it would clearly state the only 2 metrics that really matter: predictive value positive (what's the actual probability that you really have cancer if you test positive) and predictive value negative (what's the actual probability that you're cancer free if you test negative). As tptacek points out, these metrics don't just depend on the sensitivity and specificity of the test

This is a bizarre thing to say in response to... a clear statement of the positive and negative predictive value. PPV is 62% and NPV is "over 99%".

Your calculations don't appear to have any connection to your criticism. You're trying to back into sensitivity ("the test missed the majority of people who had cancer") from reported PPV and NPV, while complaining that sensitivity is misleading and honest reporting would have stated the PPV and NPV.

dv_dt | 4 months ago

So: possibly saving lives and avoiding late-stage cancer care expenses for 2/3 of positive results, vs. fear and lighter medical care 1/3 of the time. Is this not a win?