DavidSJ | 4 months ago
The trial followed 25,000 adults from the US and Canada over a year, with nearly one in 100 getting a positive result. For 62% of these cases, cancer was later confirmed.
(It also had a false negative rate of 1%:)
The test correctly ruled out cancer in over 99% of those who tested negative.
hn_throwaway_99 | 4 months ago
Based on your quoted sections, we can infer:
1. About 250 people got a positive result ("nearly one in 100")
2. Of those 250 people, 155 (62%) actually had cancer, 95 did not.
3. About 24,750 people got a negative test result.
4. Assuming a false negative rate of 1% (the quote says "over 99%"), that means of those 24,750 people, about 248 actually did have cancer, while about 24,502 did not.
When you write it out like that (and I know I'm making some rounding assumptions on the numbers), it means the test missed the majority of people who had cancer while subjecting over 1/3 of those who tested positive to fear and further expense.
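The arithmetic above can be sketched out explicitly. This is a back-of-envelope reconstruction using only the figures quoted from the trial, and it adopts the parent comment's assumptions: "nearly one in 100" taken as exactly 1%, and "over 99%" NPV taken as exactly 99%.

```python
# Confusion-matrix sketch from the quoted trial figures (all assumptions
# inherited from the comment above: 1-in-100 positives, 62% PPV, 99% NPV).
participants = 25_000
positives = participants // 100        # "nearly one in 100" -> 250
true_pos = round(0.62 * positives)     # 62% of positives had cancer -> 155
false_pos = positives - true_pos       # -> 95
negatives = participants - positives   # -> 24,750
false_neg = round(0.01 * negatives)    # ~1% of negatives had cancer -> ~248
true_neg = negatives - false_neg       # -> ~24,502

total_cancers = true_pos + false_neg   # ~403 cancers in the cohort
caught = true_pos / total_cancers
print(f"cancers caught by the test: {caught:.0%}")
```

Under these assumptions the test catches roughly 38% of the cancers in the cohort, which is the "missed the majority" claim in numeric form.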
inglor_cz | 4 months ago
Nope, there is another important thing that matters: some of the cancers tested are really hard to detect early by other means, and very lethal when discovered late.
I would not be surprised if out of the 155 people who got detected early, about 50 lives were saved that would otherwise be lost.
That is quite a difference in the real world. Even if the statistics stay the same, the health consequences are very different when you test for something banal vs. for pancreatic cancer.
thaumasiotes | 4 months ago
This is a bizarre thing to say in response to... a clear statement of the positive and negative predictive value. PPV is 62% and NPV is "over 99%".
Your calculations don't appear to have any connection to your criticism. You're trying to back into sensitivity ("the test missed the majority of people who had cancer") from reported PPV and NPV, while complaining that sensitivity is misleading and honest reporting would have stated the PPV and NPV.
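The "backing into sensitivity" step can be made explicit. A minimal sketch, using the same assumptions as upthread (250 positives out of 25,000, and the reported "over 99%" NPV taken as exactly 99%):

```python
# Recovering implied sensitivity from reported PPV and NPV.
# Assumptions (from the thread, not the paper): 250 positives,
# 24,750 negatives, PPV = 0.62, NPV taken as exactly 0.99.
positives, negatives = 250, 24_750
ppv, npv = 0.62, 0.99

tp = ppv * positives            # 155 true positives
fn = (1 - npv) * negatives      # ~247.5 cancers missed by the test
sensitivity = tp / (tp + fn)    # TP / (TP + FN)
print(f"implied sensitivity: {sensitivity:.0%}")
```

So the parent's "missed the majority" claim is a statement about sensitivity (here roughly 38-39%), derived from exactly the PPV and NPV figures it criticizes.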
dv_dt | 4 months ago