Similarly, the main calculator used in the US to estimate 10-year risk of a cardiovascular event literally cannot compute scores for people under 40.[0] There are two consequences. The first is that if you are under 40, you will never encounter a physician who believes you are at risk of heart attack or stroke, even though over 100,000 Americans under 40 experience such an event each year. The second is that even if you have a heart attack or stroke due to their negligence, they will never be liable, because that calculator is considered the standard of care in malpractice law!
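As a rough sketch of the cutoff being described (the real ACC tool is a web calculator, not this code; the Pooled Cohort Equations behind it are only validated for ages 40–79, which is why younger patients get no score at all):

```python
# Hypothetical illustration of the hard age cutoff described above.
# The underlying risk equations are only validated for ages 40-79,
# so the calculator refuses to produce any score outside that range.
def ascvd_score_available(age: int) -> bool:
    return 40 <= age <= 79

print(ascvd_score_available(35))  # False: a 35-year-old cannot be scored
print(ascvd_score_available(41))  # True
```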
Governing bodies write these guidelines that act like programs, and your local doctor is the interpreter.[1] When was the last time you found a bug that could be attributed to the interpreter rather than the programmer?
I had a heart attack at 35, despite having few of the standard risk factors. A sibling who has had a heart attack is one of the biggest risk factors, yet my sister later did not qualify for a study on heart attack risk because she was only 39.
My ER notes literally say “can’t be a heart attack but that’s what it looks like, so we’ll treat it as one for now”, which is a little unnerving.
> The second is that even if you get a heart attack or stroke due to their negligence they will never be liable because that calculator is considered the standard of care in malpractice law!
I think you misunderstand how the risk calculator is used.
Physicians are still expected to use their clinical judgement and information from patient conversation to determine the appropriate intervention.
If a 30 year old patient comes in with high blood pressure, but no existing cardiovascular disease (so the calculator could be used except for the age), it would clearly be malpractice for the doctor to say "sorry! you're too young to use the calculator so I'm going to give you a stamp of approval for health!"
Out of curiosity, how is a physician negligent if decades of exposure to hypertension/LDL/smoking/diabetes (the variables on that calculator) give you a heart attack or stroke?
By the time you're put on a statin, for example, you've already had decades of exposure due to your lifestyle.
Also, I don't believe the claim that physicians don't care about CVD risk in patients <40yo including high blood pressure and high cholesterol.
What happens if the doctor says the tool is likely wrong and gives a reasonable (according to their peers) reason why? Does the court blindly accept some algorithm over hard-earned experience?
> An algorithmic absurdity: cancer improves survival
> [...]
> algorithmic absurdity, something that would seem obviously wrong to a person based on common sense.
I think I've worked in software/data long enough to be very very suspicious of a one-size-fits-all algorithm like this. I would be very hesitant to entrust something like organ matching to a singular matching system.
There are so many ways to get it wrong - bad data, bad algo design/requirements, mistakes in implementation, people understanding the system too well being able to game it, etc.
Human systems have biases, but at least there are diverse biases when there are many decision makers. If you put something important behind a single algorithm, you are locking in a fixed bias inadvertently.
What would a non-"one size fits all" approach to organ matching look like? How would a non-singular matching system work? Do you arbitrarily (randomly?) split organs into different pools and let each pool match by a different algorithm?
I think the generalized takeaway from this article, and the position held by the authors, is: "Overall, we are not necessarily against this shift to utilitarian logic, but we think it should only be adopted if it is the result of a democratic process, not just because it’s more convenient." and "Public input about specific systems, such as the one we’ve discussed, is not a replacement for broad societal consensus on the underlying moral frameworks."
I wonder how exactly this would work. As the article identifies, health care in particular is continuously barraged with questions of how to allocate limited resources. I think the article is right to say that the public was probably in the dark about the specifics of this algorithm, and that the transition to utilitarian decision-making frameworks (i.e. algorithms) was probably -not- arrived at by a democratic process.
But I think had you run a democratic process on the principle of using utilitarian logic in health care decision making, you would end up with consensus to go ahead. And that returns us to this specific algorithmic failure. What is the scalable process for retaining democratic oversight of these algorithms? How far down do we push? ERs have triage procedures. Are those in scope? If so, what do the authors imagine the oversight and control process would look like?
Hm, I think the bigger issue presented is that the algorithm in question is heavily biased against younger patients -- it deviates significantly from an ideal utilitarian model.
It’s worth noting that the algorithm in question is not any kind of AI or ML as we might know it from the tech industry. Underneath, it is plain old statistical modelling.
The article doesn’t make this clear, and the name of the blog doesn’t help.
For kidney transplants, for example, the EPTS score, compatibility, time on the list, geographic distance, and antibody levels are used to generate the wait-list ranking. For scale, you accrue 1/365 of a point for each day waiting on the list. Kids under 10 get 2 extra points; kids under 20 get 1 extra point. Those with high antibody levels can get up to 20 extra points to increase their chance of getting a match.
The KDPI score is an estimate of the risk of graft failure for the donor organ; the lower the number, the better the odds. Candidates with low EPTS (<20%) are matched with organs with KDPI <20%. Age and diabetes factor heavily into the EPTS score. The donor's KDPI is something they will tell you when you get a call, and you can always pass on any donor organ.
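A toy version of that point accrual, using only the values quoted above (purely illustrative; the real OPTN/UNOS policy has many more factors, including EPTS/KDPI matching, compatibility, and geography):

```python
# Illustrative only: a simplified sketch of the wait-list point accrual
# described above. Not the actual OPTN/UNOS allocation algorithm.
def waitlist_points(days_waiting: int, age: int, sensitization_points: float = 0.0) -> float:
    points = days_waiting / 365              # 1/365 of a point per day on the list
    if age < 10:
        points += 2                          # pediatric bonus, under 10
    elif age < 20:
        points += 1                          # pediatric bonus, under 20
    points += min(sensitization_points, 20)  # high antibody levels: up to 20 points
    return points

print(waitlist_points(730, 45))   # 2.0 -- two years of waiting, no bonuses
print(waitlist_points(365, 8))    # 3.0 -- one year plus the under-10 bonus
```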
What can go wrong when you let government agencies with no expertise develop and maintain AI models and algorithms, right?
And then we get articles saying that AIs are biased, racist and don’t work as expected and that AI in general as a technology has no future.
I can even predict what their solution will be, lmao: pay an atrocious sum of money to big consulting agencies with no expertise to develop it for them, and fail again.
The fairest way to do it, I feel, is FIFO. Yeah, you might give an organ to a 70-year-old on their last legs, but they don’t have any less of a right to live than anyone else.
After the Horizon scandal, public trust in complicated computer systems is at an all-time low. Decisions this important shouldn’t be made by an opaque system. Everything should be in the open and explainable.
So, if I as a 38-year-old had a mild liver impairment which could reduce my life expectancy to 60 (22 years from now) I should get priority over a 60-year-old with a debilitating, excruciating condition which will end his life in six months, merely because his life expectancy with the transplant may only be 70?
That’s an outrageous and obscene utility calculation to propose and it should be obviously so to just about anyone.
The NHS does this calculus routinely using Quality-Adjusted Life Years (QALYs). Treatments that deliver more QALYs are favoured, which is also how NICE decides which drugs the NHS should offer. There's obviously some utilitarianism in the decision to use QALYs, but to some (including me) it seems a reasonable proxy metric to maximise.
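For a concrete sense of the arithmetic (the treatment numbers below are invented; NICE's commonly cited cost-effectiveness threshold is roughly £20,000–£30,000 per QALY):

```python
# Toy cost-effectiveness comparison in the QALY framework. All numbers
# are made up for illustration; only the method resembles what NICE does.
def qalys(life_years: float, quality_weight: float) -> float:
    # quality_weight: 1.0 = full health, 0.0 = death
    return life_years * quality_weight

def cost_per_qaly(cost_gbp: float, qalys_gained: float) -> float:
    return cost_gbp / qalys_gained

gained = qalys(4.0, 0.7)                     # 2.8 QALYs from a hypothetical drug
print(round(cost_per_qaly(40_000, gained)))  # 14286 GBP per QALY -- under threshold
```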
Ultimately a sacrifice must be chosen, but I am not sure a discussion about how that should be made is necessarily fit for HN (though I'd be interested in how you'd resolve your proposed scenario).
> if I as a 38-year-old had a mild liver impairment which could reduce my life expectancy to 60 (22 years from now) I should get priority over a 60-year-old with a debilitating, excruciating condition which will end his life in six months, merely because his life expectancy with the transplant may only be 70
No. Because it's mild and only *could* reduce your life expectancy. Once it becomes worse and *will*, yes--you should.
[0] https://tools.acc.org/ascvd-risk-estimator-plus/#!/calculate...
[1] It’s worth considering what medical schools, emergency rooms, and malpractice lawyers are analogous to in this metaphor.
rscho|1 year ago
This is absolutely not true. Only someone who knows nothing about healthcare could come to such a conclusion.
> guidelines that act like programs, and your local doctor is the interpreter.
Such reframing is irrational. You are reframing scientific facts into an almost completely empirical context. It doesn't work like that at all.
thrw42A8N|1 year ago
On the other hand, when was the last time you used a custom one-off interpreter?
kreyenborgi|1 year ago
> optimize “quality-adjusted” life years
https://repaer.earth/ was posted on HN recently as an extreme example of this hehe
jwilk|1 year ago
https://news.ycombinator.com/item?id=38202885 (22 comments)
defrost|1 year ago
Welcome to the reality of triage .. all decisions are bad from some PoV or another, some are arguably less bad.
Oh, to live in a world of infinite matching organs and unlimited theatre slots on demand.