I find the healthcare applications of this stuff so interesting.
On the one hand, there are SO many reasons using LLMs to help people make health decisions should be an utterly terrible idea, to the point of immorality:
- They hallucinate
- They can't do mathematical calculations
- They're incredibly good at being convincing, no matter what junk they are outputting
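Of those three, the arithmetic limitation is the most straightforward to work around: keep the numbers outside the model and compute them deterministically. A minimal sketch of the idea, where the dose rate and weight are made-up illustration values, not medical guidance:

```python
def dose_mg(weight_kg: float, mg_per_kg: float) -> float:
    """Weight-based dose computed deterministically,
    rather than trusting an LLM to do the arithmetic."""
    if weight_kg <= 0 or mg_per_kg <= 0:
        raise ValueError("weight and dose rate must be positive")
    return weight_kg * mg_per_kg

# e.g. a 12 kg dog at a hypothetical 2.5 mg/kg rate
print(dose_mg(12, 2.5))  # 30.0
```

The model can still suggest *which* calculation applies; the point is that the numbers themselves never pass through it.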
And yet, despite being very aware of these limitations, I've already found myself using them for medical advice (for pets so far, not yet for humans). And the advice I got seemed useful, and helped kick off additional research and useful conversations with veterinary staff.
Plenty of people have very limited access to useful medical advice.
There are plenty of medical topics which people find embarrassing, and would prefer to - at least initially - talk to a chatbot than to their own doctor.
Do the benefits outweigh the risks? As with pretty much every ethical question involving LLMs, there are no obviously correct answers here.
Whatever else its ills, the bot actually will pay attention to the tokens you're submitting to it to formulate its answer. That puts it well ahead of a majority of the doctors I've seen over the years.
I say this without snark; it is simply true. I should also mention that a good quarter of the medical care folks who have assisted me have gone above and beyond in exceptional ways. It is a field of extremes.
Tell me you never taught service courses for pre-meds without telling me you never taught service courses for pre-meds ;)
> They hallucinate, They're incredibly good at being convincing, no matter what junk they are outputting
Describes about a third of the doctors I've interacted with, tbh.
> And the advice I got seemed useful, and helped kick off additional research and useful conversations with veterinary staff.
It's similar to "Dr. Google". Possible to misuse. But also, there's nothing magical about the medical guild initiation process. Lots of people are smart enough to learn and understand the bits of knowledge they need to accurately self-diagnose and understand tradeoffs of treatment options, then use a medical professional as a consultant to fill in the gaps and validate mental models.
Unfortunately, most medical professionals aren't willing to engage with patients in that mode and would rather misdiagnose than work with an educated patient. (My brother-in-law -- a medical doctor, and a fairly accomplished one at that -- has been chided for using "Dr. Google" at an urgent care before.)
> Do the benefits outweigh the risks? As with pretty much every ethical question involving LLMs, there are no obviously correct answers here.
At the end of the day, it doesn't matter. At least in the US, you won't have access to any meaningful treatment without going through the guild anyways.
I don't think that using LLMs for medical diagnosis is a good idea, but it's important to admit when the status quo is so thoroughly hollowed out of any moral or practical justification that even terrible ideas are better than the alternative of leaving things as they are.
I'd be happy with summarization and aggregation of health and longevity articles/papers into a concise digest of strategies.
Case in point: I'm a big fan of Andrew Huberman (https://www.youtube.com/@hubermanlab). He's quite prolific and his presentations pack in a lot of data. Just taking all of that in would require a lot of time, so being able to have it condensed and indexed would be wonderful.
Plenty of others are like him (e.g., Rhonda Patrick, Peter Attia). High-quality stuff, but there's simply not enough time to take it all in.
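The digest idea above is usually built as a map-reduce over transcripts: split each one into chunks, summarize each chunk, then summarize the summaries. A sketch of that shape, with the actual summarizer deliberately left as a parameter (the chunk size and `summarize` signature are assumptions, not any specific product's API):

```python
def chunk_words(text: str, max_words: int = 500) -> list[str]:
    """Split a transcript into roughly max_words-sized chunks
    so each fits comfortably in a summarizer's input window."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def digest(transcripts: list[str], summarize) -> str:
    """Map-reduce: summarize every chunk of every transcript,
    then summarize the concatenated partial summaries."""
    partials = [summarize(chunk)
                for t in transcripts
                for chunk in chunk_words(t)]
    return summarize("\n".join(partials))
```

In practice `summarize` would be an LLM call; keeping it as a parameter means the chunking logic can be tested on its own with any stand-in function.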
It seems to me that you could hardcode the answers to these riddles: exactly match symptoms with illnesses, sort the candidates by likelihood, and propose (again, hardcoded) tests to gather further data.
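The hardcoded version described here is easy to sketch: a lookup table of conditions with symptom sets, rough priors, and canned follow-up tests, scored by symptom overlap. Everything in the table below is invented for illustration; it is not medical data:

```python
# Hypothetical knowledge base: condition -> (known symptoms, prior, suggested tests)
CONDITIONS = {
    "condition_a": ({"fever", "cough"}, 0.30, ["test_x"]),
    "condition_b": ({"fever", "rash"}, 0.10, ["test_y"]),
    "condition_c": ({"cough", "fatigue"}, 0.25, ["test_z"]),
}

def rank(symptoms: set[str]) -> list[tuple[str, float]]:
    """Score each condition by fraction of its symptoms matched,
    weighted by its prior; return highest-scoring first."""
    scores = []
    for name, (known, prior, _tests) in CONDITIONS.items():
        overlap = len(symptoms & known) / len(known)
        if overlap > 0:
            scores.append((name, overlap * prior))
    return sorted(scores, key=lambda s: s[1], reverse=True)

def suggested_tests(symptoms: set[str]) -> list[str]:
    """Hardcoded follow-up tests for the top-ranked condition."""
    ranked = rank(symptoms)
    return CONDITIONS[ranked[0][0]][2] if ranked else []
```

This is exactly the kind of rule-based expert system that predates LLMs; its appeal is that every ranking is traceable back to a table entry.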
It also seems capable of anonymizing a large chunk of medical data that we would not want to share normally. Who knows, perhaps it could even be a means of payment.
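For comparison with what an LLM might do here, a rough non-LLM baseline for de-identification is pattern-based scrubbing of obvious identifiers. The patterns below are deliberately simplistic; an LLM's advantage would be catching context-dependent identifiers (names, rare conditions, locations) that regexes miss:

```python
import re

# Naive pattern-based scrub; a real de-identification pipeline,
# LLM-based or not, would also need names, addresses, and free text.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def scrub(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```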
I have been saying this for months about deep learning in general (and now the new hype around LLMs) in high-risk situations such as medical, legal, and financial advice, and even transportation. The only common use case that makes sense is summarization, and even then a human expert ends up reviewing the output before posting it anyway.
> There are plenty of medical topics which people find embarrassing, and would prefer to - at least initially - talk to a chatbot than to their own doctor.
I don't think you would trust an AI chatbot alone to tell you how many pills of a medication to take instead of going to a human doctor, especially when these models risk hallucinating terrible advice and their output is as unexplainable and opaque as a black box. The same goes for 'full self-driving'.
I don't think one would trust these deep-learning-based AI systems in very high-risk situations unless they are highly transparent and can thoroughly explain themselves rather than regurgitating what they were trained on.
It is like trusting an AI to pilot a Boeing 737 MAX end-to-end with zero human pilots on board. No one would board a plane flown by a black-box AI. (Autopilot is not the same thing.)
I agree. I've found combining LLMs with Google works well for research. I use it for all sorts of random things, usually starting with search, then hopping to ChatGPT or Bard when I can't understand the results, then back to search once I know what to look for again.
It seems that, time and time again, transformers are the Swiss Army knife of learning systems, and LLMs specifically are proving to be chameleons. In some ways that shouldn't be surprising: some say math is a universal language, after all, and we seem to agree that math is unreasonably effective at describing reality.
I find ChatGPT to be very helpful for working with programming languages I'm less comfortable using (shell, Python). I know enough to evaluate whether code in these languages is correct, but producing it from scratch is more difficult, which seems like a sweet spot for carefully using ChatGPT for code.
As a physician, I would not be surprised if the medical use of these tools ends up having similar value.
I think the key here is that experts can take better advantage of tools like these because they have more ability to see when it's going off the rails. If you're a brand-new programmer, you might be stumped if ChatGPT "hallucinates" a function which doesn't exist in an API. But an experienced developer can pick up on the problem pretty quickly and either correct for it or know they need to pursue more traditional routes to solve the problem.
I recently used ChatGPT because Google was failing to help me remember the name of the standard for securely sharing passwords between systems. My searches kept turning up end-user password-management topics. ChatGPT got me to SCIM after one question and one correction.
I could absolutely see a doctor using something like ChatGPT to supplement their memory the way I did. I don't think anyone recommends that doctors just trust ChatGPT, but rather use it as a supplementary tool alongside their own expertise. Even outside their specific medical domain, it could help them get a basis for a conversation with one of their specialist colleagues.