The LLMs still provide value. They are much quicker than seeing a doctor, and with Deep Research in ChatGPT (and whatever Google is calling the Gemini search feature now) you can actually see the sources the information is drawn from.
Parsing 100 different scientific articles, or even Google search results, is not going to happen before I get bored and move on. That is the value of an LLM.
Even if the chat data is used for training or sold off, one way to protect yourself is to add knowingly incorrect data to the chat. You know it is incorrect; the LLM will believe it. Then the narrative is substantially changed.
Or wait ~6 months and the open-source Chinese models [Kimi/Qwen/friends] will have caught up to Claude and Gemini, IMO. Then just run them quantized locally on Apple Silicon or a GPU.
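The local-inference part is already pretty painless. A minimal sketch with llama.cpp, assuming it is installed (e.g. via `brew install llama.cpp`) and that quantized Qwen weights are still published under the Hugging Face repo name below — that repo name is an assumption, so check it before running, since repo names and quant levels change often:

```shell
# Pull a quantized GGUF build straight from Hugging Face and chat with it
# locally, offloading layers to Metal (Apple Silicon) or CUDA.
# The repo name is an assumption -- substitute whatever Kimi/Qwen/etc.
# build is current when you run this.
llama-cli -hf Qwen/Qwen2.5-7B-Instruct-GGUF \
  --n-gpu-layers 99 \
  -p "Summarize the main findings of these abstracts for a layperson."
```

Nothing leaves the machine, so there is no chat log to sell off or poison in the first place.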
You will be assigned an individualized risk figure that will determine whether or not you are given coverage and treatment. Those decisions will happen without your involvement, or any MD's. You will never know it happened, and it will follow you for the rest of your life and your children's lives.
Don't forget that the majority of commenters on this platform live in a country that views suffering in pain from an incurable disease as "the way God intended" (plus a horse dose of morphine). Take it with a grain of salt.
> Dystopian and frankly, gross. It's amazing to me that so many people are willing to give up control over their lives and, in this case, their bodies, for the smallest inkling of ease.
I've read people with chronic conditions reporting that ChatGPT actually helped them land a correct diagnosis that doctors did not consider, so people are not just using it for an "inkling of ease".
> Dystopian and frankly, gross. It's amazing to me that so many people are willing to give up control over their lives and, in this case, their bodies, for the smallest inkling of ease.
You have to be extremely privileged to say something like this.

a) nobody is giving up control of their lives

b) get off your high horse, son
Do you believe that ChatGPT is doing the research? I'm all in favor of better access and tools for research, but at least in the US all of the research is being defunded, we're actively kicking researchers out of the country, and a bunch of white billionaires are proposing this as an alternative, based on training data they won't share.
This is a product feature that invalidates WebMD and the like. It does not solve any health problems.