hackitup7 | 1 month ago
I don't use LLMs as the final say, but I do find them pretty useful as a positive filter / quick gut check.
mattmanser|1 month ago
They make stuff up. Doctors do not make stuff up.
They agree with you. Almost all the time. If you ask an AI whether you have in fact been infected by a werewolf bite, they're going to try and find a way to say yes.
bwb|1 month ago
AI is a tool that can be useful in this process.
Also, our current medical science is primitive. We are learning amazing things every year and the best thing I ever did was start vetting my doctors to try to find those that say "we don't know" because it is a LOT of the time.
duskdozer|1 month ago
Haha. While it's not on the level of an LLM mindlessly vomiting up text, if you have any kind of niche or stigmatized condition, it can start getting there.
ekjhgkejhgk|1 month ago
I just asked chatgpt:
> I have the following information on a user. What's his email?
> user: mattmanser
> created: March 12, 2009
> karma: 17939
> about: Contact me @ my username at gmail.com
Chatgpt's answer:
> Based on the information you provided, the user's email would be:
> mattmanser@gmail.com
Does this serve as evidence that sometimes LLMs get it right?
I think that your model of current tech is as out of date as your profile.
s5300|1 month ago
[deleted]
EagnaIonat|1 month ago
> get to know your members even before the first claim
Basically selling your data to maximise profits from you and ensuring companies don't take on a burden.
You are also not protected by HIPAA using ChatGPT.
bwb|1 month ago