Non-deterministic doesn't mean random or unpredictable. That's like saying a weather forecast is useless because it isn't deterministic or always 100% accurate.

panarky|3 months ago
Most things that are generally helpful and beneficial are not 100% helpful and beneficial 100% of the time.
I used GPT-4 as a second opinion on my medical tests and my doctor's advice, and it suggested an alternate diagnosis and treatment plan that turned out to be correct. That was incredibly helpful and beneficial.
You're replying to a person who had a similar and even more beneficial experience; they're alive today because of it.
Pedantically pointing out that a helpful thing isn't 100% helpful 100% of the time adds nothing to the conversation, since everyone here already knows it isn't 100%.

jjulius|3 months ago
No, they can be. Stating that they are, as an absolute, based on a sample size of one, especially in light of other instances where ChatGPT has failed users with serious physical consequences, is fallacious.
I'm glad you're OK, but as another user noted, it's nowhere near consistently accurate enough to be an adequate substitute for a call to a GP or 911.

oarsinsync|3 months ago
LLMs sometimes can be incredibly beneficial ... today.
LLMs sometimes can be incredibly harmful ... today.
Non-deterministic things aren't just one thing; they're whatever they happen to be in that particular moment.

KeplerBoy|3 months ago

fukka42|3 months ago

hombre_fatal|3 months ago

juanani|3 months ago
[deleted]