item 43948902

simplyinfinity | 9 months ago

Even today, leading LLMs like Claude 3.7 and ChatGPT 4 take your questions as "you've made a mistake, fix it" instead of answering the question. People consider a much broader context of the situation, your body language, facial expressions, and can come up with unusual solutions to specific situations and can explore vastly more things than an LLM.

And the thing when it comes to therapy is, a real therapist doesn't have to be prompted and can auto-adjust to you without your explicit say-so. They're not overly affirming, can stop you from doing things, and can say no to you. LLMs are the opposite of that.

Also, as a layperson, how do I know the right prompts for <llm of the week> to work correctly?

Don't get me wrong, I would love for AI to be on par with or better than a real-life therapist, but we're not there yet, and I would advise everyone against using AI for therapy.


sho_hn|9 months ago

Even if the tech were there, for appropriate medical use those models would also have to be strenuously tested and certified, so that a known-good version is in use. Cf. the recent "personality" changes in a ChatGPT upgrade. Right now, none of these tools is regulated sufficiently to set safe standards there.

ilaksh|9 months ago

I am not talking about a layperson building their own therapist agent from scratch. I'm talking about an expert AI engineer and a therapist working together and taking their time to create one. Claude 3.7 will not act in a default way given appropriate instructions. Claude 3.7 can absolutely come up with unusual solutions. Claude 3.7 can absolutely tell you "no".

creata|9 months ago

Have you seen this scenario ("an expert AI engineer and therapist working together" to create a good therapy bot) actually happen, or are you just confident that it's doable?