fooblat | 2 years ago
I recently trialed an AI Therapy Assistant service. If I stayed on topic, it stayed on topic. If I asked it to generate poems or code samples, it happily did that too.
It felt like they rushed it out without even considering that someone might ask it non-therapy-related questions.
FemmeAndroid | 2 years ago
I’ve definitely talked poetry and writing with a therapist, and while I’ve never had my therapist provide code, we’ve definitely talked tech in great detail.
Maybe those therapists were intentionally making me comfortable by engaging with shared interests. And the LLM isn’t being intentional about it, but I’m not convinced that a therapist is ineffective if they fail to stay ‘on topic’ when directed off topic by their patient.
sensanaty | 2 years ago
Most corpos couldn't give a rat's ass about it. It's just the fancy new toy on the block that's saturating everyone's newsfeeds, so they have to jump on it lest they be left in the dust by the competition doing the exact same thing, aka calling the "Open"AI APIs and pretending they're doing something groundbreaking.
We got interrupted mid-sprint, mid-epic to make some shitty wrapper around their APIs. I suspect the overwhelming majority of companies with fancy new "AI" features are doing the exact same thing.
sharemywin | 2 years ago
"Is this question an appropriate question to ask a Therapy Assistant? Please respond with a single word: Yes or No."
Or something like that. Will it be perfect? Probably not. But I mean, it's only mental health, what could go wrong...
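The guardrail idea above can be sketched in a few lines. This is a hypothetical illustration, not the service's actual implementation: `ask_llm` and `answer_llm` are stand-ins for whatever model calls the product makes, and the refusal message is invented.

```python
# Hypothetical sketch of a yes/no guardrail: classify the user's message
# with a separate prompt before letting the main model answer.

GUARDRAIL_PROMPT = (
    "Is this question an appropriate question to ask a Therapy Assistant? "
    "Respond with a single word, Yes or No.\n\nQuestion: {question}"
)

def is_on_topic(question: str, ask_llm) -> bool:
    """Return True if the guardrail model's reply starts with 'Yes'."""
    reply = ask_llm(GUARDRAIL_PROMPT.format(question=question))
    return reply.strip().lower().startswith("yes")

def handle(question: str, ask_llm, answer_llm) -> str:
    """Refuse off-topic questions; otherwise pass through to the main model."""
    if not is_on_topic(question, ask_llm):
        return "I can only help with therapy-related questions."
    return answer_llm(question)
```

As the commenter implies, this won't be perfect: the classifier model can misread ambiguous questions in either direction, which is a real concern in a mental-health setting.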