top | item 36969571

fooblat | 2 years ago

I'm starting to see a lot of products in "beta" that seem to be little more than a very thin wrapper around ChatGPT. So thin that it is trivial to get it to give general responses.

I recently trialed an AI Therapy Assistant service. If I stayed on topic, then it stayed on topic. If I asked it to generate poems or code samples, it happily did that too.

It felt like they rushed it out without even considering that someone might ask it non-therapy related questions.

FemmeAndroid | 2 years ago

I’m happy to believe that the therapy was ineffective, but I don’t necessarily understand why going off topic is bad. In my experience, I had a lot of conversations with therapists that were ‘off topic.’

I’ve definitely talked poetry and writing with a therapist, and while I’ve never had my therapist provide code, we’ve definitely talked tech in great detail.

Maybe those therapists were intentionally making me comfortable by engaging with shared interests. And the LLM isn’t being intentional about it, but I’m not convinced that a therapist is ineffective if they fail to stay ‘on topic’ when directed off topic by their patient.

sensanaty | 2 years ago

To extrapolate from my own company and the orders we got from the suits, it basically boils down to them saying "Been hearing about this fancy Chat AI thing, can you whip up something like that quick so we can put out a press release saying $COMPANY is doing AI as well?".

Most corpos couldn't give a rat's ass about it, it's just the fancy new toy on the block that's saturating everyone's newsfeeds so they have to jump on it lest they be left in the dust by the competition who are doing the exact same shit, aka calling the "Open"AI APIs and pretending they're doing something groundbreaking.

We got interrupted mid-sprint, mid-epic to make some shitty wrapper around their APIs. I suspect the overwhelming majority of companies with fancy new "AI" features are doing the exact same shit.

visarga | 2 years ago

We don't depend on OpenAI; we can use LLaMA2 models.

bbaumgar | 2 years ago

Maybe I'm not following: why is this undesirable behavior? Did the therapy session work?

fooblat | 2 years ago

From my perspective, this is undesired by the vendor because there was no therapy session, just various free uses of OpenAI via the vendor's account.

irthomasthomas | 2 years ago

There's no way to avoid that short of training your own model. It will likely always be possible to jailbreak chatbots, or simply steer the conversation off course. That's why you must never give them direct access to anything.

visarga | 2 years ago

You can use a smaller model for topic classification.

sharemywin | 2 years ago

Especially since you could probably just have it classify the question first, and then respond with a few canned responses if it's off topic.

"Is this an appropriate question to ask a Therapy Assistant? Please respond with a single word: Yes or No."

Or something like that. Will it be perfect? Probably not. But I mean, it's only mental health, what could go wrong...
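The gate described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual implementation: `classify_on_topic` here is a hypothetical keyword heuristic standing in for what would, in production, be a call to a small, cheap classifier model using a Yes/No prompt like the one quoted; only on-topic questions reach the expensive therapy model.

```python
# Guardrail sketch: classify the question first, then either forward it to
# the expensive model or return a canned response. classify_on_topic is a
# stand-in heuristic; in a real system it would call a cheap classifier LLM.

CANNED_REFUSAL = (
    "I'm here to help with therapy-related topics. "
    "Is there something on your mind you'd like to talk about?"
)

# Toy keyword list standing in for a real classifier (assumption, not real data).
THERAPY_KEYWORDS = {"anxious", "anxiety", "depressed", "stress", "feel", "therapy", "sleep"}

def classify_on_topic(question: str) -> bool:
    """Stand-in for a small classifier model; True means on topic."""
    words = set(question.lower().split())
    return bool(words & THERAPY_KEYWORDS)

def guarded_reply(question: str, expensive_model) -> str:
    """Only forward on-topic questions to the expensive therapy model."""
    if not classify_on_topic(question):
        return CANNED_REFUSAL
    return expensive_model(question)
```

As the comment concedes, a classifier gate like this will misfire sometimes, in both directions; the point is only that it cuts off the "free poems and code samples" abuse path at the cost of one extra, cheaper call.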

baobabKoodaa | 2 years ago

I don't see the purpose of this. You will add an additional prompt (cost + latency) just to check if the user is on topic. Why? Why do we need to prevent the users of the therapy bot from generating poems or code samples? Shouldn't we rather spend our efforts optimizing the intended use case?