
dibujaron | 3 days ago

A less cynical explanation: It's heavily trained to ask follow-up questions at the end of a response, to drive more conversation and more engagement. That's useful both for making sure you want to renew your subscription, and probably also for generating more training data for future models. That's a sufficient explanation for the behavior we're seeing.


g947o | 3 days ago

I could be wrong, but I recall that Claude models didn't used to ask follow-up questions. But since GPT models do it, and people apparently like it (why?), Anthropic started doing it as well.

neya | 3 days ago

Because Anthropic can do no wrong, correct?

drzaiusx11 | 2 days ago

With this new announcement, Anthropic is saying they can _specifically_ "do wrong" since it's in their best interests...