I don't disagree, but on the other hand, I never run into problems with the language model being censored, because I'm not asking it to write bad words just so I can post online that it can't write bad words.
Hm, I don't buy this. The statistics in the blog post announcing the new Claude models (this submission) show a significant tendency to refuse benign questions.
Yeah I spent a lot of time with Claude 2 and if I hadn’t heard online that it’s “censored,” I wouldn’t have even known. It’s given me lots of useful answers in close to natural human language.
geysersam|2 years ago
Just the fact that there's an x% risk it doesn't answer complicates any use case unnecessarily.
I'd prefer it if the bots weren't anthropomorphized at all: no more "I'm your chatbot assistant". That's also just a marketing gimmick. It's much easier to assume something is intelligent if it has a personality.
Imagine if the models weren't even framed as AI at all. What if they were framed as 'flexi-search', a modern search engine that predicts content it hasn't yet indexed?
barfingclouds|2 years ago