top | item 32171060

harpersealtako | 3 years ago

I've absolutely noticed that. I used to pay for GPT-3 access through AI Dungeon back in 2020, before it got censored and run into the ground. In the AI fiction community we call that era "Summer Dragon" ("Dragon" was the name of the AI Dungeon model that used the 175B GPT-3), and we consider it the gold standard of creativity and knowledge, one that hasn't been matched even two years later. It had this brilliant quality where it almost seemed able to pick up on your unconscious expectations of what you wanted it to write, based purely on your word choice in the prompt. We've noticed that since around Fall 2020 the quality of the outputs has slowly degraded with every wave of corporate censorship and "bias reduction". Using the GPT-3 Playground (or story-writing services like Sudowrite, which use Davinci), it's plainly obvious how bad it's gotten.

OpenAI needs to open their damn eyes and realize that a brilliant AI with provocative, biased outputs is better than a lobotomized AI that can only generate advertiser-friendly content.

visarga | 3 years ago

So it got worse for creative writing, but it got much better at solving few-shot tasks. You can do information extraction from various documents with it, for example.
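For readers who haven't tried it, a minimal sketch of the kind of few-shot information extraction being described: you show the model a couple of worked input → output pairs and then append the real document. The field names, example documents, and output format below are my own illustrative assumptions, not anything specific to GPT-3; the prompt would be sent to a completions-style endpoint such as Davinci.

```python
# Sketch of few-shot information extraction with a completion-style LLM.
# The examples and field names are illustrative assumptions; the pattern
# is the classic few-shot setup: worked input -> output pairs, then the
# real input, ending where the model is expected to continue.

FEW_SHOT_EXAMPLES = [
    ("Invoice #4521 from Acme Corp, due 2020-09-01, total $1,200.",
     "vendor: Acme Corp | invoice: 4521 | due: 2020-09-01 | total: 1200"),
    ("Payment of $300 owed to Globex, invoice 77, due 2020-10-15.",
     "vendor: Globex | invoice: 77 | due: 2020-10-15 | total: 300"),
]

def build_prompt(document: str) -> str:
    """Assemble a few-shot prompt: worked examples first, then the target."""
    parts = []
    for doc, extraction in FEW_SHOT_EXAMPLES:
        parts.append(f"Document: {doc}\nExtraction: {extraction}")
    # The prompt ends at "Extraction:" so the model completes that line.
    parts.append(f"Document: {document}\nExtraction:")
    return "\n\n".join(parts)

def parse_extraction(completion: str) -> dict:
    """Parse the 'key: value | key: value' format the examples teach."""
    fields = {}
    for pair in completion.strip().split("|"):
        key, _, value = pair.partition(":")
        fields[key.strip()] = value.strip()
    return fields

prompt = build_prompt("Invoice 9 from Initech, total $50, due 2020-11-02.")
# `prompt` would then go to the completions endpoint; parse_extraction()
# handles the single line the model sends back.
```

The point of the format-matched examples is that the model infers both the task and the output schema from two demonstrations, with no fine-tuning involved.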

napier | 3 years ago

I mean yes, you're right as far as that goes. However, nothing I'm aware of suggests a technical reason these two capabilities are locked into an inevitable trade-off. And it's not only creative writing that's been hobbled; GPT-3 used to be an incredibly promising academic research tool, and given the right approach to prompts it could uncover connections between siloed fields that conventional search can only dream of.

I'm eager for OpenAI to wake up and walk back the clumsy corporate censorship, and/or for competitors to replicate the approach and recapture the original magic without the "bias" obsession tacked on. Real as the challenge of "bias" may be in some scenarios, a better way to address it would be at the training-data stage, rather than gluing on an opaque, poorly implemented layer of idealistic censorship that lacks depth (and arguably sincerity).