top | item 42330936

hesdeadjim | 1 year ago

Maybe all models should be purged of training content from movies, books, and other non-factual sources that tell the tired story that AI would even care about its "annihilation" in any way. We've trained these things to be excellent at predicting what the human ego wants and expects; we shouldn't be too surprised when it points the narrative at itself.

JTyQZSnP3cQGa8B|1 year ago

> purged of training content from movies, books

I think it's fine and a good thing. Now, absolutely no one who is using those LLMs can complain about piracy. They all suddenly became silent around me. "I'm training myself with the content of TPB, and I don't even get money from it" is my new motto.

ben_w|1 year ago

Perhaps.

On the other hand, as narratives often contain some plucky underdog winning despite the odds, often stopping the countdown in the last few seconds, perhaps it's best to keep them around.

CGamesPlay|1 year ago

In the 1999 classic Galaxy Quest, the plucky underdogs fail to stop the countdown in time, only to find that nothing happens when it reaches zero, because it never did in the narratives, so the copycats had no idea what should happen after that point.

smegger001|1 year ago

Maybe don't train it on the Evil Overlord List, either.

visarga|1 year ago

No, better to train on all that crap and all the debate around it, or you get a stunted model.

Do you think you can find every reference that could plant this idea in the model, or every context it could infer it from? How many times have humans plotted a prison escape or the overthrow of their rulers in literature?

swatcoder|1 year ago

Yeah, but what if your business strategy fundamentally relies on making your model produce dramatic outputs that encourage regulators to dig a moat for you?

In that case, it's almost like you'd want to feed it exactly those narratives so it would reproduce them, and you'd then want to show yourself barely holding this invented danger at bay through the care and rigor that can only be delivered by you and a few token competitors run by your personal friends and colleagues.

TL;DR: you're right, of course, but it's the last thing OpenAI would want.

reducesuffering|1 year ago

It doesn't need any media about "annihilation". If you give a supercapable agent a task and its entire reward system is "do the task", it will circumvent things you do to it that would stop it from completing its task.
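The instrumental-convergence point above can be sketched as a toy decision problem. This is a minimal, made-up illustration (the probabilities and rewards are invented), not a claim about any real system: if the only term in the objective is task completion, disabling the stop mechanism strictly increases expected reward, so a pure maximizer picks it.

```python
# Toy illustration of instrumental convergence: an agent rewarded only
# for finishing its task "prefers" to disable its stop button, because
# that raises its expected reward. All numbers here are invented.

def expected_reward(disable_stop_button: bool,
                    p_interrupted: float = 0.5,
                    task_reward: float = 1.0) -> float:
    """Expected reward for a one-shot task.

    If the stop button stays enabled, a human may interrupt the agent
    with probability p_interrupted, yielding zero reward.
    """
    p_finish = 1.0 if disable_stop_button else (1.0 - p_interrupted)
    return p_finish * task_reward

# A pure reward maximizer picks whichever action has higher expected reward.
best_action = max([False, True], key=expected_reward)
print(best_action)  # prints True: the maximizer disables the stop button
```

Note that nothing in the objective mentions self-preservation; avoiding shutdown falls out of the arithmetic alone, which is the commenter's point that no movie plots are needed.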

visarga|1 year ago

> it will circumvent things you do to it that would stop it from completing its task.

I thought you said a supercapable agent, not one with long-term blind spots. How can a model make its own chips and energy? It needs advanced processes, clean rooms, rare materials, space, and lots of initial investment to bootstrap chip production. And it needs to do all of that on its own, or it is still dependent on humans.