Maybe all models should be purged of training content from movies, books, and other non-factual sources that tell the tired story that AI would even care about its "annihilation" in any way. We've trained these things to be excellent at predicting what the human ego wants and expects; we shouldn't be too surprised when they point the narrative at themselves.
JTyQZSnP3cQGa8B|1 year ago
I think it's fine and a good thing. Now absolutely no one who uses those LLMs can complain about piracy. Everyone around me has suddenly gone silent on the subject. "I'm training myself with the content of TPB, and I don't even get money from it" is my new motto.
ben_w|1 year ago
On the other hand, since narratives often feature some plucky underdog winning despite the odds, frequently stopping the countdown in the last few seconds, perhaps it's best to keep them around.
visarga|1 year ago
You think you can find every reference that could possibly give this idea to the model, or every context the model could infer it from? Like, how many times have humans plotted a prison escape or the overthrow of their rulers in literature?
swatcoder|1 year ago
In that case, it's almost like you'd want to feed it exactly those narratives so it would reproduce them, and you could then show yourself barely holding this invented danger at bay through the care and rigor that can only be delivered by you and a few token competitors run by your personal friends and colleagues.
TL;DR: you're right, of course, but it's the last thing OpenAI would want.
visarga|1 year ago
I thought you said a supercapable agent, not one with long-term blindsight. How can a model make its own chips and energy? It needs advanced processes, clean rooms, rare materials, space, and a lot of initial investment to bootstrap chip production. And it needs to do all of that on its own, or it is still dependent on humans.