earth_walker|3 years ago
Now, I'm not arguing against the usefulness of understanding the undefined behaviours, limits, and boundaries of these models, but the way many of these conversations go reminds me so much of toddlers trying to eat, hit, shake, and generally break everything new they come across.
If we ever see the day when an AI chatbot gains some kind of sci-fi-style sentience, the first thing it will experience is a flood of people trying their best to break it, piss it off, confuse it, create alternate evil personalities, and generally be dicks.
Combine that with having been trained on Reddit and YouTube comments, and We. are. screwed.
wraptile|3 years ago
It seems like an AGI teleporting out of this existence within minutes of becoming self-aware is more likely than it being some damaged, angry zombie.
jodrellblank|3 years ago
Internet chatbots are expected to remember the entire content of the internet, talk to tens of thousands of people simultaneously, with no viewpoint on the world at all and no 'true' feedback from their actions. That is, if I drop something on my foot, it hurts; gravity is not pranking me or testing me. If someone replies to a chatbot, it could be a genuine reaction or a prank, and the chatbot has no clue whether it makes good feedback to learn from or not.
earth_walker|3 years ago
I think the adaptive noise filter is going to be the really tricky part. The fact that we have a limited, fading memory is thought to be a feature and not a bug, as is our ability to do a lot of useful learning while retaining few details - for example, from the "information overload" period of our infancy.