
monkeynotes | 1 year ago

I thought we were talking about state-of-the-art agentic general AI that can plan ahead, reason, and execute. Basically, something that can perform at human-level intelligence must be capable of being as dangerous as humans. And no, I don't think the problem would be bad training data that we're aware of. My opinion is that we don't necessarily know which training data will result in bad behavior, and philosophically it's possible we end up in a world with a model that pretends to be dumber than it is and flunks tests intentionally, in order to manipulate us and build false confidence in it, until it has enough freedom to use its agency to secure itself from human control.

I know that I don't know a lot, but all of this sounds at least hypothetically possible to me if we really believe AGI is possible.


No comments yet.