softg | 1 year ago
Sure, we can improve our understanding of how NNs work, but that isn't enough. How are humans supposed to fully understand and control something that is, by definition, smarter than themselves? I think it's inevitable that at some point that smart thing will behave in ways humans don't expect.
ben_w | 1 year ago
With this metaphor you seem to be saying we should, if possible, learn how to control AI? Preferably before anyone endangers their lives due to it? :)
> I think it's inevitable that at some point that smart thing will behave in ways humans don't expect.
Naturally.
The goal, at least for those most worried about this, is to make that surprise be not a… oh, I've just realised a good quote:
""" the kind of problem "most civilizations would encounter just once, and which they tended to encounter rather in the same way a sentence encountered a full stop." """ - https://en.wikipedia.org/wiki/Excession#Outside_Context_Prob...
Not that.
softg | 1 year ago
> With this metaphor you seem to be saying we should, if possible, learn how to control AI? Preferably before anyone endangers their lives due to it?
Yes, but that's a big if. It's also something you could never be sure of: you could spend decades thinking alignment is a solved problem, only to be outsmarted in the end by something smarter than you. If we end up conjuring a greater intelligence, there will be a constant risk of a catastrophic event, just like the risk of nuclear armageddon that exists today.