top | item 38408446

simonhughes22 | 2 years ago

This is like calling the nuclear anti-proliferation advocates useless when they had just gotten started. AGI has only been in the general public's consciousness for about a year, since ChatGPT released (before then, far fewer people were worried about AGI, as it seemed to most to be decades away at least). I think it's a bit early to throw in the towel and call them useless. Making progress in this area is difficult, which is why it needs considerable time and money thrown at it. The only real solution is to fund alignment research; putting a halt to actual research is unachievable, as you can't police that the world over.

I also find any argument saying "don't worry about AI" completely illogical, and unlike the author I don't mind stating why. I have yet to hear any arguments that are sufficiently persuasive to convince me that there is zero risk from AGI. I am an AI researcher, and while I think a lot of the risks are overblown, I cannot prove that AI is not some sort of existential risk. Even if you put the likelihood of that at less than 1%, that still warrants a lot of research and effort to help prevent it from happening. There are no second chances; once the world ends, that's it. Life is not a video game. This is true of any sufficiently powerful technology: if it gets into the wrong hands or is abused, it can be very dangerous. Einstein didn't think up general relativity to develop nuclear weapons.
