Engineering-MD|1 year ago
It feels so insensitive to do that right before a major holiday, when the likely outcome is a lot of people feeling less secure in their career/job/life.
Thanks again OpenAI for showing us you don't give a shit about actual people.
XenophileJKO|1 year ago
What a weird way to react to this.
achierius|1 year ago
https://www.transformernews.ai/p/richard-ngo-openai-resign-s...
>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.
Almost every single one of the people OpenAI hired to work on AI safety has left the firm with similar messages. Perhaps you should at least consider the thinking of experts?
hollowturtle|1 year ago
Engineering-MD|1 year ago
keiferski|1 year ago
Engineering-MD|1 year ago
555watch|1 year ago
achierius|1 year ago
tim333|1 year ago
Engineering-MD|1 year ago
OldGreenYodaGPT|1 year ago
r-zip|1 year ago
lagrange77|1 year ago
stevenhuang|1 year ago
Many of us look forward to what a future with AGI could do to help humanity and hopefully change society for the better, chiefly by achieving a post-scarcity economy.
jakebasile|1 year ago
achierius|1 year ago
>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.
Almost every single one of the people OpenAI hired to work on AI safety has left the firm with similar messages. Perhaps you should at least consider the thinking of experts? There is a real chance that this ends with significant good. There is also a real chance that this ends with the death of every single human being. That's never been a choice we've had to make before, and it seems like we as a species are unprepared to approach it.
randyrand|1 year ago
esafak|1 year ago
t0lo|1 year ago
_cs2017_|1 year ago
achierius|1 year ago
Notably, the last key AI safety researcher just left OpenAI: https://www.transformernews.ai/p/richard-ngo-openai-resign-s...
>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.
Are you that upset that this guy chose to trust the people that OpenAI hired to talk about AI safety, on the topic of AI safety?