top | item 42476965

Engineering-MD|1 year ago

Can I just say what a dick move it was to do this as a "12 days of Christmas" event. I mean, to be honest, I agree with the arguments that this isn't as impressive as my initial impression suggested, but they clearly intended it to be shocking/a show of possible AGI, which is rightly scary.

It feels so insensitive to do that right before a major holiday, when the likely outcome is a lot of people feeling less secure in their career/job/life.

Thanks again, OpenAI, for showing us you don't give a shit about actual people.

XenophileJKO|1 year ago

Or maybe the target audience that watches 12 launch videos in the morning is genuinely excited about the new model. They intended it to be a preview of something to look forward to.

What a weird way to react to this.

achierius|1 year ago

It sounds like you aren't thinking about this that deeply, then. Or at least not registering that many smart (and financially disinterested) people who are thinking about it deeply are coming to concerning conclusions.

https://www.transformernews.ai/p/richard-ngo-openai-resign-s...

>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.

Almost every single one of the people OpenAI hired to work on AI safety has left the firm with a similar message. Perhaps you should at least consider the thinking of experts?

hollowturtle|1 year ago

There is no AGI, it's just marketing. This stuff is overhyped. Enjoy your holidays, you won't lose your job ;)

Engineering-MD|1 year ago

I agree, it’s just more about the intent than anything else, like boasting about your amazing new job when someone has recently been made redundant, just before Christmas.

keiferski|1 year ago

The vast majority of people who will lose jobs to AI aren't following AGI benchmarks, and many don't even know what AGI stands for.

Engineering-MD|1 year ago

That is true and a reasonable point. But looking at this thread, you can see this reaction from quite a few people.

555watch|1 year ago

I don't know, maybe it's a bit off topic, but at least in the cases I'm imagining, I would always hire a human rather than fully rely on AI. Let the human consult with AI if needed, but still finalize the decision or result. The human will be thinking about the problem for months or years; even passively, during a vacation, an idea will occasionally pop up. AI will think about its task for seconds. If it missed some information or whatever, it will never wake up in the middle of the night thinking "s**, I forgot about X".

achierius|1 year ago

I feel you. It's tough trying to think about what we can do to avert this; even to the extent that individuals are often powerless, in this regard it feels worse than almost anything that's come before.

tim333|1 year ago

Some of us actual people are actually enthusiastic about AGI. Although I'm a bit weird in being into the sci-fi upload / ending death stuff.

Engineering-MD|1 year ago

Out of interest, what do you think would happen to your sense of subjective experience on a sci-fi upload? And secondly, have you watched Black Mirror? That show depicts many great ways in which the end of death is just the beginning of eternal techno-suffering.

OldGreenYodaGPT|1 year ago

Blaming OpenAI for progress is like blaming a calendar for Christmas—it’s not the timing, it’s your unwillingness to adapt

r-zip|1 year ago

Unwillingness to adapt to the destruction of the middle class and knowledge work is pretty reasonable tbh.

lagrange77|1 year ago

Wow, you just solved the ethics of technology in a one liner. Impressive.

stevenhuang|1 year ago

This is a you problem. Yes, there will be pain in the short term, but it will be worth it in the long term.

Many of us look forward to what a future with AGI can do to help humanity and hopefully change society for the better, mainly by achieving a post-scarcity economy.

jakebasile|1 year ago

Surely the elites that control this fancy new technology will share the benefits with all of us _this_ time!

achierius|1 year ago

https://www.transformernews.ai/p/richard-ngo-openai-resign-s...

>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.

Almost every single one of the people OpenAI hired to work on AI safety has left the firm with a similar message. Perhaps you should at least consider the thinking of experts? There is a real chance that this ends with significant good. There is also a real chance that this ends with the death of every single human being. That's never been a choice we've had to make before, and it seems like we as a species are unprepared to approach it.

randyrand|1 year ago

Post-scarcity seems very unlikely. Humans might be worthless, but there will still be a finite number of AIs and a finite amount of compute, space, and resources.

esafak|1 year ago

How are you going to make housing, healthcare, etc. not scarce, and pay for them?

t0lo|1 year ago

I hate the deliberate fear-mongering that these companies peddle to the population to get higher valuations.

_cs2017_|1 year ago

Wtf is wrong with you, dude? It's just another tech: some jobs will get worse, some jobs will get better. Happens every couple of decades. Stop freaking out.

achierius|1 year ago

This is not a very kind or humble comment. There are real experts talking about how this time is different -- as an analogy, think about how horses, for thousands of years, always had new things to do -- until one day they didn't. It's hubris to think that we're somehow so different from them.

Notably, the last key AI safety researcher just left OpenAI: https://www.transformernews.ai/p/richard-ngo-openai-resign-s...

>But while the “making AGI” part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the “succeeding” part of the mission, especially when it comes to preventing existential risks to humanity.

Are you that upset that this guy chose to trust the people that OpenAI hired to talk about AI safety, on the topic of AI safety?