top | item 47166184

ndr | 3 days ago

Worth checking this post from someone who actually has worked on this change:

> I take significant responsibility for this change.

https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...

bhouston | 3 days ago

This guy from Effective Altruism pivoted away from helping the poor to trying to keep AI from becoming a Terminator-type entity, and then pivoted again to, ah, it's okay for it to be a Terminator-type entity.

> Holden Karnofsky, who co-founded the EA charity evaluator GiveWell, says that while he used to work on trying to help the poor, he switched to working on artificial intelligence because of the “stakes”:

> “The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today—which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high.”

> Karnofsky says that artificial intelligence could produce a future “like in the Terminator movies” and that “AI could defeat all of humanity combined.” Thus stopping artificial intelligence from doing this is a very high priority indeed.

https://www.currentaffairs.org/news/2022/09/defective-altrui...

He is just giving everyone permission to do bad things by saying a lot of words around it.

samjewell | 3 days ago

> then pivoted again to, ah, it's okay for it to be a Terminator-type entity.

Isn’t that the opposite of what he’s saying? He’s saying it could become that powerful, and given that possibility it’s incredibly important that we do whatever we can to gain more control over that scenario.

drdrek | 3 days ago

Effective Altruism is such a beautiful term for a pretentious Karen who needs to wrap their selfish actions in moral superiority.

It's that perfect blend of "I'm doing what everyone else is doing" and "I'm better than everyone else."

Chef's kiss.

barbarr | 3 days ago

Getting SBF vibes from this. "Earn to give" is an inherently flawed philosophy.

SpaceManNabs | 3 days ago

Effective altruism came from the "rationalist" movement.

It was never about helping poor people.

For some reason, the rationalist movement and its offshoots are really pervasive in Silicon Valley. I don't see it much in other tech cities.

riffraff | 3 days ago

> I generally think it’s bad to create an environment that encourages people to be afraid of making mistakes, afraid of admitting mistakes and reticent to change things that aren’t working

"move fast and break things" ?

freejazz | 3 days ago

"don't hold me liable"

pimlottc | 3 days ago

> > I take significant responsibility for this change.

Empty words. I would like to know one single meaningful way he will be held responsible for any negative effects.

adverbly | 3 days ago

Did this guy actually write this?

Incredibly long and verbose. I'll stop short of accusing him of using an AI to generate slop, but whatever happened to people's ability to make short, strong, simple arguments?

If you can't communicate the essence of an argument in a short and simple way, you probably don't understand it in great depth, and you clearly don't care about actually convincing anybody, because Lord knows nobody is going to RTFA when it's that long.

At best, you're just communicating to academics who are used to reading papers. We need to expect better from these people if we want to actually improve the world. Standards need to be higher.

s1artibartfast | 3 days ago

This is where people go to post long verbose statements.

You can usually find the short version on Twitter.

ozozozd | 3 days ago

Perhaps they didn’t have the time to write a shorter version.

Or the discipline.

Maybe neither.

mock-possum | 3 days ago

This style is in vogue in the LessWrong community.

jplusequalt | 3 days ago

I genuinely believe that website is responsible for a lot of the worst ideas currently permeating the technology sector.

prodigycorp | 3 days ago

Pretty much the intellectual equivalent of looksmaxxing.