
thomasjudge | 8 months ago

Along these lines I am sort of skimming articles/blogs/websites about Lightcone, LessWrong, etc., and I am still struggling with the question: what do they DO?

Mond_ | 8 months ago

Look, it's just an internet community of people who write blog posts and discuss their interests on web forums.

Asking "What do they do?" is like asking "What do Hackernewsers do?"

It's not exactly a coherent question. Rationalists are a somewhat tighter group, but in the end the point stands. They write and discuss their common interests, e.g. the progress of AI, psychiatry, Bayesianism, thought experiments, etc.

FeepingCreature | 8 months ago

Twenty years or so ago, Eliezer Yudkowsky, a former proto-accelerationist, realized that superintelligence was probably coming, was deeply unsafe, and that we should do something about that. Because he had a very hard time convincing people of this (to him obvious) fact, he first wrote a very good blog about human reason, philosophy, and AI, in order to fix whatever was going wrong in people's heads that caused them not to understand that superintelligence was coming, and so on. The group of people who read, commented on, and contributed to this blog are called the rationalists.

(You're hearing about them now because these days it looks a lot more plausible than it did in 2007 that Eliezer was right about superintelligence, so the group of people who've beaten the drum about this for over a decade now forms the natural nexus around which the current iteration of project "we should do something about unsafe superintelligence" is congealing.)

astrange | 8 months ago

> that superintelligence was probably coming, was deeply unsafe

Well, he was right about that. Pretty much all the details were wrong, but you can't expect that much, so it's fine.

The problem is that it's philosophically confused. Many things are "deeply unsafe", the main example being driving or being anywhere near someone driving a car. And yet it turns out to matter a lot less, and matter in different ways, than you'd expect if you just thought about it.

Also see those signs everywhere in California telling you that everything gives you cancer. It's true, but they should be reminding you to wear sunscreen.

throwaway314155 | 8 months ago

I don't know - the level of seriousness with which they discuss alignment issues just seems so out of touch with the realities of large language models, and the notion of a superintelligence being "closer than ever" gives way too much credit to the capabilities (or lack thereof) of LLMs.

A lot of it seems rooted more in Asimov-inspired, stimulant-fueled philosophizing than in any kind of empirical or grounded observation.