
next_xibalba | 2 days ago

> you could make an automated system that just signs postcards and, if you give it enough access, it could wipe out the human race.

I mean this sincerely. You really ought to stop reading Bostrom and Yudkowsky. It is very hard to take this kind of hysteria seriously.

> Inevitability is not an argument, and I won't humor it.

It is and I don't care what you will or won't humor. Just answer me this: how will you convince all the other countries of the world not to build terminators? The leading example of "it is inevitable" is, of course, China. They are already testing and deploying semi-autonomous robots throughout their national security apparatus. If your answer is "Just because they do it doesn't mean anyone else should," then you're not to be taken seriously on this topic.

> killing and eating children

I'd really like to know what convoluted scenario you could conjure in which one would argue that killing and eating children is inevitable.


array_key_first | 21 hours ago

Saying something is hysteria is also not an argument. Again, it's just intellectually lazy. Just because you refuse to take problems seriously doesn't mean they cease to exist; it just means you lack critical thinking.

And as for killing and eating children, it's easy: starvation. If you're hungry enough, you'll eat children. All it takes is a supply chain disruption, which is far more likely than nuclear war.

So why not eat the children now? It's gonna happen anyway.

It's true that I am jumping the gun here. We don't need an apocalypse for AI to suck ass. It sucks ass right now and is causing massive problems. We should probably focus on that.

next_xibalba | 6 hours ago

Saying something is intellectually lazy is not an argument. It's just intellectually lazy. Just because you refuse to take China's unrestrained development of autokill bots seriously doesn't mean they won't do it; it just means you lack critical thinking.

Perhaps you could write Xi a nicely worded letter informing him that he really shouldn't let his military-industrial complex develop autokill bots. When he inevitably realizes the error of his ways (mostly due to you accusing him of intellectual laziness), he'll no doubt shut down autokill bot development. Taiwan and India will rest easy and praise your hardworking intellect. Then we can shift all societal resources to focusing on LLMs and why you think they suck.