item 35559547
saas_sam | 2 years ago

Many serious, pro-technology scientists, philosophers, and science fiction writers have been publicly worried about AI for decades and people are now acting like they're all suddenly fools or have nefarious motivations. Here's a thought: those who are uncritical of AI have far more of an incentive to ignore and downplay the risks!

You would think 3 years into the fallout of a possibly lab-created virus (or, if you don't believe that, certainly a poorly-managed response to a natural one) people would be a little, I don't know, concerned about human institutions' ability to handle anything at all that is massive and consequential in scope?

I would understand if AI proponents were putting out material that demonstrated they take critics' concerns seriously. But all I'm seeing is the opposite: a flippant disregard, as if we're all stupid for caring. Forgive me if my confidence is not through the roof right now.

kubanczyk | 2 years ago

A sensible approach, one that is disappearing from LLM-related HN threads. (Going away to a designated area, I hope.)

But don't you see that you are preaching to the same set of outdated "human institutions"? HN is also a part of it (if you perused it in early 2020).