top | item 37655755

airgapstopgap | 2 years ago

Interested in your logic: what did you like about pre-LLM AGI? The "maximize utility function at any cost" feature? The single-minded focus on beating people at games?

It's quite terrifying how, just as we've found an apparently very easy path to baking our preferences and quirks into intelligent systems, people have become very "responsible" and concerned for the survival of the human race, parroting alarmist rhetoric that predates not only LLMs but even the early RL successes of DeepMind, and citing only the vague shower thoughts of Bostrom and similar non-technical ilk. Say what you want about LLMs, but there's zero credible reason to perceive them as a riskier approach!

famouswaffles | 2 years ago

LLMs represent a very different path from the one most people assumed for decades artificial intelligence would take.

They're not the rigid, logic- and rule-bound systems that struggle with human emotions. By all accounts, GPT-4 is as emotionally competent as it is at anything else.

I suppose there's something unsettling about building Super Intelligence in humanity's image.

thfuran | 2 years ago

It’s entirely rule-bound. All it does is draw tokens from a statistical distribution. What people mostly don’t like to contemplate is that they too are entirely rule-bound: Brains do nothing but follow the laws of physics, proceeding from one state to the next on the basis of these rules alone.
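
The "draw tokens from a statistical distribution" step the comment describes can be sketched in a few lines. This is a simplified illustration, not any model's actual implementation: the `logits` dict and token strings are made-up examples, and real LLMs compute scores over tens of thousands of token ids with a neural network rather than a hand-written table.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Draw one token from the distribution implied by raw scores.

    `logits` maps token -> raw score (hypothetical values here).
    Softmax with temperature converts scores to probabilities,
    then one token is sampled -- a purely mechanical, rule-bound step.
    """
    scaled = {t: s / temperature for t, s in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(s - m) for t, s in scaled.items()}  # numerically stable softmax
    total = sum(exps.values())
    probs = {t: e / total for t, e in exps.items()}
    r = random.random()
    cum = 0.0
    for token, p in probs.items():
        cum += p
        if r < cum:
            return token
    return token  # fallback for floating-point rounding

# Made-up candidate continuations with made-up scores
logits = {"the": 2.0, "a": 1.0, "cat": 0.1}
print(sample_next_token(logits, temperature=0.7))
```

Lower temperatures sharpen the distribution toward the highest-scoring token; higher temperatures flatten it, which is why the same prompt can yield different completions.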