

alan-stark | 15 days ago

Can you elaborate on the "mode of peril"? Is it:

(a) Top labs quietly signing deals for military deployment of frontier models in unmanned strike weapons?

(b) Top labs agreeing to license LLMs for social engineering/propaganda ops?

(c) Models that vastly exceed human intelligence and have the capacity to pursue their own agenda (i.e. runaway intelligence)?

(d) Something else?

It looks like the dangers of AGI are overblown (perhaps partly due to grant funding and the ability to gain political traction/investment/competitive advantage), while (a) and (b) are severely underdiscussed. I'd love to get other perspectives.
