top | item 42704286


sreejithr | 1 year ago

How can a language model pose a national security risk?


anon373839|1 year ago

It cannot. Remember when GPT-2 was too dangerous to release? And when industry “leaders” were begging for a moratorium on models stronger than GPT-4?

The idea that this technology carries existential risk is how OpenAI and others generate the hype that generates investment.

lostmsu|1 year ago

Well, would you say the Internet has turned to even more shit now that the majority of content is AI-generated?

jofer|1 year ago

Less advanced things have been labeled a national security risk.

It's currently quasi-illegal in the US to open-source tooling that can be used to rapidly label and train a CNN on satellite imagery. That's export-controlled due to some recent-ish changes. The defense world thinks about national security in a much broader sense than the tech world does.

See https://www.federalregister.gov/documents/2020/01/06/2019-27...

AyyEye|1 year ago

Governments everywhere are racing to attach them to weapons.

sreejithr|1 year ago

Genuine question. Regarding language models specifically, would there really be value in strapping them onto weapons?