
agucova | 1 year ago

> My issue with AI safety is that it's an overloaded term. It could mean anything from an LLM giving you instructions on how to make an atomic bomb to writing spicy jokes if you prompt it to do so. It's not clear which safety these regulatory agencies would be solving for.

I think if you look at the background of the people leading evaluations at the US AISI [1], as well as the existing work on evaluations by the UK AISI [2] and METR [3], you will notice that it's much more the former than the latter.

[1]: https://www.nist.gov/people/paul-christiano

[2]: https://www.gov.uk/government/publications/ai-safety-institu...

[3]: https://arxiv.org/abs/2312.11671


nradov | 1 year ago

Anyone who really wants to make an atomic bomb already knows how to make an atomic bomb. The limitations are in access to raw materials and ability to do large scale enrichment.

agucova | 1 year ago

I agree. I’m really more concerned about bioweapons, where it’s generally understood (in security studies) that access to technical expertise is the limiting factor for terrorists. See Al Qaeda’s attempts to develop bioweapons in 2001.