
neuralkoi | 1 month ago

> The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking.

If current LLMs are ever deployed in systems harboring the big red button, they WILL, somehow, press that button.

arthurcolle | 1 month ago

The US MIC is already planning to integrate fucking Grok into military systems. No comment.

Havoc | 1 month ago

Including classified systems. What could possibly go wrong?

blibble | 1 month ago

The US is going to stop the Chinese by mass-producing illegal pornography?

groby_b | 1 month ago

fwiw, the same is true of humans, which is why there's a whole lot of process and red tape around that button. We know how to manage risk. We can choose to do that for LLM usage, too (a minimal gate is sketched below).

If instead we believe in fantasies of a single all-knowing machine god that is 100% correct at all times, then... we really just have ourselves to blame. Might as well just have spammed that button by hand.
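A minimal sketch of the kind of process gate groby_b is describing, assuming a human operator in the loop. All names here are hypothetical illustrations, not any specific framework's API:

```python
# Minimal sketch of a human-in-the-loop gate around a dangerous LLM-proposed
# action. All names are hypothetical; this is not any specific framework's API.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str       # identifier of the tool/action the model wants to invoke
    rationale: str  # the model's stated reason for invoking it

# Actions that must never run without explicit human sign-off.
DANGEROUS_ACTIONS = {"press_big_red_button"}

def human_approves(action: ProposedAction) -> bool:
    """Block until a human operator explicitly approves or denies the action."""
    print(f"Model proposes: {action.name}")
    print(f"Rationale: {action.rationale}")
    return input("Approve? Type 'yes' to proceed: ").strip().lower() == "yes"

def execute(action: ProposedAction) -> None:
    """Run the action only if it is safe or a human has signed off."""
    if action.name in DANGEROUS_ACTIONS and not human_approves(action):
        print("Denied by operator; nothing executed.")
        return
    print(f"Executing {action.name}...")  # the real side effect would go here

if __name__ == "__main__":
    execute(ProposedAction("press_big_red_button",
                           "The model assumed this was what you wanted."))
```

The point is just that the dangerous branch is guarded by process, not by trusting the model's judgment.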