top | item 40957016

throwaway8481 | 1 year ago

> Model outputs are untrusted input.

I think the problem is they're trying to introduce nuance and a narrow path to allow this. They want an acceptable level of risk in using untrusted model output, trading it for the efficiency/productivity gains it will bring, notwithstanding hallucinations.

Generative AI would not have flown in the security theater of yesteryear, but CTOs see productivity multipliers.
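"Untrusted input" here means the usual treatment: validate model output against a strict allowlist before acting on it, exactly as you would with user-supplied data. A minimal sketch of that idea, in Python; the schema, action names, and size limit are all hypothetical, not from any real system:

```python
import json

# Hypothetical allowlist: the only actions the application will perform,
# no matter what the model claims it wants.
ALLOWED_ACTIONS = {"summarize", "translate"}

def parse_model_output(raw: str) -> dict:
    """Treat raw model output as untrusted: parse, validate, reject off-schema."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("model output is not valid JSON")
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    action = data.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action {action!r} not in allowlist")
    text = data.get("text")
    if not isinstance(text, str) or len(text) > 10_000:
        raise ValueError("text field missing, wrong type, or too large")
    # Return only the validated fields, dropping anything else the model added.
    return {"action": action, "text": text}
```

The "narrow path" the parent describes is the allowlist: everything outside it fails closed, and the residual risk lives only in what the validated fields are later used for.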

lolinder | 1 year ago

Right, but that's not a new problem either. We want to allow people to send emails with some acceptably low level of risk that spam will get through. We want an acceptably low risk that our image upload feature will end up hosting CSAM. And we want it while still getting the benefits of allowing our real customers to pay us for the services we offer. Businesses have been figuring out the balance of risk:reward for as long as infosec has been a concept.

usea | 1 year ago

> CTOs see productivity multipliers

The CTOs are hallucinating as much as the LLMs are.

marcosdumay | 1 year ago

The GP didn't state the multiplier's value. Those things absolutely are productivity multipliers...