heed | 1 year ago

Sam Altman talked a little bit about this in his recent appearance on the All-In podcast [0]. I'm paraphrasing, but his vision is that AI assistants in the near term will be like a senior-level employee: they'll push back when it makes sense to and not just be sycophants.

[0]: https://youtube.com/watch?v=nSM0xd8xHUM

ehnto | 1 year ago

I don't want to paint with too broad a brush, but the role of a manager is generally to trust their team on specifics. So how would a manager be able to spot a hallucination and stop it from informing business decisions?

It's not as bad for domain experts, because it's easier for them to spot the issue. But if your role demands you trust that your team is skilled and truthful, then I see problems occurring.

jimkleiber | 1 year ago

I really wonder how that'll go, because workplaces already seem to limit human communication and emotion to "professional behavior." I'm glad he's thinking about it, and I hope they're able to figure out how to improve human communication so that we can resolve conflict with bots.

In his example (around 21:05), he talks about how the bot could do something if the person wants, but there might be consequences to that action. I think that makes more sense if the bot is acting like a computer that has limits on what it can do. For example, if I ask it to do two tasks that really stretch its computational limits, I'd hope it would let me know. But if it pretends to be a human with human limits, I don't know how much that'd help, unless it were a training exercise.