top | item 46867036


preston-kwei | 27 days ago

Yes on the satire. This post made me laugh :).

I agree with this framing a lot, especially the idea that judgment is the bottleneck.

In my experience building Persona, an AI scheduling assistant, the most useful role for humans isn't to be in the loop at all times. LLMs are terrible at making judgment calls, especially when the right choice depends on a specific user's priorities and confidence is low. Yet even at low confidence, the LLM still has to make a guess.

I think an interesting approach would be to let LLMs ask the user a question when they hit a certain level of uncertainty. The answers could come directly from the human, or be inferred as the user uses the product more.
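The gating idea above can be sketched roughly like this; the names, threshold, and example values here are illustrative assumptions, not any real product's API:

```python
# Hypothetical sketch of confidence-gated clarification: when the model's
# confidence falls below a cutoff, return a question for the user instead
# of silently guessing. Threshold and names are assumptions for illustration.
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; tune per product


@dataclass
class Decision:
    action: Optional[str]    # the action to take, if confident enough
    question: Optional[str]  # a clarifying question to surface otherwise


def decide(action: str, confidence: float, question: str) -> Decision:
    """Act when the model is confident; otherwise ask the user."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(action=action, question=None)
    return Decision(action=None, question=question)


# Low confidence: surface a question rather than acting on a guess.
d = decide("book 9am slot", 0.4, "Do you prefer morning or afternoon meetings?")
print(d.question is not None)  # True
```

The same `Decision` record could also feed a preference store, so that repeated answers let future calls clear the threshold without asking again.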

That feels more scalable than fully blocking human-in-the-loop queues, and more honest than pretending the model already knows the user's preferences.
