(no title)
cracki | 13 days ago
I've been wondering for years how to make whatever LLM ask me stuff instead of just filling holes with assumptions and sprinting off.
User-configurable agent instructions haven't worked consistently for me. The vendor's system prompt may even contain instructions not to ask questions.
Sure, there's a practical limit to how much clarification it ought to request, but never asking at all is just annoying.
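One way to experiment with this is to prepend your own system message that explicitly rewards asking over guessing. A minimal sketch, assuming a chat-completions-style messages list; the prompt wording here is my own guess at phrasing, not a proven recipe:

```python
# Sketch: wrap every user query with a system message that nudges the model
# to surface its assumptions and ask about the load-bearing ones.
# The exact wording is an assumption, not a tested recipe.
CLARIFY_PROMPT = (
    "Before answering, list any assumptions you would otherwise make. "
    "If any of them would materially change the answer, ask the user "
    "about them instead of guessing."
)

def build_messages(user_query: str) -> list[dict]:
    """Return a messages list with the clarification-seeking system prompt first."""
    return [
        {"role": "system", "content": CLARIFY_PROMPT},
        {"role": "user", "content": user_query},
    ]

msgs = build_messages("Write a parser for the log format.")
print(msgs[0]["role"])  # system
```

Whether the model honors it still varies, which matches the "haven't worked consistently" experience above.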
Nition | 13 days ago
- Ask question
- Get answer
- Go back and rewrite the initial question to include clarification for the thing the AI got wrong