I think asking your questions in that form is akin to the "sorting prompts" I learned about from https://mikecaulfield.substack.com/p/is-the-llm-response-wro... and have been using successfully when writing code (e.g. [as a Claude Code slash command](https://www.joshbeckman.org/notes/936274709)).

Essentially, you're asking the LLM to do research and to categorize/evaluate that research instead of just giving you an answer. The "work" of accessing, summarizing, and valuing the research yields a more accurate result.
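To make that concrete, here's a minimal sketch of what a sorting prompt might look like. The `build_sorting_prompt` helper, the category names, and the exact wording are all illustrative assumptions on my part, not taken from the linked article or slash command:

```python
# Sketch of a "sorting prompt": instead of asking for an answer directly,
# wrap the question so the model must gather claims/sources and sort them
# into categories before synthesizing. All wording here is illustrative.

def build_sorting_prompt(question: str) -> str:
    """Wrap a question so the model researches and categorizes first."""
    return (
        f"Question: {question}\n\n"
        "Before answering, do the following:\n"
        "1. List the relevant sources or claims you can find.\n"
        "2. Sort each one into: SUPPORTS, CONTRADICTS, or UNCLEAR.\n"
        "3. For each, briefly note how credible it is and why.\n"
        "4. Only then synthesize a final answer from the sorted evidence.\n"
    )

prompt = build_sorting_prompt("Is this API deprecated in v2?")
print(prompt)
```

The point is that the categorization steps force the model to surface and weigh its evidence before committing to an answer, rather than producing a confident answer first and rationalizing it afterward.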
consumer451|4 months ago
I love the grounding back to ~“well even a human would be bad at this if they did it the current LLM way.”
Bringing things back to ground-truth human processes is surprisingly unnatural for me. I know better, and I preach doing this, yet I still have a hard time doing it myself.
Apparently it is still hard for me to internalize that LLMs are not magic.
cyanydeez|4 months ago