top | item 45530413

bckmn | 4 months ago

I think asking your questions in that form is akin to the "sorting prompts" that I learned about from https://mikecaulfield.substack.com/p/is-the-llm-response-wro... and have been using successfully when writing code (e.g. [as a Claude Code slash command](https://www.joshbeckman.org/notes/936274709)).

Essentially, you're asking the LLM to do research and categorize/evaluate that research instead of just giving you an answer. The "work" of accessing, summarizing, and valuing the research yields a more accurate result.
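A sorting prompt in this spirit might be assembled like the sketch below. This is purely illustrative (the category names and step wording are my own, not bckmn's actual slash command): the point is that the prompt forces the model to enumerate and rank its sources before answering.

```python
# Illustrative sketch of a "sorting prompt": rather than asking the LLM
# for an answer directly, ask it to gather relevant claims, sort them by
# how well-supported they are, and only then conclude.
# The category names and step wording here are hypothetical examples.

def build_sorting_prompt(question: str) -> str:
    return "\n".join([
        f"Question: {question}",
        "",
        "Before answering, do the following:",
        "1. List the distinct claims or sources relevant to this question.",
        "2. Sort them into categories: well-supported, contested, unsupported.",
        "3. For each item, note briefly why it landed in that category.",
        "4. Only then answer, grounded in the well-supported items.",
    ])

prompt = build_sorting_prompt("Is this library's API thread-safe?")
print(prompt)
```

The extra structure is the point: the model spends tokens on accessing and valuing the evidence, which (per the article above) tends to yield a more accurate final answer than a one-shot question.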

consumer451 | 4 months ago

Thank you so much for sharing this. I, and I'm sure many others, have been thinking about these things a lot these days. It's great to see how someone else is coming at the problem.

I love the grounding back to ~“well even a human would be bad at this if they did it the current LLM way.”

Bringing things back to ground-truth human processes is something that is surprisingly unnatural for me to do. I know better, and I preach doing this, and I still have a hard time doing it.

I know far better, but apparently it is still hard for me to internalize that LLMs are not magic.

cyanydeez | 4 months ago

Unfortunately, the sociopath MBAs are still generating a bubble based on instant feedback regardless of underlying value.