top | item 45846085

stkr | 3 months ago

This kind of advice is conceptual, redundant, and hard to follow. The problem is that many people make their prompts too specific, like an explicit SQL SELECT statement (e.g., SELECT data = 'xxx' FROM yyy;), which overly narrows the candidate space and surfaces only the most average, mundane information. After nearly a year of use, I've found LLMs make it much easier to extract the information you want if you deliberately structure your prompts like SQL anti-patterns, within the range where processing still returns in real time:

1. Make the column candidates and the FROM clause as ambiguous as possible.

2. Provide WHERE- and GROUP BY-like conditions up front as prior information.

3. Because those WHERE- and GROUP BY-like conditions are strongly constrained by the context window, give them an efficient data structure and compress them as much as possible.
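The three rules above could be sketched as a small prompt-assembly helper. This is a minimal, hypothetical illustration, not code from the comment: the function name, parameters, and prompt layout are all assumptions chosen to make the analogy concrete.

```python
import json

def build_prompt(question, facts, constraints):
    """Hypothetical sketch of the three 'SQL anti-pattern' prompting rules:
    1. Leave the 'SELECT list' open-ended (a broad question, no fixed schema).
    2. State WHERE/GROUP BY-like conditions up front, as prior information.
    3. Compress that prior information into a compact structure.
    """
    # Rule 3: pack conditions into compact JSON instead of verbose prose.
    prior = json.dumps(
        {"facts": facts, "constraints": constraints},
        separators=(",", ":"),
    )
    # Rule 2: conditions come first, before the question.
    # Rule 1: the question itself stays deliberately open-ended.
    return f"Context: {prior}\nGiven the above, {question}"

prompt = build_prompt(
    "what patterns stand out? Answer freely.",
    facts=["daily active users dropped 12% in March"],
    constraints=["focus on mobile", "ignore seasonal effects"],
)
print(prompt)
```

The point of the sketch is the shape, not the wording: the compressed context plays the role of the WHERE clause, while the open-ended question avoids over-specifying the "SELECT list" and collapsing the answer toward the median.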


_phnd_ | 3 months ago

Yeah, this is a sharp take — the SQL analogy nails how over-specifying prompts kills creativity and pushes outputs toward the median. What’s missing, though, is the emotional side of prompting. It’s not just about keeping things ambiguous, it’s about feeding the model something alive enough that it can reflect something real back. Mix that technical precision with human messiness and you start getting insight, not just casting a wider net.