kofdai | 1 month ago
The system:
• runs locally
• uses small, explicit rule sets
• falls back to LLMs only when structure is missing
• marks all mined output as provisional
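A minimal sketch of that fallback pipeline, assuming a rules-first design: each rule either matches structured input or passes, and only unmatched input goes to a model, with its output flagged provisional. The `Fact` type, rule set, and `llm_extract` stub are all illustrative assumptions, not the actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Fact:
    value: str
    source: str        # "rule" or "llm"
    provisional: bool  # model-mined output is always provisional

Rule = Callable[[str], Optional[str]]

# Small, explicit rule set: each rule returns a value or None.
RULES: list[Rule] = [
    lambda t: t.split("=", 1)[1].strip() if "=" in t else None,
]

def llm_extract(text: str) -> str:
    # Hypothetical stand-in for a local LLM call, used only
    # when no rule fires on the input.
    return f"llm-guess({text!r})"

def extract(text: str) -> Fact:
    for rule in RULES:
        hit = rule(text)
        if hit is not None:
            return Fact(hit, source="rule", provisional=False)
    # No structure found: fall back to the model and mark the result.
    return Fact(llm_extract(text), source="llm", provisional=True)
```

The point of the split is auditability: rule-derived facts are deterministic and testable, while model-derived facts stay quarantined until confirmed.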
The long-term idea is that, as the database of rule-confirmed facts grows, dependence on large models shrinks.
This is not about outperforming GPT-4-class models. It’s about making reasoning transparent, testable, and cheap enough for individuals.