ultimateking | 8 months ago | on: Show HN: TXT OS – Open-Source AI Reasoning, One Plain-Text File at a Time
ultimateking's comments
ultimateking | 8 months ago | on: Show HN: WFGY – A reasoning engine that repairs LLM logic without retraining
Great information!
ultimateking | 8 months ago | on: Show HN: WFGY – A reasoning engine that repairs LLM logic without retraining
I went through the structure and found the semantic correction idea pretty intriguing.
Can you explain a bit more about how WFGY actually achieves such improvements in reasoning and stability? Specifically, what makes it different from just engineering better prompts or using more advanced LLMs?
ultimateking | 8 months ago | on: Show HN: WFGY – A reasoning engine that repairs LLM logic without retraining
Skimmed through it briefly — seems like a lot of thought went into the structure. Downloaded the PDF, will give it a deeper read tonight.
1. How does TXT OS store its “Semantic Tree Memory” between sessions?
2. When `kbtest` detects a hallucination, what happens next?
3. Any idea of the speed impact on smaller models like LLaMA-2-13B?
Thanks for sharing—excited to try it out!