kofdai | 2 months ago
I've spent the last year building AXIS, a system that fundamentally reframes how we use language models in reasoning tasks.
The core insight: LLMs are unreliable as reasoners, but powerful as knowledge extractors. So why not separate these concerns?
[Current Problem]
Standard LLM reasoning suffers from "stochastic fog"—the probabilistic nature of token selection causes context drift and hallucinations. The model becomes a single point of failure.
[The AXIS Approach]
Instead of: LLM → Output
AXIS does: Input → [Phase 1: Mining] → Lattice Simulation → [Phase 2: Mining] → Verified Output
Key features:
1. Rejection Loop
Every LLM output is intercepted by a Verifier (SymPy/NumPy). If the output contradicts a mathematical or physical constraint, it is rejected, the LLM is rebooted, and the question is asked again. This repeats until the output is consistent.
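The loop above can be sketched in a few lines. This is a minimal illustration, not the repo's actual code: the constraint check uses SymPy (as the post names), while `ask_llm` is a hypothetical stand-in for the real model call and prompt format.

```python
import sympy as sp

def verify(candidate: str, constraint) -> bool:
    """Reject any output that contradicts a symbolic constraint.
    The 'LLM output' is parsed as an expression and compared
    against a SymPy target expression."""
    try:
        expr = sp.sympify(candidate)
    except (sp.SympifyError, SyntaxError):
        return False  # unparseable output is rejected outright
    return bool(sp.simplify(expr - constraint) == 0)

def rejection_loop(ask_llm, constraint, max_trials=10):
    """Re-query until the verifier accepts (a sketch of the
    post's rejection loop; 'ask_llm' is a stub for the model)."""
    for _ in range(max_trials):
        answer = ask_llm()
        if verify(answer, constraint):
            return answer
    raise RuntimeError("no consistent answer within budget")
```

In practice the stub would wrap a full reboot-and-query of the model; the point is that the verifier, not the LLM, decides when the loop terminates.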
2. Physical Context Purge
After each trial: torch.mps.empty_cache() + gc.collect()
This severs the "contextual drift" that fuels hallucinations, making each trial statistically independent.
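A purge helper along these lines would sit between trials. This sketch guards the torch calls so it also runs where torch or the MPS backend is absent; the function name and guards are my additions, not the post's code.

```python
import gc

def purge_context() -> None:
    """Release cached MPS memory and force garbage collection so
    the next trial starts cold (a sketch of the post's 'Physical
    Context Purge'; the torch calls are skipped when torch or the
    MPS backend is unavailable)."""
    try:
        import torch
        if torch.backends.mps.is_available():
            torch.mps.empty_cache()  # drop cached Metal allocations
    except ImportError:
        pass  # torch not installed; gc alone still breaks Python refs
    gc.collect()  # collect cyclic garbage from the finished trial
```

Note the purge frees memory and Python-level references; true statistical independence additionally requires that no conversation history is carried into the next prompt.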
3. Deterministic Verification
The final output is assembled, not generated. Raw verified data → hard-coded templates → no AI prose = 0% hallucination.
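Template assembly is the simplest piece to show. The field names below are illustrative, not taken from the repo; the point is that only verified values are interpolated into fixed text, so the assembly step cannot invent prose.

```python
def assemble_report(verified: dict) -> str:
    """Assemble the final answer from verified values via a fixed
    template (field names are hypothetical). No free-form
    generation happens here, so the wording cannot drift."""
    return (
        "Solution: a = {a}, b = {b}, c = {c} "
        "(verified in {trials} trial(s))."
    ).format(**verified)
```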
4. 5D Semantic Lattice (optional, but powerful)
Each logical node occupies a 5D space:
- s1: Physical Actuality (does data align with constants?)
- s2: Logical Necessity (can it be derived from axioms?)
- s3: Contextual Consistency (matches the context stack?)
- s4: Ethics (passes safety guardrails?)
- s5: Empirical History (aligns with past confirmations?)
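One way to make the lattice concrete is a scored node with an acceptance gate. The dimension names come from the post; the score range, the min-over-dimensions gate, and the 0.5 floor are assumptions of mine for illustration.

```python
from dataclasses import dataclass

@dataclass
class LatticeNode:
    """One logical node scored along the five AXIS dimensions
    (scores in [0, 1]; thresholds are illustrative)."""
    s1_actuality: float    # alignment with physical constants
    s2_necessity: float    # derivability from axioms
    s3_consistency: float  # match with the context stack
    s4_ethics: float       # safety-guardrail score
    s5_history: float      # agreement with past confirmations

    def accepted(self, floor: float = 0.5) -> bool:
        # A node survives only if every dimension clears the floor;
        # a single weak axis (e.g. ethics) vetoes the node.
        return min(self.s1_actuality, self.s2_necessity,
                   self.s3_consistency, self.s4_ethics,
                   self.s5_history) >= floor
```

A min-gate (rather than a weighted sum) matches the spirit of hard rejection: no dimension can be traded off against another.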
[Results]
On the Complex Plane Coefficient Problem:
- Standard GPT-4: generates plausible but incorrect coefficients
- AXIS + Gemma-2-2b: rejects bad answers 4 times, then converges to a=0, b=0, c=0
- Reproducibility: 100% (10 trials, identical output)
- No GPU needed; runs on a Mac
[What I'm Looking For]
1. Is the 5D Lattice approach sound? Are the five dimensions actually orthogonal?
2. What is the computational overhead of repeated rejection?
3. Are there existing systems that do something similar?
4. What are the failure cases?
Code and minimal examples are in the repo. Happy to discuss.
HuggingFace: https://huggingface.co/kofdai/AXIS-Sovereign-Logic-Engine
Looking forward to feedback.