
Show HN: Roampal – a local memory layer that learns from outcomes

1 point | roampal | 3 months ago | github.com

Matthew McConaughey was on Joe Rogan two months ago describing the exact AI he wanted: a private model trained only on his own writings and experiences. I built it — and added outcome-based learning.

On 130 adversarial scenarios designed so the query semantically matches bad advice better than good advice:

→ plain vector search: 0–3% correct

→ Roampal: 100% correct

Efficiency: 63% fewer tokens — retrieves 1 outcome-verified result vs RAG's top-3 semantic matches.

Core mechanism

• AI marks outcome → success +0.2, failure −0.3 (explicit or auto-detected from conversation)

• New memories: 70% embedding / 30% outcome score

• Proven memories (5+ uses): 40% embedding / 60% outcome score

• Over time, "sounds right" gets demoted, "actually worked" gets promoted
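The mechanism above can be sketched in a few lines. This is an illustrative reconstruction from the numbers in this post, not Roampal's actual code; the function and field names are hypothetical.

```python
# Hypothetical sketch of outcome-weighted retrieval scoring.
# Numbers (+0.2 / -0.3, 70/30 vs 40/60 blend, 5-use threshold) come from
# the post; everything else (names, dict layout) is illustrative.

def update_outcome(memory: dict, success: bool) -> None:
    """Adjust a memory's outcome score: +0.2 on success, -0.3 on failure."""
    memory["outcome"] += 0.2 if success else -0.3
    memory["uses"] += 1

def rank_score(memory: dict, similarity: float) -> float:
    """Blend embedding similarity with the outcome score.

    New memories:             70% embedding / 30% outcome.
    Proven memories (5+ uses): 40% embedding / 60% outcome.
    """
    if memory["uses"] >= 5:
        w_embed, w_outcome = 0.4, 0.6
    else:
        w_embed, w_outcome = 0.7, 0.3
    return w_embed * similarity + w_outcome * memory["outcome"]

# Example: a "sounds right" memory that keeps failing eventually loses to a
# less-similar memory that actually worked.
plausible = {"outcome": 0.0, "uses": 0}
proven = {"outcome": 0.0, "uses": 0}
for _ in range(5):
    update_outcome(plausible, success=False)  # outcome drifts to -1.5
    update_outcome(proven, success=True)      # outcome climbs to +1.0

print(rank_score(plausible, similarity=0.9))  # 0.4*0.9 + 0.6*(-1.5) = -0.54
print(rank_score(proven, similarity=0.6))     # 0.4*0.6 + 0.6*(+1.0) =  0.84
```

This is how a memory with higher raw semantic similarity (0.9 vs 0.6) can still rank below one with a verified track record, which is the whole point of the adversarial benchmark.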

Key difference from Mem0/Zep

They update on relevance/consistency. Roampal updates on real outcomes.

Reproducible results (JSON in repo):

                Plain Vector   Roampal
Finance (100)   0%             100%
Coding (30)     3.3%           100%   ← p=0.001, Cohen's d=7.49

Learning curve: 58% → 93% accuracy as memories accumulate (p=0.005, d=13.4)

I'm not a programmer — psychology degree, MBA, day job managing $6.5M in contracts. Nine months of nights and weekends with only Cursor, Claude, and copy-paste.

100 % local · runs offline with Ollama, LM Studio, or Claude Desktop · MIT license · no telemetry · no signup

GitHub (full benchmarks + all 130 adversarial scenarios):

https://github.com/roampal-ai/roampal

Website + demo video:

https://roampal.ai

Happy to answer technical questions or take brutal feedback in the comments.

4 comments


talismehedi|3 months ago

14 hours ago, I posted an idea: https://www.linkedin.com/posts/mehedimdhasan_though-commerci...?

Then I searched to see whether anyone had already built it.

And I found this post, made just a day earlier.

Best of luck mate, you nailed it.

roampal|3 months ago

Dude, you literally wrote the exact motivation paragraph for Roampal right around the same time I posted this.

Thorndike's Law of Effect is the entire reason I built the outcome scoring (+0.2 for worked, −0.3 for failed) and the shift in weighting toward proven memories. You're not half-baked — you're 100% right. I just happened to ship the PoC first.

Would love to hear your take on the cold-start problem and whether those reward magnitudes feel right in practice. Shooting you a connection request on LinkedIn if you want to swap notes.