Nice. I think in the future this could be way better if everything were local and didn't require an API key. As far as I can tell, Mem0 is a fancy retrieval system, so it could probably work pretty well locally with simpler models.
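To make the "fancy retrieval" framing concrete, here is a toy, fully local sketch of retrieval-backed memory — bag-of-words cosine similarity standing in for a real embedding model and vector store. This is an illustration of the idea only, not Mem0's actual implementation:

```python
import math
from collections import Counter

def _vec(text: str) -> Counter:
    """Bag-of-words term counts; a local embedding model would replace this."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class LocalMemory:
    """Toy retrieval-based memory: store notes, recall the most similar ones."""

    def __init__(self) -> None:
        self.notes: list[str] = []

    def add(self, note: str) -> None:
        self.notes.append(note)

    def recall(self, query: str, k: int = 3) -> list[str]:
        qv = _vec(query)
        ranked = sorted(self.notes,
                        key=lambda n: _cosine(qv, _vec(n)),
                        reverse=True)
        return ranked[:k]
```

Everything runs offline: "memory" is just notes plus a similarity search, which is why simpler local models could plausibly do the job.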
Yes, you can run Mem0 locally since we have open-sourced it, but it would need some more work to get a server up and running that can interact with Claude.
GitHub: https://github.com/mem0ai/mem0
Instead of long-term memory, I'd be happy if it had short-term reliability. I've lost count of the number of times this week that Claude failed to process prompts because it was down.
Completely agree on the reliability front... but I don't think mentioning it on some guy's third-party GitHub project is going to help all that much with that.
I've noticed a bug where long conversations time out on new sends on mobile because of processing time. In reality the prompt is sent and responded to; the response just doesn't show up until you leave and return to the conversation.
I always wonder what the heck people are thinking when they invent some cool AI feature and implement it for one specific LLM, since we already have the technology and libraries to make almost anything you want to do work with almost any LLM. (For you pedantic types: feel free to point out the exceptions.)
Personally, I use LangChain/Python for this; that way, any new AI feature I create works across ALL LLMs, and my app just lets the end user pick the LLM they want to run on. Every feature I have works on every LLM.
It only supports Chrome for now. I built this quickly, in a few hours, to solve my own problem. Happy to accept contributions to the repository if someone builds it.
twothamendment|1 year ago
ggnore7452|1 year ago
deshraj|1 year ago
Shameless plug: we have been working at Mem0 on solving the long-term memory problem for LLMs. GitHub: https://github.com/mem0ai/mem0
imranq|1 year ago
deshraj|1 year ago
chipdart|1 year ago
Tostino|1 year ago
kromem|1 year ago
pigeons|1 year ago
quantadev|1 year ago
BoorishBears|1 year ago
Doubly baffling since the underlying project does support other LLMs and this is clearly just a showcase piece.
decide1000|1 year ago
deshraj|1 year ago
shmatt|1 year ago
unknown|1 year ago
[deleted]