Fixed! BSL (to my understanding) is a copy of HashiCorp's license text, and since it's a 'HashiCorp document' it had their title on it.
However, someone earlier today put me onto the concept of the AGPL, so I changed MIRA over to it. The AGPL still aligns with my overall intent of protecting my significant effort from someone coming in and Flappy Bird-ing it, while still making it freely available to anyone who wants to access it, modify it, anything.
We also had this recently with Arduino. I don't understand why companies go that way. To me it is not an open source licence; it is a closed source business licence, just with different names.
I tried making something similar a while ago, and the main problem was that long-term memory makes it easy to move the AI into a bad state where it over-fixates on something (context poisoning), or it decides to refuse to talk to me entirely. So in the end I added a command that wipes out all memory, and ended up using it all the time.
Maybe I was doing it wrong. The question is: how do you prevent the AI from falling into a corrupt state from which it cannot get out?
I use a two-step generation process that avoids both memory explosion in the context window and the one-turn-behind problem.
When a user sends a message I:
generate a vector of the user message ->
pull in semantically similar memories ->
filter and rank them ->
send an API call with the memories 'pinned' from the last turn plus the top 10 memories just surfaced; this first API call's job is to intelligently pick the genuinely worthwhile memories and 'pin' them until the next turn ->
make the main LLM call with an up-to-date, thinned list of memories.
I can't say with confidence that this is ~why~ I don't run into the model getting super flustered and crashing out, though I'm familiar with what you're talking about.
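Roughly, the retrieve-and-pin step described above could be sketched like this (a minimal sketch under my own assumptions: `cosine`, the memory dicts, and `surface_candidates` are stand-ins, not MIRA's actual code):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def surface_candidates(query_vec, memories, pinned, top_k=10):
    """Rank stored memories against the user-message vector and combine
    them with last turn's pinned memories. A cheap first LLM call would
    then decide which candidates stay pinned for the next turn, before
    the main call runs with the thinned list."""
    ranked = sorted(memories, key=lambda m: cosine(query_vec, m["vec"]), reverse=True)
    return pinned + ranked[:top_k]
```

The point of the two calls is that the expensive main call only ever sees a curated shortlist, not everything the vector search dredged up.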
It doesn't seem to work very well: I have to coerce it into creating memories, and even then it loses track of them or fails to create memories altogether. I want a single thread where I can talk about multiple different social interactions in my life, but it fails to keep track of even a single storyline, and fails entirely to save anything from the second storyline.
No, I wish. That would be really cool functionality, but to my knowledge it is not possible. I could be wrong, though, and would be more than happy to implement that support if someone gives me the information needed to integrate it.
Just tested the demo; it is really great. However, image attachments on the mobile version don't upload or process anything at the moment?
Domaindocs is a nice no-DB solution and an easy thing, but I've got some issues with it.
I create the domaindoc, manually add something inside (a list of friends: name - description), and enable it. Later I ask what I put inside, or who X is, and I get the correct output. But when I ask it to replace the word X with another, it shows me what it should be, says it's done and completed, but does not edit the actual domaindoc file.
Are you running it locally or the hosted version? I ask because Anthropic models are really good about not lying that they executed a tool call, but with another provider/model they sometimes lie to your face.
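One way to sidestep the "model says it edited the file but didn't" failure mode is for the tool itself to perform the edit and verify it landed, instead of trusting the model's narration. A hypothetical sketch (the function name and structure are mine, not MIRA's actual domaindoc tool):

```python
from pathlib import Path

def replace_in_domaindoc(path, old, new):
    """Perform the replacement inside the tool, then re-read the file to
    confirm the change actually persisted before reporting success."""
    doc = Path(path)
    text = doc.read_text()
    if old not in text:
        raise ValueError(f"{old!r} not found in {path}")
    doc.write_text(text.replace(old, new))
    # Post-condition check: the model can only report success if this passes.
    if new not in doc.read_text():
        raise RuntimeError("edit did not persist on disk")
    return True
```

With a guard like this, a model that claims completion without a successful tool result is contradicted by the tool's own return value.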
I'm playing around with it, and it's very cool! One issue: fingerprint expansion doesn't always work. E.g. I have a memory "Going to Albania in January for a month-long stay in Tirana", and asking "Do I need a visa for my trip?" didn't turn up anything, even using the expansion "visa requirements trip destination travel documents..."
What would you think about adding another column, used only for matching, that is a superset of the actual memory, basically reusing the fingerprint expansion prompt?
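For what it's worth, that superset column could look something like this (hypothetical sketch; `expand` and `embed` stand in for the fingerprint-expansion prompt and the embedding model, and the dict layout is my own assumption):

```python
def save_memory(text, expand, embed, store):
    """Store the raw memory alongside an expanded 'superset' of it, and
    embed the expansion, so retrieval matches against the broader text
    while the model only ever sees the original memory."""
    expansion = expand(text)  # e.g. adds terms like 'visa', 'passport', 'travel documents'
    entry = {
        "text": text,             # surfaced to the model
        "match_text": expansion,  # superset column, used only for matching
        "vec": embed(expansion),
    }
    store.append(entry)
    return entry
```

Running the expansion once at write time, rather than expanding every query, would also keep per-message retrieval latency unchanged.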
:D I’d also like to thank David Hahn for obsessively (and arguably compulsively) learning about a topic way out of his depth and then manifesting it till the cops took him away.
There is a live hosted instance at miraos.org where you can make an account and chat with MIRA through a web frontend. For now, during this phase of people discovering it, I'm eating the token costs, so it's 100% free to access and chat with.
If it throws an actual error, please let me know by lodging it as an issue in the GitHub repo and I'll fix the code. I'm hanging around the house tonight to fix bugs as people uncover them.
EDIT: Thanks for the feedback! I was able to pinpoint it to a change I made earlier today to allow simultaneous OAI endpoints and native Claude support. When on a model via a 3rd-party provider, certain parts of a tool call were being stripped. Not any more! Pushed an update.
shevy-java|2 months ago
DHH also claims he is super open source when in reality he has already sold his soul to the big tech bros:
https://world.hey.com/dhh/the-o-saasy-license-336c5c8f
taylorsatula|2 months ago
Reading the prompt itself that the analysis model carries is probably easier than listening to my abstract description: https://github.com/taylorsatula/mira-OSS/blob/main/config/pr...
taylorsatula|2 months ago
Does it produce an error, or does it just lie to you?
chaosharmonic|2 months ago
This is easily one of my favorite descriptive details I've ever seen in a README.