I do kind of the opposite: I run my AI in a sandbox, and it sends dummy tokens to APIs. A proxy then injects the real credentials, so the AI never has access to them.
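Roughly: the proxy terminates the sandbox's requests, swaps the dummy credential for the real one, and forwards upstream. Here's a minimal sketch in Python, stdlib only; the upstream URL, the dummy token value, and the REAL_API_TOKEN env var are all placeholders I'm assuming, not anything from a specific tool:

    # Minimal credential-injecting proxy sketch (stdlib only).
    # The sandboxed AI talks to this proxy; only the proxy knows the real token.
    import os
    import urllib.error
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    UPSTREAM = "https://api.example.com"       # hypothetical real API
    DUMMY_TOKEN = "sandbox-dummy-token"        # what the sandboxed AI sends
    REAL_TOKEN = os.environ["REAL_API_TOKEN"]  # only readable outside the sandbox

    class InjectingProxy(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            headers = {k: v for k, v in self.headers.items()
                       if k.lower() not in ("host", "content-length")}
            # Swap the dummy credential for the real one; reject anything else.
            if headers.get("Authorization") != f"Bearer {DUMMY_TOKEN}":
                self.send_error(403, "unexpected credential")
                return
            headers["Authorization"] = f"Bearer {REAL_TOKEN}"
            req = urllib.request.Request(UPSTREAM + self.path, data=body,
                                         headers=headers, method="POST")
            try:
                with urllib.request.urlopen(req) as resp:
                    self.send_response(resp.status)
                    for k, v in resp.getheaders():
                        if k.lower() not in ("transfer-encoding", "connection"):
                            self.send_header(k, v)
                    self.end_headers()
                    self.wfile.write(resp.read())
            except urllib.error.HTTPError as e:
                # Pass upstream error statuses back to the sandbox.
                self.send_error(e.code, e.reason)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), InjectingProxy).serve_forever()

Point the sandboxed agent's API base URL at 127.0.0.1:8080 and the real token never enters the sandbox at all.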
Thank you. What I was thinking was more along the lines of optimizing how you use your context window, so the LLM can actually access what it needs: an incredibly powerful compaction that runs in the background, with your file system working as long-term memory... Still thinking about how to make it work, so I'm super open to ideas. One rough way it could look is sketched below.
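This is just a sketch of the idea, with everything assumed (the size budget, the summarize() placeholder, the on-disk layout): when the transcript passes a budget, spill the oldest messages to a JSON file on disk and keep only a short summary plus the file path in the live context.

    # Rough sketch of background compaction with the file system as memory.
    # summarize() stands in for whatever model call does the actual summarizing.
    import json
    import time
    from pathlib import Path

    MEMORY_DIR = Path("memory")   # file system as long-term memory
    CONTEXT_BUDGET = 8000         # rough character/token budget (assumed)

    def summarize(messages):
        # Placeholder: in practice this would be an LLM call.
        preview = ", ".join(m["text"][:20] for m in messages[:3])
        return f"[{len(messages)} earlier messages about: {preview}...]"

    def compact(context):
        """Spill the oldest half of the context to disk, keep a summary stub."""
        if sum(len(m["text"]) for m in context) < CONTEXT_BUDGET:
            return context
        cut = len(context) // 2
        old, recent = context[:cut], context[cut:]
        MEMORY_DIR.mkdir(exist_ok=True)
        path = MEMORY_DIR / f"chunk-{int(time.time())}.json"
        path.write_text(json.dumps(old, indent=2))
        stub = {"role": "system",
                "text": summarize(old) + f" (full text: {path})"}
        return [stub] + recent

The stub keeps a pointer in context, so the model can re-open the spilled chunk on demand (grep/read the memory files) instead of losing it outright.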
shepherdjerred|1 month ago
https://clauderon.com/ -- not really ready for others to use, though