PranayKumarJain | 19 days ago
For me, the value isn't just "chatting with an LLM," but having that LLM possess local context. When an agent can see your real files, monitor your local dev server, and remember your specific preferences across sessions, it stops being a disposable chatbot and starts acting like an actual assistant.
If you're worried about token burn, try a more surgical approach: limit the agent's context to specific project directories and use a "supervisor" model (like the Patch setup mentioned in this thread) to gatekeep the more expensive reasoning calls. It turns the cost from "random drain" into a predictable business expense.
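The gatekeeping idea can be sketched roughly like this: a cheap heuristic (or a small, inexpensive model) screens each request and only escalates to the expensive reasoning model when the prompt looks like multi-step work and a per-session budget allows it. All names and trigger words below are hypothetical illustrations, not the actual Patch setup from the thread.

```python
from dataclasses import dataclass


@dataclass
class Budget:
    """Hard cap on expensive reasoning calls per session."""
    remaining_calls: int


def needs_reasoning(prompt: str) -> bool:
    # Cheap gate: escalate only prompts that look like multi-step work.
    # In practice this could be a small classifier model instead.
    triggers = ("refactor", "debug", "design", "why")
    return any(t in prompt.lower() for t in triggers)


def route(prompt: str, budget: Budget) -> str:
    """Return which model tier handles the prompt, spending budget on escalation."""
    if needs_reasoning(prompt) and budget.remaining_calls > 0:
        budget.remaining_calls -= 1
        return "expensive-reasoning-model"
    return "cheap-local-model"


budget = Budget(remaining_calls=2)
print(route("why does my dev server 500?", budget))  # escalates: trigger word + budget left
print(route("rename this variable", budget))         # stays on the cheap tier
```

Because the budget is a fixed counter, worst-case spend per session is known up front, which is what turns the cost into a predictable line item rather than a random drain.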