Yeah you can, so long as you're hosting your local LLM through something with an OpenAI-compatible API (which is a given for almost all local servers at this point, including LM Studio).
That said, running agentic workloads on local LLMs will be a short and losing battle against context size if you don't have hardware specifically bought for this purpose. You can get it running and it will work for several autonomous actions but not nearly as long as a hosted frontier model will work.
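To make "OpenAI-compatible API" concrete, here's a minimal stdlib-only sketch of what any such client sends to a local server. The base URL and port are assumptions based on LM Studio's defaults (`http://localhost:1234/v1`), and the model name is a placeholder; any server exposing the same API shape (Ollama, llama.cpp, vLLM) works identically.

```python
import json
import urllib.request

# Assumed default for LM Studio's local server; adjust to your setup.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt: str, model: str = "local-model") -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,  # whatever model your local server has loaded
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Local servers typically accept any bearer token:
            "Authorization": "Bearer lm-studio",
        },
        method="POST",
    )

req = build_chat_request("Say hello.")
print(req.full_url)
# To actually call the server: urllib.request.urlopen(req)
```

Agentic tools do exactly this under the hood, which is why pointing them at a local endpoint is usually just a base-URL setting.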
paool|5 months ago
Or do you have to copy-paste into LM Studio?
evilduck|5 months ago
https://opencode.ai and https://github.com/QwenLM/qwen-code both allow you to configure any API as the LLM provider.
DrAwdeOccarim|5 months ago