item 47168306

mark_l_watson | 3 days ago

Could it be tooling like Claude Code? I just used Claude Code with qwen3.5:35b running locally to track down two obscure bugs in new Common Lisp code I wrote yesterday.

genghisjahn|3 days ago

I use Claude Code as an orchestrator and have the agents use different models:

  product-designer   ollama-cloud / qwen3.5:cloud
  pm                 ollama-cloud / glm-5:cloud
  test-writer        claude-code  / Sonnet 4.6
  backend-builder    claude-code  / Opus 4.6
  frontend-builder   claude-code  / Opus 4.6
  code-reviewer      codex-cli    / gpt-5.1-codex-mini
  git-committer      ollama-cloud / minimax-m2.5:cloud

I pay $20/month for Ollama Pro and $20/month for OpenAI, and I have an Anthropic Max plan at $100/month.
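The routing table above is really just a lookup from agent role to a backend/model pair. A minimal sketch of that idea (the `model_for` helper and the exact model strings are my own illustration, not any tool's actual API):

```python
# Hypothetical sketch: route each agent role to a (backend, model) pair.
AGENT_MODELS = {
    "product-designer": ("ollama-cloud", "qwen3.5:cloud"),
    "pm":               ("ollama-cloud", "glm-5:cloud"),
    "test-writer":      ("claude-code",  "sonnet-4.6"),
    "backend-builder":  ("claude-code",  "opus-4.6"),
    "frontend-builder": ("claude-code",  "opus-4.6"),
    "code-reviewer":    ("codex-cli",    "gpt-5.1-codex-mini"),
    "git-committer":    ("ollama-cloud", "minimax-m2.5:cloud"),
}

def model_for(agent: str) -> str:
    """Return the 'backend/model' string configured for an agent role."""
    backend, model = AGENT_MODELS[agent]
    return f"{backend}/{model}"
```

So `model_for("pm")` resolves to `ollama-cloud/glm-5:cloud`, and the orchestrator only needs to know the role name.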

alexsmirnov|3 days ago

I do something similar: I connect Claude Code to a LiteLLM router that dispatches model requests to different providers: Bedrock, OpenAI, Gemini, OpenRouter, and Ollama for open-source models. I also have a special slash command and script that collects information about the session, the project, and observed problems into an evaluation dataset. With that I can re-evaluate prompts and find models that do the job in a particular agent faster or cheaper, or use automated prompt optimization to eliminate problems.
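The evaluation-dataset part is the interesting trick: every observed problem becomes a labeled case you can replay against other models later. A stdlib-only sketch of what such a collection script might append to a JSONL file (the field names and function are assumptions, not the actual script's schema):

```python
import json
import time

def record_eval_case(path, agent, prompt, response, problem):
    """Hypothetical sketch: append one evaluation record per observed problem.

    Each line becomes a JSONL entry that can later be replayed against a
    different model to compare speed, cost, or output quality.
    """
    entry = {
        "ts": time.time(),     # when the problem was observed
        "agent": agent,        # which sub-agent produced the response
        "prompt": prompt,      # the prompt that was sent
        "response": response,  # what the model returned
        "problem": problem,    # the issue noted, for later re-evaluation
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

Appending rather than overwriting means the dataset grows across sessions, which is what makes model-vs-model comparisons meaningful over time.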

geor9e|3 days ago

Is this because Anthropic models are worse at those tasks, or more expensive, or what?

smt88|3 days ago

Qwen seems fine for analysis to me, but Opus 4.6 is far better to use as a sounding board or for writing code