2x Qwen 3.5 on M1 Mac: 9B builds a bot, 0.8B runs it (advanced-stack.com)
8 points | advanced-stack | 20 hours ago | item 47225589 | 2 comments
ZeroGravitas | 5 hours ago
I was trying this the other day with opencode and Ollama, and it all seemed kind of broken/useless. I don't understand the two-LLM approach plus Telegram here. It seems like the bot is both creating the Telegram interface and using it as a coding assistant?
advanced-stack | 4 hours ago
It's mostly a test of the coding capabilities of the 9B model. The 0.8B is used to make the Telegram bot smarter than an if/then/else. I find LM Studio more usable for local setups (desktop/laptop), and I would use the llama.cpp stack directly for a (local) server deployment.
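The reply above describes the split: the 9B model writes the bot's code once, while the 0.8B model answers messages at runtime behind a local inference server. A minimal sketch of that runtime half, assuming llama.cpp's llama-server is running locally and exposing its OpenAI-compatible chat endpoint (the port, model name, and system prompt here are placeholders, not details from the thread):

```python
# Sketch of the "0.8B runs the bot" loop: forward each incoming message to a
# local llama.cpp server via its OpenAI-compatible /v1/chat/completions route.
# Port, model name, and prompts are assumptions for illustration.
import json
import urllib.request

LLAMA_SERVER = "http://127.0.0.1:8080/v1/chat/completions"  # llama-server default port

def build_payload(user_message: str) -> dict:
    """Build an OpenAI-style chat request for the small runtime model."""
    return {
        "model": "qwen-0.8b",  # placeholder; the server uses whatever model it loaded
        "messages": [
            {"role": "system",
             "content": "You are a Telegram bot. Reply briefly and helpfully."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.2,
    }

def ask(user_message: str) -> str:
    """POST the chat request to the local server and return the reply text."""
    req = urllib.request.Request(
        LLAMA_SERVER,
        data=json.dumps(build_payload(user_message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a running server):
#   print(ask("What can you do?"))
```

This is what "smarter than an if/then/else" amounts to in practice: instead of keyword matching on the incoming Telegram message, every message is routed through the small model, which is cheap enough to run per-message on a laptop.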
unknown | 20 hours ago
[deleted]