If you can run this using Ollama, then you should be able to use https://www.continue.dev/ with both IntelliJ and VSCode. I haven't tried this model yet, but overall the plugin works well.
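For anyone setting this up: Continue reads model definitions from a JSON config file (typically `~/.continue/config.json`; the exact keys and the model tag below are assumptions for illustration — check the Continue docs for your version). A minimal sketch pointing it at a locally served Ollama model might look like:

```json
{
  "models": [
    {
      "title": "StarCoder2 via Ollama",
      "provider": "ollama",
      "model": "starcoder2:3b"
    }
  ]
}
```

This assumes `ollama serve` is already running locally on its default port (11434), and that the model tag has been pulled with `ollama pull`.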
Correct. The only back-end that Ollama uses is llama.cpp, and llama.cpp does not yet have Mamba2 support. The issues to track Mamba2 and Codestral Mamba support are here:
Unrelated, but this page freezes on all my devices: Firefox and Chrome on desktop, Firefox and Brave on mobile.
Is this the best alternative for AI code assistants in VSCode besides GitHub Copilot and Google Gemini?
I've been using it for a few months (with Starcoder 2 for code, and GPT-4o for chat). I find the code completion actually better than GitHub Copilot.
My main complaint is that the chat sometimes fails to correctly render some GPT-4o output (e.g. LaTeX expressions), but it's mostly fixed with a custom system prompt. It also significantly reduces the battery life of my M1 MacBook, but that's expected.
scosman|1 year ago
HanClinto|1 year ago
https://github.com/ggerganov/llama.cpp/issues/8519
https://github.com/ggerganov/llama.cpp/issues/7727
Mamba support was added in March of this year:
https://github.com/ggerganov/llama.cpp/pull/5328
I have not yet seen a PR to address Mamba2.
sadeshmukh|1 year ago
osmano807|1 year ago
raphaelj|1 year ago
oliverulerich|1 year ago