HorizonXP|19 days ago
I have my own fork here: https://github.com/HorizonXP/voxtral.c where I’m working on a CUDA implementation, plus some other niceties. It’s working quite well so far, but I haven’t got it to match Mistral AI’s API endpoint speed just yet.
kingreflex|19 days ago
How does someone get started with doing things like this (writing inference code, CUDA kernels, etc.)? Any guidance is appreciated. I understand one doesn't just sit down and write these things directly, and that it requires some background reading. It would be great to get some pointers.
HorizonXP|18 days ago
I have to admit that I wrote none of the code in this repo. I asked Codex to do it for me. I did a lot of prompting and guided it through some of the benchmarking and tooling I expected it to use to get the result I was looking for.
Most of the plans it generated were outside my wheelhouse and not something I'm particularly familiar with, but I know the area well enough to see that the plans roughly made sense, so I just let it go. The fact that this worked at all is a miracle, and I can't take credit for it beyond telling the AI what I wanted and, in loose terms, how to do it, and helping it when it got stuck.
BTW, everything above was dictated using the code we generated, except for this sentence. And I added line breaks for paragraphs. That's it.