top | item 44501597


vectorcamp | 7 months ago

Getting an LLM to translate code is very tricky; we haven't included AVX2 and AVX512 in our SIMD.ai yet because they require a lot more work. However, translating code between similarly sized vector engines is doable once we fine-tuned the LLM on our own data. We tested both ChatGPT and Claude (and more), but none could manage even the simplest translations, e.g. between SSE4.2 and NEON or VSX. So trying something harder like AVX512 felt like a bit of a stretch. But we're working on it.


fancyfredbot | 7 months ago

It used to be the case that if you wanted to write code once and run it on multiple platforms you'd use a library, and if you wanted to avoid writing code which was ISA specific you used a compiler. Now we use an LLM. This is progress. Probably. It's definitely different anyway.

vectorcamp | 7 months ago

You still have to use the library, and it will still work the same way for normal scalar C code. The whole point is that the vectorized code is difficult to write, and an LLM just might be able to help with some cases, not all.