olokobayusuf | 5 months ago

Our primary use case is cross-platform AI inference (unsurprising), and for that use case we're already in production at companies ranging from startups to larger co's.

It's kind of funny: our compiler currently doesn't support classes, but we support many kinds of AI models (vision, text generation, TTS). This is mainly because math, tensor, and AI libraries are almost always written with a functional paradigm.
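For a rough idea of what that looks like, here's a hypothetical sketch (the NumPy names and shapes are illustrative assumptions, not our actual API): the whole model is one prediction function, tensors in and tensors out, with no classes for the compiler to handle.

    import numpy as np

    def predict(image: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
        # One linear layer followed by softmax: plain functions over arrays, no classes.
        logits = image.reshape(-1) @ weights + bias
        scores = np.exp(logits - logits.max())   # subtract the max for numerical stability
        return scores / scores.sum()

    # Example: a 28x28 "image" scored against 10 hypothetical classes.
    probs = predict(np.random.rand(28, 28), np.random.rand(784, 10), np.zeros(10))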

The business plan is simple: we charge per endpoint that downloads and executes the compiled binary. In the AI world, this removes a large multiplier from the cost structure (paying per token). Beyond that, we help co's find, evaluate, deploy, and optimize models (more enterprise-y).

franktankbank | 5 months ago

I understood some of it. Sounds reasonable if your market is already running a limited subset of the language, but I guess there's a lot of custom bullshit you actually wind up maintaining.

olokobayusuf | 5 months ago

Yup, that's true. We do benefit from massive efficiencies though, thanks to LLM codegen.