Show HN: Kernel-level LLM inference via /dev/llm0
2 points | RandomBK | 11 months ago | github.com
This is a rough port of llm.c into a kernel module. It took a lot of hacks to make this work, so plenty of performance is left on the table. Nevertheless, it's a minimally functional GPT-2 inference loop running inside the kernel.
No comments yet.