Show HN: Kernel-level LLM inference via /dev/llm0

2 points | RandomBK | 11 months ago | github.com

I saw an April Fools joke and decided to implement it.

This is a rough port of llm.c into a Linux kernel module. A number of hacks were needed to make it work, so considerable performance is left on the table. Nevertheless, it is a minimally functional GPT-2 inference loop running entirely in the kernel.
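For readers unfamiliar with how a device like /dev/llm0 gets wired up, here is a minimal sketch (not the project's actual code) of a kernel module that registers a misc character device under that name. All identifiers (`llm0_write`, `prompt_buf`, buffer sizes) are illustrative assumptions; the real inference loop would sit where the comments indicate.

```c
/* Illustrative sketch of exposing /dev/llm0 from a kernel module.
 * Not the project's code; names and buffer handling are assumptions. */
#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/uaccess.h>
#include <linux/minmax.h>

static char prompt_buf[1024];

/* Userspace writes a prompt into the device. */
static ssize_t llm0_write(struct file *f, const char __user *buf,
                          size_t len, loff_t *off)
{
	size_t n = min(len, sizeof(prompt_buf) - 1);

	if (copy_from_user(prompt_buf, buf, n))
		return -EFAULT;
	prompt_buf[n] = '\0';
	/* ...here the in-kernel GPT-2 forward pass would consume the prompt... */
	return n;
}

/* Userspace reads sampled tokens back out. */
static ssize_t llm0_read(struct file *f, char __user *buf,
                         size_t len, loff_t *off)
{
	/* ...here generated text would be copied to userspace with
	 * copy_to_user(); returning 0 signals end of output... */
	return 0;
}

static const struct file_operations llm0_fops = {
	.owner = THIS_MODULE,
	.read  = llm0_read,
	.write = llm0_write,
};

static struct miscdevice llm0_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name  = "llm0",   /* shows up as /dev/llm0 */
	.fops  = &llm0_fops,
};

static int __init llm0_init(void)
{
	return misc_register(&llm0_dev);
}

static void __exit llm0_exit(void)
{
	misc_deregister(&llm0_dev);
}

module_init(llm0_init);
module_exit(llm0_exit);
MODULE_LICENSE("GPL");
```

With an interface like this, usage from a shell would be as simple as `echo "Hello" > /dev/llm0` followed by `cat /dev/llm0`, which is presumably much of the appeal of the joke.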


No comments yet.