WingNews
item 38728018

LLM in a Flash: Efficient Large Language Model Inference with Limited Memory

12 points | 2 years ago | arxiv.org

1 comment


dang | 2 years ago

LLM in a Flash: Efficient LLM Inference with Limited Memory - https://news.ycombinator.com/item?id=38704982 - Dec 2023 (52 comments)
powered by hn/api // news.ycombinator.com