mlacks | 1 year ago
There is no configuration that satisfies both high-quality local LLM inference and excellent portability. A discrete GPU strong enough for full-scale models needs more memory bandwidth and thermal headroom than a thin chassis can provide, and a laptop built around abundant DDR5 and a high-end CPU still falls roughly an order of magnitude short of GPU VRAM bandwidth, which is the main bottleneck for token generation. The practical compromise is a Linux laptop optimized for everyday development, paired with hosted LLM services when you need more power.
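To illustrate the hosted-service half of that setup, here is a minimal sketch of calling an OpenAI-compatible chat completions endpoint from a dev box. The endpoint URL, model name, and `OPENAI_API_KEY` environment variable are assumptions for the example, not part of the original comment; any OpenAI-compatible provider would look much the same.

```python
# Minimal sketch: offload heavy LLM work to a hosted, OpenAI-compatible API.
# Assumes the `requests` package and an OPENAI_API_KEY environment variable;
# the endpoint and model name below are illustrative choices.
import os

import requests

API_URL = "https://api.openai.com/v1/chat/completions"


def ask_hosted_llm(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a single-turn chat request and return the model's reply text."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()  # surface auth/quota/server errors early
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_hosted_llm("Summarize the tradeoffs of local vs. hosted LLMs."))
```

A side benefit of targeting the generic chat-completions shape: if a capable local box ever materializes, the same client code can point at a local OpenAI-compatible server (llama.cpp's server mode, for example) just by changing `API_URL`.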