lubitelpospat | 2 days ago
All right guys, this is your time - what consumer device do you use for local LLM inference? GPU-poor answers only.

carlio | 2 days ago
An AMD AI Max+ 395 - I use the one from frame.work (https://frame.work/de/en/desktop) with 128GB unified RAM, and it can run a 120b model (gpt-oss:120b) just fine.

See Wendel's review here - https://www.youtube.com/watch?v=L-xgMQ-7lW0

There are other mini-PC manufacturers; the mainboard is the important part.

lubitelpospat | 2 days ago
Wow, that's quite beefy.