The Nvidia DGX Spark Is a Tiny 128GB AI Mini PC Made for Scale-Out Clustering (servethehome.com)
17 points | PaulHoule | 11 months ago | 5 comments
Havoc | 11 months ago
The upcoming wave of APU-like mini PCs will be really cool in general. The memory throughput looks a tad on the low side, but combined with MoE-style models it will still allow big models to run at reasonable speeds.
Prices will need to drop, though. A grand is likely closer to most people's budget than three grand for an AI quasi-toy.
fragmede | 11 months ago
The original Apple I computer was released in 1976 and sold for $666.66, which is $3,725.38 in Feb 2025 adjusting for inflation.
https://data.bls.gov/cgi-bin/cpicalc.pl?cost1=666.66&year1=1...
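The BLS calculator linked above does a simple CPI ratio scaling; a minimal sketch of that calculation (the CPI index values passed in are placeholders, not official figures):

```python
# Inflation adjustment as the BLS calculator performs it: scale a
# historical price by the ratio of the CPI index between the two dates.
def adjust_for_inflation(price, cpi_then, cpi_now):
    """Scale a historical price by the ratio of CPI index values."""
    return price * (cpi_now / cpi_then)

# Using an index ratio implied by the quoted figures (~5.59x since 1976):
factor = 3725.38 / 666.66
print(round(adjust_for_inflation(666.66, 1.0, factor), 2))  # 3725.38
```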
unknown | 11 months ago
[deleted]
captaindiego | 11 months ago
Are these just good for LLM inference, or can they be used to train stuff like CV models too? (Let's say vs. a 5090, which is in the same ballpark price-wise.)
banderwidthdk | 11 months ago
From my experience LLM inference really, really likes memory bandwidth, and at 1.79 TB/s the 5090 has quite the lead over the APU's 273 GB/s.
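The bandwidth argument is a standard back-of-envelope: decoding a dense LLM reads roughly the full weights once per token, so peak tokens/sec is about memory bandwidth divided by model size. A sketch under that assumption (the ~40 GB figure is an illustrative guess for a ~70B model at ~4-bit quantization, not a measured number):

```python
# Rough upper bound on decode speed for a dense model: each generated
# token streams the whole weight set through memory once, so throughput
# is capped at bandwidth / bytes-per-token.
def max_tokens_per_sec(bandwidth_gb_s, model_size_gb):
    return bandwidth_gb_s / model_size_gb

model_gb = 40  # assumed: ~70B params quantized to ~4 bits
print(max_tokens_per_sec(273, model_gb))   # APU-class:   ~6.8 tok/s
print(max_tokens_per_sec(1790, model_gb))  # 5090-class: ~44.8 tok/s
```

This is also why MoE helps on low-bandwidth machines, as the first comment notes: only the active experts' weights are read per token, so the effective `model_size_gb` shrinks even though total parameters stay large.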