item 42120359

nakovet | 1 year ago

It’s expensive, and it’s good that other options are available, but it’s not insanely expensive: the people who need the upgrade often make six-figure salaries and can afford it, even if they don’t like the idea of paying for it. I personally hate it, but I know that when I buy the machine it will last me at least 5 years, which is good enough for me.


Kudos | 1 year ago

I took a different route and went with a Framework laptop. I'll be interested to see how long it takes to become a full Ship of Theseus.

yieldcrv | 1 year ago

my M1 Max cost $7500 three years ago, and I was looking forward to the M4 Max..... last year. As in, I wish the M3 had had these specs, specifically 128GB RAM and this memory bandwidth.

This unfortunately comes too late to justify recouping costs and buying a new M4 Max.

Primarily because I would be using it for large language model inference at higher parameter counts. What's happened in just one year is that LLMs have gotten muuuch smaller. Llama 3.2 3B only takes about 3GB of RAM at 8-bit quantization. And on top of that, cost per token in the cloud has plummeted 99.9% too. It's just not economical.
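The rough arithmetic behind that 3GB figure: at 8-bit quantization each parameter is one byte, so weight memory is roughly parameters × bits / 8. A minimal sketch (this estimates weights only; real runtimes add overhead for the KV cache, activations, and framework buffers, so actual usage is somewhat higher):

```python
def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Estimate LLM weight memory in decimal GB for a given quantization.

    Weights only -- ignores KV cache, activations, and runtime overhead.
    """
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

# Llama 3.2 3B at 8-bit: ~3 GB, matching the comment's figure
print(weight_memory_gb(3, 8))    # 3.0

# A 70B model at 4-bit would still need ~35 GB just for weights,
# which is where 64GB+ of unified memory starts to matter
print(weight_memory_gb(70, 4))   # 35.0
```

The same formula shows why 128GB of RAM was the draw for higher-parameter inference: at 8-bit, it fits roughly a 100B+ parameter model's weights with room left for the KV cache.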

But yes, I could afford it; there's just no longer a justification. My 64GB RAM M1 Max is going to be future-proof for a while.