>The perf/W differences here are 4-6x in favour of the M1 Max, all whilst posting significantly better performance, meaning the perf/W at ISO-perf would be even higher than this.
and
>On the GPU side, the GE76 Raider comes with an RTX 3080 mobile. On Aztec High, this uses a total of 200W power for 266fps, while the M1 Max beats it at 307fps with just 70W wall active power.
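As a sanity check, the quoted Aztec High figures imply roughly a 3.3x perf/W advantage on this particular GPU test (the fps and wattage numbers are from the quote above; everything else is just arithmetic):

```python
# Sanity check on the quoted Aztec High numbers (fps and wall power).
m1_max_fps, m1_max_watts = 307, 70
rtx_3080m_fps, rtx_3080m_watts = 266, 200

m1_eff = m1_max_fps / m1_max_watts        # ~4.39 fps per watt
pc_eff = rtx_3080m_fps / rtx_3080m_watts  # ~1.33 fps per watt

print(f"perf/W advantage: {m1_eff / pc_eff:.1f}x")  # -> perf/W advantage: 3.3x
```

The larger 4-6x figure quoted for the CPU side comes from different workloads; this GPU test alone works out to about 3.3x.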
The sad thing is that what you really want to compare is how their GPU does against Nvidia, but then they pair it with an Intel CPU, which is known to have very poor power efficiency vs. AMD.
For a laptop chip, single-threaded integer performance is on par.
Multi-threaded integer and floating-point performance is not.
>In the aggregate scores – there’s two sides. On the SPECint work suite, the M1 Max lies +37% ahead of the best competition, it’s a very clear win here and given the power levels and TDPs, the performance per watt advantages is clear. The M1 Max is also able to outperform desktop chips such as the 11900K, or AMD’s 5800X.
>In the SPECfp suite, the M1 Max is in its own category of silicon with no comparison in the market. It completely demolishes any laptop contender, showcasing 2.2x performance of the second-best laptop chip. The M1 Max even manages to outperform the 16-core 5950X – a chip whose package power is at 142W, with rest of system even quite above that. It’s an absolutely absurd comparison and a situation we haven’t seen the likes of.
I don't think these efficiencies are just from the node advantage. The fact is that Apple chips follow mobile designs and are highly integrated SoCs where Apple can optimize every aspect of the system in exchange for losing flexibility (no mixing and matching of components).
In contrast to mobile processors, x86 processors live in a world where flexibility is demanded. I need to be able to pick how much RAM I want, which Wi-Fi modem, which graphics, and so on (where "I" is a combination of the consumer and the laptop manufacturer). Sure, laptop processors have gotten more integrated lately, but not to the same degree. Competition from Apple might pressure Intel and AMD to integrate much more and sacrifice this flexibility in order to squeeze out better power efficiency.
>Will we get such efficiencies when Intel hits 5nm?
Judging from everything we know, even in the most optimistic scenario, the answer is no.
Note: Intel doesn't actually have a 5nm node; they go to 4nm and then 3nm. But the answer is still the same.
Edit: For those wondering how this conclusion was arrived at: take the Alder Lake SPECint and Geekbench scores and look at the power usage per core (forget MT benchmarks), then scale by the targeted IPC improvement and the node improvement. You should see that the efficiency gap is still there.
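The back-of-envelope method described above can be sketched as follows. Every number here is an illustrative placeholder, not a measurement; the point is the shape of the calculation, not the specific values:

```python
# Back-of-envelope version of the scaling argument above.
# All numbers are made-up placeholders, not real benchmark results.
alder_lake_perf = 100   # normalized single-core score (placeholder)
alder_lake_watts = 20   # rough per-core power under that load, W (placeholder)
m1_perf = 95            # normalized single-core score (placeholder)
m1_watts = 5            # rough per-core power, W (placeholder)

def perf_per_watt(perf, watts):
    return perf / watts

# Scale the x86 side by assumed future IPC and node improvements.
ipc_gain = 1.15            # hypothetical +15% IPC uplift
node_power_scaling = 0.7   # hypothetical -30% power from the node shrink

future_x86 = perf_per_watt(alder_lake_perf * ipc_gain,
                           alder_lake_watts * node_power_scaling)
current_m1 = perf_per_watt(m1_perf, m1_watts)

print(f"future x86 perf/W: {future_x86:.2f}")
print(f"current M1 perf/W: {current_m1:.2f}")
```

Even with generous assumed gains on the x86 side, the starting per-core power gap is large enough that the efficiency ranking doesn't flip.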
I’m not a gamer. And even if the CPU can’t take advantage of the full 400GB/s, the 250 or so it does get is very good indeed.
This is all low-hanging fruit for future revisions of this chip. The A15-based cores will further improve single-core IPC and in turn make MT workloads even better. Basically, if this is the floor, then the sky won’t be high enough to contain where we go next.
Actually, TDP stands for "Thermal Design Power" and is not a range. It means "I, the designer, designed it so that this is the maximum amount of waste heat it can safely produce continuously in normal use". It is mainly limited by the physical package and the maximum temperature at which the internal components can run.
That you can't observe that max power in practice is because various applications stress the CPU in various ways, and are not always able to exercise all internal structures to their maximum potential at the same time.
> One should probably assume a 90% efficiency figure in the AC-to-DC conversion chain from 230V wall to 28V USB-C MagSafe to whatever the internal PMIC usage voltage of the device is.
(This was regarding idle power usage)
Highly unlikely. I design AC switching power supplies from first principles (and stacks of books). Efficiencies above 90% are normal for newer designs, but PSUs are designed to achieve these efficiencies above a significant percentage of their design power. High efficiency at design power is important because it limits worst-case waste heat, which in turn makes a smaller PSU possible. But as a PSU is a bundle of tradeoffs, one tradeoff usually taken is lower efficiency at low power, where it doesn't matter as much.
Typically, the lower the load on the PSU as a portion of design power, the lower the efficiency. If the PSU is designed for 90% efficiency at 140 watts, I would expect that at 7 watts it is actually much less efficient, probably somewhere between 70 and 80 percent.
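The falloff described here can be sketched with a toy model. The efficiency curve below is invented purely for illustration; a real supply's curve comes from its datasheet:

```python
# Illustrative model of PSU efficiency dropping off at light load.
# The breakpoints and values below are made up for illustration.
def psu_efficiency(load_watts, design_watts=140):
    """Rough efficiency estimate as a function of load fraction."""
    frac = load_watts / design_watts
    if frac >= 0.5:
        return 0.90   # near peak efficiency at heavy load
    if frac >= 0.2:
        return 0.85
    return 0.75       # light load: efficiency falls off

def wall_power(dc_watts):
    """AC draw at the wall for a given DC load."""
    return dc_watts / psu_efficiency(dc_watts)

print(wall_power(7))    # 7 W idle load -> ~9.3 W at the wall
print(wall_power(100))  # heavy load   -> ~111 W at the wall
```

The practical upshot: at idle, a fixed 90% efficiency assumption understates wall draw by a watt or two, which matters when the measured idle figures are themselves only a few watts.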
If people are wondering about why some people in the comments are reporting 3080 vs 3060 levels of performance, it's based on the workload. On synthetic (native I assume) benchmarks, the M1 Max reaches 3080 levels, but in gaming benchmarks (using x86) it reaches 3060 levels.
It's interesting especially since it sounds like the reason it doesn't reach "close to 3080" in many games is because it's CPU bound, specifically because it's emulating x86.
Once we get more benchmarks with non-Rosetta apps the picture may be rosier? That said, it's not like Apple was ever the company for gaming machines, so perhaps that will just be the state of things.
It's been a while since I bought a "Pro" computer from Apple. I am kind of wondering about the perf-per-$$$ factor. With a starting price of $2000, these are expensive computers. But maybe they are worth it!
The M1 computers seemed like an absolute bargain for the performance.
They are tools. If you have specific workloads that these excel at in your job/hobby/money-making venture, then the price shouldn't be a concern.
Depending on workload, they are comparable to $1000 PC laptops in CPU performance... or $3000 PCs. Or PCs that don't exist yet!
As someone who uses a laptop for gaming, my $1000 laptop is infinitely superior to a $6000 MacBook Pro (for the games that I play). For almost every other use, the MacBook Pro is likely far superior!
If you do Final Cut or Xcode work, these are the best tools available to you.
These MacBooks are definitely worth the money. They cost a lot but they are not overpriced.
You don’t have to consider just the CPU and GPU but the whole SoC.
The CPU is impressive and the GPU is good, but for standard workloads some PCs may give you slightly better performance (on the GPU side), at the cost of needing the power adapter to show it.
However, for some specific workloads (especially the ones involving ProRes video) the custom modules in it make it perform better not only than a Mac Pro, but than every other machine on the market.
There is also the Neural Engine that could be more important in the future.
You may not need those modules, but it seems like we are forgetting these are laptops with screens, inputs and more.
These machines have one of the best screens, with high DPI, a high refresh rate and, most importantly, miniLED technology, which brings true HDR. And that’s something very pricey.
Far from defending Apple, they could sell these laptops for less and we would all be happier, but at the end of the day these machines are worth it in every respect (specific cases aside).
They're definitely pricey, but CPU performance is out of this world. GPU's pretty good too, though not as impressive, I understand. Alas, it'll probably be a few years before I can get one (I use my work machine for almost everything and I just got this one).
They’re also compact, well-built (keyboards are fixed now) and have good battery life. Last time I was shopping, I found the direct competition (XPS 13 and the like) to be about the same price.
Years ago I read someone express the maxim "the computer you want is always around $5k". This has stuck with me. It's been approximately true throughout my life.
Depends what you need them for. On a music video set, my 2010 MBP survived a drop from a second-story balcony (three stories in a normal home) onto a marble floor. It got dented as hell, but the only functional damage was to the Ethernet port. I'm excited to get one of these new ones. I imagine it'll last at least 5 years.
Completely depends on what your use case is. For me personally a desktop + mid-range laptop combo is cheaper, more powerful and an all-round better fit than a single $2K+ laptop (which realistically will become $3K+ after adding a few options).
Seems the CPU cluster saturates at about 240 GB/s and can't utilize the full memory bandwidth. This bodes well for future clusters with double the number of CPU cores at a node shrink (M2 Max?) or for a Mac Pro (Mac Quadra?).
Maybe. This seems like a cluster-wide limitation - the individual CPU cores can utilize enough memory bandwidth that together they should be able to saturate the bus, but there's some kind of bottleneck on the entire CPU section of the SoCs and who knows how easy or difficult it would be to alleviate that.
Sure, but keep in mind that most competing laptops max out at 70GB/sec (the never-to-exceed number), and most desktops are slower than that with 2 channels of DDR4-3200 to 4200 (51-67GB/sec).
So while it's "only" 240, that's an excellent number. Keep in mind that you generally never see 100% of theoretical bandwidth.
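For reference, the dual-channel DDR4 peak numbers quoted above fall straight out of the usual bandwidth formula (transfer rate times 8 bytes per 64-bit channel times channel count):

```python
# Theoretical peak bandwidth of a dual-channel DDR4 system, for comparison
# with the ~240 GB/s the M1 Max CPU cluster was measured reaching.
def ddr_bandwidth_gbs(mt_per_s, channels=2, bus_bits=64):
    """Peak bandwidth in GB/s: transfers/s * bytes per transfer * channels."""
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

print(ddr_bandwidth_gbs(3200))  # DDR4-3200, dual channel -> 51.2 GB/s
print(ddr_bandwidth_gbs(4200))  # DDR4-4200, dual channel -> 67.2 GB/s
```

These are theoretical ceilings; as noted above, real workloads never see 100% of them.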
Now that Apple has taken the lead in performance/battery life tradeoff, are there any machines which come close to the M1 for software dev? Specifically, compiling Rust, Android development etc. without giving up too much on battery life?
Also, the last time I checked, CPUs were reporting high performance but only under light load. Has the whole throttling situation changed or should I just expect to get 2 hours battery life in exchange for extreme CPU performance?
Edit: I should have specified machines that can run Linux.
Panzarino tested WebKit compiles on the first run of M1 machines last year, and it seems like the battery held up really well on those.
> After a single build of WebKit, the M1 MacBook Pro had a massive 91% of its battery left. I tried multiple tests here and I could have easily run a full build of WebKit 8-9 times on one charge of the M1 MacBook’s battery.
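The arithmetic behind that estimate is simple. Using only the 9% figure, the naive ceiling works out to about 11 builds:

```python
# Naive ceiling implied by "one WebKit build used 9% of the battery".
battery_used_per_build = 100 - 91          # percent per build
naive_builds = 100 / battery_used_per_build
print(f"naive ceiling: {naive_builds:.1f} builds")
```

The quoted 8-9 builds is lower than the naive ~11 presumably because baseline system drain also eats battery while the builds run.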
Intel's Alder Lake is moving to a Performance + Efficiency core setup, which should help overall with battery life. But they are still behind on manufacturing process (Alder Lake is "Intel 7", supposedly roughly comparable to TSMC N7), so Apple will quite likely maintain their lead in power consumption.
Alder Lake is getting announced in 2 days, but rumors have it as a desktop-first product launch, so laptops may be another quarter or two out.
> ...any machines which come close to the M1 for software dev?
If you can stand using macOS, that is.
Personally, I'll continue using Linux because that's where all my software gets deployed to and macOS simply can't approach the value of that or the value of open source. On a Mac, you'll be fighting the OS the whole time.
If speed was all that mattered, Mac users would have left Apple a long time ago because this is the first time they're faster than a PC.
Yeah, the Max has exactly the same CPU as the Pro. The only reason I picked it is that I wanted 32GB of RAM and only the Max has that by default; customised orders take a long time to deliver in India.
A shame that thermal/power limitations aren't investigated. That is the deciding factor for me in choosing a Pro or a Max, and something Apple has historically had a lot of trouble with.
If I'm understanding you correctly, you're thinking of previous issues with thermals and throttling. That has been an issue over the past several years because Intel fell behind AMD and TSMC and had to drive more power through their chips to stay competitive; that generates heat, which ultimately ends up triggering throttling.
If you read about these particular chips, it should be startlingly clear that they are much more efficient than the Intel chips they replace.
In this article:
> Apple doesn’t advertise any TDP for the chips of the devices – it’s our understanding that simply doesn’t exist, and the only limitation to the power draw of the chips and laptops are simply thermals. As long as temperature is kept in check, the silicon will not throttle or not limit itself in terms of power draw.
> The perf/W differences here are 4-6x in favour of the M1 Max, all whilst posting significantly better performance
Read page 3 of this article. They really do cover a lot of this.
>A shame that thermal/power limitations aren't investigated.
It's covered in the comments, along with when the "crank up the fans" mode would be useful.
>Any pure CPU or GPU workload doesn't come close to the thermal limits of the machine. And even a moderate mixed workload like Premiere Pro didn't benefit from High Power mode.
>It has a reason to exist, but that reason is close to rendering a video overnight - as in a very hard and very sustained total system workload.
whatever1 | 4 years ago
Power-wise these chips look like they landed from a different planet. 50% less power draw for most workloads.
Will we get such efficiencies when Intel hits 5nm?
GeekyBear | 4 years ago
We'll see what happens when they make a desktop chip and are no longer so constrained on thermals and power draw.
The unreleased Mac Pro chip is said to have the resources of either two or four M1 Pro chips glued together.
dangus | 4 years ago
I think the answer is "maybe," "probably," or "sort of."
But I also wonder if x86 can ever truly outdo ARM on power efficiency.
If we want potential evidence, we could look at what AMD is able to do on TSMC's manufacturing: better than Intel, but still short of Apple.
Then again, AMD is tiny compared to Apple and Intel.
Granted, I think I'm vastly oversimplifying processor architecture. I know it's way more complicated than "x86 vs. ARM."
Thaxll | 4 years ago
>However gaming is a poorer experience, as the Macs aren’t catching up with the top chips in either of our games.
It's far, far from a 3060 for gaming.
hajile | 4 years ago
That's $3,670 if I build it myself. I'd expect to pay much more for a prebuilt from a big-box store.
A new 14" MacBook Pro with M1 Max, 2TB SSD, and 64GB of RAM is around $4,100.
That's a great deal, IMO.
raydev | 4 years ago
I'm looking forward to the MBP compile benchmarks.
tromp | 4 years ago
Would love to see how well the GPU runs a memory-bound Proof-of-Work like Cuckatoo Cycle [1].
[1] https://github.com/tromp/cuckoo
jb1991 | 4 years ago
Did not realize Apple was first in that area in previous decades.