
Apple’s M1 Positioning Mocks the Entire x86 Business Model

604 points | danaris | 5 years ago | extremetech.com

918 comments

[+] klelatti|5 years ago|reply
One thing which I don't think has been commented on is the marketing effort behind the M1.

It's getting a lot of prominence and its use across lots of computers means that there can be a consistent message that "M1 is great - get a new Mac and get an M1".

It also provides an opportunity to distinguish between M1 and the next generation M2 (presumably).

I've always thought Intel's marketing was a bit confused - i7 stays the same over 10+ years with only the obscure (to the general public) suffix changing from generation to generation.

[+] monkmartinez|5 years ago|reply
You get an M1 with the new iPad Pro as well! I hadn't thought of the situation the way the article presents it. When shown in that light, it made me pause to reflect. The M1 doesn't make sense when it's in every darn product. The only differentiator is screen size, RAM, and OS?

I will admit that I switched to a Thinkpad and Win10 about two years ago when I had to return my butterfly for the 5th time. I am not looking back either. If anything, I am more focused on AMD Ryzen and Nvidia 30-series chips in MSI, Lenovo Legion and Asus offerings. There is nothing I can't do with one of those machines. Going Apple is a backward move for me as I like to program, design in CAD, play Steam VR, and run Blender sims. Can't do any of those well with Apple hardware.

[+] kmeisthax|5 years ago|reply
Apple differentiates the majority of their products by generation rather than binning. If you buy a low-end iPad, you get an A12; the iPad Air steps you up to an A14; and the iPad Pro gets you an M1.

This is less evident on the Macintosh side of the business right now because they're just trying to get M1 silicon into as many product lines as their fab capacity will allow. They don't actually have an M2 (or even M1X) to sell high-end products with yet, which is why they're starting with low-end products first. When they release upgraded chips, those will almost certainly be used to transition the high-end models first, with lower-end products getting them later.

[+] solatic|5 years ago|reply
> The only differentiator is screen size, RAM, and OS?

Apple is iPhone-izing (for lack of a better word) the rest of their product lines. If for the last ten years, the market hasn't really cared about the speed of the phone's processor within the same generation, but rather about physical differentiators (e.g. screen size, number of camera lenses, adding facial recognition), and the non-professional market is overwhelmingly characterized by light-usage applications, then why, pray tell, should laptops and desktops be so different?

[+] fogihujy|5 years ago|reply
> The M1 doesn't make sense when it's in every darn product. The only differentiator is screen size, RAM, and OS?

Why not? It's basically back to where we were in 1980 when "everything" had a Z80 or 6502 (or both!) in it, and the major differences were in what else was in the system.

[+] Corrado|5 years ago|reply
> The only differentiator is screen size, RAM, and OS?

I think that's the point. Until now, buying a computer has always been focused on CPU and RAM stats. If you wanted faster/bigger, you had to spend more. With Apple's new strategy you almost don't even care about CPU/RAM stats. They are focusing on providing value in other ways: larger screen, lighter weight, different colors, more ports, etc. I think this is the biggest shift in computers in quite a while, and it makes buying one much more akin to purchasing a phone or tablet than speccing out a computer.

[+] fumar|5 years ago|reply
Why doesn't the M1 make sense in a variety of products? I don't follow the logic. It is a processor that can scale to meet the demands of mobile computers including laptops, tablets, and designer desktops (iMac). In each use case it fulfills its computational role regardless of I/O or even operating system.

Based on your own description, you are self-selecting as an enthusiast that prefers gaming-like PCs. Isn't that exactly the sweet spot that Apple doesn't support?

[+] crooked-v|5 years ago|reply
> The M1 doesn't make sense when it's in every darn product. The only differentiator is screen size, RAM, and OS?

They've only replaced the lower tier of Macbooks and iMacs with the current M1 board, which suggests to me that they're working on a variant with more CPU and GPU cores that will go into the higher tiers of those machines.

[+] ksec|5 years ago|reply
>The M1 doesn't make sense when it's in every darn product.

Would you feel better if they called it A14X?

It is basically the same thing as what Intel is doing: same die, different binning, different naming. Same with core counts on AMD: same die, different binning on cores and clock speed.
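
To make the binning point concrete, here is a toy sketch (all thresholds and SKU names are invented for illustration, not any vendor's real criteria) of how identical dies get sorted into different product tiers:

```python
# Toy model of die binning: identical dies off the same wafer are
# sorted into different SKUs based on how each one tests.
# All numbers below are made up for illustration.

def bin_die(working_cores: int, max_stable_ghz: float) -> str:
    """Assign a die to a (hypothetical) SKU tier based on test results."""
    if working_cores >= 8 and max_stable_ghz >= 4.7:
        return "flagship"    # fully working die, top clocks
    if working_cores >= 8:
        return "mainstream"  # fully working die, lower clocks
    if working_cores >= 6:
        return "budget"      # defective cores fused off, sold cheaper
    return "reject"

dies = [(8, 4.9), (8, 4.2), (6, 4.5), (4, 3.8)]
print([bin_die(cores, ghz) for cores, ghz in dies])
# -> ['flagship', 'mainstream', 'budget', 'reject']
```

Apple skips this whole SKU ladder in its marketing: every tier is just "M1", and the chassis' thermal envelope does the differentiating.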

Apple doesn't bother with any of that because, well, it is complicated for consumers. I call it TDP Computing. You are limited by the TDP design of the product, not the chip.

I am waiting to see Apple absolutely max out their SoC approach for Mac Pro.

[+] maliker|5 years ago|reply
I agree that the M1 doesn't look good for high performance computing right now or anything with poor ARM support. But strategically, I think Apple is in a good spot. For heavy computation these days, I always remote into another machine. With increased bandwidth and more efficient remote desktop protocols, I even do all my graphics-intensive 3d work remotely now. By focusing on low-power processors, Apple is making the laptop/tablet/phone experience better, and I could see them handling the performance issues via remote compute. It could be a very effective strategy (if it is their strategy).
[+] nashashmi|5 years ago|reply
I think what Apple may be trying to do is reduce macOS sales and go full throttle on iOS sales. It no longer makes sense to have macOS if it is going to run on the same chip as the iPad Pro.
[+] izacus|5 years ago|reply
How's that any different than having a single Intel generation scale all the way from low powered laptop SoCs to 12 core i9s though?

After all, the M1s across devices aren't the same either - iMacs have a different configuration from iPads, and those have different core counts and clocks from the Airs and MBPs as well.

It seems like the difference is only in Apple vs. Intel marketing blurb.

[+] sto_hristo|5 years ago|reply
Make it as fabulous as you want, $1700 for an 8-core, 8 gigs of RAM machine is just one plain fabulous joke. This at a time when 16 gigs of RAM is the baseline if you plan to do anything more than Facebook and Instagram.
[+] gambiting|5 years ago|reply
So my sister does professional photography, and went from a 16GB MacBook Pro (2016) to an 8GB Air, and reports no decrease in productivity - quite the opposite; in her experience all Adobe programs run much faster, and the machine is quieter and lighter to boot. So yeah, I'm not sure - maybe the RAM amount isn't as big a deal as people make it out to be. On the other hand I'm a C++ programmer and my workstation has 128GB of RAM and I wouldn't accept any less... so obviously it varies.
[+] Tehnix|5 years ago|reply
I just went from a 32GB RAM Macbook Pro to an 8GB RAM M1 Macbook Air...the difference is insane, I don't know what the hell the MB Pro is doing with its RAM, but it just felt like RAM was never enough on Intel Macbooks. Here on the M1, I don't feel my system crawling to a halt like I did on the MB Pro, and I'm doing the same workloads.
[+] mrweasel|5 years ago|reply
Really? I still use a seven-year-old MacBook Pro with 8GB of RAM. It's perfectly fine for developing in at least Python, Go or Nim, with a Docker container or two and MariaDB in the background.

Sure, when/if I upgrade I'd go for 16GB of memory, but one should be careful about projecting one's own needs onto others.

[+] _xnmw|5 years ago|reply
X cores + X gigs of RAM does not mean better performance for higher values of X. That is the fundamental innovation of Apple Silicon, which people still struggle to grasp. The M1 upended how we think about CPU performance. It's not even just a CPU; it's a unified memory architecture with hardware-level optimizations for macOS. You can't even cleanly compare the performance benefits of having memory and CPU on the same chip, because the time spent on copying operations is far less.

The plain fabulous joke is that we've spent 30 years thinking that increasing cores and increasing RAM is the only way to increase performance while the objective M1 benchmarks blow everything out of the water. The proof is in the pudding.
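
The copy-elision argument is general rather than M1-specific: two consumers sharing one buffer beat copying it between them. A minimal Python sketch of the difference between a copy and a zero-copy view:

```python
# Generic illustration (nothing M1-specific): a copy vs. a zero-copy view.
# A memoryview lets two parts of a program share one buffer, loosely the
# way a unified-memory CPU and GPU share one pool instead of copying
# data across a bus.

data = bytearray(b"pixel data " * 1000)

copy = bytes(data)        # materializes a second, independent buffer
view = memoryview(data)   # zero-copy: no second buffer is allocated

# Writes to the shared buffer are visible through the view immediately...
data[0:5] = b"PIXEL"
assert bytes(view[0:5]) == b"PIXEL"

# ...while the copy still holds the stale, pre-write contents.
assert copy[0:5] == b"pixel"
```

The "hardware-level optimizations for macOS" framing is marketing-flavored, but the underlying point - fewer copies, less time moving bytes - is real.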

[+] Someone|5 years ago|reply
On the other hand “16 gigs ram is the baseline if you plan to do anything more than facebook and instagram” is something to cry about.

Programmers working on products with a billion users should realize that every kilobyte they save decreases world-wide memory usage by a terabyte.
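
That arithmetic checks out (in decimal units): one kilobyte saved per user, times a billion users, is a terabyte:

```python
# 1 kB saved per user, multiplied across a billion users (SI units).
bytes_per_kilobyte = 1_000
users = 1_000_000_000
total_bytes = bytes_per_kilobyte * users

assert total_bytes == 10**12   # 10^12 bytes = exactly 1 terabyte
print(total_bytes)             # -> 1000000000000
```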

[+] raxxorrax|5 years ago|reply
I have an M1 from work (which I now mainly use for private stuff...) and it does have 16GB of memory. It cost a bit less than $1700 (after taxes), I believe, although I am not sure. I know it has an upgraded GPU compared to the standard model in some form, but I didn't know about the memory.

As for the device: it is neat, but not revolutionary to any degree. I paint and model and can run Blender/Krita just fine; they are even quite performant. This is through emulation, since I don't have native builds for ARM. Maybe those have become available in the meantime, but you don't notice the emulation at all.

But it won't be the end of x86 in my opinion. Why would it?

[+] thefz|5 years ago|reply
Don't forget the entry-level 256GB SSD. With 1TB drives around 90€ at retail.
[+] agloeregrets|5 years ago|reply
I mean, that price tag is being swung in a kind of silly way. The Mini with 16GB of RAM is like $899. At that price the CPU has 8 cores but 50% greater single-core performance than basically anything in the price range. In lightly threaded or thread-racey tasks the M1 will outperform any chip on the market for most people working on their machine. In my case, we build, run, and run tests on a very large TypeScript app in half the time of any single Intel chip we have ever tested, including a desktop i9. I'm not defending the pricing of their RAM or storage upgrades; those are nuts. But the pricing you are comparing with here is for machines that include a LOT of gravy in the build sheet intended to drive your point, such as a 4.5K display or TBs of PCIe SSD.

Did I mention that the case I was describing was the very base MacBook Air with 8GB RAM cross-compared with an i9 64GB machine?

You are kinda making a blanket statement that is a little unfaithful to the intent of any of these machines. None of these machines are intended as 'pro' machines; even the 'Macbook Pro' is just the low-end model, and it is three times as powerful as the outgoing model. Sure, you can spec one to the moon to make a price point, but that's the story of anything.

[+] paulpan|5 years ago|reply
I wonder if Apple's strategy is to push macOS developers to optimize for certain SOC cadences, rather than traditionally having to target every system configuration possible. Hence they opted to offer only limited SKUs of the M1, differentiated only by RAM and GPU cores.

An analogy is gaming consoles - the hardware is fixed for X years, so game developers know exactly what to target and can make better-looking and better-performing games over the cycle of that console. Compare that to, say, Windows 10, which has to run on an almost infinite number of hardware configurations.

This is actually similar to their approach to iOS and iPhones - iOS versions span multiple iPhone cycles, but there are limits. For example, iOS 13 was supported on the iPhone 6S through the iPhone 11, or the A9 through the A13 SOCs. There's probably some correlation between good performance and the tight SOC-to-OS coupling.

We'll likely see similar, limited configurations for future Apple M* SOCs.

[+] otabdeveloper4|5 years ago|reply
> This in the time when 16 gigs ram is the baseline if you plan to do anything more than facebook and instagram.

Facebook and Instagram is probably the upper bound on memory-hungry apps. (The average webpage consumes more memory nowadays than gcc -O3.)

[+] ghshephard|5 years ago|reply
The 8 gigabyte machine is fine for 90% of people. I've recommended just getting that platform to a number of people, and none of them have had issues.

I've got an 8 gigabyte MBAir, and it never stutters. Meanwhile, on my 16 GByte Dell XPS (Ubuntu 20.04) I routinely live in fear of exceeding my Chrome tab quota, because I know it will bring the system to a crashing halt. Somewhere around 45 tabs is the point where it all comes down.

Meanwhile, I don't even think about how many safari tabs I have open (hundred+) - and 8-10 applications open at the same time.

Different operating systems have different models of swapping and degrading performance. Apple has nailed it.

[+] neximo64|5 years ago|reply
Went from 64GB to 8GB. Didn't notice a difference - in fact it is better. I kind of get your mindset, but they have done some fuckery somewhere to get it to work so well.
[+] speedplane|5 years ago|reply
The M1 processor is a direct result of the death of Moore's law. It's an amazing processor, but a sad sign of things to come.

The performance gains from Moore's law have typically come from shrinking die size. That has ended; you can't juice more performance out of general-purpose CPUs. If general-purpose processors no longer advance quickly enough, the only way to get performance gains is to build custom chips for common specific tasks. That's what we're seeing now with the M1. The M1 buys us a few more years of exponential-appearing performance gains, but it's a one-trick pony. You can turn code into an ASIC once, but after that, your performance is at the mercy of the foundry and physics.

The death of Moore's law has many consequences, the rise of ASICs and custom co-processor chips is just one of them.

[+] slver|5 years ago|reply
> The M1 processor is a direct result of the death of Moore's law.

I know most people misunderstand Moore's law, but this is HN, so I expect better:

https://en.wikipedia.org/wiki/Moore%27s_law#/media/File:Moor...

Moore's law is quite alive and showing no signs of problems.

> The performance gains from Moore's law have typically come from shrinking die size.

Moore's Law is about number of transistors. Not about their size, and not about performance.

And it's ESPECIALLY not about linear core performance.

> That has ended, you can't juice more performance from general purpose CPUs.

You don't need to, they're fast enough. Performance is expanding in other areas like GPU and ML.

> The death of Moore's law has many consequences, the rise of ASICs and custom co-processor chips is just one of them.

No, Moore's law is the very thing supporting them... You need extra transistors for those co-processors.
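
As a back-of-the-envelope check (all figures approximate: ~2,300 transistors in the 1971 Intel 4004, ~16 billion in the M1), a naive doubling-every-two-years projection lands in the right neighborhood:

```python
# Naive Moore's-law projection: transistor count doubles every ~2 years.
# Baselines are approximate public figures, used only for illustration.

def projected_transistors(base_count, base_year, year, doubling_years=2.0):
    """Project a transistor count forward from a baseline year."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

projection = projected_transistors(2_300, 1971, 2020)  # Intel 4004 -> 2020
actual_m1 = 16_000_000_000                             # Apple's stated M1 count

# The projection (~5e10) and the real M1 (~1.6e10) are within a small
# factor of each other after 49 years of compounding - the "law" is a
# statement about this transistor-count curve, not clock speed or IPC.
print(f"projected ~{projection:.1e}, actual ~{actual_m1:.1e}")
```
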

[+] nightowl_games|5 years ago|reply
On his recent Lex Fridman podcast appearance, Jim Keller speaks to exactly this mindset. He says that they've been heralding the death of Moore's law since he started and that the "one trick ponies" just keep coming. He says he doesn't doubt that they will continue.
[+] GuB-42|5 years ago|reply
I get your point but...

> The M1 processor is a direct result of the death of Moore's law.

It is a bit ironic, since the M1 is a 5 nm processor, currently the finest process, and I think that plays no small part in its success. A very Moore's-law-esque solution.

[+] tobyhinloopen|5 years ago|reply
Death of Moore's law? Hmm. Meanwhile I just got a R9 5950X and it is drastically faster than my 5-year-old i7.

There must be some doubling of transistors in there, right?

Also maybe buying a 5950X at the birth of a new generation of ARM CPUs wasn’t the wisest choice.

Or maybe it is, idk.

[+] Tade0|5 years ago|reply
Moore's law is safe for at least two generations of chips, for which there are processes developed.

As we speak people are putting together 3nm(TSMC) designs, which will ship once the infrastructure is there.

[+] imtringued|5 years ago|reply
Scaling will continue for at least another 10 years.
[+] nhourcard|5 years ago|reply
The death of Moore's law made us wonder: there is so much effort going into optimising hardware, but less emphasis on making software more efficient. Our view is that there is a lot to be done on software efficiency to mitigate the limitations in hardware progress. See the company we founded in my profile; this was one of the drivers to build it.
[+] Someone|5 years ago|reply
Let's not debate whether we really are at the end of Moore's law (not a foregone conclusion, given that the M1 is the first CPU at 5nm).

Why do you find it sad that we now have a holistically designed system, rather than the glueing together of ever more powerful parts that desktop PCs have gotten away with for a few decades?

[+] apatheticonion|5 years ago|reply
Can't wait for an iPhone Pro with an M1 processor that I can plug into a thunderbolt/usbc dock, run monitors, a keyboard, ethernet, and have it running MacOS when in desktop mode.

EDIT: A little context; when I am in the office, I use vscode over ssh to connect to my desktop PC at home. My desktop takes care of my language server, syntax highlighting, compilation and vscode forwards my ports and spins up terminals. All I will ever need is a low powered computer that can run my browser and tooling fast enough.
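
A minimal sketch of that kind of setup, assuming hypothetical host names (VS Code's Remote - SSH extension reads the standard ~/.ssh/config):

```
# ~/.ssh/config - hypothetical entry for a home desktop used as a
# remote dev box. VS Code's Remote - SSH extension picks this up,
# runs its server component on the remote side, and forwards ports
# automatically.
Host home-desktop
    HostName desktop.example.com   # or a VPN/tunnel address
    User dev
    # Keep the connection alive across flaky office Wi-Fi
    ServerAliveInterval 30
    # Reuse one TCP connection for repeated ssh sessions
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```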

[+] yalogin|5 years ago|reply
The difference is the advantage Apple has with their HW+SW vertical integration. It's as simple as that.

Intel sells CPUs, so it creates ranges of CPUs to make money. They advertise clock speed and put the higher ones on a pedestal; that is how they can charge more money. The OEMs just used that playbook and developed their own marketing stories on top of Intel's. Either no one tried to differentiate or they just didn't have the power to fight it.

Apple has a lot going for it in that scenario. They never had proliferation of models and always kept the number of options to a minimum. They also don't deal with volume, so they didn't have to do 20 variations of mac mini or the iMac. They kind of did their own thing even with the intel macs. Now with their own processor they were in a position to double down and make the whole product line even more efficient.

Like the article said, they couldn't have done it if the M1 weren't clearly better than the competition.

[+] outside1234|5 years ago|reply
x86 is dead - first in consumer, then in cloud.

It is hard for me to see how this ends any other way. The creative class (us) will quickly move to largely all-ARM computers within 4 years.

It's not hard to see from there how software will be even more optimized for ARM variants than x86, and how the scale of both mobile and consumer computing will slowly push x86 out of the datacenter as old software that relies on x86 is retired over the next decade.

People won't want to develop on x86 and deploy to ARM. ARM is more power efficient, which is important in the data center too. We already scale by the core in the cloud, so why not just heap on a few more cheap cores if we need them to match x86 (which right now it looks like we might not need to).

Tell me how I am wrong.

[+] paulpan|5 years ago|reply
The article makes a good point on positioning, but I'm not sure if it's due to lack of data points.

Sure, Apple seems to be using the M1 across every price segment of their products, but the M1 is also literally the first iteration of their shift to running macOS on ARM rather than x86. This mass push mainly serves to speed up the transition.

No doubt there'll be a higher-performing SOC for Apple's Pro lineup, such as the Mac Pro and MacBook Pro. History suggests this, since Apple developed the A*X chips specifically for the Pro lineup of iPads. The main question is: how many concurrent SOCs will Apple maintain? Just 2, as they've done for the iPhone & iPad Pro divide, or potentially more?

[+] tyingq|5 years ago|reply
I remain interested in seeing how much of Apple's lead is the process size, and how much is engineering prowess.

That is, would a more generic new ARM Neoverse on 5nm perform at roughly the same clip? I suppose AWS's Graviton 3 would be the first place to see that, or something close to it.

[+] rektide|5 years ago|reply
The EPYC 7401P still represents the pinnacle-for-its-time value offering to me: 24 cores, single socket, on a 14nm process, launched July 2017 for $1075. Just an amazing breakthrough processor. At the time there were supposed to be X300 and A300 chipsets coming - basically just boot BIOS - to make ultra-low-cost motherboards possible. There have been improvements to architecture & IPC since then, but overall it feels like we've been headed in reverse in terms of chips that get put into medium/large-ish sized chassis.

It has been remarkable what a mockery Mac has made of mobile chips, and now of desktop chips. At a way more reasonable price point.

> but the company’s decision to eschew clock speed disclosures suggests that these CPUs differ only modestly.

I forget exactly when, but the first Google I/O where Google started offering simply "Intel Core i5" or i7 without the model number (2017?), without revealing speeds, was a huge jumping-the-shark moment for me. A post-competitive market, where speeds were good enough, where reputation & market presence dominated over metrics & comparable factors. I don't think chasing specific GHz & cache-size numbers &c is super rewarding or important, but it felt like the first time we were being sold an unspecified system, where obvious inquiry into what we were buying was blocked.

This is somewhat the opposite of this article: Apple has found a good-enough CPU to sell everywhere. But I still think the real truth is that the providers, those building systems, have begun to refuse to compete. They refuse to detail what they are offering. AMD has been without competition and the new 24c chip is considerably more expensive, albeit, yes, with more IPC, but it still hurts me a little. Looking at Google no longer allowing us any idea of what kind of Core i5 or whatever chip we're getting, they beat this article to the punch almost half a decade ago. Consumers haven't been given respect, haven't been allowed to know what wattage, what caches, what Hz their chips use for a long time now. Google started that; Google pushed post-knowable computing upon us. Apple is merely following up on it, merely delivering what Google started, at a far better price point, with far better underlying technology.

[+] rvanlaar|5 years ago|reply
What I find lacking in the article is an apt comparison with AMD's Ryzen chips.

Those are all the same chiplets, just binned differently. High performance ones go into the 5800x and 5950x. Lower performance ones into the 5600x and 5900x.

Which seems to be the same thing Apple does, with a slight naming difference: calling everything M1 instead of naming the CPUs.

[+] sebyx07|5 years ago|reply
At least on x86, you can install Windows / Linux / Hackintosh. With M1 you can install HackinWindows / HackinLinux / Mac, if you understand the joke.

The good part with M1 is that it forces AMD and Intel to make better CPUs. Competition is always good. The not-so-good part is that Apple might start a trend of higher prices for CPUs.

[+] adhambadr|5 years ago|reply
Economies of scale. My bet is Apple will develop a cluster of M1s and call them M-power-2 to address the 1.5TB-RAM workstation market you mentioned. It will be practically an array of M1s (or next-gen) together. The way the M1 is used from iPad to iMacs is genius in terms of cost reduction at scale, and for an end consumer, who doesn't care who else uses their chip, I get a good $/CPU-power bargain. Tim Apple being the supply-chain guy he is, I see him doubling down and scrambling engineers to use more M1s in an array to build a stronger core. Maybe an M1-based server rack for AWS?
[+] fireattack|5 years ago|reply
I was always taught ARM (and M1), being a RISC architecture, isn't as "capable" as x86 in some way, whatever it means.

I am no longer sure if that's still the case, since they seem to work just as well, if not better (energy efficiency). Of course, it's not exactly an apples-to-apples comparison since Apple upgraded so many other things, but I just didn't see any mention of the limitations of being RISC in these articles.

Could someone enlighten in this respect for an average Joe who knows nothing about hardware?

[+] sudhirj|5 years ago|reply
Certainly seems that way. It looks like there's going to be an M1 chip for 99% of folks, which works fine for all non-CPU-pegging work (Air, 13" MBP, 24" iMac, Mini), an M1.Large for stuff that pegs the CPU (27" iMac, 16" MBP), and an M1.XL for the 0.01% of Mac Pro folks who drop 5 figures USD on computers. But I'd expect the numbers to decrease logarithmically and the prices to be multiples: M1 machines from $700 to $1700, M1.L from $2000 to $4000, and M1.XL from $6000 onwards.
[+] morpheuskafka|5 years ago|reply
> Second, Intel and AMD both benefit from a decades-old narrative that places the CPU at the center of the consumer’s device experience and enjoyment and have designed and priced their products accordingly, even if that argument is somewhat less true today than it was in earlier eras.

I would have argued that memory was far more important than CPU in quickly judging the performance of a machine, once the 7th generation Intels made dual-core obsolete. But the M1 seems to buck this trend a bit as well, given that it only has an 8GB and 16GB variant and its new unified memory model makes traditional estimates of how much memory is needed less important. Some workloads such as an in-memory database won't change, but the memory usage for GUI rendering, graphics, etc. can take advantage of much faster accesses. And, with SSDs which are now considered a must-have for any serious machine, paging to disk is far less expensive than before in any case.

On another note, the M1 iPad Pro is the first time Apple has ever officially confirmed or marketed, let alone offered a choice in, the RAM for an iOS/iPadOS device.

[+] giorgioz|5 years ago|reply
The M1 is the fastest Apple CPU YET. I suspect in the fall they will release the M2 for the 15-inch MacBook Pro.

They have also delayed releasing the 15-inch MacBook Pro with Intel on purpose, in my opinion. When they release the 15-inch MacBook Pro with the M2, they will compare it with an Intel version carrying a 3-year-old processor.

I don't trust the Apple benchmarks much. They are choosing what to compare and which metrics to use. Let's wait 3 years, when the dust has settled, and we'll be able to compare apples with apples.

Let's see also if Apple will be able to keep up the improvement of in-house CPU+GPU against ALL COMPETING MANUFACTURERS OF CPU & GPU. What if Nvidia or AMD or Intel comes out with a huge leap? Apple then won't be able to take advantage of it. In my opinion the M1 is the new PowerPC. In 10-20 years from now Apple will have slow in-house hardware and we'll be going back to off-the-shelf hardware, like when Steve Jobs moved from PowerPC to Intel.

[+] adrian_mrd|5 years ago|reply
What’s also lacking from a marketing perspective is the “Intel Inside” campaign - which was incredibly successful for the Wintel monopoly in the 1990s and early 2000s.

Seeing the sticker or hearing this slogan used to imply a premium or cachet to the product/hardware to the average Joe or Joanne.

No longer; Intel's brand recognition has really taken a hit in the past decade.

[+] domano|5 years ago|reply
People underestimate AMD and lump it in with the current Intel chips when talking about the M1. Ryzen is not far behind the M1 in single-thread performance and beats it in multi-core. If Intel had not made all kinds of exclusivity deals with laptop makers, the narrative would be totally different, in my opinion.