I am surprised the article doesn't talk about Intel CPU performance degradation due to the mitigations for speculative-execution timing attacks. The bugs are more severe on Intel CPUs, and as a result Intel CPUs took a much bigger performance hit.
This is something that always seems to go under the radar.
My Intel i7 4790K has lost at least 40% of its speed since I originally bought it, and for years I thought Intel machines were getting slower without any actual proof, whereas the AMD machine I returned to after years away felt as fast as ever.
Now that microcode updates and similar mitigations are getting more coverage, I'm certain that this is what it is.
AMD's partner program is meh, whereas Intel is really partner focused - if anything breaks, I can get a replacement shipped to me in advance the next day for ~3-5 years (depending on the component) - but I am seeing much more demand for AMD as of late.
Chiplets are what gives AMD an absolutely brutal advantage. Their high-end chips do not need expensive large dies -- just a few small ones. Yields are much better, and they can bin each chiplet separately. They also don't spend the expensive top-notch process on the I/O part of the CPU. Intel might be hard pressed to catch up to this -- sure, the 7nm EUV process in a bit over two years will very likely be a serious jump, but if you are comparing similarly priced server CPUs, even that is very likely to be simply not enough against this chiplet strategy. For the foreseeable future, inertia alone is the only reason for anyone to buy an Intel server chip.
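To put rough numbers on the yield argument, here is a back-of-the-envelope sketch using a simple Poisson defect model. The defect density and die areas below are made up purely for illustration, and packaging/test costs are ignored, so treat it as a toy model rather than real economics:

```python
import math

def poisson_yield(area_mm2, d0_per_cm2):
    """Fraction of dies with zero defects under a simple Poisson yield model."""
    return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

D0 = 0.3              # hypothetical defects per cm^2 for a young leading-edge process
mono_area = 700.0     # one big monolithic server die, mm^2 (made up)
chiplet_area = 75.0   # one small compute chiplet, mm^2 (made up)

mono_yield = poisson_yield(mono_area, D0)
chiplet_yield = poisson_yield(chiplet_area, D0)

# Silicon cost scales roughly with area / yield (ignores packaging and test)
mono_cost = mono_area / mono_yield
chiplet_cost = 8 * chiplet_area / chiplet_yield  # 8 small dies, similar total silicon

print(f"monolithic die yield: {mono_yield:.1%}")    # ~12%
print(f"chiplet yield:        {chiplet_yield:.1%}")  # ~80%
print(f"silicon cost ratio (monolithic / chiplets): {mono_cost / chiplet_cost:.1f}x")
```

The gap only widens as dies get bigger or the process gets less mature, which is why the strategy matters most for the biggest, most expensive parts.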
So the chiplet strategy is clearly paying dividends for AMD, but I'm curious what has changed to allow this idea to be so effective. It's not like the concept of multi-die CPUs is new; Intel even implemented it on their legendary Q6600, which was basically two Core 2 Duo dies in a single package. The issue with the Q6600's approach was that communication between cores on separate dies was orders of magnitude slower than communication between cores sharing a die. Is AMD's success down to recent advances in branch prediction and core scheduling optimization?
But once Intel does get on the chiplet train properly, their EMIB technology means they only need a small 'bridge' piece of silicon to connect their chiplets, in contrast to what TSMC offers, where all the chiplets need to sit on a large silicon interposer. That is more expensive, and it limits the size of the chip you can produce (as there is a limit to the interposer size). They also announced that Foveros stacking thing and now co-EMIB too; on paper Intel's new packaging tech looks like it could really help them. They just need to make use of it.
Intel announced a 56-core server CPU (presumably purely to try to keep up with AMD), which is just two 14nm 28-core dies "glued" together, like Zen 1. But AMD clearly has a lead on getting this approach to market and refining it into their current chiplets.
We were constantly saying this about AMD until Zen, and Zen is largely credited to Jim Keller. Where did Jim Keller go after Zen? Intel. I'm lowkey afraid that AMD will run out of steam after one or two gens and Intel+Keller will have just finished developing an insane architecture that brings us right back to the pre-Zen era.
The author's comments on cache sizes are a bit reductive. Not all "L3" is created equal, and designers always make tradeoffs between capacity and latency.
In particular, the EPYC processors achieve such high cache capacities by splitting L3 into slices across multiple silicon dies, and accessing non-local L3 incurs huge interconnect latency - 132ns on latest EPYC vs 37ns on current Xeon [1]. Even DDR4 on Intel (90ns) is faster than much of an EPYC chip's L3 cache.
Intel's monolithic die strategy keeps worst-case latency low, but it increases costs significantly and totally precludes caches in the hundreds of MB. Depending on the workload, that may or may not be the right choice.
[1] https://www.anandtech.com/show/14694/amd-rome-epyc-2nd-gen/7
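To make the "depends on workload" point concrete, here is a tiny expected-latency sketch using the numbers above. The hit rates, the ~38MB monolithic L3 size, and the assumption that a local EPYC slice hit costs about the same as a Xeon L3 hit are invented for illustration, not measurements:

```python
XEON_L3_NS     = 37   # monolithic L3 hit (from the AnandTech numbers above)
XEON_DRAM_NS   = 90   # DDR4 on Intel
EPYC_LOCAL_NS  = 37   # assumed equal to the Xeon figure to keep the sketch neutral
EPYC_REMOTE_NS = 132  # hit in another chiplet's L3 slice

def avg_latency(cases):
    """cases: list of (probability, latency_ns) covering all accesses."""
    assert abs(sum(p for p, _ in cases) - 1.0) < 1e-9
    return sum(p * ns for p, ns in cases)

# Working set too big for a ~38MB monolithic L3, but spread across EPYC's slices
big_xeon = avg_latency([(0.35, XEON_L3_NS), (0.65, XEON_DRAM_NS)])
big_epyc = avg_latency([(0.25, EPYC_LOCAL_NS), (0.75, EPYC_REMOTE_NS)])

# Small working set that fits in the local slice on either design
small_xeon = avg_latency([(0.95, XEON_L3_NS), (0.05, XEON_DRAM_NS)])
small_epyc = avg_latency([(0.95, EPYC_LOCAL_NS), (0.05, EPYC_REMOTE_NS)])

print(f"large working set: Xeon ~{big_xeon:.0f}ns, EPYC ~{big_epyc:.0f}ns")
print(f"small working set: Xeon ~{small_xeon:.0f}ns, EPYC ~{small_epyc:.0f}ns")
```

Under these made-up mixes, mostly-remote L3 hits on EPYC really can average out slower than mostly missing to DRAM on a monolithic die, while cache-friendly workloads see no penalty at all - which is exactly the capacity-versus-latency tradeoff being described.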
Very interesting info on the overclocking difference between TSMC 7nm and Intel 14nm+++; however, a few misconceptions:
- Intel staying at low core counts probably wasn't evil intent: the software wasn't there, and Intel had better single-thread perf than AMD. AMD was basically forced into more cores earlier because of weakness in single thread. Today, the software _is_ there (well, mostly) and we can all take advantage of more cores.
- Why did Intel fall behind? Easy: Brian Krzanich's hubris pushed the process too hard, taking many risks, and the strategy failed spectacularly.
- PCIe Gen4 does matter. M.2 NVMe has been read-limited by the interface for a long time already (NAND bandwidth scales trivially) - see the quick bandwidth arithmetic after this list. The I/O section of this article is basically nonsense.
- There is nothing magical about x86, nor about the AMD and Intel design teams. If the market is there, there will be competitive non-x86 alternatives. The data center market is pretty conservative for good reason - but ML is upending a lot of conventional wisdom, so it'll be interesting to see what happens.
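For what it's worth, the interface ceiling falls out of simple arithmetic. The sketch below is just the headline link rate times the 128b/130b line encoding over four lanes, ignoring packet and protocol overhead, so real drives land a bit below these figures:

```python
def pcie_x4_gb_per_s(gt_per_s):
    """Rough upper bound for a 4-lane link with 128b/130b encoding (Gen3/Gen4)."""
    lanes = 4
    encoding = 128 / 130                 # line-code efficiency
    bits_per_s = gt_per_s * 1e9 * encoding * lanes
    return bits_per_s / 8 / 1e9          # -> GB/s

print(f"PCIe Gen3 x4: ~{pcie_x4_gb_per_s(8):.1f} GB/s")   # ~3.9 GB/s, where fast Gen3 drives already sit
print(f"PCIe Gen4 x4: ~{pcie_x4_gb_per_s(16):.1f} GB/s")  # ~7.9 GB/s of headroom with Gen4
```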
Intel is the last US company with a bleeding edge fab. All other fabs below 15nm are outside the US, except for one Samsung fab in Texas. When Intel falls behind, that's the end of the US as a leader in the semiconductor industry.
They look at the military angle specifically ("Wall Street's short-term incentives have decimated our defense industrial base and undermined our national security.") but it's wider than that.
Example quotes:
> ...in the last 20 years, every single American producer of key telecommunication equipment sectors is gone. Today, only two European makers—Ericsson and Nokia—are left to compete with Huawei and another Chinese competitor, ZTE.
> ...public policies focused on finance instead of production, the United States increasingly cannot produce or maintain vital systems upon which our economy, our military, and our allies rely.
As for chip production capacity: "N. America" (is there anything in Canada, or is this just the U.S.?) accounts for just 12.8% of worldwide capacity, and 3/4 of capacity is in Asia: https://anysilicon.com/semiconductor-wafer-capacity-per-regi...
The idea of globalization was that where production is located does not matter - market "magic" somehow makes it irrelevant. I don't see how it does not matter when the imbalance becomes as extreme as it is nowadays. "Finance" is not an industry (if anyone wants to argue with me about the definition of that word, I refer to https://www.lesswrong.com/posts/7X2j8HAkWdmMoS8PE/disputing-... -- you know what I mean).
I'll repost a previous post I made (https://news.ycombinator.com/item?id=15484735):
To give some context as to what DragonFly BSD is: DragonFly BSD was forked from FreeBSD 4.8 in June of 2003 by Matthew Dillon, over a difference of opinion on how to handle SMP support in FreeBSD. DragonFly is generally considered to have a much simpler (and cleaner) implementation of SMP, which has allowed the core team to maintain SMP support more easily without sacrificing performance (numerous benchmarks show DragonFly outperforming FreeBSD [5]).
The core team of DragonFly developers is small but extremely talented (e.g. they have frequently found hardware bugs in Intel/AMD CPUs that no one else in the Linux/BSD community had found [6]). They strive for correctness of code, ease of maintainability (e.g. supporting only the x86 architecture), and performance as project goals.
If you haven't already looked at DragonFly, I highly recommend you do so.
[5] https://www.dragonflybsd.org/performance/
[6] http://www.zdnet.com/article/amd-owns-up-to-cpu-bug/
It's weird for Intel to be falling behind AMD. Intel has about 10x the revenue and 10x the number of employees. Yet AMD is able to compete directly, and even beat Intel in some areas.
Good for AMD, but I'm more interested in an explanation of how Intel allowed this to happen.
Intel ties its architectures to its foundry nodes, since it both designs the chips and produces them, unlike AMD. This means it can get some extra performance, but it also means that if there are problems with a new node, like the ones they had with 10nm, everyone just has to sit on their hands until it's resolved. Going forward, they've decided to decouple their architectures from their nodes so that this doesn't happen again. Despite what other commenters are saying, this issue isn't really a result of Intel getting complacent. Intel's original plan for its 10nm node was actually extremely ambitious, both in transistor density and in the new techniques it introduced. Too ambitious, as it turned out.
Intel has always bet hard on their homegrown fab process and efficiencies winning over competitors.
With 10nm, it's years overdue (we're now on 14nm++++, if my count is right), and Intel's just been incrementally iterating on Skylake in the meanwhile.
I'm really looking forward to the next generation or two, when we see non-mobile 10nm+ chips from Intel and whether they can make up the ground lost to Spectre-class mitigations and AMD's gains.
The writing was on the wall years ago. Intel became one of those "big company" places without much of a competitive spirit, and they started focusing more on pet projects (technical and not), internal politics, and all the other decay products of bigcorpism instead of continuing to advance chips. AMD, meanwhile, had existential pressure to get better at its core business, and it eventually worked. It's a pattern you see over and over again.
Somebody made a similar comment [1] not long ago, and I'll copy my reply below.
I don't particularly like Intel, but that is hardly a fair comparison.
TSMC has 48K employees and $13-15B in R&D.
And even that excludes all the ecosystem and tooling companies around TSMC, compared to Intel, which does it all by itself. Not to mention Intel does way more than just CPUs and GPUs: also memory, networking, mobile, 5G, WiFi, FPGAs, storage controllers, etc.
[1] https://news.ycombinator.com/item?id=20646790
I'm not sure such a comparison can hold up, e.g. those employee numbers probably include the people working in Intel's foundries, whereas AMD has outsourced that to TSMC.
Intel has apparently been milking its comfortable first place for years. So it's not weird - they've been coasting because they thought the race was won.
I was a little surprised, but not shocked, that Intel's best reaction to the new AMD processors was so underwhelming.
Yet it's what we expect in most markets - and what happened after IE6 (or Chrome) won. Innovation stops, differentiation by anti-consumer means ramps up.
They put a lot of their beans into stuff that died, like Atom. "12,000 worldwide, 11 percent" of the Intel workforce was let go because of the Atom chip bugs and failures.
In all seriousness, it is a miracle Intel is able to compete at all.
Pros of higher headcount: theoretically lower transaction costs for highly specialized people to work together.
Cons of higher headcount: more command and control; fewer competing ideas; no market mechanism to evaluate the best ideas; massively increased politics and executive jockeying; way more middle managers and fewer entrepreneurs; and continuing to invest in failed ventures. I'm sure there are many more cons.
Intel still has the market share, and last time I checked, AMD instances in AWS have less than half the performance of similarly priced Intel instances.
I am an AMD fan and my laptop is a Ryzen, but I would not say AMD has already won. It is catching up fast, but it can't stop innovating.
> (read: Intel trying real hard to keep people on 4 cores so they could charge an arm and a leg for more)
There's a quote, something along the lines of "You have to cannibalise your own business. If you don't, someone else will." A lesson Intel chose to forget because it made them more money, right up until it didn't.
I always felt Apple was good at this, in the Steve Jobs era, anyway. Everyone said the iPhone would kill the iPod, but that didn't matter in reality because the iPhone opportunity was so much larger.
I think number one is definitely wrong. Recent scrutiny led reviewers to discover that not all boards are created equal. While the differences aren't drastic, on some boards you can't reach the boost clock written on the box.
Intel has been in this situation before and found its way out of it. They may pull another rabbit out of the hat like they did with the Core architecture, or they could always just buy AMD.
> on a Zen 2 system if you increase the voltage to push frequency you also wind up increasing the temperature which retards the maximum possible stable frequency
Well, that effect is there on Zen 1 to an extent, but you can overpower it with >1.4V.
Eh, I don't really like text that fills all the way to the edges. It's a bit difficult to read that way. The http://bettermotherfuckingwebsite.com/ posted below definitely reads better to me (though expanding line length a bit might still apply to longer articles).
I picked up a tidbit about optimal line widths in one of my courses and it's stuck with me. I think back to it whenever I see websites that don't constrain the width.
Most users run full-screen browsers. Yes, we run full-screen browsers across our 1920, 2560 and 3840 pixel wide screens. There is no way a web designer could or should say "well, that's the fault of users". That's just a reality.
Users want a readable-width column in front of their eyes. That is: it shouldn't be full width, it has to have a max size that is smaller. And the column has to be centered because the left edge of a big screen is way too off center to read comfortably.
Agree on the minimalist design being a breath of fresh air, though. It just needs reasonable paragraph formatting and it's perfect.
This comes close to a design approach I call Spartan Webdesign [0]. The UX on mobile devices is not there yet, but I agree this is one of the better sites we've seen here in a long time :-)
[0] https://www.sinax.be/blog/general/guidelines-for-spartan-web...
The line length just expands to fill the screen. Which means if you happen to have the "right" device/screen size, it will look good and be easy to read. But on many (probably most) screens the lines will be far too long to be easily readable.
I'm not a fan of the text stretching across the whole screen, but I do like the use of <ul> to slightly indent the paragraphs with respect to the headings. I don't know how common that is, but it's the first time I've seen it.
That covers CPU core counts, temperatures, and so on. But does it hold at the system level? Suppose you run Nvidia chips for AI - does this matter, and how? If you use Adobe Premiere or Apple's equivalent, does it matter? Does the CPU still matter?
Should Intel really be considered a U.S. company these days?
I tend to consider companies like Intel (and IBM, and Apple, and Microsoft, etc.) international companies.
What a great time to be in the market for a new machine, lots of hot air (literally) and throttling with the 6/8 core Intel i9s.
I actually used this template for an internal development tool and everyone loves how fast and simple it is.
Somewhere along the line people just want to make Web Apps, even when the app itself is no different from a page.
A tiny bit of CSS would make it perfect.