top | item 14563780

AMD EPYC 7000 CPUs: 32 cores, 64 threads, 8 memory channels, 128 PCIe lanes

242 points | mrb | 8 years ago | wccftech.com

93 comments

[+] mrb|8 years ago|reply
The Zen core had a pretty good (but not that amazing) success in the consumer market, because although people admired its multi-threaded performance, they were merely lukewarm about its single-threaded perf, which "just" almost matches Intel's. But oh boy, the Zen core in the server market is going to make a killing. Servers are all about multi-threaded performance (hence why 80% of the server market is dual socket). And it looks like a single socket EPYC is beating a dual socket Xeon... ouch. Finally a good kick in Intel's resting bottom.
[+] noir_lord|8 years ago|reply
> The Zen core had a pretty good (but not that amazing) success in the consumer market, because although people admired its multi-threaded performance, they were merely lukewarm about its single-threaded perf, which "just" almost matches Intel's.

That's highly variable; I was actually pretty astounded by its single-core performance and completely blown away by its multi-core performance.

I built a 1700 (not even a +) for work a few weeks ago and I keep running into things where I pause and think something crashed because it can't have finished that fast...

Yesterday's was 5 GB of mixed data in 18 s (turns out the SSD is the bottleneck). If I wasn't busy with a new job I'd be trying that 5 GB of data out of a RAM disk just to see how fast pigz (love pigz by the way, multithreaded gzip) can go with 8 cores/16 threads.
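For anyone curious how pigz gets its speedup, the core trick can be sketched in a few lines of Python (a rough illustration, not pigz's actual implementation; function names are mine): split the input into chunks, compress each chunk as an independent gzip member in parallel, and concatenate, since concatenated gzip members form a valid .gz stream.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def gzip_member(chunk: bytes) -> bytes:
    # wbits=31 asks zlib for gzip framing, so each chunk becomes an
    # independent gzip member; concatenated members are a valid .gz stream.
    co = zlib.compressobj(level=6, wbits=31)
    return co.compress(chunk) + co.flush()

def parallel_gzip(data: bytes, workers: int = 8, chunk_size: int = 1 << 20) -> bytes:
    # Threads are enough here: zlib releases the GIL while compressing.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(gzip_member, chunks))
```

(Real pigz is smarter: it shares dictionary state across chunk boundaries to avoid losing compression ratio.)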

[+] onli|8 years ago|reply
> The Zen core had a pretty good (but not that amazing) success in the consumer market

In the communities I look at, the only Intel processors still getting regularly recommended are the Pentium G4560 and the Intel i7-7700K. And that's from consumers to consumers.

Also, when I look at my meta-benchmark for games, I see Ryzen with really good results by now. That has changed a bit: it improved as games got optimized and RAM support got better. Before that, the i5 was still more viable.

I don't have insight into the whole market, but from my small observer position Ryzen does look like a pretty huge success.

[+] nodesocket|8 years ago|reply
While the raw compute performance numbers of Zen may be better than Intel's, don't underestimate other economic and business factors. AMD has always played second fiddle to Intel, and it is hard to shake that perception. I'd love to see a major cloud provider (AWS, Google, Azure) actively buying AMD chips and making them available for compute. However, I am still a bit skeptical this is ever going to happen. There is just too much risk for a cloud provider. Intel holds the market and nearly all mindshare in mainstream cloud computing.

Disclosure: $AMD shareholder

[+] Zenst|8 years ago|reply
Certainly for server software that is licensed per CPU and not per core, this is a major plus point for AMD.
[+] jacquesm|8 years ago|reply
I've used a dual cpu AMD Bulldozer for years (and still do), it's been rock solid and the 32 threads really helped with certain workloads. At the time the equivalent from Intel would have been far more expensive.
[+] myrandomcomment|8 years ago|reply
Well I know what will be in the next stack of servers my company buys (in 20RU chunks). It's all Linux+Docker for dev & test with some KVM. Right now we use 2xCPU 48 core Intel, 2x1G and 2X10G. 1RU form factor holds two of these servers. It's all about thread scale out for us. The more containers we can run per server = faster build and test throughput. Pretty cool AMD. Happy to have you back.
[+] RichardHeart|8 years ago|reply
TLDR: Intel's advantage of being able to clock higher gets removed because of heat in these high-density multi-core chips. 1P 32-core: $2k; 2P 32-core: $4k.

"...14% advantage of cores per rack that ship with their Naples platform compared to Intel's. On Intel, a singular rack will consist of 4704 cores while AMD's Zen based Naples Rack will ship with 5376 cores.

There's also a 14% advantage in VMs (Virtual Machines) per socket. Memory bandwidth sees a 33% advantage as AMD has 8 channels while Intel's Purley platform is configured for 6 channels per socket. The Intel platform also supports 24 DIMMs while AMD can support up to 32 DIMMs."

The article also says the release is the 20th of June.
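The quoted percentages check out against the raw figures (a quick sanity check; all numbers are from the article):

```python
# Quick sanity check of the quoted figures.
intel_cores_per_rack = 4704
amd_cores_per_rack = 5376
core_advantage = amd_cores_per_rack / intel_cores_per_rack - 1

# 8 memory channels (EPYC) vs 6 (Purley) per socket.
bandwidth_advantage = 8 / 6 - 1

print(f"cores per rack: {core_advantage:.1%}")        # 14.3%
print(f"memory bandwidth: {bandwidth_advantage:.1%}")  # 33.3%
```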

[+] ComputerGuru|8 years ago|reply
I just posted this yesterday about Naples/Epyc already having a huge advantage over Xeons for certain server workloads due to support for hardware-assisted SHA calculations: https://neosmart.net/blog/2017/will-amds-ryzen-finally-bring... , already supported by the Linux kernel and many open source crypto libraries.

I honestly had no clue this reveal was right around the corner. These numbers really do give AMD a fighting chance here.
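Since Python's hashlib goes through OpenSSL, which dispatches to the SHA extensions when the CPU and build support them, a rough throughput check is one way to see whether you're on the hardware path (my own sketch; the MB/s you see is machine-dependent, so no threshold is claimed):

```python
import hashlib
import time

def sha256_throughput_mb_s(mb: int = 64) -> float:
    """Hash `mb` megabytes of zeros and return a rough MB/s figure.
    On CPUs with SHA extensions (and an OpenSSL built to use them),
    this number jumps substantially versus the software-only path."""
    data = bytes(mb * 1024 * 1024)
    start = time.perf_counter()
    hashlib.sha256(data).digest()
    return mb / (time.perf_counter() - start)

print(f"{sha256_throughput_mb_s(16):.0f} MB/s")
```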

[+] jamesfmilne|8 years ago|reply
Surely these processors introduce another new level of NUMA dynamics? Each group of 8 cores has its own memory controller, own PCIe root complex, and then there is a crosslink between each group of 8 cores.

Up until now you would (potentially) have to consider which socket you are on, and where your memory or IO devices (PCIe) are.

Now you have the same considerations within a socket, as well as between sockets?

[+] wmf|8 years ago|reply
Previous Opterons were also MCMs with NUMA within the socket, although their performance was poor enough that many people probably never noticed they existed. If Intel makes cluster-on-die mode mandatory then they'll also have NUMA within the socket.
[+] sliken|8 years ago|reply
Not really.

So the Epyc is basically four Ryzens: you get 4x the cores, 4x the memory channels, and 4x the pieces of silicon.

So think of a single socket Epyc as a quad socket motherboard. In either case you have clusters of cores/cache connected to memory controllers and HyperTransport. For most workloads a NUMA-aware kernel does a pretty good job of minimizing hits to pages on other controllers. And it's not a particularly big deal when you miss: typically about a 10% penalty (latency and bandwidth).
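That "about 10% when you miss" figure translates into a simple expected-cost model (my own toy sketch; the numbers are illustrative only):

```python
def expected_slowdown(remote_fraction: float, remote_penalty: float = 0.10) -> float:
    """Average memory-access cost relative to all-local placement,
    given the fraction of accesses served by a remote controller."""
    return (1 - remote_fraction) + remote_fraction * (1 + remote_penalty)

# If the NUMA-aware kernel keeps 90% of pages local, the average cost
# is only ~1% worse than perfectly local placement.
print(round(expected_slowdown(0.10), 4))  # 1.01
```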

AMD makes all the I/O pins capable of carrying HyperTransport (or whatever they call it now) and PCIe.

This isn't particularly new, btw. MCMs (multiple chips per package) go way back to the Pentium Pro, if not before. Intel Xeons are all single chip, but have similar on-chip architectures. The 4-, 6-, and 8-core chips are pretty simple, but the larger core-count chips have a ring bus for one set of cores and another ring bus for the second.

But generally, for most workloads, the NUMA issues related to the newer chips aren't a particularly large hurdle to getting good performance. What I am concerned about, though, is how good the Epyc floating point is; I fear they are bragging about integer performance and not FP because they are behind on FP.

[+] Dylan16807|8 years ago|reply
It's more than that: I think the cores are still in clusters of 4 that talk over Infinity Fabric.

But there's no avoiding some kind of complex inter-core dynamics at this level. The Intel alternative is a bunch of ring busses that have different speeds to each core from any point. And this design makes every memory access go over the infinity fabric, so latency might be surprisingly even.

[+] dom0|8 years ago|reply
For most server workloads this plays a relatively minor role.
[+] smilekzs|8 years ago|reply
Haven't been following this closely, but has the random segfault problem [1] been addressed? I would imagine this is a bigger problem for servers almost constantly maxing out all cores and threads than for a desktop/laptop... Just imagine the horror when you write safe Rust code and get hit hard by heisenbugs in production...

[1]: https://community.amd.com/thread/215773

[+] examancer|8 years ago|reply
While the full root cause has not yet been found or resolved, the limited issues have been pretty reliably worked around by disabling ASLR. Whatever the root cause, it is likely the issue can be fixed through BIOS/microcode updates.

The number of people affected is low. My Ryzen machine has only ever run Linux, compiles a lot, and has never exhibited this behavior. Also, most new platforms have issues, even new server platforms. These will be worked through during the substantial validation that server OEMs will do.

Lastly, look up the errata list for any Xeon CPUs. Intel releases microcode updates for them several times a year to fix bugs. Modern CPUs are complex and will pretty much always have bugs. Luckily some combination of BIOS or microcode updates will almost always resolve them.

[+] my123|8 years ago|reply
It was addressed by better ECC usage on the micro-op cache in the latest AGESA, as far as I know.
[+] mschuster91|8 years ago|reply
So, this means a 2P system can pack 64C/128T, 256 PCIe lanes and 4 TB RAM?

What a monster. Pair this with a couple of Quadro GPU accelerators and you've got some serious all-round performance.

[+] __jal|8 years ago|reply
I think a 2P system still only gets 128 lanes. But still.

For the more boring among us (like me), swap the GPUs for 24-ish NVMe SSDs and a few 10G cards, and that is one hell of a DB server...
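The 2P tallies being thrown around can be checked quickly (a sketch; the per-socket core/channel figures are from the article, while the 128 GB DIMM size is my assumption for how you'd reach 4 TB):

```python
# Per-socket figures from the article; DIMM capacity is an assumption.
sockets = 2
cores = sockets * 32              # 64 cores
threads = cores * 2               # 128 threads (SMT)
dimms = sockets * 16              # 8 channels x 2 DIMMs per channel
ram_tb = dimms * 128 / 1024       # 4.0 TB with 128 GB DIMMs

# PCIe: each socket has 128 lanes, but in a 2P config half of each
# socket's lanes carry the inter-socket link, so ~128 usable lanes total.
usable_pcie_lanes = 128

print(cores, threads, dimms, ram_tb, usable_pcie_lanes)
```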

[+] tracker1|8 years ago|reply
If the Zen 2 / Rome series brings the power/heat down a bit, that will probably be around the time I seriously consider another upgrade... my i7-4790 desktop has been really good to me for a couple of years, but within another two I may be looking around again. Though the consumer variant will also be up for consideration.

Nice to see AMD competing again, knew they would get some ground in the server space looking at the Zen benchmarks on the consumer CPUs.

[+] TazeTSchnitzel|8 years ago|reply
The bottom of the article mentions a supposed leak of a 2018 server chip with 48 cores. I wonder if that's just an MCM with 6 chips, made feasible by the smaller die size and reduced thermals?
[+] examancer|8 years ago|reply
Kind of curious: how much lower do you want them to go on the power/heat metric? Your current CPU is an 84W TDP part. A Ryzen 7 1700 has twice the number of cores and draws 65W at stock settings. The 1700 can get hot, but really only if you overclock it.

Of course more performance per watt is always better and I too hope Zen 2 can be even better. But right now, in certain scenarios, Ryzen is already leading in both performance per watt and per dollar.
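To put a rough number on that (a crude sketch using core count as a proxy for throughput, with the TDP figures from the comment above):

```python
# Cores-per-watt comparison using the TDP figures quoted above.
i7_4790_cores, i7_4790_tdp = 4, 84      # Haswell desktop part
ryzen_1700_cores, ryzen_1700_tdp = 8, 65

ratio = (ryzen_1700_cores / ryzen_1700_tdp) / (i7_4790_cores / i7_4790_tdp)
print(f"{ratio:.2f}x cores per watt")  # 2.58x
```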

[+] ksec|8 years ago|reply
Remember that AMD's TDP covers the whole SoC, while for Intel you should include the PCH as well. At any rate, you should compare whole systems with exactly the same components apart from the CPU. And the difference is minimal.
[+] tonyplee|8 years ago|reply
It looks like this is putting four 8-core chips in the same package to scale up the core count, etc.

What are the technical limits if AMD were to 2x, 4x, or 8x this approach?

Only power/heat? IO should not be hard, since pins on an MCM should be able to scale out easily, right?

[+] sp332|8 years ago|reply
There's going to be a hit to I/O when one of those modules needs to access RAM that's on a different module's controller. The new bus seems to be really fast though, so I'm not sure how many modules you can have before it's a real problem.
[+] TazeTSchnitzel|8 years ago|reply
Supposedly the big innovation of the Zen platform is its interconnect tech managing to scale performance nearly linearly with more chips. Or so AMD claims.
[+] jabl|8 years ago|reply
I'd guess external IO (memory and PCIe) might be a problem. How are you going to route all those wires out from the socket?

Secondly, there's probably an economic argument as well. With too few customers willing to pay for a humongous MCM, and the attendant wiring complexity requiring more layers in the motherboard, it might be cheaper to go to more sockets instead.

[+] GordonS|8 years ago|reply
It's interesting that the EPYC 7601 comes at a significant premium to the EPYC 7551P - double the price for a 200MHz base clock increase and dual-socket support. Question for those more knowledgeable on data centres - is the modest performance gain and increased density worth it for the cost?
[+] std_throwaway|8 years ago|reply
You don't buy a CPU alone; you buy a whole server. If the CPU costs $1,000 more to gain 10% performance but the complete server is already $20,000, it's quite worth it to go from $20,000 to $21,000 for 10% more performance.

Also, if you factor in performance-critical software costing $100,000 that runs on that single node, you will buy the fastest hardware you can get for a few thousand dollars more.
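The arithmetic behind that argument is easy to verify (same hypothetical numbers as above):

```python
def perf_per_dollar(perf: float, cost: float) -> float:
    """Relative performance divided by total system cost."""
    return perf / cost

base = perf_per_dollar(1.00, 20_000)     # baseline $20k server
faster = perf_per_dollar(1.10, 21_000)   # +$1,000 CPU, +10% performance

# The upgraded box delivers more performance per dollar despite the premium.
print(f"{faster / base:.3f}")  # 1.048
```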

[+] TazeTSchnitzel|8 years ago|reply
The differences between the 7601, 7551 and 7501 struck me as interesting. The 7601 is at least $1300 more expensive for 200MHz more clock and 10W/25W (???) more TDP. Is that an artificial distinction, or do such high-TDP chips have worse yields?
[+] ajaimk|8 years ago|reply
It comes down to binning of the same chip for AMD, and there are people out there willing to pay 100% extra for a 10% improvement.
[+] gigatexal|8 years ago|reply
This site is a rumor mill, so take it with a grain of salt.
[+] shaklee3|8 years ago|reply
Videocardz (the source) has been right far more often than not. Still a rumor, but credible.
[+] ksec|8 years ago|reply
I have been wondering how PCIe 4.0 and 5.0 will play out for AMD, since changes of PCIe and DDR require a change of socket.

For AMD there is no PCIe 4.0 chip in 2018, and PCIe 5.0 is already out in 2019.

[+] dom0|8 years ago|reply
> Since a change of PCIe requires a change of socket.

No

> Since a change of DDR requires a change of socket.

Not necessarily, but it usually is.

[+] scopecreep|8 years ago|reply

[deleted]

[+] redial|8 years ago|reply
So they can have names like i7 6459K Extreme Edition?
[+] lawrenceyan|8 years ago|reply
If there's a reason for it, I would have to guess the marketing team assumes that since many millennials are now in a position to make corporate buying decisions, the combined irony and nostalgia will entice them to look more deeply into the new server architecture. Personally, I couldn't care less what the name is, and I'm sure the majority who will be buying these feel the same as I do. Performance and price are all that count, and AMD has come out with something pretty remarkable in that respect. It's exciting to see Intel actually being caught with their pants down for once.
[+] djsumdog|8 years ago|reply
The marketing might seem silly, but you want to hit the people who actually build your data centres with the awe aspect. They're the ones who make the recommendations when CTOs want to expand, and even if they think the name is stupid, it's something they remember when building those cost-to-performance charts/spreadsheets for expansions.