blattimwind|5 years ago
On the other hand, few applications scale efficiently to more than about four cores. Yes, of course, AMD delivers more Cinebench points per dollar and usually more Cinebench points overall, but that's not necessarily an interesting metric.
Personally, I find that when I'm waiting on something to complete, the application in question tends to use only a tiny number of cores for the task at hand. Usually one.
Another significant weakness of AMD's current platform is idle power consumption.
These factors leave me with a much more nuanced impression than "Intel is ded" or "HOW IS INTEL GOING TO CATCH UP TO THIS????"; CPU reviews these days are just pure clickbait.
jchw|5 years ago
Meanwhile, pointing at memory latency as the flaw in Ryzen has been a popular misdirection for a while now. People warned me that it was a performance pitfall before I bought my first Ryzen processor. In practice it doesn't show up as a serious issue even in the most complex, latency-sensitive workloads. For example, Zen 2 performs very well at hardware emulation. Possibly what it loses in memory latency it makes back in caching and prefetching, but honestly I don't know, and I'm not sure how to measure it. In any case it compares favorably with Intel's best chips in single-core workloads, even if it isn't on top. Factor in price and multicore workloads and you have exactly why people like me have been singing AMD's praises... Intel's single-core lead may still exist in some form, but it is not what it once was; it is not an unconditional lead where an Intel core beats an AMD core. Not even close.
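One standard way to measure it is a pointer-chase microbenchmark: walk a randomly permuted cycle so the hardware prefetcher can't guess the next address, and divide wall time by the number of hops. Here is a minimal sketch in Python — the chain size is an arbitrary assumption, and in an interpreted language the absolute numbers are dominated by interpreter overhead, so serious tools (e.g. lmbench's lat_mem_rd) implement the same loop in C:

```python
import random
import time

def build_chain(n, seed=0):
    """Build a single random cycle over n slots: next_idx[i] is the
    successor of slot i. A random cycle defeats the hardware
    prefetcher, so each hop costs one dependent memory access."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    next_idx = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        next_idx[a] = b
    return next_idx

def cycle_length(next_idx):
    """Walk from slot 0 until it repeats; a correct chain visits
    every slot exactly once before returning to the start."""
    i, hops = next_idx[0], 1
    while i != 0:
        i = next_idx[i]
        hops += 1
    return hops

def chase(next_idx, hops):
    """Time `hops` dependent loads and return nanoseconds per hop.
    In Python this mostly measures interpreter overhead; the same
    loop in C measures load-to-use latency for the working set."""
    i = 0
    t0 = time.perf_counter()
    for _ in range(hops):
        i = next_idx[i]
    t1 = time.perf_counter()
    return (t1 - t0) / hops * 1e9

if __name__ == "__main__":
    # assumption: ~1M slots, larger than typical L3 working sets in C
    chain = build_chain(1 << 20)
    print(f"{chase(chain, 1_000_000):.1f} ns/hop")
```

Sweeping the chain size from a few KB to hundreds of MB and plotting ns/hop is how you locate the L1/L2/L3/DRAM latency cliffs.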
None of this means Intel's dead, of course, but IMO that's mostly because they have a lot more going on than just being the best at CPUs. They've got their dedicated GPU coming out, and plenty of ancillary technology as well. Still, having to take a back seat in CPUs for a while will be painful for a company like Intel; unlike AMD, this is a new position for them, and maybe not one they will handle well.
lend000|5 years ago
Of course, this is all a function of Amazon's supply of instances and their chosen on-demand pricing, but the trends are certainly interesting, and show steady demand for fast Xeons and increasing demand for ARM. I have run some compute-heavy workloads on the best AMD instances I could find on AWS, and the per-core speed difference for my particular workload was nearly 50%, which got worse as it scaled up to bigger instances because my workload uses a lot of L3 cache. I hear about EPYCs with 256MB of L3 cache, but I can't seem to find those on AWS -- only ones with 8MB of cache.
user5994461|5 years ago
gridlockd|5 years ago
Compiling code isn't embarrassingly parallel unless you're building a project with lots of files from scratch. Video rendering and compression also don't benefit as much as you might think:
https://www.phoronix.com/scan.php?page=article&item=3900x-39...
Meanwhile, single-threaded performance affects pretty much 100% of what you do.
In the end, I don't think there's a big difference either way.
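The limit being gestured at here is Amdahl's law: if only a fraction p of a job parallelizes, n cores give a speedup of 1 / ((1 - p) + p/n). A small sketch — the 70% parallel fraction below is an illustrative assumption, not a measurement of any real build:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: overall speedup on n cores when a fraction p
    of the work parallelizes perfectly and the rest stays serial."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("parallel fraction must be in [0, 1]")
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    # assumption: a build where 70% of wall time parallelizes
    for cores in (4, 8, 16, 64):
        print(cores, round(amdahl_speedup(0.7, cores), 2))
```

Even at p = 0.7, sixteen cores yield barely a 3x speedup and the curve is nearly flat beyond that, which is why single-threaded performance keeps mattering.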
blattimwind|5 years ago
How is it a misdirection? The data is accurate, and memory latency scaling is a well-known issue for simulation-style workloads such as games (which are a huge market for high-end desktop CPUs and also the market 90% of reviews address), where you can't really explain the performance differences by higher clocks alone. It's considered the main reason why much older Intel CPUs can still outperform Ryzen CPUs in games.
On the other hand, if you take something like Cinebench, you can literally turn XMP off (thus using JEDEC timings and bus speed) and still get almost the same score (within, say, 2%). That's because Cinebench is benchmarking pretty much only ALU throughput. That's obviously an important factor for performance, but just as obviously not the only one.
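A kernel whose working set fits entirely in registers shows why a pure-throughput benchmark can barely notice memory speed. The sketch below iterates a 64-bit linear congruential generator (the constants are Knuth's MMIX parameters) with no array accesses at all; compiled to native code, its runtime depends only on clock speed and ALU throughput, never on XMP or JEDEC timings. This is an illustration of a compute-bound loop in general, not of Cinebench's actual renderer:

```python
def alu_kernel(iters, seed=0):
    """Register-only integer workload: a 64-bit LCG iterated
    `iters` times. There is no memory traffic beyond the loop
    itself, so (in a compiled language) RAM timings cannot
    affect how fast it runs."""
    x = seed
    for _ in range(iters):
        x = (x * 6364136223846793005 + 1442695040888963407) & 0xFFFFFFFFFFFFFFFF
    return x

if __name__ == "__main__":
    import time
    t0 = time.perf_counter()
    alu_kernel(1_000_000)
    print(f"{time.perf_counter() - t0:.3f} s for 1M iterations")
```

Contrast this with a pointer-chasing workload, where every iteration stalls on a dependent load and memory timings dominate.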
dralley|5 years ago
This is already only marginally true: the difference is about 5%, depending on the application, and in some applications AMD comes out ahead anyway. Expect the remaining gap to disappear when Zen 3 releases in a few months.
>Another significant weakness of AMD's current platform is idle power consumption.
AMD seems to have caught up here almost entirely. They've done a lot of work to improve idle power consumption lately and the node advantage probably helps, too.
highfrequency|5 years ago
blattimwind|5 years ago
Clock speed advantage -- Most Zen 2 CPUs don't overclock to 4.5 GHz on any core, let alone all-core. The boost numbers are reached with current firmware, but only for the tiniest fractions of a second and never under any real load. Sustained single-core boost frequencies are 200-400 MHz lower than the specified boost frequency. Intel CPUs, on the other hand, consistently reach their boost frequencies under load, and most can sustain their single-core boost as an all-core overclock under load (with much greater power consumption, of course).
In practice this means that for equivalently priced parts (e.g. 3900X vs 10900K) the AMD part will run about a GHz lower for lightly threaded workloads, which is most workloads. With Intel's specified power limits, the Intel and AMD parts sustain about the same clocks (3.8-4 GHz) under all-core load, but with the defaults of many motherboards the Intel part will run at 4.8-5 GHz, depending on the cooling.