
AMD Ryzen 9 3950X Review

434 points | neogodless | 6 years ago | anandtech.com

283 comments

[+] mrandish|6 years ago|reply
I noticed yesterday's articles about Intel's upcoming unusually large release of security mitigations. Under serious competitive threat for the first time in a while, I'm curious if Intel may have slowed the release of some mitigations to land after this round of comparison benchmarks.
[+] ibobev|6 years ago|reply
As a developer I am keen to see some compilation benchmarks. Unfortunately those kinds of benchmarks are almost never included in such reviews. Instead there are many gaming benchmarks, whose purpose is not exactly clear to me, since gaming is obviously not the primary target market for the R9 3950X.
[+] lliamander|6 years ago|reply
As a developer, other benchmarks I would like to see:

* compile benchmarks for different mainstream languages (Java, C#, etc.)

* IDE related benchmarks (i.e. how long does it take to index a large solution/workspace)

* source control conversion tests (svn to git). Obviously not something that happens everyday, but I've done it at two different jobs, and generally when it happens you are often converting many repos as an organization shifts its policy.

* maybe some VM/docker related tests

I'm probably asking for too much, but working on a micro-service system, even with enough RAM my system becomes less responsive when I'm testing the interactions between services on my local machine. I'm also limited in the number of Java projects I can add to a single workspace in IntelliJ; I've been forced to open each project as a separate workspace in a separate window.

Granted, I'm working off of a dual-core i7 laptop (with 32GB of RAM) but I want to know what kind of upgrade it would take for those problems to go away.

> Instead there are many gaming benchmarks, whose purpose is not exactly clear to me, since gaming is obviously not the primary target market for the R9 3950X.

Well, it must be said that the 3950X has the fastest single-core performance of AMD's lineup, so while it is overkill from a core-count perspective, it's still technically their best gaming CPU.
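The dev-workload wish list above (compile times, IDE indexing, repo conversion) could be roughly approximated with a small timing harness; a minimal Python sketch, where the example commands are placeholders rather than real project paths:

```python
# Minimal sketch of a DIY dev-workload benchmark: time an arbitrary command
# (a compile, an IDE index run, an svn-to-git conversion) and report the
# median wall-clock time of a few runs. Commands below are placeholders.
import statistics
import subprocess
import sys
import time

def bench(cmd, runs=3):
    """Run cmd `runs` times and return the median wall-clock seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

if __name__ == "__main__":
    # Stand-in workload; swap in e.g. ["mvn", "-q", "compile"] for a real test.
    print(f"{bench([sys.executable, '-c', 'pass']):.3f} s")
```

Medians over a few runs smooth out caching and thermal noise, which matters a lot for short incremental-build style workloads.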

[+] Matthias247|6 years ago|reply
As already said by others, Phoronix has some compilation benchmarks.

I recently bought a 3900X. One thing I'd note about compilation is that it heavily depends on your programming language and tooling. Another is that you get the most benefit on clean builds of big projects, which scale very well per core. On incremental builds or smaller projects, however, it's not uncommon to see less than 25% of the CPU being used. That's especially true with Rust, where a single compilation unit (a crate) is compiled in a single-threaded fashion. It might be better with C or C++, but then again linking can also be a blocker.

It's all nice and fast, but on day to day use you likely won't see a 100% speed increase compared to a 3600.
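The clean-vs-incremental asymmetry described above can be illustrated with a toy model (purely a sketch; each "unit" here is just a CPU-burning stand-in for a compilation unit, not a real compiler invocation):

```python
# Toy model of build parallelism: N independent "compilation units" can run
# across cores (clean build), while one dirty unit is stuck on a single core
# (incremental build). Purely illustrative; no real compiler involved.
import time
from concurrent.futures import ProcessPoolExecutor

def compile_unit(n):
    """Stand-in for compiling one unit: burn CPU for ~0.2 s."""
    deadline = time.perf_counter() + 0.2
    while time.perf_counter() < deadline:
        pass
    return n

def main():
    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:
        list(pool.map(compile_unit, range(8)))  # "clean build": units in parallel
    clean = time.perf_counter() - start

    start = time.perf_counter()
    compile_unit(0)                             # "incremental build": one unit
    incremental = time.perf_counter() - start

    print(f"clean: {clean:.2f} s, incremental: {incremental:.2f} s")

if __name__ == "__main__":
    main()
```

On a 16-core part the clean-build phase spreads across cores, but the incremental phase runs no faster than on a budget chip, which is exactly the <25% utilization effect described above.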

[+] gameswithgo|6 years ago|reply
GCC at least goes very fast on the new Ryzens, due to the huge L3 cache. That may translate to many compilers. Rust (which uses LLVM on the backend) also compiles very quickly.
[+] IanCutress|6 years ago|reply
Normally I run a Chrome compile benchmark, but for whatever reason it wasn't running properly on Win 10 1909. When I get a chance to debug (55k miles of travel over the next four weeks), I'm going to see if I can fix it and expand that bit of our testing.
[+] spamizbad|6 years ago|reply
I'd love to see more container-oriented benchmarks, particularly with docker-compose and/or minikube.
[+] zamubafoo|6 years ago|reply
I find it interesting how desktop CPUs are essentially coming down to two enthusiast markets: developers/content creators/workstations, and gaming.

While the gaming market is (usually) seeking the highest top single core clock speed with respect to CPUs, it also relies on other expensive hardware. Meanwhile the dev/content creator/workstation market is much better served by these multi-core behemoths.

Intel really has their work cut out for them on performance-to-cost for consumer desktops.

[+] CoolGuySteve|6 years ago|reply
Before Vulkan, there were more bottlenecks on multithreaded rendering. Most games pushed everything to a single drawing thread.

Hopefully going forward that will change, but even with Red Dead Redemption 2's Vulkan implementation, a 6 core Intel/AMD chip is competitive with this processor at lower settings/resolutions where the GPU is less of a bottleneck.

Compute bound games like Civilization 6 do scale with processor count however.

[+] api|6 years ago|reply
It's like vehicles. Desktops/workstations and even most non-tablet laptops are now analogous to small trucks such as pickups. Tablets and phones are cars. Most people drive cars. Trucks are for work or people who like to haul stuff.

Servers and huge workstations I guess are like 18-wheelers and locomotives. :)

The "mobile is the future of everything" people were wrong. It's not that desktop is going away, but the market is re-segmenting. Most casual users don't need a desktop. They just need a UI device that can run apps and access services. So you now have a bisection of the computing device market into pro/workplace type devices and casual user devices. I suppose there's a third category too: hobbyist and enthusiast devices. That's where I'd put things like the Raspberry Pi or more techie geared laptops that run Linux.

I predict that rather than converging with mobile, desktop will actually pivot more toward pro, developer, and power user needs. Apple's recent 16 inch Pro release is a baby step in that direction on the hardware level. You'll probably see it at the software and OS level too. Desktop might actually shed a little bit of its user-friendliness gloss in favor of being unabashedly "pro." "If you don't want to know what a network, a folder, a file, or an IP address is, get a tablet."

[+] intarga|6 years ago|reply
>While the gaming market is (usually) seeking the highest top single core clock speed with respect to CPUs, it also relies on other expensive hardware. Meanwhile the dev/content creator/workstation market is much better served by these multi-core behemoths.

It's not really that cut and dried, plenty of dev workloads are better served by high top single core, while plenty of gaming workloads are better served by multi core.

[+] Someone1234|6 years ago|reply
I agree with your overall synopsis of the status quo. We might however see gaming shift more to multi-core workloads as the next gen of consoles are highly likely to contain 8 core (16T) AMD CPUs based on Zen2 (and a lot of games are made for the lowest common denominator).

This shouldn't be confused with the 8 core APUs found in current gen. Next gen will have an 8 core CPU and a NAVI 12+ GPU.

[+] burtonator|6 years ago|reply
I LOVE that the gaming market has created a serious desire for quality desktop hardware.

I now build my workstations by hand and install Ubuntu. It's really nice to work on quality hardware!

[+] pmoriarty|6 years ago|reply
I'd expect there are far, far more servers than there are developer machines. There are probably many more servers than hardcore gamers as well, and maybe even more than all gamers put together. It'd be nice to see some hard stats on this, however.
[+] Tepix|6 years ago|reply
You're forgetting about the low budget CPUs used for desktop PCs. AMD has a pretty good standing with their Vega GPUs built into the Ryzen/Athlon APUs.

Or are you only talking about enthusiasts?

[+] paulmd|6 years ago|reply
GamersNexus is really hammering AMD over its deceptive marketing around boost clocks. The only time it can hit its advertised boost clock is when it's sitting in a menu under near-zero load, and even then it only touches it for an instant. Under even a single-core load it fails to meet its advertised spec.

There was a lot of hubbub around this at the initial release; AMD released a BIOS update which they claimed fixed it, but it looks like it still hasn't.

https://www.youtube.com/watch?v=M3sNUFjV7p4

The performance is good enough as it is; there's no need to claim it's 200 MHz higher than it can actually reach. But here we are again, 6 months later, and AMD is doing it again...

[+] NKosmatos|6 years ago|reply
Way to go AMD! One of the benefits of competition between the duopoly. AMD has cornered Intel these last couple of years and I don’t see this trend changing soon, not with all these vulnerabilities that chipzilla is having ;-)

One thing I’d like to note is that with all this computing power available to users at a relatively affordable price, software developers (games, commercial software) won’t optimize their code. I’ve seen it happen where a loop/scanning/sorting algorithm won’t be optimized since the user will have a few cores and GHz to spare anyhow.

[+] oouiterud|6 years ago|reply
I read that all these new AMD CPUs support ECC, but it’s been hard to find verification. Can anyone recommend a motherboard that both supports and uses ECC RAM with this new CPU?
[+] neogodless|6 years ago|reply
This isn't the most groundbreaking release - it's not the first 16-core chip you can buy (edited), nor the first 7nm. No new clockspeed records were set. Still, $750 for all that power!

Any professionals shopping for this, or waiting for 24 or 32-core Threadripper?

Anyone trying to upgrade on an older motherboard, or are you getting a matching X570 to ensure maximum boost and PCI-e bandwidth?

[+] jagger27|6 years ago|reply
I would say it is the first 16-core "consumer chip", since every other 16-core chip has either been HEDT-class or server-grade, both requiring expensive motherboards. I would be happy to try plopping this into my B450 motherboard from my 2700X. I'm happy with my current NVMe storage, which is just about the only thing that takes advantage of the extra bandwidth. Graphics cards aren't there yet.
[+] MrGilbert|6 years ago|reply
I did some math for myself and realized that I can skip the 4 extra cores, and go with the Ryzen 9 3900X (530€ vs 820€).

That's 44.17 € / core (3900X) vs 51.25 € / core (3950X) where I live. Yes, the base clock is 200 MHz lower, but for my use case (a home server with a gaming VM, a Minecraft server, etc.), I need cores that I can assign. I'm not ready to pay 290€ for an extra 4 cores yet.
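Double-checking the per-core math above (a trivial sketch; prices in EUR as quoted):

```python
# Price-per-core comparison from the comment above (prices in EUR).
price_3900x, cores_3900x = 530, 12
price_3950x, cores_3950x = 820, 16

per_core_3900x = price_3900x / cores_3900x
per_core_3950x = price_3950x / cores_3950x

print(f"3900X: {per_core_3900x:.2f} EUR/core")                       # 44.17
print(f"3950X: {per_core_3950x:.2f} EUR/core")                       # 51.25
print(f"premium for 4 extra cores: {price_3950x - price_3900x} EUR")  # 290
```

So the 3950X carries roughly a 16% per-core premium over the 3900X at these prices.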

[+] velox_io|6 years ago|reply
I'm looking forward to a 3950X; this is the first time you can get a lot of cores without the trade-off of low clock speeds.

I was interested in the new Threadrippers; however, I do believe this gen is overpriced. I realise this is how supply and demand works, but I don't think the expense is worth it personally.

In addition, from what I've seen so far, AMD/OEMs are charging a large premium for TRX40 motherboards. You're talking ~$550 and up. Bear in mind that for a similar price you can get an Epyc board with double the memory channels (8), AND add an additional CPU, AND dual 10GbE! (You would think a single port would be standard on most workstations in late 2019.) It's a lot of money for 2 additional memory channels and some PCIe lanes (PCIe is a serial interface, so each lane only needs one differential pair per direction).

[+] bob1029|6 years ago|reply
I am waiting for the 3990X announcement before pulling the trigger on 3rd gen TR. I am already running a 2950X in my main workstation, and it's currently really hard to make an argument for even more performance... That said, I am targeting the 3970X as my next upgrade option, but AMD might be able to upsell me on the 3990X. Once the trigger has been pulled on 3rd gen, I will look at repurposing the 2950X workstation as a Jenkins build agent. We currently only have 2 vCPUs in AWS for Jenkins, so this could make a huge difference for our build process and give the old machine a really good 2nd life.

In terms of the 3990X specifically, I am most interested to see if they are going to provide additional platform capabilities. There were rumors regarding a TRX80/WRX80 platform, which seemed to imply to some tech journalists that there would be an octal channel variant of TR available early next year. It would be very hard to turn this down if it were an option.

[+] snagglegaggle|6 years ago|reply
The step from Ryzen to Threadripper is quite a big one. I'd benefit from a Threadripper system, but budget only allows a (top tier) Ryzen for now. I suppose spare money could go towards a Threadripper right now and I wouldn't consider it wasted per se.
[+] solotronics|6 years ago|reply
If you look at performance per watt AND performance per dollar, it is a record breaker. It also has the highest base clock of any of the AMD chips.
[+] chaosbutters|6 years ago|reply
I'm waiting for a 64-core CPU. 32 is nice but still not enough.
[+] wayneftw|6 years ago|reply
Any problems with AMD cpus and containers or virtualization?

(Really? Wow. Sorry for asking a question!)

[+] tracker1|6 years ago|reply
Been waiting for this for about a year now... pulled the trigger early on an X570 build as my old system (i7-4790K) was acting up. Been running an R5 3600, but replacing the CPU with a 3950X.
[+] fock|6 years ago|reply
Can anyone comment on the IOMMU grouping on typical boards? From what I've just googled, the CPU "lanes" seem to support ACS, so it could indeed work to replace my IGP+GPU-for-the-gaming-VM Haswell system with a dual-GPU Ryzen one (contrary to what I believed previously).

Have been eyeing TR for that reason, but as I don't really need this amount of I/O (and cores), I might be well served by AM4.

[+] LatteLazy|6 years ago|reply
7nm is about 35 times the diameter of a silicon atom.
[+] Tepix|6 years ago|reply
Looking at some benchmarks done by PC Games Hardware it appears that games are still not capable of taking advantage of 16 cores and 32 threads. It's not surprising - why would developers optimize for something that's not yet widely used. But I wonder when we'll get there...
[+] rafaelvasco|6 years ago|reply
Went with AMD for my latest build. The last time I went with AMD was back in 1999, with an AMD K6-2 at 500 MHz.
[+] piinbinary|6 years ago|reply
When GPUs became vastly more powerful over the last ~10 years, it made big neural nets practical. I wonder what an equivalent jump in CPU power will unlock.
[+] th-miracle-257|6 years ago|reply
Why did Apple not release their new MBP 16 with the 3950X? [1]

[1] https://news.ycombinator.com/item?id=21523780

[+] 3JPLW|6 years ago|reply
Because it's not a notebook chip? 105W is... a little toasty.
[+] neogodless|6 years ago|reply
Yeah, I don't think you needed to be downvoted into oblivion for not knowing, but AMD's mobile APUs are also nowhere near this level. The top-end Ryzen 7 3780U is a 4-core/8-thread part using Zen+ rather than Zen 2, with the older Vega graphics. (And that one's only available in a Surface laptop.)

https://en.wikipedia.org/wiki/List_of_AMD_accelerated_proces...

[+] Someone1234|6 years ago|reply
They're Intel-based mobile systems. The 3950X is a desktop-class AMD CPU.
[+] HHad3|6 years ago|reply
In addition to the points already mentioned, a good bit of Apple's software depends on Intel-proprietary features, e.g. QuickSync for video encoding. So far, no macOS system ever shipped with an AMD CPU, and porting the OS and user-mode application stack is not trivial.
[+] ErneX|6 years ago|reply
It's a desktop chip, and AMD even recommends water cooling it.
[+] mtarnovan|6 years ago|reply
I'm very curious how this CPU performs for Elixir compilation.
[+] Bayart|6 years ago|reply
Compilation should profit pretty linearly from more cores being thrown at it. Considering the Erlang VM is very concurrency-conscious, it should be a pretty natural fit.

But I'm not at all up on the intricacies of it; are there any particular CPU features (instruction set or module) it's known to take advantage of?

[+] lliamander|6 years ago|reply
Indeed, the only compiler benchmarks one typically sees are for C/C++. But lots of developers use other languages, and it would be nice to have a cross-section, because each language is so different.

BTW, how much does Elixir benefit from multiple cores for compilation?

[+] seminatl|6 years ago|reply
I don't really get how they reached their conclusion. It seems like on most of the tests this new part gets beaten by a cheaper one from Intel. And it seems kind of unfair to use Handbrake without AVX-512 support. Also not sure why they include the 3D particle test without AVX... I guess because Intel is just too fast on that?

If you look through the results, the things most people want to do with a computer, like browse the web and start their applications, are noticeably faster with the Intel i9-9900K at half the price. And the only game where the CPU makes a difference in these benchmarks is also a lot faster with the Intel part.

[+] cracker_jacks|6 years ago|reply
Where is the 9980XE @ $979 coming from in the price vs performance chart? Where can I get a 9980XE for $979??
[+] ArlenBales|6 years ago|reply
I wish the reviews that included gaming benchmarks would have included Monster Hunter World. Capcom's MT Framework game engine is extremely multi-threading capable.
[+] davidy123|6 years ago|reply
Slight meta, and part of a larger 'rant', but when are we going to get away from reviews that might as well have been printed on glossy pages in PC Magazine in 1986? AnandTech has been at it since the 90s and hasn't changed its format at all. This is a serious, decades-long stagnation of the web. The graphs should be dynamic (letting you choose which scenarios and components to compare, and search within them) and user-contributed. Instead we get feeble excuses like "it doesn't make sense to compare a two-year-old generation to this one"; well, yes it does if I'm considering an upgrade.

Only a few sites support these options. Storage Review was an early leader but hasn't moved much, Notebookcheck is another, and of course Phoronix.

[+] wtallis|6 years ago|reply
> Instead we get feeble excuses like "it doesn't make sense to compare a two-year-old generation to this one"; well, yes it does if I'm considering an upgrade.

You don't get that excuse from AnandTech. We do our best to keep a long history of benchmark data for users to peruse: https://www.anandtech.com/bench/CPU-2019/2224

The main limiting factor on how far back our benchmark database goes is software updates. When we have to update the OS or CPU microcode for Spectre, Meltdown, etc., or update GPU drivers, that invalidates results, and re-testing a large pile of older hardware takes a long time. Historically this has mostly been a problem for GPUs since their drivers are such a moving target, but the past two years of CPU vulnerabilities have been a hassle.