
AMD Ryzen Threadripper 1950X and 1920X Review

192 points | jsheard | 8 years ago | anandtech.com | reply

150 comments

[+] LeonM|8 years ago|reply
I was looking to build myself a new (dual) Xeon workstation, but looking at these specs and performance, I am going to consider Threadripper instead.

With this many threads, ECC support, and 64 PCIe lanes, this CPU looks perfect for my intended workload. It's also gonna be slightly cheaper than a dual Xeon.

Exciting times!

[+] sireat|8 years ago|reply
I am in the same boat, one of my use cases is many many instances of the same application. (100 instances would be good, as I get 70 on my dual socket E5-2670 v1)

The only problem compared to dual-socket Xeon E5 solutions (up to Broadwell/v4) is support for Windows 7, which is a must for my case.

I remain hopeful that it will be possible to run Windows 7 on Threadripper "unofficially".

It might take a PS/2 keyboard and a DVD (USB is not supported on Ryzen for Windows 7 without slipstreaming USB drivers) to install, but it should be doable.

[+] loeg|8 years ago|reply
Only slightly cheaper than a 2P Xeon system?
[+] curiousgal|8 years ago|reply
What workload is that? If I may ask.
[+] heycam|8 years ago|reply
Unfortunately for me, the fact that rr can't run on Ryzen CPUs is a dealbreaker. :(

https://github.com/mozilla/rr/issues/2034

[+] aleden|8 years ago|reply
I am wondering why this isn't getting more attention.

> It appears that the Ryzen PMU just isn't quite accurate enough :-(. rr might work OK for some kinds of usage but I wouldn't recommend it.
>
> I'll land the patches I have with a warning for Ryzen users that things won't be reliable.

Can someone with a technical understanding describe what's going on here? Is this a flat-out bug, or does the x86 architecture spec allow for this?

[+] gcp|8 years ago|reply
Awwww :( That's really sad.

rr is important enough for C++ development that I agree it's a dealbreaker.

[+] Boothroid|8 years ago|reply
Barring a major technical problem being uncovered (hopefully the segfault thing won't be an issue) or nefarious action by Intel I cannot see how this thing can fail. So much power at the price point. As soon as the price drops a bit I'll be replacing my FX-8350 setup with one of these. Go competition! Go AMD!
[+] renesd|8 years ago|reply
Considering they didn't give samples to any of the linux press... ?
[+] dmoy|8 years ago|reply
I mean, I could guess one way it'd fail - if they market it heavily to the PC gaming market, and their single-threaded performance isn't as good at similar price points. There are still a lot of workloads out there that can't take advantage of 4 threads, let alone 16/32+. I would assume they don't market it heavily to gaming, but it's like the second thing they mention on their main landing page, so... idk.
[+] TheJazi13|8 years ago|reply
Wasn’t the segfault issue determined to be a bug in the PHP steps?
[+] rbanffy|8 years ago|reply
One thing I have with Intel is the virtual certainty the machine will work flawlessly with Linux. Atom, Celeron, Pentium, Core, and Xeon E3 have built-in GPUs that are very well supported.

Before jumping to AMD gear, I'd like to know if I'd have the same "just works" experience. I'm well past the age I liked to waste time debugging setups.

[+] old-gregg|8 years ago|reply
Ryzens do not have an integrated GPU, at least not at the moment. The CPU itself works out of the box; even slightly more exotic features like virtualization are flawless under KVM.

If you want a Ryzen-based desktop with "just works experience" similar to i7-based Intels, go with a Radeon RX460 or RX560 GPU. It will get picked up by the open source AMDGPU driver in the latest popular desktop distros.

[+] TazeTSchnitzel|8 years ago|reply
The note about this being the first time consumers will see NUMA systems is interesting. I hope reviewers are familiar with it when they're benchmarking.
[+] moogly|8 years ago|reply
Apparently, if you were interested in using these for a DAW, you might want to wait for Tech Report's review[1]. They're late to the party[2] because AMD didn't send them a review kit until TR publicly asked their readers for a CPU to review[3]. Maybe because TR is one of very few publications using DAWBench?

[1] https://twitter.com/jkampman_tr/status/895645729972080640

[2] http://techreport.com/news/32377/here-a-sneak-peek-at-our-ry...

[3] http://techreport.com/news/32343/updated-wanted-for-review-a...

Ryzen is also comparatively poor at DAW workloads.

[+] sevensor|8 years ago|reply
I haven't been paying tremendously close attention to this line of CPUs, but I'm interested. I've seen indications that it's not particularly stable under Linux. Is that true? Is there a microcode update?
[+] old-gregg|8 years ago|reply
Does anyone have a theory for why Ryzens are beaten so badly by Intel in the Chromium compile benchmark?
[+] 0xcde4c3db|8 years ago|reply
On Twitter, the reviewer blamed it on Ryzen's L3 cache being a victim cache [1] (which I don't really understand; maybe it's related to having ephemeral data structures displace long-lived ones?). I think it's also been stated elsewhere that the Intel chips dominate during linking, which is a large chunk of the overall build time for Chromium.

[1] https://twitter.com/IanCutress/status/868799386079420416

[+] ssutch3|8 years ago|reply
Cross-core communication
[+] vbezhenar|8 years ago|reply
Well, for multi-core workloads AMD is a clear winner. Other features are nice too: ECC memory; no thermal paste nonsense. I'm really waiting for future Intel offerings; they must do something extraordinary to regain the lead.
[+] paulmd|8 years ago|reply
> Well, for multi core workloads AMD is a clear winner.

Not really. It's actually slower in x265 encoding than a 7900X, and it doesn't really pull away much in x264 or Blender rendering either (10-25% over a 7900X). It also appears to perform pretty badly at some compilation workloads - a 1950X barely beats a 6900K at Chromium compiling using MSVS.

At most you can say that you really need to look at the specific task. Of course there may still be some tuning, but right now it's certainly not the slam-dunk that everyone assumed it would be.

http://www.anandtech.com/show/11697/the-amd-ryzen-threadripp...

http://www.anandtech.com/show/11697/the-amd-ryzen-threadripp...

http://www.anandtech.com/show/11697/the-amd-ryzen-threadripp...

https://www.overclock3d.net/reviews/cpu_mainboard/asus_x399_...

https://www.overclock3d.net/reviews/cpu_mainboard/asus_x399_...

Not a particularly great showing overall for a processor with 60% more cores. Despite AMD's attempted pushback, it appears Intel's smack-talk was correct and Infinity Fabric is not a panacea for NUMA performance problems.

It also pulls an absolutely absurd amount of power to do it, literally more than a 7900X. The onboard package-power measurement appears to be drastically undershooting the power as measured at the wall. Even factoring out PSU efficiency losses, measuring inside the case something is eating at least 130W that isn't showing up as package power.

https://www.overclock3d.net/reviews/cpu_mainboard/asus_x399_...

http://www.anandtech.com/show/11697/the-amd-ryzen-threadripp...

[+] valarauca1|8 years ago|reply
As somebody who uses a lot of NVMe storage, it is nice to see expanded PCIe lane options. I can barely put one more drive into my current Intel box.

NVMe SSD drives feel like the only _large_ performance change this decade.

[+] rocky1138|8 years ago|reply
Having to reboot to enable Game Mode really sucks. Usually I game after I finish working or maybe in between while I'm waiting for a download. I don't want to have to re-open all my programs and set up my desktop yet again after I'm done gaming.

I wonder if this will be fixed with software later on, or if we'll have to wait for the next model of Threadripper.

[+] cptskippy|8 years ago|reply
Any chance of just hibernating the system and then resuming it?
[+] paulmd|8 years ago|reply
In theory, using Process Lasso to tie the game to 8 specific cores on a single die should produce equivalent results.
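On Linux the same core-pinning that Process Lasso does on Windows can be sketched with the standard library. Which CPU IDs belong to a single Threadripper die is an assumption here; check `lstopo` or `numactl --hardware` for the real topology.

```python
import os

def pin_to_cores(pid, cores):
    """Restrict `pid` (0 = current process) to `cores`; return the new mask."""
    # Only request CPUs the process can actually use on this machine.
    usable = set(cores) & os.sched_getaffinity(pid)
    os.sched_setaffinity(pid, usable)
    return os.sched_getaffinity(pid)

# Hypothetical: pin a game to the first 8 logical CPUs (one die, by assumption).
# pin_to_cores(game_pid, range(8))
```

This is Linux-only (`sched_setaffinity` is a Linux syscall); whether it fully reproduces Game Mode's behavior, which also touches memory interleaving, is untested.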
[+] dijit|8 years ago|reply
I hope they do not suffer the same issues the already released Ryzen 7 and Ryzen 5 series CPUs are having.

I'm also cautiously optimistic for EPYC, the server-grade CPU that AMD is releasing soon, although in this world of "per-core" licensing costs there is a strong need for fewer cores and more single-thread performance in the server.

[+] TazeTSchnitzel|8 years ago|reply
> Given that Threadripper is a consumer focused product – and interestingly, not really a workstation focused product

Huh. Will AMD workstations be built around similar Epyc products, then? But those would have worse power consumption, right? (4 dies versus 2)

[+] foepys|8 years ago|reply
Epyc has lower clock speeds and a slightly lower TDP.

The fastest 16-core Epyc has 2.4/2.9 GHz (base/boost) while TR has 3.4/4.0 (4.2 GHz with XFR, also 16 cores).

[+] krsdcbl|8 years ago|reply
This could actually be huge news for desktop-based rendering, video editing, CGI of any kind.

Being able to render on 64 threads for just the price of 2 high-end graphics cards seriously makes me consider sticking with CPU rendering for my next workstation - even at a lower frequency per core, the time savings could be substantial.

[+] samcat116|8 years ago|reply
How long until we can get chips with this many cores while still maintaining the clock speed needed for good single threaded performance?
[+] morrbo|8 years ago|reply
I mean, it's a tradeoff. It will always be a tradeoff, so the answer you're going to get is "never". There will always be a tradeoff between core count and base clock (I'd imagine).

However, the clock speeds of Ryzen/TR/Epyc/whatever are more than enough for a "good" workload. In fact you could argue that unless you're doing something which requires only one single core (and honestly, I can't envisage a workload like this in a professional environment, but I'm surely wrong), the speed is fantastic and the difference not really noticeable.

It's incredibly easy these days, at least in .NET, to parallelize workloads locally, so using something like this would outshine any 4 GHz base 8-core any day of the week. But to actually answer your question as to when we can see 4 GHz+ base clocks on 16/32-core processors... the next 5 years? I guess...
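The comment above mentions .NET's parallel libraries; as a rough sketch of the same pattern, here is the standard-library equivalent in Python. The workload function and numbers are made up for illustration, and a thread pool is used so the snippet runs anywhere; for truly CPU-bound work you'd swap in `ProcessPoolExecutor` to sidestep the GIL.

```python
from concurrent.futures import ThreadPoolExecutor
import os

def unit_of_work(n):
    # Stand-in for one independent, per-item task (hypothetical workload).
    return sum(i * i for i in range(n))

def run_parallel(inputs, workers=None):
    # Fan the inputs out across a pool sized to the machine's core count.
    with ThreadPoolExecutor(max_workers=workers or os.cpu_count()) as pool:
        return list(pool.map(unit_of_work, inputs))
```

The point the commenter is making holds across languages: once the work is expressed as independent items, a pool `map` like this scales to however many cores the CPU offers, which is where a 16/32-core part wins despite a lower base clock.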

[+] Symmetry|8 years ago|reply
If the other cores aren't doing useful work, they'll be powered down to let the remaining core run faster. Not perfectly, but the extra millimeters of contact area with the cooler that the extra cores provide probably make them a net benefit in that case.
[+] sp332|8 years ago|reply
These chips go up to 4.2 GHz, if you're using 4 cores or less and they can stay within their power & thermal envelope for your workload.
[+] drudru11|8 years ago|reply
No question about it - this is cool technology. I will be building an ECC based Threadripper this fall.
[+] post_break|8 years ago|reply
My next plex server for sure.
[+] samcat116|8 years ago|reply
I don’t really see the point of using this for a Plex server, unless you’re doing massive amounts of transcoding 24/7.
[+] deelowe|8 years ago|reply
My plex server is an rpi. What are you doing in plex that requires such horsepower?
[+] needz|8 years ago|reply
In an attempt to be as independent of third-party services as possible I switched from Plex to Emby and I see no reason to go back. You can even stream media outside your home network for free.

edit: https://emby.media/

[+] roel_v|8 years ago|reply
So have there been any announcements about whether there will be dual-CPU Threadripper mobo's in the future? Does the CPU even support it?
[+] jsheard|8 years ago|reply
I don't think they plan to support dual Threadripper, because single Epyc does more or less the same thing without the inter-socket latency or expense of a dual-socket motherboard.

It's effectively two Threadrippers on a single package (2x cores, 2x memory channels, 2x PCIe lanes) for only slightly more than double the price ($999 for 16C TR, $2100 for 32C Epyc).

[+] tiffanyh|8 years ago|reply
Is Threadripper the first CPU that can support 2+ NVMe (M.2) cards, due to all of the PCIe lanes it has?
[+] caleblloyd|8 years ago|reply
Most PCIe NVMe SSDs use 4 lanes right now, so theoretically a CPU could support (total PCIe lanes / 4) drives. It usually boils down to motherboard configuration or add-on card support for how many you can practically get in a system, though.
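The back-of-the-envelope math from the comment above, as a sketch. The lane counts come from the thread (64 for Threadripper); the lanes reserved for a GPU or chipset are illustrative assumptions, not platform specs.

```python
def max_x4_drives(total_lanes, reserved=0, lanes_per_drive=4):
    """Upper bound on x4 NVMe drives the CPU's remaining lane budget allows."""
    return (total_lanes - reserved) // lanes_per_drive

# Threadripper's 64 lanes with nothing else attached:
print(max_x4_drives(64))               # 16
# More realistically, leaving an x16 slot for a graphics card:
print(max_x4_drives(64, reserved=16))  # 12
```

As the comment notes, this is only a theoretical ceiling; real boards carve the lanes into fixed slot configurations.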
[+] vetinari|8 years ago|reply
Intel 7900X with an X299 board can do that too, with UEFI-level RAID on top of that (might be important for Windows users, not important at all for Linux users).
[+] shubb|8 years ago|reply
The other cool thing is, given that these will end up in a lot of servers: instead of "graphics card", read "CUDA card".