I was looking to build myself a new dual-Xeon workstation, but looking at these specs and performance, I am going to consider Threadripper instead.
With this many threads, ECC support, and 64 PCIe lanes, this CPU looks perfect for my intended workload. It's also going to be slightly cheaper than a dual Xeon.
I am in the same boat; one of my use cases is many, many instances of the same application (100 instances would be good, as I get 70 on my dual-socket E5-2670 v1).
The only problem compared to dual-socket Xeon E5 solutions up to Broadwell (v4) is support for Windows 7, which is a must for my case.
I remain hopeful that it will be possible to run Windows 7 on Threadripper "unofficially".
It might take a PS/2 keyboard and a DVD (USB is not supported on Ryzen for Windows 7 without slipstreaming USB drivers) to install, but it should be doable.
I am wondering why this isn't getting more attention.
> It appears that the Ryzen PMU just isn't quite accurate enough :-(. rr
> might work OK for some kinds of usage but I wouldn't recommend it.
>
> I'll land the patches I have with a warning for Ryzen users that things
> won't be reliable.
Can someone with a technical understanding describe what's going on here?
Is this a flat-out bug, or does the x86 architecture spec allow for this?
Barring a major technical problem being uncovered (hopefully the segfault thing won't be an issue) or nefarious action by Intel, I cannot see how this thing can fail. So much power at the price point. As soon as the price drops a bit I'll be replacing my FX-8350 setup with one of these. Go competition! Go AMD!
I mean, I could guess one way it'd fail: if they market it heavily to the PC gaming market and their single-threaded performance isn't as good at similar price points. There are still lots of workloads out there that can't take advantage of 4 threads, let alone 16/32+. I would assume they don't market it heavily to gaming, but it's like the second thing they mention on their main landing page, so... idk.
One thing I get with Intel is the virtual certainty that the machine will work flawlessly with Linux. Atom, Celeron, Pentium, Core, and Xeon E3 have built-in GPUs that are very well supported.
Before jumping to AMD gear, I'd like to know if I'd have the same "just works" experience. I'm well past the age when I liked to waste time debugging setups.
Ryzens do not have an integrated GPU, at least not at the moment. The CPU itself works out of the box, and even slightly more exotic features like virtualization are flawless under KVM.
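For anyone who wants to verify this on their own box before buying: on Linux, hardware virtualization support shows up as a CPU flag in /proc/cpuinfo (`svm` for AMD-V, `vmx` for Intel VT-x). A minimal sketch in Python; only the file format is standard, the helper itself is mine:

```python
def cpu_virt_flags(cpuinfo_text):
    """Return the set of CPU feature flags from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # "flags : fpu vme ... svm ..." -> set of flag names
            return set(line.split(":", 1)[1].split())
    return set()

# On a real Linux host:
#   with open("/proc/cpuinfo") as f:
#       flags = cpu_virt_flags(f.read())
#   "svm" in flags   # AMD-V present -> KVM can use hardware acceleration
#   "vmx" in flags   # Intel VT-x equivalent
```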
If you want a Ryzen-based desktop with a "just works" experience similar to i7-based Intels, go with a Radeon RX 460 or RX 560 GPU. It will get picked up by the open-source AMDGPU driver in the latest popular desktop distros.
The note about this being the first time consumers will see NUMA systems is interesting. I hope reviewers are familiar with it when they're benchmarking.
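For reviewers (or anyone pinning benchmark threads), Linux exposes the NUMA layout under /sys/devices/system/node; a 1950X should show two nodes, one per die. A rough Python sketch; the sysfs path and cpulist format are standard, the `parse_cpulist` helper is mine:

```python
import glob

def parse_cpulist(s):
    """Expand a Linux cpulist string like "0-7,16-23" into a list of CPU ids."""
    cpus = []
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        elif part:
            cpus.append(int(part))
    return cpus

# On Linux, print each NUMA node and the CPUs it owns (no output elsewhere):
for path in sorted(glob.glob("/sys/devices/system/node/node*/cpulist")):
    with open(path) as f:
        print(path, parse_cpulist(f.read()))
```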
Apparently, if you were interested in using these for a DAW, you might want to wait for Tech Report's review[1]. They're late to the party[2] because AMD didn't send them a review kit until TR publicly asked their readers for a CPU to review[3]. Maybe because TR is one of very few publications using DAWBench?
I haven't been paying tremendously close attention to this line of CPUs, but I'm interested. I've seen indications that it's not particularly stable under Linux. Is that true? Is there a microcode update?
AMD have acknowledged there is an issue with their mainstream Ryzen CPUs that can cause segfaults on Linux, but they claim it doesn't affect their Threadripper or Epyc models.
On Twitter, the reviewer blamed it on Ryzen's L3 cache being a victim cache [1] (which I don't really understand; maybe it's related to having ephemeral data structures displace long-lived ones?). I think it's also been stated elsewhere that the Intel chips dominate during linking, which is a large chunk of the overall build time for Chromium.
Well, for multi-core workloads AMD is a clear winner. Other features are nice too: ECC memory, no thermal-paste nonsense. I'm really waiting for future Intel offerings; they must do something extraordinary to regain the lead.
> Well, for multi-core workloads AMD is a clear winner.
Actually, not really. It's slower in x265 encoding than a 7900X, and it doesn't really pull away much in x264 or Blender rendering either (10-25% over a 7900X). It also appears to perform pretty badly at some compilation workloads; a 1950X barely beats a 6900K at compiling Chromium with MSVS.
At most you can say that you really need to look at the specific task. Of course there may still be some tuning, but right now it's certainly not the slam-dunk that everyone assumed it would be.
Not a particularly great showing overall for a processor with 60% more cores. Despite AMD's attempted pushback, it appears Intel's smack-talk was correct: Infinity Fabric is not a panacea for NUMA performance problems.
It also pulls an absolutely absurd amount of power to do it, literally more than a 7900X. The onboard package-power measurement appears to drastically undershoot the power measured at the wall. Even after factoring out PSU efficiency losses and measuring inside the case, something is eating at least 130 W that isn't showing up as package power.
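To make that accounting concrete, here is the arithmetic with hypothetical round numbers (these are illustrative values, not the reviewers' actual measurements):

```python
# All three inputs are made-up illustrative values, not measured data.
wall_watts = 380.0      # hypothetical wall-socket reading under load
psu_efficiency = 0.90   # e.g. an 80 Plus Gold unit near its sweet spot
package_watts = 180.0   # hypothetical on-die package-power report

dc_power = wall_watts * psu_efficiency   # power actually delivered to the system
unaccounted = dc_power - package_watts   # board, RAM, fans... or unreported CPU draw
print(f"DC-side power: {dc_power:.0f} W; "
      f"not visible in the package counter: {unaccounted:.0f} W")
```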
Having to reboot to enable Game Mode really sucks. Usually I game after I finish working or maybe in between while I'm waiting for a download. I don't want to have to re-open all my programs and set up my desktop yet again after I'm done gaming.
I wonder if this will be fixed with software later on or we'll have to wait for the next model of Threadripper.
I hope they do not suffer the same issues the already released Ryzen 7 and Ryzen 5 series CPUs are having.
I'm also cautiously optimistic about EPYC, the server-grade CPU that AMD is releasing soon, although in this world of per-core licensing costs there is a strong need for fewer cores and more single-thread performance in the server.
This could actually be huge news for desktop-based rendering, video editing, and CGI of any kind.
Being able to render on 64 threads for just the price of two high-end graphics cards seriously makes me consider sticking with CPU rendering for my next workstation. Even at a lower frequency per core, the time savings could be substantial.
I mean, it's a tradeoff. It will always be a tradeoff, so the answer you're going to get is "never". There will always be fewer cores vs. a higher base clock (I'd imagine).
However, the clock speeds of Ryzen/TR/Epyc/whatever are more than enough for a "good" workload. In fact, you could argue that unless you're doing something which requires only one single core (and honestly, I can't envisage a workload like this in a professional environment, but I'm surely wrong), the speed is fantastic and the difference not really noticeable.
It's incredibly easy these days, at least in .NET, to parallelize workloads locally, so using something like this would outshine any 4 GHz-base 8-core any day of the week. But to actually answer your question as to when we can see 4 GHz+ base clocks on 16/32-core processors... the next 5 years? I guess...
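The comment above is about .NET, but the same fan-out shape works in most ecosystems. Here is a Python sketch of the idea; the prime counter is just a stand-in CPU-bound work item, not anything from the thread:

```python
from multiprocessing import Pool

def count_primes(n):
    """Naive trial-division prime count below n -- a stand-in CPU-bound task."""
    count = 0
    for k in range(2, n):
        if all(k % d for d in range(2, int(k ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Pool() defaults to one worker per logical core, so a 32-thread
    # Threadripper chews through the batch that much faster than an 8-core.
    with Pool() as pool:
        results = pool.map(count_primes, [10_000] * 8)
    print(results)
```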
If the other cores aren't doing useful work, they'll be powered down to let the remaining core run faster. Not perfectly, but the extra millimeters of surface contact with the cooler that the extra cores provide probably make them a net benefit in that case.
In an attempt to be as independent of third-party services as possible I switched from Plex to Emby and I see no reason to go back. You can even stream media outside your home network for free.
I don't think they plan to support dual Threadripper, because single Epyc does more or less the same thing without the inter-socket latency or expense of a dual-socket motherboard.
It's effectively two Threadrippers on a single package (2x cores, 2x memory channels, 2x PCIe lanes) for only slightly more than double the price ($999 for 16C TR, $2100 for 32C Epyc).
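Per core, those list prices work out nearly identical; simple arithmetic on the figures quoted above:

```python
tr_per_core = 999 / 16      # 16-core Threadripper at $999
epyc_per_core = 2100 / 32   # 32-core Epyc at $2100
premium = epyc_per_core / tr_per_core - 1
print(f"TR: ${tr_per_core:.2f}/core, Epyc: ${epyc_per_core:.2f}/core "
      f"({premium:.0%} per-core premium for 2x memory channels and PCIe lanes)")
```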
"With Threadripper, you can run two graphics cards at X16 PCIe speeds, two at X8, and still have enough lanes left over for three X4 NVMe SSDs connected directly to the CPU."
Most PCIe NVMe SSDs use 4 lanes right now, so theoretically a CPU could support (total PCIe lanes / 4) drives. It usually boils down to motherboard configuration or add-on card support for how many you can practically get in a system, though.
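The quoted configuration adds up against Threadripper's 64 CPU lanes like this (plain arithmetic; note that on real boards a few lanes are also reserved for the chipset link):

```python
gpu_lanes = 2 * 16 + 2 * 8   # two cards at x16 plus two at x8
nvme_lanes = 3 * 4           # three x4 NVMe SSDs wired to the CPU
used = gpu_lanes + nvme_lanes
print(f"{used}/64 lanes used, {64 - used} spare")

# Upper bound from the comment above: how many x4 drives 64 lanes could feed.
print("max x4 NVMe devices:", 64 // 4)
```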
An Intel 7900X with an X299 board can do that too, with UEFI-level RAID on top of that (might be important for Windows users, not important at all for Linux users).
Reviews:
* http://www.guru3d.com/articles-pages/amd-ryzen-threadripper-...
* https://www.techspot.com/review/1465-amd-ryzen-threadripper-...
* http://www.tweaktown.com/reviews/8303/amd-ryzen-threadripper...
* https://hothardware.com/reviews/amd-ryzen-threadripper-proce...
* https://arstechnica.co.uk/gadgets/2017/08/amd-threadripper-r...
* http://www.pcworld.com/article/3214635/components-processors...
* https://www.forbes.com/sites/antonyleather/2017/08/10/amd-ry...
Other Links:
* https://videocardz.com/71804/amd-ryzen-threadripper-review-r...
http://www.hardwarecanucks.com/forum/hardware-canucks-review...
https://www.hardocp.com/article/2017/08/10/amd_ryzen_threadr...
http://www.gamersnexus.net/hwreviews/3015-amd-threadripper-1...
http://www.legitreviews.com/amd-ryzen-threadripper-1950x-thr...
http://www.techradar.com/reviews/amd-ryzen-threadripper-1950...
http://hexus.net/tech/reviews/cpu/108628-amd-ryzen-threadrip...
https://www.vortez.net/articles_pages/amd_ryzen_threadripper...
google translate of computerbase.de: https://translate.google.com/translate?hl=de&sl=de&tl=en&u=h...
google translate of pgh.de: https://translate.google.com/translate?sl=de&tl=en&js=y&prev...
google translate of tomshardware.de: https://translate.google.com/translate?sl=de&tl=en&js=y&prev...
google translate of hardwareluxx.de: https://translate.google.com/translate?sl=de&tl=en&js=y&prev...
https://github.com/mozilla/rr/issues/2034
rr is important enough for C++ development that I agree it's a dealbreaker.
[1] https://twitter.com/jkampman_tr/status/895645729972080640
[2] http://techreport.com/news/32377/here-a-sneak-peek-at-our-ry...
[3] http://techreport.com/news/32343/updated-wanted-for-review-a...
Ryzen is also comparatively poor at DAW workloads.
http://www.phoronix.com/scan.php?page=news_item&px=Ryzen-Seg...
Also, Ars Technica reported much better numbers; I think AnandTech either had a bad config or there is a bug in the compiler they used.
https://arstechnica.co.uk/gadgets/2017/08/amd-threadripper-r...
[1] https://twitter.com/IanCutress/status/868799386079420416
http://www.anandtech.com/show/11697/the-amd-ryzen-threadripp...
https://www.overclock3d.net/reviews/cpu_mainboard/asus_x399_...
NVMe SSD drives feel like the only _large_ performance change this decade.
Huh. Will AMD workstations be built around similar Epyc products, then? But those would have worse power consumption, right? (4 dies versus 2)
The fastest 16-core Epyc runs at 2.4/2.9 GHz (base/boost), while TR runs at 3.4/4.0 GHz (4.2 GHz with XFR, also with 16 cores).