The rumor on the street is that Zen 3 is a major upgrade with huge IPC improvements.
If that's true, Intel has basically nothing. All they've got left at this point is "I want the best gaming performance and literally nothing else matters". AMD may just win that crown too.
It's never been more exciting to be an enthusiast, and Intel was asleep for a long time. It's about time someone woke the sleepy giant.
Weren't we all sick of Intel churning out 10% incremental improvements every year on an entirely new socket every time? They were phoning it in for a long time.
The biggest issue I see with this latest breed of Intel chips is thermals... it looks like you'll need a motherboard that can push quite a few watts to the CPU, and a cooling solution (liquid cooling or a high-end air cooler) to remove the heat.
If you need to drop $$$ on a high-end motherboard and cooler to actually hit top performance it can obscure the true cost of the CPU, because you can easily drop an extra $100 trying to cope with its power and thermal requirements.
> Weren't we all sick of Intel churning out 10% incremental improvements every year on an entirely new socket every time? They were phoning it in for a long time.
Sort of yes (although, they often did two generations per socket in recent times), but then they started just rehashing Skylake every year, and we long for the time of predictable incremental improvement. On the server side, the predictable incremental improvement also came with more cores each time too, so that was nice.
I have the fastest HEDT Intel processor (the 10980XE) and have benchmarked it extensively against AMD's equivalents (the 3950X and 3960X).
If you are willing to OC, then Intel has a significantly better price/performance ratio than AMD for general use. If you're doing anything that benefits from AVX, then Intel will pull away significantly. And if you are doing deep learning, then DLBoost is seriously under-appreciated. However, if you're not in those three camps, then AMD is the better option.
Also, the X299 LGA2066 socket has been around for 3 years, so Intel doesn't always just change sockets for no reason.
All they've got left at this point is "I want the best gaming performance and literally nothing else matters".
For the average person maybe that's all they've got left.
But technically they are still ahead in some interesting ways when we look at x86 extensions. In decreasing order of impact:
* AVX-512
* TSX
* SGX
Their next architecture could bring bfloat16 too but I wonder how useful that would be on a CPU instead of on a GPU.
AVX512 and TSX can give Intel a BIG performance advantage in some niche (but foundational) applications
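For what it's worth, bfloat16 is just a float32 with the low 16 mantissa bits dropped, which is why it's cheap to add to existing FP hardware. A stdlib-only Python sketch of the conversion (using simple truncation; real hardware typically rounds to nearest):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Round-trip a float through a truncated bfloat16 representation."""
    # Pack as IEEE-754 float32, keep only the top 16 bits (1 sign bit,
    # 8 exponent bits, 7 mantissa bits), then widen back to float32.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]
```

Because bfloat16 keeps float32's full exponent range but only about 2-3 significant decimal digits, it suits deep-learning workloads far better than general-purpose math, which is part of why its usefulness on a CPU is an open question.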
A _lot_ of people (myself included) buy Intel and only Intel. Maybe there isn't a good reason on a strict price/performance basis, but I like the wider choice of motherboards and I just trust their compatibility more.
Also, all our desktop systems here are Xeon because we've had bad experience with ECC support on AMD.
Edit: Honest truthful non-inflammatory comments get you downvoted and mocked. That's some toxic community you've built here, @dang.
Can someone break down what the most exciting CPU developments have been recently? I stopped paying attention a long time ago, but find myself interested having recently gotten back into 3D animation. I know GPU renderers are gaining steam, but there are still CPU renderers out there and putting my toes in the water I’m finding it difficult to parse the advantages of one new chip vs another.
* AMD introduced their Ryzen architecture ~3 years ago. This was the first time in years that their chips were competitive with Intel chips.
* AMD's newfound competitiveness forced Intel to increase core counts in their consumer chips. 6 and 8 core chips are now extremely common in the consumer space.
* AMD has released consumer chips with as many as 16 physical cores (32 threads) in their most recent generation. This is a huge improvement for tasks that scale well to multiple cores like (I presume) 3D animation. Intel has 10 core consumer chips out.
* If you're willing to move up to the HEDT chips you can now buy an AMD Threadripper chip with up to 64 cores (128 threads).
* Intel has struggled to make the next step in manufacturing. Their desktop chips have been stuck at 14nm for half a decade now. AMD contracts their manufacturing out and has been able to take advantage of other foundries' improved processes. Their current chips use 7nm process nodes and draw significantly less power than their Intel counterparts, so they require less extreme cooling.
* There are some new instruction sets such as AVX and AVX512 that can improve performance in some software. I'm not an expert there and you'd probably want to do some research on sites related to 3D rendering to see which chip would perform best for your needs.
The CPU developments may or may not be of use to you.
If you're really interested in buying hardware, you can follow a rough process. Web search for the software that peaks on an X/Y axis of "how much time I spend using it" and "how long things take because of CPU processing" and add in keywords like "benchmark" and "CPU." Read those articles carefully - maybe three different sites/authors to get different biases ironed out.
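That X/Y weighting can be made mechanical. A toy Python sketch of the idea; every application, fraction, and benchmark number below is hypothetical, so plug in your own hours and the review numbers you collect:

```python
# How much each app matters: hours of use, and how much of that time is
# actually gated on the CPU. All figures are made-up placeholders.
apps = {
    "renderer": {"hours_per_week": 10, "cpu_bound_fraction": 0.8},
    "browser":  {"hours_per_week": 20, "cpu_bound_fraction": 0.1},
}

# Relative per-app performance of two hypothetical CPUs (1.0 = baseline),
# as gathered from benchmark articles.
bench = {
    "renderer": {"cpu_a": 1.00, "cpu_b": 1.30},
    "browser":  {"cpu_a": 1.00, "cpu_b": 1.05},
}

def weighted_score(cpu: str) -> float:
    # Weight each app's benchmark result by how much it matters to you.
    return sum(
        a["hours_per_week"] * a["cpu_bound_fraction"] * bench[name][cpu]
        for name, a in apps.items()
    )
```

With these placeholder numbers, cpu_b's big renderer lead outweighs its small browser lead, which is exactly the kind of conclusion the X/Y exercise is meant to surface.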
I've been an AMD fan since the Athlon XP (though I skipped over the FX / Bulldozer mess and upgraded a Phenom to a Ryzen 2700X) so I can't pretend I'm unbiased. My opinion, though, is that Intel lost their way. They were tick-tocking really well, where they'd improve architecture and then process, and they were dominating AMD. When the Zen cores were released, first as Ryzen desktop chips (up to 1800X), the clouds began to part, because they were starting to look pretty competitive. The clocks are still not as high, and in specific gaming scenarios (specifically 1080p where high frame rates are critical), the Intel chips still showed a clear lead. The follow up Zen+ (up to 2700X) was a clear evolutionary improvement. (Along the way, Threadripper and Epyc chips were also released on the Zen/Zen+ architecture, and for workstation/server loads, they were competitive.)
Zen 2 was released last year with a move to chiplets, 7nm processes and another 15% increase in IPC. This really moved the needle. In many cases, the Ryzen 3xxx, Threadripper and Epyc chips are clearly better than their Intel counterparts. We also saw Intel cut the price of their HEDT chips in half overnight, which gives you some insight. Most recently, the first Zen 2 laptop chips were released as Ryzen 4xxx, with U, H and HS suffixes indicating low-power, high-power, and special high-power parts that perform about on par with the H chips while using less power. They seem to be a huge upgrade over previous Ryzen mobile parts, much more power efficient, and battery life is finally reasonable (and in some cases impressive).
On the Intel front, they've really struggled to break out of the cycle of tweaking their roughly five-year-old 14nm process, and while they tweak the architecture, they haven't had a major overhaul in quite some time. Meanwhile, the performance of Intel chips has arguably been affected more than AMD's by the fixes for various security flaws. And this old process/architecture requires Intel to keep raising power limits to boost clocks over 5 GHz, which of course produces lots of heat, as well as a reputation for laptops throttling in response to that heat.
There are still cases where it makes sense to buy Intel, but each person has to know their workload well enough to do the research to make that decision. If it's anything but mission critical (every second is worth real money), I don't think you can go wrong by just buying the best AMD chip you can afford. I don't think there are any real bad apples in the bunch, and in most cases you'll get a solidly adequate cooling solution for free and pretty good motherboard options. (Though I find the X570 market pretty unpleasant at the moment.)
$50 either way doesn’t change much especially since it’s often lost in motherboards, or if you need to buy a cooler (When B550 motherboards launch the balance will tilt further towards AMD).
What really does make a difference is thermals/noise. The higher end 9X00 and 10X00 series feel like they have been squeezed very tightly into their thermal envelopes.
A CPU should come with a cooler and with that cooler the CPU should be quiet with some headroom. Reviews of 10900 seem to indicate you should opt for one of the most expensive air coolers or even water cooling, which of course throws the price comparison off completely.
You'd think so, but it's a well known marketing trick that pricing at $399 and $449 will absolutely guarantee more sales of the lower priced item than if priced at $402 and $449.
I've been very happy with my 3900X so far. The only suggestion I would make is to upgrade from the cooler that comes in the AMD retail box to something like a Noctua. Since I made that change, it's run much cooler, which has likely extended its life significantly. Just make sure you have room in your case, as the Noctua is pretty darn big.
My workload with it is general .NET development, transcoding movies, and light gaming.
I can attest to Noctua building large coolers. I recently built a PC with an NH-D15, which came with two NF-P12 redux-900 120mm fans for the cooler. It was easily the largest cooler I’ve ever installed, and also the quietest. I only had it paired with an i7-9700K, but it was amazingly quiet even under max load. I wasn’t able to get it to thermally throttle with the 5 GHz boost enabled while running Prime95 and MSI Kombustor burn-in mode on the RTX 2060 Super simultaneously to heat up the case. Really happy with how that build turned out.
What would you all recommend for someone who just wants to maximize the performance of his IDE (.NET in Visual Studio)?
I get that going 3900X will give me the fastest compile times, which is cool. But I hit the build button relatively infrequently, and anyway I'm usually switching to reddit if the build is going to take longer than a few seconds, so I'm actually more interested in minimizing the latency of all the moment-to-moment operations like Intellisense, refactorings, highlighting, code folding, etc.
I suspect that some of those operations are mostly single-threaded, but even if there is some degree of parallelism going on, surely this type of "bursty" workload is where Intel can still register an advantage?
So, in other words, do these operations more closely resemble Benchmark A: "Blender render time" or Benchmark B: "Max FPS in Tomb Raider" (and why don't we talk more about the difficulty of making these sorts of determinations)?
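One way to frame the Benchmark A vs. Benchmark B question is Amdahl's law: if an editor operation is mostly serial, extra cores barely move its latency and per-core speed dominates. A sketch, where the parallel fractions are made-up illustrations rather than measurements of Visual Studio:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    # Overall speedup when only `parallel_fraction` of the work spreads
    # across `cores` and the remainder runs on a single core.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A hypothetical 90%-serial Intellisense-style pass barely benefits
# from 16 cores...
latency_bound = amdahl_speedup(0.10, 16)      # ~1.1x
# ...while a hypothetical 95%-parallel render benefits enormously.
throughput_bound = amdahl_speedup(0.95, 16)   # ~9.1x
```

So if those moment-to-moment operations really are mostly single-threaded, they behave like Benchmark B, and the chip with the best single-core performance wins regardless of core count.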
Visual Studio is slow because it's an unoptimized piece of software (it misses CPU cache often, does frequent disk seeks, etc.).
The best way to get performance from such software is:
1) Make sure it (and the project) are on a fast SSD. NVMe is better, but SATA is good.
2) getting a good CPU for the workload. For pure IDE user experience it won't make much of a difference as long as it's a recent high end enough CPU (fast RAM latency, high enough clock frequency, enough cores, ...)
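Point 1 can be sanity-checked with a crude probe like the one below. Note the OS page cache will flatter the numbers considerably, so this only catches gross misconfiguration (e.g. a project accidentally living on a network drive), not true SSD latency:

```python
import os
import tempfile
import time

def sample_read_latency(block_size: int = 4096, blocks: int = 256) -> float:
    """Rough average seconds per 4 KiB read from a freshly written file."""
    # Write ~1 MiB of random data to a temp file on the default temp volume.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(block_size * blocks))
        path = f.name
    try:
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(block_size):
                pass
        return (time.perf_counter() - start) / blocks
    finally:
        os.unlink(path)
```

On any healthy local SSD (cached or not) this should come back in the microsecond range per block; milliseconds per block would be a red flag worth investigating.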
For compute workloads, I highly recommend the AMD CPUs. In this case, doing multiple tasks at once (like switching between reddit and a build) qualifies :-)
For these types of workloads, I personally find that the synthetic tests (like Geekbench) are pretty good. In addition, Linus Tech Tips does a good job of covering “productivity” workloads, in which AMD usually massacres Intel. Finally, if you use VMs or Docker, the added core count is critical.
I’ve been looking at Ryzen CPUs for a while, and although I don’t have the budget for a top-tier box right now, this is excellent news. Drop one of these and an RTX into a good case and you have a workstation that will blast most Macs out of the water (if not all).
People that buy a Mac are looking for the fastest computer that will run OSX.
And if you want to do a Hackintosh then it really doesn't work that well with Ryzen e.g. issues with Creative Suite, Docker etc as well as poorer performance due to lack of AVX2/AVX-512.
I went with a 3700X and a 5700xt and can’t be happier. Compilation times have dropped significantly from my old box, that suffered heavily in performance from the meltdown/specter mitigations.
It also holds nicely my occasional gaming sessions :)
I don't know about that. The i5 6 core chips seem like pretty good value. I would probably go with an r5 3600, but it isn't really as clear cut anymore.
Checking local prices, the 3600X is AU$90 cheaper than the 10600K. Factor in a ~AU$60 cooler for the 10600K and the 6-core is competing with the 3700X on price, not the 3600X. In terms of value it still seems pretty clear-cut to me.
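Totting up the AUD figures from that comment makes the platform gap explicit. The 10600K's base price below is a placeholder, since only the deltas matter for the comparison:

```python
# Platform-cost comparison using the AUD deltas quoted above.
intel_10600k = 500                  # placeholder base price, AUD
amd_3600x    = intel_10600k - 90    # "AU$90 cheaper" per the comment
intel_cooler = 60                   # 10600K ships without a cooler
amd_cooler   = 0                    # 3600X includes a stock cooler

intel_total = intel_10600k + intel_cooler
amd_total   = amd_3600x + amd_cooler
gap = intel_total - amd_total       # total platform gap, AUD
```

At a AU$150 total gap, the 10600K really is priced against the 3700X rather than the 3600X.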
The article is comparing the upcoming Intel Core i9-10900K and Core i7-10700K chips, and how their price and performance compares to the AMD Ryzen 9 3900X.
That being said, it seems like, with the exception of some advantages in low resolution gaming, the less expensive Ryzen 5 3600 is still a better option than the (so far) unavailable Core i5 10600K.
There might be almost no reason to buy an Intel CPU, but interestingly, they do have the advantage in very small desktops because they have powerful CPUs with integrated graphics. Cooling without throttling is probably tricky if you get super small, but AMD's APUs are in need of an update.
AMD APUs are not that bad anymore, I am certainly considering a 3000G for my next cheap build. Obviously it would be nice to have a 15W Zen3 APU though, but one can only dream.
I was compiling the new and experimental Emacs native package and was warned that the compilation time took over 7 hours on a "fast computer." I was pleasantly surprised that it finished in under 20 minutes on my 3900x. That thing is insane!
Yeah, really. If they can pull something amazing out of an old CPU again, that'd really be something. Perhaps a 128-core Penryn with modern accelerators bolted on? :p
https://www.youtube.com/watch?v=yCcfgcjMFNk#t=962
That would have been around $575 before.
So when I buy an AMD CPU I also have to buy a GPU?
https://www.techspot.com/review/2003-amd-ryzen-4000/
Here, a 35W AMD laptop chip (3.0/4.3Ghz) scores 492 on single-core Cinebench R20 against a 90W Intel chip (2.3/4.8Ghz) scoring 449.
https://www.techradar.com/reviews/amd-ryzen-7-3700x
Here, the Intel desktop chip does edge out the AMD desktop chips.
https://www.tomshardware.com/reviews/amd-ryzen-9-3950x-revie...
In the first graphic, if you click the right arrow, you see single-core benchmarks. Even a 5.0Ghz desktop chip from Intel is behind the 4.6/4.7Ghz Ryzen chips.
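Dividing those TechSpot single-core numbers through by power makes the gap even starker (using each chip's rated wattage from the review as a rough proxy for actual draw):

```python
# Cinebench R20 single-core points per watt, from the figures quoted above.
amd_score, amd_watts = 492, 35        # 35 W AMD laptop chip
intel_score, intel_watts = 449, 90    # 90 W Intel chip

amd_ppw = amd_score / amd_watts       # ~14.1 points per watt
intel_ppw = intel_score / intel_watts # ~5.0 points per watt
```

Roughly a 2.8x efficiency advantage on this one benchmark, which is consistent with the 7nm-vs-14nm process gap discussed upthread.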