[+] [-] tracker1|8 years ago|reply
Another issue, beyond the X299 boards being problematic in terms of how many (and which) slots you can and can't use, seems to be excessive heat and stability problems in some of the reviews I've seen.
I'm still running an i7-4790K at home, and though I'd like something with more cores... nothing is compelling enough to bring me to switch, given the costs involved. If I were building new, I would most likely go with an AMD solution.
[+] [-] NamTaf|8 years ago|reply
CPU gains for the last few gens have mainly been in perf/watt and, now, core count in lower-class chips thanks to AMD. The biggest benefit of a system upgrade for the last few gens, from a user perspective, has been the chipset's gains - NVMe, USB 3, USB-C, etc. - more than raw performance. That's slowly trickled up, but the real-world gains in many tasks haven't been enough to really justify it.
I know many people still sitting on anywhere from a 2xxx gen to a 5xxx gen who just don't feel compelled to upgrade from a CPU perspective. Those that eventually do, do so for the motherboard features more than the CPU - that's just a necessary cost for a small benefit.
This is marginally different for laptops, where perf/watt becomes more important, of course. For desktops, however, I certainly wouldn't be troubled by a 4xxx gen. I upgraded last year, and it was from a 920 to a 6600K. Even the 920 did much of what I wanted, honestly; it was more a luxury upgrade.
[+] [-] sliken|8 years ago|reply
Indeed, especially with Intel still trying to upsell users to the highest-level X chips to get more PCIe lanes. Fortunately, AMD has more than Intel available from their cheapest chip to their most expensive.
[+] [-] cyphar|8 years ago|reply
I'm not a fan of either megacorp (though I prefer Intel because their stuff works much better historically on GNU/Linux), but it should be painfully obvious that "i9" is just a reaction to the threat of ThreadRipper from AMD (which is still going to be more powerful and far more affordable than i9 when it launches).
EDIT: To be fair, they do mention this in TFA:
> That these chips are currently little more than a product name and a price [...] is a strong indication that Intel was taken aback by AMD's Threadripper, a 16-core chip due for release this summer.
[+] [-] throwaway209402|8 years ago|reply
> it should be painfully obvious that "i9" is just a reaction to the threat of ThreadRipper from AMD
This is incorrect. As Ryan Shrout from PcPer notes: [1]
> In some circles of the Internet, the Core i9 release and the parts that were announced last month from Intel seem as obvious a reaction to AMD’s Ryzen processor and Threadripper as could be shown. In truth, it’s hard to see the likes of the Core i9-7900X as reactionary in its current state; Intel has clearly been planning the Skylake-X release for many months. What Ryzen did for the consumer market was bring higher core-count processors to prevalence, and the HEDT line from Intel has very little overlap in that regard. Threadripper, having just been announced in the last 60 days (even when you take into account the rumors that have circulated), seems unable to have been the progenitor of the Core i9 line in its entirety. That being said, it is absolutely true that Intel has reacted to the Ryzen and Threadripper lines with pricing and timing adjustments.
[1] : https://www.pcper.com/reviews/Processors/Intel-Core-i9-7900X...
[+] [-] paulmd|8 years ago|reply
> it should be painfully obvious that "i9" is just a reaction to the threat of ThreadRipper from AMD
Maybe the branding, but the HEDT lineup has been around since 2010. Shockingly, this lineup escaped the notice of AMD loyalists, or was assumed to be equivalent to Bulldozer's CMT (which is much more like an SMT/hyperthread than most people would admit at the time) - however, these chips have always been excellent at gaming, particularly compared to Bulldozer's miserable IPC.
The high-core-count parts are probably a reaction to Threadripper, but these chips have been around in the Xeon lineup forever and really should have stayed there. These chips (including Threadripper) are really multi-socket-in-a-package and don't perform particularly well in gaming workloads (despite AMD advertising them for such). Games don't scale well given the latency. They will be nice for things like CAD rendering workstations, but that's a much narrower niche.
The i9 branding also slices off ECC, which is a significant feature for many things these will actually be good at. Threadripper will be at a significant advantage for actual server usage (even home-server) as a result. Intel is trying to avoid cannibalizing sales in their more expensive Xeon lineup but it does kill a bunch of the utility of the processor as a result.
[+] [-] jitl|8 years ago|reply
> Intel Core i9-7900X review: The fastest chip in the world, but too darn expensive
> When eight-core Ryzen costs £300, do any of these new Intel chips make sense?
[+] [-] msimpson|8 years ago|reply
Between the Intel Core i9-7900X and the AMD Ryzen 7 1800X there is a $540 price difference.
But for that extra cost you get four more threads at a higher clock rate, twenty extra PCIe lanes, a 500 MHz higher turbo clock speed, and double the memory bandwidth.
Even with the lesser Intel Core i7-7820X you will get the same thread count but at a higher clock rate, four extra PCIe lanes, still a 500 MHz higher turbo clock speed, and double the memory bandwidth, for only $140 more.
Now, of course, the AMD Ryzen Threadripper 1950X comes much closer to the i9-7900X price point.
However, you will sacrifice single-core performance to gain twelve more threads at a lower clock rate. But you will receive twenty more PCIe lanes, over twice as much L3 cache, and the same memory bandwidth as the i9-7900X.
So if your plan is to build a 3D render farm, the Threadripper seems quite appropriate.
Although, if you plan to build a workstation on which to model 3D assets and to perform preview renders, the Intel i9 series seems more apt.
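As a rough price-per-resource sketch (the dollar figures are assumptions inferred from the $540 and $140 gaps above, i.e. roughly $999, $599, and $459; the lane counts are the platform totals implied by the comparisons):

    # Rough price-per-thread / price-per-lane arithmetic for the chips above.
    # Prices are assumptions inferred from the "$540" and "$140" gaps cited.
    chips = {
        "i9-7900X": {"price": 999, "threads": 20, "pcie_lanes": 44},
        "i7-7820X": {"price": 599, "threads": 16, "pcie_lanes": 28},
        "R7 1800X": {"price": 459, "threads": 16, "pcie_lanes": 24},
    }
    for name, c in chips.items():
        print(f"{name}: ${c['price'] / c['threads']:.0f}/thread, "
              f"${c['price'] / c['pcie_lanes']:.0f}/PCIe lane")

On that crude metric the 1800X still wins on threads per dollar; the extra Intel money buys clocks, lanes, and bandwidth.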
[+] [-] nirvdrum|8 years ago|reply
Not to take away from your numbers, but you should consider the cost of the motherboard, too. The HEDT Intel motherboards tend to be more expensive than the Ryzen ones and much harder to find replacements for down the line. Microcenter seems to only sell one X299 board, and it's $310. Depending on your desired feature set, you can get a Ryzen motherboard for < $100.
[+] [-] brianwawok|8 years ago|reply
Are there really that many people doing 3D model rendering?
My guess would be PC users by count are:
gamers > programmers > 3D renderers
For most gamers, it seems like the i7 or maybe the i9 wins in current benchmarks. For programmers, maybe Ryzen is a better fit, but I bet it depends on your language.
[+] [-] paulmd|8 years ago|reply
The real problems with Skylake-X are chipset cost, power consumption, shitty partner boards, and TIM. All of these are forgivable given the performance - except the TIM.
Chipset cost will come down 6-12 months after launch, like it always does. This is par for the course; at launch, X370 boards for Ryzen were going for well over $250 as well.
Power consumption is a consequence of AVX512 and the mesh interconnect along with raw core count. Everyone wants higher clocks, more cores, and more functional units. There are no easy efficiency gains anymore, and this is the price - power consumption. This is the "everything and the kitchen sink" processor and it runs hot as a result - but it absolutely crushes everything else on the market. This is no Bulldozer.
Board partners putting insulators on top of their VRMs was going to come to a head sooner or later. This is the natural outgrowth of form over function: RGB LEDs on everything and stylized heatsink designs that insulate the board instead of actually cooling it. The terrible reviews of those boards will sort this problem right out; they are unusable in their current form.
Intel has been cruising for issues with their TIM for years (since Ivy Bridge); this time they finally have a chip that puts out enough heat that they can't ignore it. Intel can get away with making you delid a $200 i5 or a $300 i7, but it's not acceptable on a $1000 processor.
There is still a market for a 6-12C HEDT chip that can hit 5 GHz overclocked. This thing absolutely smokes Ryzen in gaming at stock clocks let alone OC'd - single-thread performance is still a dominant factor in good gaming performance and this chip delivers in spades. Combining its leads in IPC and clocks, it's fully 33% faster than Ryzen in single-thread performance. This is just a brutal amount of performance for gaming. Unfortunately without delidding you're not going to hit good OC clocks given the current TIM. And delidding is a dealbreaker on a $1000 CPU.
TIM is the actual core problem with Skylake-X - everything else will sort itself out. Skylake-X with solder would be a winner, and Intel would be wise to turn the ship as fast as possible. The 6C and 8C versions are priced much more reasonably and will sell great as long as Intel fixes the TIM problem.
Intel claims they have problems with dies cracking, but AMD manages to solder much smaller dies, so IMO Intel just doesn't have a leg to stand on here. This is not something that should be pushed onto the customer with a $1000 processor - you're Chipzilla, figure something out.
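Rough arithmetic on that 33% single-thread figure - the leads compound multiplicatively (the IPC/clock split below is an illustrative assumption, not a measurement):

    # Single-thread lead ~= (1 + IPC lead) * (1 + clock lead) - 1.
    # The split below is an assumed illustration, not measured data.
    ipc_lead = 0.08    # assumed Skylake-over-Zen IPC advantage
    clock_lead = 0.23  # assumed clock advantage, e.g. ~4.9 GHz OC vs ~4.0 GHz
    print(f"{(1 + ipc_lead) * (1 + clock_lead) - 1:.0%}")  # -> 33%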
[+] [-] FiReaNG3L|8 years ago|reply
TIM = Thermal Interface Material
[+] [-] glenndebacker|8 years ago|reply
"This thing absolutely smokes Ryzen in gaming at stock clocks let alone OC'd - single-thread performance is still a dominant factor in good gaming performance and this chip delivers in spades."
That's if you are limiting yourself to 1080p... At the resolution I game at (3440 x 1440), those performance differences disappear very fast. And even at 1080p, 150 vs 180 frames doesn't matter that much to the majority of people.
The cost of this chip alone is the same as some entire Ryzen builds. There is a point where it financially doesn't make sense (price/performance), even if it's the fastest chip around.
[+] [-] vosper|8 years ago|reply
At the desktop level, I don't get why people care that much about power consumption. It means you have to dissipate more heat, okay, so you can't use a cheap cooler. But AFAIK even an extra 100 W is cheap in the most expensive areas, especially when contrasted against productivity, or cigarette breaks, or people sometimes being 20 minutes late to work...
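Back-of-the-envelope, with an assumed duty cycle and tariff (adjust for your own situation):

    # Yearly cost of an extra 100 W at the desktop; all inputs are assumptions.
    extra_watts = 100
    hours_per_day = 8      # assumed full-load working hours
    days_per_year = 250    # assumed working days
    price_per_kwh = 0.30   # assumed tariff in an expensive region, $/kWh
    kwh = extra_watts / 1000 * hours_per_day * days_per_year
    print(f"{kwh:.0f} kWh/yr -> ${kwh * price_per_kwh:.0f}/yr")  # 200 kWh -> $60

Even at a worst-case tariff that's around $5/month, which is noise next to a single lost hour of anyone's time.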
[+] [-] dis-sys|8 years ago|reply
I bought my Ryzen chip on its release day. I don't need some X370 board; I got myself a B350M board, which doesn't hold me back at all for my applications. It cost me $100 delivered.
There is no such alternative for Skylake-X - Intel charges you an arm and a leg for its half-decent products.
[+] [-] lmm|8 years ago|reply
Overclockers are vocal, but I can't imagine they're more than a tiny portion of the market these days. For most use cases, modern CPUs are plenty fast at stock. Businesses won't do something unsupported. Games will be written to run properly on supported chips. Maybe with a lot of effort you can get your games looking slightly nicer, sure, but for how many people is that worth it?
[+] [-] redtuesday|8 years ago|reply
> Unfortunately without delidding you're not going to hit good OC clocks given the current TIM. And delidding is a dealbreaker on a $1000 CPU.
Yeah, absolutely agree. What's sad is that there are fanboys defending this, saying it makes direct-die cooling easier through delidding, which saves 1-2 °C over solder, and that it's the best idea Intel has had in recent years.
[+] [-] sengork|8 years ago|reply
I'll answer your subtle comment quite explicitly: never.
[+] [-] arcanus|8 years ago|reply
Intel clearly rushed this out to prevent AMD from having the perception of leading the space, at least in terms of the largest core count. The article even references this.
I just hope that it stays competitive, as this is clearly a win for consumers.
[+] [-] JonRB|8 years ago|reply
I'm ignorantly guessing Intel has always kept something in their back pocket ever since AMD's Athlons bested Intel's P4, just in case AMD ever tried to wrest the crown back. Or, I would, anyway.
[+] [-] mc32|8 years ago|reply
I'm all for double the memory bandwidth (and more importantly double the memory channels) as long as it's not too expensive. But I'm holding off on buying a new desktop till the AMD Threadripper hits in a few weeks.
[+] [-] sliken|8 years ago|reply
I suspect that benchmarks do a really poor job of measuring worst-case performance... which is what users notice: things like UI lag and audio skipping. I suspect memory bandwidth (assuming a nice fast M.2 SSD) is the limiting factor for heavy workloads made up of independent tasks.
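A minimal sketch of what "measure the worst case" could look like - time many runs and compare the mean with a high percentile (the workload here is a stand-in; substitute whatever you actually care about):

    # Compare average vs. tail latency over repeated runs of a task.
    import statistics, time

    def task():
        sum(i * i for i in range(100_000))  # placeholder workload

    samples = []
    for _ in range(200):
        t0 = time.perf_counter()
        task()
        samples.append(time.perf_counter() - t0)

    samples.sort()
    mean = statistics.mean(samples)
    p99 = samples[int(len(samples) * 0.99)]  # 99th-percentile sample
    print(f"mean: {mean * 1e3:.2f} ms, p99: {p99 * 1e3:.2f} ms")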
[+] [-] rbanffy|8 years ago|reply
Impressive as this is for an x86, some POWER and SPARC users may disagree with this assessment. In fact, some Xeon users will doubtless scratch their heads too.
[+] [-] nl|8 years ago|reply
Only reason they'd scratch their heads is to try to find the hair they lost explaining why they are on POWER or (especially) SPARC.
POWER9 is competitive in performance/watt, and in some weird benchmarks which no one cares about. I haven't seen anything competitive in any way from SPARC for a long time.
[+] [-] turblety|8 years ago|reply
But once again they have included a second "processor" on each chip, with a bunch of restricted backdoors that cannot be removed [1]. Bugs [2] and exploits [3] have already been found, and therefore no Intel or AMD chip can be used if you care about security and/or privacy.
If I think Microsoft isn't free enough for me, then I can remove Windows and install Linux (or BSD). If I think Chrome is sending my data to Google, then I can remove it and install Firefox.
But if I don't like that Intel can take over my PC at any time, watch my screen, log my keystrokes, prevent me from installing another operating system, manipulate what I see on the screen, and much, much more, then there is nothing I can do. I cannot remove the second chip or remove the code. I must have a proprietary blob [4] (whose source code no one can see to audit) running on my Intel PC.
But the worst thing has to be Intel and AMT's complete refusal to provide a clean chip to companies that are trying to provide backdoor-free computers. Look at http://puri.sm [5] for example. They are trying to provide a PC that does not restrict what operating system or BIOS you run, and have repeatedly contacted Intel to ask them to provide a batch of chips with no ME or AMT installed. Even Google, which sells millions of Chromebooks (with coreboot preinstalled), has been unable to persuade them. [6]
As Intel and AMT are the biggest players and arguably a monopoly of the microprocessor market, they have a responsibility to provide safe and clean processors that customers can truly own. Please try your best not to buy these products until they resolve these issues.
[1] https://libreboot.org/faq.html#intelme
[2] https://www.theregister.co.uk/2017/05/01/intel_amt_me_vulner...
[3] https://www.intel.com/content/www/us/en/architecture-and-tec...
[4] http://boingboing.net/2016/06/15/intel-x86-processors-ship-w...
[5] https://puri.sm/learn/intel-me/
[6] https://libreboot.org/faq.html#intel-is-uncooperative
[+] [-] kennydude|8 years ago|reply
So you can view the code and verify it's what is installed, but for security purposes you can't change it.
The way this reads, it seems deliberately suspicious. Having an HTTP server so low in the hardware stack feels wrong.
[+] [-] qb45|8 years ago|reply
Who is AMT? I thought AMT was just Intel's marketing name for certain remote management features.
Also, I'm under the impression that [6] refers not only to firmware blobs running on various auxiliary coprocessors but also to the machine's firmware, i.e. the BIOS, and in particular the CPU initialization component, which has been provided only in binary form by both Intel and AMD for a few years now.
[+] [-] Shorel|8 years ago|reply
I myself am thinking of buying one of these new AMD processors.
[+] [-] unknown|8 years ago|reply
[deleted]
[+] [-] chx|8 years ago|reply
Careful with these benchmarks. The 6700K and the 6700T show the same Cinebench R15 score on https://www.notebookcheck.net/Intel-Core-i7-6700T-Processor-... (click "Show comparison chart" below Cinebench R15 - CPU Multi 64Bit). I do not think anyone believes those two CPUs perform the same: they are the same Skylake architecture, but one is 4-4.2 GHz with a 91 W TDP while the other is 2.8-3.6 GHz with a 35 W TDP. The difference is decidedly not 1%.
It's not that notebookcheck is benchmarking something outlandish: it shows 668 while this Ars article claims 637 for the 6700K, and Notebookcheck benchmarked the 6950X at 1859 while Ars has 1786; both are very close.
[+] [-] vith|8 years ago|reply
I think you may have some model numbers and/or benchmark numbers mixed up. I don't see the 6700K or 6700T in the charts in this Ars article.
I see a 7600K with the 637 score, but that lacks hyperthreading and has 25% less L3 cache compared to the 6700T, so it makes sense that the 17% frequency advantage is mostly balanced out (there's little IPC difference between Kaby Lake and Skylake).
You don't have notebookcheck's numbers matched up with the right CPUs either: https://www.notebookcheck.net/Mobile-Processors-Benchmark-Li...
To be fair, Ars has the i5-7600K listed as an i7, and Notebookcheck has the cache sizes wrong: http://ark.intel.com/compare/88200,97129,97144,88195
So there is plenty of confusion to go around.
Edit: Actually the frequency difference may be a bit off from 17%, that was based off the max single core turbo frequencies. I don't know what the all core turbos are.
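For reference, the arithmetic behind the percentages being thrown around (turbo clocks as cited upthread: 7600K at 4.2 GHz max turbo, 6700T at 3.6 GHz; scores as cited by chx):

    # Clock ratio plus the Notebookcheck-vs-Ars Cinebench deltas upthread.
    print(f"turbo ratio:  {4.2 / 3.6 - 1:.1%}")    # ~16.7%
    print(f"668 vs 637:   {668 / 637 - 1:.1%}")    # ~4.9%
    print(f"1859 vs 1786: {1859 / 1786 - 1:.1%}")  # ~4.1%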
[+] [-] rkv|8 years ago|reply
I'm out of the loop when it comes to the latest processor technology. How does this chip maintain the same TDP as a 6850K but with four more cores? Same lithography, roughly the same frequency, same memory sizes/types.
[+] [-] piinbinary|8 years ago|reply
The CPUs will generally use far less power than the TDP might suggest.
[+] [-] brigade|8 years ago|reply
Power scaling with frequency is superlinear, since voltage can decrease along with frequency (dynamic power goes as V² · f), so even the 10% difference in base clocks could explain it, depending on where on the voltage/frequency curve they are.
Plus, this is Intel's 3rd iteration on the same process; even if the feature size doesn't change you can extract a bit more power efficiency with 2 years of feedback.
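A minimal sketch of that dynamic-power relation, P ∝ C · V² · f (the voltage/frequency points are illustrative assumptions, not a real V/f curve):

    # Dynamic power relative to a baseline operating point: P ~ C * V^2 * f.
    def rel_power(v, f, v0=1.0, f0=1.0):
        return (v / v0) ** 2 * (f / f0)

    # A ~10% clock drop often permits a voltage drop too (assumed ~7% here):
    print(f"{rel_power(v=0.93, f=0.90):.2f}x power per core")  # ~0.78x

Spread that lower per-core power across more cores at the reduced V/f point, and the package TDP can stay put.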
[+] [-] dis-sys|8 years ago|reply
What a huge load of biased nonsense! I have a machine pretty similar to the one mentioned in the article below. It uses Intel processors released ages ago; in fact, they came from decommissioned servers from some random data centres. I'd be willing to bet that a machine with "the fastest chip in the world" is significantly slower and more expensive than mine when it comes to my long list of day-to-day development tasks.
Oh, don't forget to mention the fact that the "fastest chip in the world" can reach >100 degrees when fully loaded. Maybe Intel should pay some review sites to claim it to be the processor most suitable for cooking a meal.
https://www.techspot.com/review/1218-affordable-40-thread-xe...
In case you want to argue that my machine has two Xeons: you can actually order a single, more recent Xeon from newegg.com, put it into a consumer motherboard, and beat the xxx out of the i9-7900X. There is no way a 10-core Intel processor could possibly be the "fastest chip in the world".
[+] [-] Sorreah|8 years ago|reply
The 20+ core Xeons run at 2.1 or 2.2 GHz, while this runs at 4 GHz with a newer architecture. I don't think it's a big stretch to call this the fastest chip, especially since we're referring to consumer chips and not server chips that cost 9000 eurodollars (as is the case for those 20-core Xeons).
Also, have both of these setups run a mixed workload that isn't perfectly parallelizable and watch the Xeon struggle.
[+] [-] gcb0|8 years ago|reply
Maybe if it ended with "for very specific cases" it would be more accurate.