item 22890191

10-core i9-10900F desktop CPU lags behind 8-core Ryzen 9 4900HS mobile APU

206 points | DeathArrow | 6 years ago | notebookcheck.net

141 comments

[+] sharken|6 years ago|reply
The 65W is a bit misleading, according to https://www.techpowerup.com/265695/intel-core-i9-10900f-can-... and https://wccftech.com/intel-core-i9-10900f-10-core-desktop-cp... it is allowed to draw 170W in PL1 mode and up to 224W(!) in PL2 mode.

Those numbers should feature prominently alongside the 65W figure.

It does not look good for Intel at the moment.
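Those PL1/PL2 figures aren't in any consumer-facing spec sheet, but on Linux you can read the limits the platform has actually programmed via the standard `intel-rapl` powercap sysfs interface. A minimal sketch, assuming that interface is present (it won't be on non-Intel systems, and may need root):

```python
# Sketch: read the programmed package power limits (PL1/PL2) from the
# Linux intel-rapl powercap sysfs interface. The paths below are the
# standard powercap layout; on systems without it this returns {}.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")

def uw_to_w(microwatts: int) -> float:
    """sysfs reports limits in microwatts; convert to watts."""
    return microwatts / 1_000_000

def read_power_limits(base: Path = RAPL) -> dict:
    """Map constraint names ("long_term" = PL1, "short_term" = PL2) to watts."""
    limits = {}
    for name_file in base.glob("constraint_*_name"):
        idx = name_file.name.split("_")[1]
        limit_file = base / f"constraint_{idx}_power_limit_uw"
        if limit_file.exists():
            limits[name_file.read_text().strip()] = uw_to_w(int(limit_file.read_text()))
    return limits

if __name__ == "__main__":
    print(read_power_limits())  # e.g. {'long_term': 170.0, 'short_term': 224.0}
```

Note the motherboard vendor, not Intel, often decides what gets written into these limits, which is part of why "65W" parts draw 170W+.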

[+] baybal2|6 years ago|reply
Yes, Intel's "marketing TDP" is a complete sham.

You get completely different digits once you get full engineering datasheets under NDAs.

Intel's 5W "ultraportable" CPUs, for example, go up to 17W.

15W ones can boost well above 40W.

[+] ipsum2|6 years ago|reply
Yeah, TDP is calculated differently by Intel and AMD, usually in a way that makes Intel look better. It makes comparison harder because the surrounding hardware components are going to be different (e.g. different mobo, different memory, etc.), but reviewers should measure wattage at the wall.
[+] briffle|6 years ago|reply
Does this new Intel CPU have 'fixes' for all the issues like Spectre and Meltdown, where you were supposed to turn off hyperthreading for better security? I'm just curious whether the fixes for those (and other similar vulnerabilities) are starting to make their way into silicon, and also because the recommended mitigations usually slow CPUs down quite a bit.
[+] kllrnohj|6 years ago|reply
> Does this new intel CPU have 'fixes' for all issues like spectre and meltdown

It's worth noting that Spectre-V1 is likely to be with us for many, many years. CPU vendors have so far only shown interest in addressing Spectre when it crosses process or ring boundaries. In-process Spectre leaks (so variant 1, Bounds Check Bypass) are currently not really considered a problem in the CPU's eyes: no privilege boundary was crossed, therefore not a bug.

As in, secure in-process sandboxing seems to just be dead. Or left as an exercise for the embedding code to handle somehow, with no CPU support.
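The main software-side defense for variant 1 is "index masking": rather than trusting a bounds-check branch the CPU may speculate past, force any out-of-range index to a harmless in-bounds value with branch-free arithmetic (this is what the Linux kernel's `array_index_nospec` does at the machine-code level). Python obviously has no speculative execution; this sketch only illustrates the shape of the transform:

```python
# Sketch of the index-masking idea used against in-process Spectre-V1.
# Assumption: index is non-negative and well below 2**63, so the sign of
# (index - size) tells us whether it is in bounds.

def clamp_index_nospec(index: int, size: int) -> int:
    """Branch-free clamp: returns index if 0 <= index < size, else 0.

    (index - size) is negative exactly when index < size; an arithmetic
    right shift smears that sign bit into an all-ones (-1) or all-zeros
    mask, so no data-dependent branch is needed.
    """
    mask = (index - size) >> 63  # -1 if index < size, else 0
    return index & mask          # unchanged in bounds, forced to 0 otherwise

table = [10, 20, 30, 40]
assert table[clamp_index_nospec(2, len(table))] == 30   # in bounds: untouched
assert table[clamp_index_nospec(999, len(table))] == 10  # out of bounds: slot 0
```

In real C code the same pattern is applied before the speculatively-executed load, so even a mispredicted bounds check can only read in-bounds data.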

[+] kilo_bravo_3|6 years ago|reply
Despite the protestations of l33t Cyberhaxxing Z3r0 Cools everywhere, you only have to turn off hyper-threading if you are sharing a machine with someone you don't trust.

If someone is on your machine and you don't know it, they don't have to use side-channel attacks.

They can just use any of the thousands of other privilege-escalation techniques to read the super-secret information you have stored in RAM, or find it when it is written to disk.

The OpenBSD folks disabled SMT by default because of TLBleed.

A non-datacenter/cloud user disabling SMT to avoid TLBleed is like a normal person carrying a fireman's rescue saw around with them 24 hours a day in case they get into a situation they have to saw themselves out of.

[+] heelix|6 years ago|reply
I've got a 4800HS 8-core laptop that should be showing up on my doorstep later today. My kid is doing some heavy video/graphics work in Adobe for her undergraduate program, and we're going to try this as a potential desktop/workstation replacement since she is still a bit of a nomad as a student. Her current laptop just can't handle the work without trying to cook itself, but I'll have some apples-to-apples rendering comparisons based on her homework soon.

The crazy thing is the cost: about $1,000 for a base unit. I'll be doing some comparisons against my other desktop Threadrippers just to see where it stacks up.

[+] formercoder|6 years ago|reply
Nice - assuming it has USB-C? The external USB-C SSDs are great for source media.
[+] nicoburns|6 years ago|reply
I'm really hoping Apple puts these AMD mobile chips in their next MacBook Pro. At the moment I'm not seeing much reason to upgrade from my 2015 model, but if I could have 8 cores, that would be a different matter...
[+] hajhatten|6 years ago|reply
If Apple ever switches CPU suppliers for their MacBooks, it'll probably be to their own chips.
[+] jeffnappi|6 years ago|reply
One of the key reasons we might not see this yet is Thunderbolt 3 support. Thunderbolt is noticeably missing from the new AMD laptops on the market - e.g. the Asus ROG Zephyrus G14 (2020). Thunderbolt itself is an Intel technology.

The emergence of USB 4 (effectively TB3) as a standard will change that. Perhaps we'll see that become available in AMD's next generation of CPUs.

[+] vijaybritto|6 years ago|reply
I think Apple is already in the process of making their own chips. I'm guessing they'll release an ARM chip for laptops. They are already blazing ahead on the mobile side.
[+] pram|6 years ago|reply
The i9 in a 16 inch MBP is 8 cores though?
[+] nottorp|6 years ago|reply
Hmm I need some schooling please:

"The TDP for the latter part is 35 W while the i9-10900F is listed at 65 W, but being an Intel desktop processor that just reflects the TDP for the base clock, with much higher energy demands required for higher clocks (e.g. maximum PL1 has been recorded at 170 W)."

So where do I find the actual maximum power consumption on recent desktop CPUs then? Do I set a limit in the BIOS? Do I read each and every review to see what they measured?

Pointers to relevant links very much appreciated, thankee sai.

[+] leeter|6 years ago|reply
> So where do I find the actual maximum power consumption on recent desktop CPUs then?

Unfortunately, you do this by putting it on a motherboard and generally putting an ammeter on the 12V EPS rail. Either that, or just measure power from the wall. The CPU vendors have worked really hard to hide this, mostly because for most people it's largely irrelevant these days, as the CPU will scale to the available cooling and power delivery. There are some limits, though.

[+] rowanG077|6 years ago|reply
You can't find it. Intel has obscured the measurement too much. AMD is guilty of this as well, but on a much smaller scale than Intel.
[+] kllrnohj|6 years ago|reply
You look at measured draws for the relevant workloads. Anandtech has a good intro to the various states that an Intel CPU can be in here: https://www.anandtech.com/show/13544/why-intel-processors-dr...

For Intel specifically, the TDP is what you need to dissipate to sustain the base frequency. If you want to run beyond base frequency (the "all-core turbo"), then you need to handle more. How much more isn't officially documented, nor, annoyingly, are the turbo charts (i.e., this: https://en.wikichip.org/wiki/intel/core_i9/i9-9900k#Frequenc... )

So for a 9900K, the official numbers are 95W for 3.6GHz of all-core load. But it'll turbo with all cores loaded up to 4.7GHz, for which you'll need some amount of cooling. How much? Well, more than 95W, that's for sure, but that's all you officially know. So then you look at reviews, and you find that under full load the 9900K pulls down ~170W if it's not hitting thermal limits: https://images.anandtech.com/graphs/graph14605/111362.png
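If you'd rather measure than read reviews, Linux exposes the CPU's own energy accounting through the RAPL powercap interface: `energy_uj` is a cumulative microjoule counter that wraps at `max_energy_range_uj`, so sampling it twice gives average package power. A minimal sketch, assuming the standard sysfs layout (Intel/AMD systems; reading may require root):

```python
# Sketch: estimate actual package power by sampling the RAPL energy
# counter twice. The counter is cumulative microjoules and wraps at
# max_energy_range_uj, so the delta must be taken modulo that range.
import time
from pathlib import Path

PKG = Path("/sys/class/powercap/intel-rapl:0")

def avg_power_w(e0_uj: int, e1_uj: int, seconds: float, max_range_uj: int) -> float:
    """Average watts between two energy samples, handling counter wrap."""
    delta_uj = (e1_uj - e0_uj) % max_range_uj  # modulo fixes a wrapped counter
    return delta_uj / 1_000_000 / seconds

def measure(seconds: float = 1.0) -> float:
    max_range = int((PKG / "max_energy_range_uj").read_text())
    e0 = int((PKG / "energy_uj").read_text())
    time.sleep(seconds)
    e1 = int((PKG / "energy_uj").read_text())
    return avg_power_w(e0, e1, seconds, max_range)
```

Run `measure()` while a Prime95/blender-style load is active and you get numbers comparable to the ~170W the reviews report, without any wall meter.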

[+] Havoc|6 years ago|reply
I really appreciate this entire race frankly.

For a desktop computer, even a mid-tier 3700X is now "good enough", even for someone with aspirations toward solid gaming.

On the home-use front, CPUs seem to have outpaced their use cases.

[+] _bxg1|6 years ago|reply
For gaming in particular, the GPU is almost always the bottleneck. It's much easier to throw in more polygons and texture pixels than it is to come up with bigger and more complex gameplay simulations. I'm still running a quad-core i5 from 2014 in my gaming desktop and it rarely has an impact.
[+] leetcrew|6 years ago|reply
this doesn't quite pass the smell test. I can't quite tell if there's a way to filter out results from overclocked parts in Geekbench, but it looks like these results for the i9-10900F are significantly lower than typical results for a stock i9-9900K. Hard to believe it would do worse on the same process with more cores.
[+] kllrnohj|6 years ago|reply
I think the key thing you've missed is the i9-10900F is a 65W part, while the i9-9900K is 95W.

Given both are on basically the same manufacturing process with basically the same architecture, it's really not surprising the i9-10900F isn't keeping up with the i9-9900K in max load scenarios. Having 50% more power easily makes up for 20% fewer cores.

Which is then also why the 4900HS is able to be so competitive with it. It's a 35W part but on a more advanced manufacturing node.

And although TDP is a made-up number with very little meaning, it is roughly what the CPU will settle into once the boost duration has been exceeded. So eventually, under a constant multithreaded workload, the 9900K and 10900F should settle in at their 95W and 65W limits respectively.
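That settling behavior can be sketched as a toy model. Intel's actual algorithm tracks an exponentially weighted moving average of power over a window tau and lets the chip run at PL2 until that average reaches PL1; the code below is a simplification of that idea, and the specific numbers (224W PL2, 28s tau) are illustrative assumptions, not the 10900F's real parameters:

```python
# Toy model (simplified from Intel's EWMA power-limit scheme) of a CPU
# boosting at PL2 until its running-average power hits PL1, then
# settling at PL1 under a constant all-core load.

def simulate(pl1: float, pl2: float, tau: float, dt: float = 0.1, total: float = 120.0):
    """Return (seconds spent boosting, final average power in watts)."""
    avg = 0.0
    alpha = dt / tau  # EWMA weight per time step
    t = boost_end = 0.0
    while t < total:
        power = pl2 if avg < pl1 else pl1  # throttle once the budget is spent
        if power == pl2:
            boost_end = t
        avg += alpha * (power - avg)       # update the moving average
        t += dt
    return boost_end, avg

# Hypothetical 65W part with a 224W short-term limit and a 28s window:
boost_s, settled_w = simulate(pl1=65.0, pl2=224.0, tau=28.0)
```

The model shows why benchmarks disagree: a short run finishes inside the ~10s boost window and sees PL2-class performance, while a long render settles at the 65W figure the box quotes.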

[+] piinbinary|6 years ago|reply
It is worth noting that this is the i9-10900F, not the i9-10900k (thank you, Intel, for the lovely naming scheme). I'd expect the -k variant to be at least as fast as the i9-9900k.
[+] AnotherGoodName|6 years ago|reply
It also doesn't pass the smell test in that its single-core performance is higher than the Ryzen 4900HS's, and it has both more cores and more threads, yet it is far behind on multicore performance?

Likely some thread affinity issue that just needs a patch.

[+] 3fe9a03ccd14ca5|6 years ago|reply
These mobile APUs are amazing. Especially when combined with a GPU. It would be really nice to be able to use these in a standard ATX motherboard because it would make the perfect home server CPU.
[+] 1996|6 years ago|reply
I share your interest for a silent yet powerful homeserver.

Unfortunately, there don't seem to be any desktop motherboards for mobile Zen 2 yet.

Too bad. I would even settle on a NUC.

[+] 2OEH8eoCRo0|6 years ago|reply
My home server is a Dell Latitude with an i5-2520M

$80 on craigslist 3 years ago.

[+] jamiek88|6 years ago|reply
What would you use the GPU for in a home server?
[+] websg-x|6 years ago|reply
The leaked benchmark ran with a single memory channel instead of dual. The multicore score is 30% lower because of that. Somebody is trying to mislead here.
[+] _ph_|6 years ago|reply
Is there a 4900HS laptop around already that can be recommended for Linux usage?
[+] jakogut|6 years ago|reply
Reading this now on a ROG Zephyrus G14 ($1,449)[1] running Arch. I've had a few random lockups with Linux 5.6.4, though they seem to have become less frequent with amd-ucode and a bios update. Additionally, I haven't had any problems with 5.7rc1, though the proprietary Nvidia driver (440.82) doesn't compile with that kernel yet.

Otherwise, it's been running great. The screen looks great, the touchpad feels really good, and the performance is amazing. The biggest complaint I have at the moment is the lack of PgUp/Dn and Home/End keys.

[1] https://www.bestbuy.com/site/asus-rog-zephyrus-g14-14-gaming...

[+] papermachete|6 years ago|reply
No, Nvidia Optimus is still an unsolved problem on Linux. The Vega iGPU runs fine, though.
[+] paypalcust83|6 years ago|reply
Btw, does anyone know how to buy boxed (PIB/WOF) AMD Rome EPYCs at reasonable (not full MSRP) market prices rather than OEM/tray (non-WOF) ones?

I recently ordered WOF EPYCs from a small business that fronts one of the largest distributors of PC components, and they/distributor sent the no-warranty OEM/tray ones instead. (30 day warranty + 15% restocking fee is no deal at all.)

[+] all_blue_chucks|6 years ago|reply
Geekbench is self-reported, unverified information. Anyone could post arbitrary benchmark results for any chip.