
Intel Performance Hit 5x Harder Than AMD After Spectre, Meltdown Patches

319 points | mepian | 6 years ago | extremetech.com

107 comments

[+] listenandlearn|6 years ago|reply
Things just keep getting better and better for AMD... it’s like once you have a little bit of luck, things just keep snowballing from there.

They are on track to have an amazing year, with CPUs that may take a very significant chunk of cloud compute away from Intel, and maybe even lead on the performance side. It’s just fascinating that a company that almost died just a few years ago is now a big contender on multiple fronts.

[+] NetBeck|6 years ago|reply
This reminds me of AMD's launch of the 1.4GHz Athlon Thunderbird. People were shocked that benchmarks proved AMD dethroned Intel.

>The Athlon 1.4GHz is far and away the fastest PC processor you can buy. Well, OK, it's not that much faster than the 1.33GHz Athlon, but it's a heap faster than Intel's 1.7GHz Pentium 4. Through a range of tests measuring a variety of applications and abilities, every one of our Athlons from 1.2GHz to 1.4GHz beat out the Pentium 4 with regularity.[1]

[1] https://techreport.com/review/2523/amd-athlon-1-4ghz-process...

[+] chrisseaton|6 years ago|reply
> may take a very significant chunk of the cloud compute away from intel

My understanding is that they don't have the manufacturing capacity to ship this many processors, even if data centre operators wanted to buy them.

[+] adventured|6 years ago|reply
I'm a fan of AMD, I hope they succeed. However, how do you figure things are getting better and better based on actual results?

Right now it looks like they're going to be reset back to 2017 numbers, losing the business gains they made in 2018. Their sales have fallen for the last three quarters in a row, quarter over quarter, and they barely turned a profit last quarter. Sales imploded by 23% last quarter year over year. When does the amazing year start?

[+] yogthos|6 years ago|reply
I'm much more excited about RISC-V and MIPS myself, because they're actually open architectures.
[+] JohnFen|6 years ago|reply
> Things just keep getting better and better for AMD

I'm happy to hear that AMD does better with this. I'd already decided that I won't be buying Intel CPUs anymore, so I like that AMD is a reasonable replacement.

[+] chungleong|6 years ago|reply
Persistent memory is going to be a big thing in the cloud computing space in the coming years. Without an answer to Optane, AMD is going to have a tough time competing with Intel.
[+] InTheArena|6 years ago|reply
At some point we need to start pointing the blame directly at Intel here. It’s becoming more and more obvious that this isn’t a case of modern CPU architecture failing to anticipate a security attack vector, but rather of Intel taking shortcuts with the security infrastructure on the chip in order to improve IPC.
[+] alkonaut|6 years ago|reply
As far as I understand, the solution to this in sandboxes such as the JS world is simply to deny access to timers with a resolution that could reveal cache misses. How much software really relies on timers with this resolution? What would it mean if CPU manufacturers simply gave up and said "to mitigate side channels, you can't have a clock so accurate that it lets you measure whether X has happened, because knowing that is equivalent to reading any memory"?

Or, instead of detecting various things and flushing out sensitive data on some context switch, the CPU just adds noise to the timers instead? I'm guessing this is a complete no-go, but I'm wondering why it is?

[+] cesarb|6 years ago|reply
Adding noise just makes side channel attacks slower, it doesn't stop them; there are statistical techniques to extract the original signal from the signal plus noise, given enough samples.

For a simple example, imagine you want to distinguish a 1ms difference in the execution time of some operation. Without noise, you just have to time it; now let's randomly add either nothing or 1ms to the operation time, so the "fast" operation will take either +0ms or +1ms, and the "slow" operation will take either +1ms or +2ms. But if you repeat the same operation several times and average the execution times, the "fast" operation will take an average of +0.5ms, and the "slow" operation will take an average of +1.5ms. As you can see, in this simple example the random noise averages out, and the original signal is still visible on top of it.
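A minimal simulation of this averaging-out effect (the 10ms base time and sample count are arbitrary choices for illustration):

```python
import random

def run_op(slow, noise=True):
    """Simulated execution time in ms: 10ms base, slow ops take +1ms,
    and the 'mitigation' randomly adds either 0ms or 1ms of noise."""
    t = 10.0 + (1.0 if slow else 0.0)
    if noise:
        t += random.choice([0.0, 1.0])
    return t

def mean_time(slow, samples=10_000):
    # Average many noisy measurements; the noise contributes ~+0.5ms
    # to both cases, so the 1ms gap between them survives.
    return sum(run_op(slow) for _ in range(samples)) / samples

fast_avg = mean_time(slow=False)  # ~10.5 ms
slow_avg = mean_time(slow=True)   # ~11.5 ms
print(f"fast ≈ {fast_avg:.2f} ms, slow ≈ {slow_avg:.2f} ms")
```

With 10,000 samples the difference of the averages lands very close to the true 1ms signal, which is the point: noise raises the number of samples an attacker needs, but doesn't remove the signal.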

[+] josefx|6 years ago|reply
The browser makers also had to disable mutable JS shared memory arrays until other mitigations were in place. Having a single thread that continuously increments a shared value serves as a good enough approximation of the CPU clock for these exploits.
[+] ss248|6 years ago|reply
If you really want to, you can "manufacture" a high-resolution timer pretty easily with thread spinning.
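A sketch of the idea in Python (illustrative only: the GIL makes this coarse in CPython, and the real browser exploits did it with a Web Worker incrementing a SharedArrayBuffer):

```python
import threading
import time

class SpinTimer:
    """A 'manufactured' timer: a background thread spins, incrementing a
    shared counter; reading the counter approximates elapsed time, with a
    resolution set by the spin rate rather than by any clock API."""
    def __init__(self):
        self.count = 0
        self._stop = False
        self._thread = threading.Thread(target=self._spin, daemon=True)
        self._thread.start()

    def _spin(self):
        while not self._stop:
            self.count += 1

    def stop(self):
        self._stop = True
        self._thread.join()

timer = SpinTimer()
a = timer.count
time.sleep(0.05)          # stand-in for the operation being timed
b = timer.count
timer.stop()
print(f"{b - a} ticks elapsed")
```

This is why clamping `performance.now()` alone wasn't enough: anything that lets two threads share mutable state lets you rebuild a fine-grained clock.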
[+] jmkni|6 years ago|reply
Proper dumb-ass question here, so please forgive.

When you talk about timers and resolution, what do you actually mean? When I hear timer, I think about setTimeout, when I hear resolution, I'm thinking about screen resolutions.

Is that what you mean, or are you referring to other things?

[+] rwmj|6 years ago|reply
Does anyone have the full kernel command line to turn all of the mitigations off? I don't need this on my private development server which runs all code either written by me or under my control.
[+] saati|6 years ago|reply
mitigations=off is the only parameter you need.
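For reference, on a GRUB-based distro that would mean something like (paths and regeneration command vary by distro):

```shell
# /etc/default/grub -- append mitigations=off, then regenerate the config
# (e.g. `sudo update-grub` on Debian/Ubuntu, `grub2-mkconfig` on Fedora)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"
```

After rebooting, `grep . /sys/devices/system/cpu/vulnerabilities/*` shows the per-vulnerability mitigation status so you can confirm it took effect.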
[+] Skunkleton|6 years ago|reply
What are we supposed to draw from those graphs? The AMD part is already much slower than the Intel part.
[+] bearjaws|6 years ago|reply
I mean, a 15% reduction in effective computation is not nothing just because AMD's CPUs aren't faster. The 8700K has two fewer cores and costs ~$100 more, so it is no longer a very compelling purchase if you plan to turn mitigations on.

Many people bought the 8700K for a level of performance that they now have to compromise on: either 1) higher performance with a small chance of exploits, or 2) lower performance and fewer exploits.

This dilemma did not exist until this year, and now everyone except Intel is paying for it.

[+] robbyt|6 years ago|reply
I wonder what other optimizations Intel has made that could turn out to be security threats someday...
[+] hinkley|6 years ago|reply
What’s the cost of interprocessor communication versus clearing these caches so aggressively?

I wonder, as cores continue to increase, if another solution will present itself in the form of segregating traffic per processor.

[+] ryacko|6 years ago|reply
Dispatch units are slowly increasing in number, and there are enough registers to octuple the threads per core.

It isn't hard to imagine a future where multithreaded applications with write-disabled executable memory would be faster, to the point where browsers stop using JIT as we know it.

[+] viraptor|6 years ago|reply
Those who care already do this. NUMA takes advantage of memory locality by itself, and processes can be pinned to cores manually as well. But even reading memory of your own process can be an issue (it is for browser tabs).
[+] JoeAltmaier|6 years ago|reply
Admitting all the issues with this 'attack' as given, I can't help thinking this is only an issue because CPUs use the old 'secrecy as security' model. E.g. if we give each thread an encryption key used to interpret every memory access (just an xor with a rolling mask), then access to somebody else's memory becomes access to an encrypted document with a key you don't know. No longer a problem?
[+] zrm|6 years ago|reply
These are really timing attacks. It's not a matter of directly reading the data, it's a matter of being able to deduce what's there based on how long it takes to do it.
[+] lostmsu|6 years ago|reply
The title is clickbait, because 5x doesn't matter if AMD's difference is under 1%. It's also not the original source.
[+] Dylan16807|6 years ago|reply
Clickbait because it could be misleading in a counterfactual?

15% vs. 3% is pretty meaningful. 15-16% is comparable to the gap between 4th and 9th generation Intel processors at the same frequency. And in that time turbo has gotten better but base clocks have dropped.

[+] HelloNurse|6 years ago|reply
The "5x" difference is a measure of how wrong chip designs from the two companies are.

Intel being five times more careless and/or incompetent and/or malicious has implications beyond practical performance degradation.