Things just keep getting better and better for AMD... it’s like once you have a little bit of luck things just keep snowballing from there.
They are on track for an amazing year, with CPUs that may take a very significant chunk of cloud compute away from Intel, and maybe even win on the raw performance side. It's just fascinating that a company that almost died just a few years ago is now a big contender on multiple fronts.
This reminds me of AMD's launch of the 1.4GHz Athlon Thunderbird. People were shocked when benchmarks showed AMD had dethroned Intel.
> "The Athlon 1.4GHz is far and away the fastest PC processor you can buy. Well, OK, it's not that much faster than the 1.33GHz Athlon, but it's a heap faster than Intel's 1.7GHz Pentium 4. Through a range of tests measuring a variety of applications and abilities, every one of our Athlons from 1.2GHz to 1.4GHz beat out the Pentium 4 with regularity."[1]
I'm a fan of AMD, I hope they succeed. However, how do you figure things are getting better and better based on actual results?
Right now it looks like they're going to be reset back to 2017 numbers, losing the business gains they made in 2018. Their sales have fallen for the last three quarters in a row, quarter over quarter, and they barely turned a profit last quarter. Sales imploded by 23% last quarter year over year. When does the amazing year start?
> Things just keep getting better and better for AMD
I'm happy to hear that AMD does better with this. I'd already decided that I won't be buying Intel CPUs anymore, so I like that AMD is a reasonable replacement.
Persistent memory is going to be a big thing in the cloud computing space in the coming years. Without an answer to Optane, AMD is going to have a tough time competing with Intel.
At some point we need to start pointing the blame directly at Intel here. It's becoming more and more obvious that this isn't a case of modern CPU architecture failing to anticipate a new class of attack, but of Intel taking shortcuts with on-chip security in order to improve IPC.
As far as I understand it, the solution to this in sandboxes such as the JS world is simply to deny everyone timers with a resolution fine enough to reveal cache misses. How much software really relies on timers of that resolution? What would it mean if CPU manufacturers simply gave up and said: "to mitigate side channels, you can't have a clock accurate enough to measure whether X has happened, because knowing that is equivalent to reading arbitrary memory"?
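For a sense of what that looks like in practice, here is a minimal sketch (the names are mine, not any browser API) of clamping a clock to a coarse granularity, which is roughly what browsers did to performance.now() after Spectre:

```python
import time

# Hypothetical sketch: clamp a high-resolution clock to a coarse
# granularity, similar in spirit to what browsers did to
# performance.now() after Spectre was disclosed.
GRANULARITY_MS = 1.0  # report time only in 1 ms steps

def coarse_now_ms():
    """Return the current time in ms, rounded down to the granularity."""
    raw_ms = time.perf_counter() * 1000.0
    return raw_ms - (raw_ms % GRANULARITY_MS)

# Every reported timestamp is an exact multiple of the granularity, so
# differences below 1 ms (e.g. a cache hit vs. a miss) are invisible.
assert coarse_now_ms() % GRANULARITY_MS == 0.0
```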
Or, instead of detecting various things and flushing out sensitive data on some context switch, what if the CPU just added noise to the timers instead? I'm guessing this is a complete no-go, but I'm wondering why it is?
Adding noise just makes side channel attacks slower, it doesn't stop them; there are statistical techniques to extract the original signal from the signal plus noise, given enough samples.
For a simple example, imagine you want to distinguish a 1ms difference in the execution time of some operation. Without noise, you just have to time it; now let's randomly add either nothing or 1ms to the operation time, so the "fast" operation will take either +0ms or +1ms, and the "slow" operation will take either +1ms or +2ms. But if you repeat the same operation several times and average the execution times, the "fast" operation will take an average of +0.5ms, and the "slow" operation will take an average of +1.5ms. As you can see, in this simple example the noise averages out to its mean, and the original signal is still visible on top of it.
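That averaging argument is easy to demonstrate with a small simulation; the numbers match the example above (the function names are mine):

```python
import random

random.seed(42)  # deterministic demo

NOISE = (0.0, 1.0)  # defender-added jitter: +0 ms or +1 ms, 50/50

def measure(true_ms):
    """One noisy timing measurement of an operation taking true_ms."""
    return true_ms + random.choice(NOISE)

def average(true_ms, samples=10_000):
    """Average many noisy measurements of the same operation."""
    return sum(measure(true_ms) for _ in range(samples)) / samples

fast = average(1.0)  # true cost 1 ms -> average close to 1.5 ms
slow = average(2.0)  # true cost 2 ms -> average close to 2.5 ms

# The noise averages out to its mean (+0.5 ms), so the 1 ms signal
# survives: the two averages remain clearly separated.
assert 0.9 < slow - fast < 1.1
```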
The browser makers also had to disable mutable JS shared-memory arrays (SharedArrayBuffer) until other mitigations were in place. Having a single thread that continuously increments a shared value serves as a good enough approximation of the CPU clock for these exploits.
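For illustration, here is a rough Python analogue of that trick (the real exploits did this with a Web Worker writing to a SharedArrayBuffer; the class name is mine):

```python
import threading
import time

class CounterClock:
    """A thread that does nothing but bump a shared counter.

    Reading `ticks` before and after an operation gives a makeshift
    stopwatch, even when the platform's real timers are coarsened.
    """
    def __init__(self):
        self.ticks = 0
        self._stop = False
        self._thread = threading.Thread(target=self._spin, daemon=True)

    def _spin(self):
        while not self._stop:
            self.ticks += 1  # the "clock" advances as fast as the core allows

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop = True
        self._thread.join()

clock = CounterClock()
clock.start()
before = clock.ticks
time.sleep(0.05)               # the operation being "timed"
elapsed = clock.ticks - before  # elapsed "time" in counter ticks
clock.stop()
assert elapsed > 0  # the shared counter moved while we waited
```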
When you talk about timers and resolution, what do you actually mean? When I hear timer, I think about setTimeout, when I hear resolution, I'm thinking about screen resolutions.
Is that what you mean, or are you referring to other things?
Does anyone have the full kernel command line to turn all of the mitigations off? I don't need this on my private development server which runs all code either written by me or under my control.
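The blunt instrument, assuming a kernel new enough to have the umbrella switch (5.2+, widely backported), is `mitigations=off`; older kernels need the individual flags (`nopti`, `nospectre_v2`, and friends) stacked instead. For a GRUB-based distro, roughly:

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"

# Regenerate the config and reboot:
#   sudo update-grub                               # Debian/Ubuntu
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg    # Fedora/RHEL

# Verify after reboot; entries should now report "Vulnerable":
#   grep . /sys/devices/system/cpu/vulnerabilities/*
```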
I mean, a 15% reduction in effective computation is not nothing just because AMD's CPUs aren't faster. The 8700K has two fewer cores and costs ~$100 more, so it is no longer a very compelling purchase if you plan to turn the mitigations on.
Many people bought the 8700K for a certain level of performance, and now they have to choose a compromise: 1) higher performance with some chance of exploits, or 2) lower performance with fewer exploits.
This dilemma did not exist until this year, and now everyone except Intel is paying for it.
If I'm not wrong, these are kernel patches. Intel is adding hardware (they call them in-silicon) mitigations for Meltdown (in Coffee Lake [1]) and Spectre (in Ice Lake [2]).
This should help increase performance again, right? I'm actually waiting for Ice Lake before buying a new laptop.
Dispatch units are slowly increasing in number, and there are enough registers to octuple the threads per core.
It isn't hard to imagine a future where multithreaded applications with write disabled executable memory would be faster to the point where browsers will stop using JIT as we know it.
Those who care already do this. NUMA takes advantage of memory locality by itself, and processes can be pinned to cores manually as well. But even reading memory of your own process can be an issue (it is for browser tabs).
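Manual pinning is a one-liner on Linux; a sketch using only the stdlib (equivalent to running the process under `taskset` from the shell):

```python
import os

# Linux-only: restrict this process to a single core, so its cache and
# branch-predictor state is never co-resident with an untrusted
# neighbour on the same core.
allowed = os.sched_getaffinity(0)  # pid 0 = the calling process
target = min(allowed)              # pick the lowest core we're allowed to use
os.sched_setaffinity(0, {target})
assert os.sched_getaffinity(0) == {target}
```

NUMA-aware placement (e.g. `numactl --cpunodebind`) extends the same idea from single cores to whole sockets and memory nodes.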
Admitting all the issues with this 'attack' as given, I can't help thinking this is only an issue because CPUs use the old 'secrecy as security' model. E.g. if we gave each thread an encryption key used to interpret every memory access (just an XOR with a rolling mask), then access to somebody else's memory becomes access to an encrypted document with a key you don't know. No longer a problem?
These are really timing attacks. It's not a matter of directly reading the data, it's a matter of being able to deduce what's there based on how long it takes to do it.
Clickbait because it could be misleading in a counterfactual?
15% vs. 3% is pretty meaningful. 15-16% is comparable to the gap between 4th and 9th generation Intel processors at the same frequency. And in that time turbo has gotten better but base clocks have dropped.
[1] https://techreport.com/review/2523/amd-athlon-1-4ghz-process...
My understanding is that they don't have the manufacturing capacity to ship this many processors, even if data centre operators wanted to buy them.
[1] https://www.tomshardware.com/news/intel-9th-generation-coffe...
[2] https://en.wikipedia.org/wiki/Ice_Lake_(microarchitecture)
I wonder, as cores continue to increase, if another solution will present itself in the form of segregating traffic per processor.
Intel being five times more careless and/or incompetent and/or malicious has implications beyond practical performance degradation.