Server and workstation chips still have AVX-512. It’s only unsupported on CPUs with smaller E(fficiency) cores.
AVX-512 was never really supported in newer consumer CPUs with heterogeneous architecture. These CPUs have a mix of powerful cores and efficiency cores. The AVX-512 instructions were never added to the efficiency cores because it would use way too much die space and defeat the purpose of efficiency cores.
There was previously a hidden option to disable the efficiency cores and enable AVX-512 on the remaining power cores, but the number of workloads that would warrant turning off a lot of your cores to speed up AVX-512 calculations is virtually non-existent in the consumer world (where these cheap CPUs are targeted).
The whole journalism controversy around AVX-512 has been a bit of a joke because many of the same journalists tried to generate controversy when AVX-512 was first introduced and they realized that AVX-512 code would reduce the CPU clock speed. There were numerous articles about turning off AVX-512 on previous generation CPUs to avoid this downclocking and to make overclocks more stable.
A problem is that the CPU frequency drops significantly when AVX-512 is involved (e.g. https://en.wikichip.org/wiki/intel/xeon_gold/6262v), which usually cancels out the benefit in the Real World (tm).
This was massively exaggerated by journalists when AVX-512 was first announced.
It is true that randomly applied AVX-512 instructions can cause a slight clock speed reduction. The proper way to use libraries like this is within specific hot code loops, where the mild clock speed reduction is more than offset by the huge parallelism increase.
This doesn’t make sense if you’re a consumer multitasking while a background process invokes the AVX-512 penalty, but it usually would make sense in a server scenario.
Intel has been trying to reduce the penalty for AVX-512, and failing that, to advertise that there is no penalty. Most things on Ice Lake run fine with 256 bit vectors, but Skylake and earlier really needed 128 bit or narrower if you weren't doing serious vector math.
Unless someone has data for the latest Intel chips (i.e. Sapphire Rapids) showing the opposite, I'm inclined to think this is a meme from 2016/7 that needs to go the way of the dodo.
I would love to see an example of reasonable code not seeing any benefit. On first generation SKX, we saw 1.5x speedups vs AVX2, and that was IIRC even without taking much advantage of AVX3-only instructions.
Please stop spreading this fallacy. While downclocking can happen, the benefit is usually still strong and superior to 256-bit AVX2 — even 256-bit vectors can induce downclocking. AVX-512, when properly utilized, simply demolishes non-AVX-512 CPUs.
I just got through doing some work with vectorization.
On the simplest workload I have, splitting a 3 MByte text file into lines and writing a pointer to each string to an array, GCC will not vectorize the naive loop, though ICC might, I guess.
With simple vectorization to AVX512 (64 unsigned chars in a vector), finding all the line breaks goes from 1.3 msec to 0.1 msec, so a little better than a 10x speedup, still just on the one core, which keeps things simple.
I was using Agner Fog's VCL 2, Apache licensed C++ Vector Class Library. It's super easy.
Still, what does it signal that vector extensions are required to get better string performance on x86? Wouldn't it be better if Intel invested their AVX transistor budget into simply making the existing REP-prefixed string instructions a lot faster?
AVX-512 is an elegant, powerful, flexible set of masked vector instructions that is useful for many purposes. For example, low-cost neural net inference (https://NN-512.com). To suggest that Intel and AMD should instead make "existing REPB prefixes a lot faster" is missing the big picture. The masked compression instructions (one of which is used in Lemire's article) are endlessly useful, not just for stripping spaces out of a string!
Why is a large speedup from vectors surprising? Considering that the energy required for scheduling/dispatching an instruction on OoO cores dwarfs that of the actual operation (add/mul etc), amortizing over multiple elements (=SIMD) is an obvious win.
Is it generally possible to convert rep str sequences to AVX? Could the hardware or compiler already be doing this?
AVX is just the SIMD unit. I would argue the transistors were spent on SIMD, and the hitch is simply the best way to send str commands to the SIMD hardware.
What's the generated assembly look like? I suspect clang isn't smart enough to store things into registers. The latency of VPCOMPRESSB seems quite high (according to the table here at least https://uops.info/table.html), so you'll probably want to induce a bit more pipelining by manually unrolling into the register variant.
I don't have an AVX512 machine with VBMI2, but here's what my untested code might look like:
__m512i spaces = _mm512_set1_epi8(' ');
size_t i = 0, pos = 0;
for (; i + (64 * 4 - 1) < howmany; i += 64 * 4) {
// 4 input regs, 4 output regs, you can actually do up to 8 because there are 8 mask registers
__m512i in0 = _mm512_loadu_si512(bytes + i);
__m512i in1 = _mm512_loadu_si512(bytes + i + 64);
__m512i in2 = _mm512_loadu_si512(bytes + i + 128);
__m512i in3 = _mm512_loadu_si512(bytes + i + 192);
__mmask64 mask0 = _mm512_cmpgt_epi8_mask (in0, spaces);
__mmask64 mask1 = _mm512_cmpgt_epi8_mask (in1, spaces);
__mmask64 mask2 = _mm512_cmpgt_epi8_mask (in2, spaces);
__mmask64 mask3 = _mm512_cmpgt_epi8_mask (in3, spaces);
auto reg0 = _mm512_maskz_compress_epi8 (mask0, in0);
auto reg1 = _mm512_maskz_compress_epi8 (mask1, in1);
auto reg2 = _mm512_maskz_compress_epi8 (mask2, in2);
auto reg3 = _mm512_maskz_compress_epi8 (mask3, in3);
_mm512_storeu_si512(bytes + pos, reg0);
pos += _popcnt64(mask0);
_mm512_storeu_si512(bytes + pos, reg1);
pos += _popcnt64(mask1);
_mm512_storeu_si512(bytes + pos, reg2);
pos += _popcnt64(mask2);
_mm512_storeu_si512(bytes + pos, reg3);
pos += _popcnt64(mask3);
}
// old code can go here, since it handles a smaller size well
You can probably do better by chunking up the input and using temporary memory (coalesced at the end).
I love Daniel’s vectorized string processing posts. There’s always some clever trickery that’s hard for a guy like me (who mostly uses vector extensions for ML kernels) to get quickly.
I found myself wondering if one could create a domain-specific language for specifying string processing tasks, and then automate some of the tricks with a compiler (possibly with human-specified optimization annotations). Halide did this sort of thing for image processing (and ML via TVM to some extent) and it was a pretty significant success.
What would be a practical application of this? The linked post mentions a trim like operation, but in practice I only want to remove white space from the ends, not the interior of the string, and finding the ends is basically the whole problem. Or maybe I want to compress some json, but a simple approach won't work because there can be spaces inside string values which must be preserved.
I agree that the whitespace in text example seems a bit contrived but I've done similar types of byte elision operations on binary streams (e.g. for compression purposes), which this could be trivially adapted to.
Zen 4 is rumored to have AVX512. AMD has in the past had support for wide SIMD instructions with half internal width implementation, so the die area requirements and instruction set support are somewhat orthogonal. There's many other interesting things in AVX512 besides the wide vectors.
The short answer is no, but the long answer is that this is a very complex tradeoff space. Going forward, we may see more of these types of tasks moving to GPU, but for the moment it is generally not a good choice.
The GPU is incredible at raw throughput, and this particular problem can actually be implemented fairly straightforwardly (it's a stream compaction, which in turn can be expressed in terms of prefix sum). However, where the GPU absolutely falls down is when you want to interleave CPU and GPU computations. To give round numbers, the roundtrip latency is on the order of 100µs, and even aside from that, the memcpy back and forth between host and device memory might actually be slower than just solving the problem on the CPU. So you only win when the strings are very large, again using round numbers about a megabyte.
Things change if you are able to pipeline a lot of useful computation on the GPU. This is an area of active research (including my own). Aaron Hsu has been doing groundbreaking work implementing an entire compiler on the GPU, and there's more recent work[1], implemented in Futhark, that suggests that this approach is promising.
I have a paper in the pipeline that includes an extraordinarily high performance (~12G elements/s) GPU implementation of the parentheses matching problem, which is the heart of parsing. If anyone would like to review a draft and provide comments, please add a comment to the GitHub issue[2] I'm using to track this. It's due very soon and I'm on a tight timeline to get all the measurements done, so actionable suggestions on how to improve the text would be most welcome.
Andoryuuta|3 years ago
https://www.igorslab.de/en/intel-deactivated-avx-512-on-alde...
pclmulqdq|3 years ago
Forget about 512 bit vectors or FMAs.
GICodeWarrior|3 years ago
https://ark.intel.com/content/www/us/en/ark/search/featurefi...
The author mentions it's difficult to identify which features are supported on which processor, but ark.intel.com has a quite good catalog.
[1]: https://theses.liacs.nl/pdf/2020-2021-VoetterRobin.pdf
[2]: https://github.com/raphlinus/raphlinus.github.io/issues/66#i...