Why would it be irrelevant? Even the limited availability isn't really a problem - the big winners here are server users in data centers, not desktops or laptops. How much string parsing and munging is happening right now ingesting big datasets? If running a specially optimized function set on part of your fleet reduces utilization, that's a direct cost saving you realize. And if AMD is then widening the support base, that strongly favors expanding usage as you scale up.
Given that Intel's AVX extensions could cause silent failures on servers (very high workloads for prolonged periods, compared to end-user computers), I'm not sure it would be a big win for servers either: https://arxiv.org/pdf/2102.11245.pdf
retrac|3 years ago
Anyone using software that benefits from vector instructions. That includes a variety of compression, search, and image processing algorithms. Your JPEG decompression library might be using SSE2 or Neon. All high-end processors have included some form of vector instructions for 20+ years now. Even the processor in my old e-book reader has ARM Neon instructions.
mhh__|3 years ago
XorNot|3 years ago
_rtld_global_ro|3 years ago