Twinklebear | 5 years ago
I'd add to the list:
- Embree: https://www.embree.org/ Open source high-performance ray tracing kernels for CPUs using SIMD.
- OpenVKL: https://www.openvkl.org/ Similar to Embree (high-performance ray tracing kernels), but for volume traversal and sampling.
- ISPC: https://ispc.github.io/ An open-source compiler for a SPMD language, compiling it to efficient SIMD code.
- OSPRay: http://www.ospray.org/ A large project using SIMD throughout (via ISPC) for real time ray tracing for scientific visualization and physically based rendering.
- Open Image Denoise: https://openimagedenoise.github.io/ An open-source image denoiser using SIMD (via ISPC) for some image processing and denoising.
- (my own project) ChameleonRT: https://github.com/Twinklebear/ChameleonRT has an Embree + ISPC backend, using Embree for SIMD ray traversal and ISPC for vectorizing the rest of the path tracer (shading, texture sampling).
bityard|5 years ago
Starting to see? Back in Ye Olde 586 Days of the late 1990s, MMX was added to the Pentium architecture pretty much exclusively for 3D games and real-time audio/video decoding. (This was back when the act of playing an MP3 was no small chore for the average consumer CPU.) Intel made quite a big deal over MMX, including millions of dollars in TV ads aimed at the general population, despite the fact that software had to be built specifically to use MMX and that only certain kinds of software could benefit from it.
rasz|5 years ago
"MMX was useless for games. MMX is integer math only, good for DSP: things like audio filters, or making a softmodem out of your sound card. Unsuitable for accelerating 3D games. What's worse, MMX has no dedicated registers and instead reuses/shares the FPU ones; this means you can't use MMX and the FPU (all 3D code pre Direct3D 7 Hardware T&L) at the same time. ... Funnily enough, AMD's 1998 3DNow! did actually add floating-point support to MMX and was useful for 3D acceleration until hardware T&L came along 2 years later.
Intel paid a few dev houses to release make-believe MMX enhancements, like POD (1997):
https://www.mobygames.com/images/covers/l/51358-pod-windows-...
1/6 of the box was covered with Intel MMX advertising, while the game used it only for some sound effects. Intel repeated this trick in '99 while introducing the Pentium III with SSE. Intel commissioned Rage Software to build a demo piece showcasing the P3 during Comdex Fall. It worked... by cheating with graphic details ;-) Quoting hardware.fr: "But looking closely at the demo, we notice - as you can see in the screenshots - that the SSE version is less detailed than the non-SSE version (see the ground). Intel, are you trying to pull the wool over journalists' eyes?" Of course, AnandTech used this never-publicly-released cheating demo, pretending it was a game, in all of their Pentium III tests for over a year.
https://www.vogons.org/viewtopic.php?f=46&t=65247&start=20#p... "
MMX was one of Intel's many Native Signal Processing (NSP) initiatives. They had plenty of ideas for making PCs dependent on Intel hardware, something Nvidia is really good at these days (PhysX, CUDA, HairWorks, GameWorks). Thankfully Microsoft was quick to kill their other fancy plans: https://www.theregister.co.uk/1998/11/11/microsoft_said_drop... Microsoft did the same thing to Creative with Vista killing DirectAudio, out of fear that one company had a grip on a positional-audio monopoly on their platform.
djmips|5 years ago
Here's a GDC 2015 article about SIMD at Insomniac Games. https://deplinenoise.files.wordpress.com/2015/03/gdc2015_afr...
misnome|5 years ago
I've been learning ISPC lately and it does seem like a wonderful solution: you avoid having to build separate implementations for every instruction set, and you avoid the per-compiler massaging needed to get an optimizer to recognise vectorisation opportunities. The arguments for a domain-specific language variant and for why it was written (https://pharr.org/matt/blog/2018/04/30/ispc-all.html is a good read) are persuasive.
However, outside of the projects in the above list - it doesn't seem to have very wide usage. There are still commits coming in/responding to some issues so it doesn't seem dead, but there are many issues untouched or just untriaged. There isn't much discussion about using it, or people asking for advice. The mailing list has about a message a month.
Is it just an extremely specialised domain? Is CUDA/OpenCL a more efficient solution for most cases where one would otherwise consider it? Are there too many ASM/intrinsics experts out there to bother learning it?
Twinklebear|5 years ago
As far as support from other languages goes, I did write this wrapper for using ISPC from Rust: https://github.com/Twinklebear/ispc-rs (but that's just me again), and there has been work on a WebAssembly+SIMD backend, which is really exciting. Intel also has an ISPC-based texture compressor (https://github.com/GameTechDev/ISPCTextureCompressor) which I think does have some popularity.
However, the domain is pretty specialized, and I think the fraction of people who really care about CPU performance and are willing to port or write part of their code in another language is smaller still. It's also possible that a lot of those who would do so have their own hand written intrinsics wrappers already. Migrating to ISPC would reduce a lot of maintenance effort on such projects, but when they already have momentum in the other direction it can be harder to switch. I think that on the CPU ISPC is easier and better than OpenCL for performance and tight integration with the "host" language, since you can directly share pointers and even call back and forth between the "host" and "kernel".
KMag|5 years ago
I generally liked ISPC, but I really didn't like that it tried to look as close as possible to C while departing from C in unnecessary ways. With Monte Carlo simulations, we deal with a lot of probabilities represented as doubles in the range [0.0, 1.0]. The biggest pain is that operations between a double and any integral type cast the double to the integral type, whereas in C the integral type gets implicitly cast to a double. I understand the implicit casting rules were changed to give the fastest speed rather than to minimize worst-case rounding error. I could understand getting rid of implicit casts entirely, or maybe I could understand changing the rules to improve accuracy, knowing that the user could easily use a profiler to discover any performance problems this caused. However, in our case, uint32_t * double evaluated to (uint32_t) 0, which would then get implicitly cast back to a double when assigned to a double variable. My intern was beating his head against the wall for the better part of an afternoon before I gave him a bit of debugging help: all of his probabilities were coming out 0% and 100% in his component.
I actually emailed the authors with a bug report when I found the implicit casting rules differed so radically from C and were in the direction away from accuracy. (Note there's no rounding error when converting uint32_t to a 64-bit IEEE-754 double.) They were very nice, and pointed us to where this behavior was documented.
If you're going out of your way to make your language look like C and interoperate seamlessly with C, you should have really strong justifications for the places where you radically depart from C's semantics.
adev_|5 years ago
ISPC is pretty popular in the HPC world.
apjana|5 years ago
It takes advantage of SIMD at the -O3 level of optimization in its custom string copy function: https://github.com/jarun/nnn/blob/bc7a81921ed974a408d4de2cbf...
The function is used extensively in the program.
saagarjha|5 years ago
z0mbie42|5 years ago
I try not to include C or C++ projects other than for educational purposes (like the Mandelbrot set), because one of my life's goals is to help the world transition away from C and C++ (other than for kernels...).
I believe that my role is to promote projects which are "building the new world", and thus we need to abandon and port away from all forms of insecure code.
sk0g|5 years ago
adev_|5 years ago
That just shows you do not know what you are talking about.
Most security problems affecting C programs DO NOT affect C++ programs.
Stack smashing, VLA abuse, string null-termination problems, goto-based error handling, and double-free corruption do NOT affect C++; they are C-specific.
fermentation|5 years ago
z3phyr|5 years ago
The sky is the limit, and there is so much to do! Transactional memory, massively multicore computers, hardware built on predicate logic, neuromorphic computers, and whatnot.
We are still mostly stuck with the CPU and memory designs of old.
hedora|5 years ago
The language matters less than you’d think once you get past a certain correctness baseline.
bityard|5 years ago