item 22833807


Twinklebear | 5 years ago

SIMD is used a ton in rendering applications and starting to see more use in games too (through ISPC for example).

I'd add to the list:

- Embree: https://www.embree.org/ Open source high-performance ray tracing kernels for CPUs using SIMD.

- OpenVKL: https://www.openvkl.org/ Similar to Embree (high-performance ray tracing kernels), but for volume traversal and sampling.

- ISPC: https://ispc.github.io/ An open-source compiler for an SPMD language that generates efficient SIMD code.

- OSPRay: http://www.ospray.org/ A large project using SIMD throughout (via ISPC) for real time ray tracing for scientific visualization and physically based rendering.

- Open Image Denoise: https://openimagedenoise.github.io/ An open-source image denoiser using SIMD (via ISPC) for some image processing and denoising.

- (my own project) ChameleonRT: https://github.com/Twinklebear/ChameleonRT has an Embree + ISPC backend, using Embree for SIMD ray traversal and ISPC for vectorizing the rest of the path tracer (shading, texture sampling).

bityard|5 years ago

> starting to see more use in games

Starting to see? Back in Ye Olde 586 Days of the late 1990s, MMX was added to the Pentium architecture pretty much exclusively for 3D games and real-time audio/video decoding. (This was back when the act of playing an MP3 was no small chore for the average consumer CPU.) Intel made quite a big deal over MMX including millions of dollars in TV ads aimed at the general population, despite the fact that software had to be built specifically to use MMX and that only certain kinds of software could benefit from it.

rasz|5 years ago

MMX had nothing to do with games! It was part of an Intel _marketing scam_. https://news.ycombinator.com/item?id=19468837 :

"MMX was useless for games. MMX is integer math only, good for DSP, things like audio filters, or making a softmodem out of your sound card. Unsuitable for accelerating 3D games. What's worse, MMX has no dedicated registers, and instead reuses/shares the FPU ones; this means you can't use MMX and the FPU (all 3D code pre-Direct3D 7 hardware T&L) at the same time. ... Funnily enough, AMD's 1998 3DNow! did actually add floating-point support to MMX and was useful for 3D acceleration until hardware T&L came along 2 years later.

Intel paid a few dev houses to release make-believe MMX enhancements, like POD (1997):

https://www.mobygames.com/images/covers/l/51358-pod-windows-...

1/6 of the box covered with Intel MMX advertising, while the game used it only for some sound effects. Intel repeated this trick in '99 when introducing the Pentium 3 with SSE: Intel commissioned Rage Software to build a demo piece showcasing the P3 during Comdex Fall. It worked .. by cheating on graphic details ;-) Quoting hardware.fr: "But looking closely at the demo, we notice - as you can see on the screenshots - that the SSE version is less detailed than the non-SSE version (see the ground). Intel, would you try to roll the journalists in the flour?". Of course, Anandtech used this never-publicly-released cheating demo, pretending it was a game, in all of their Pentium 3 tests for over a year.

https://www.vogons.org/viewtopic.php?f=46&t=65247&start=20#p... "

MMX was one of Intel's many Native Signal Processing (NSP) initiatives. They had plenty of ideas for making PCs dependent on Intel hardware, something Nvidia is really good at these days (PhysX, CUDA, HairWorks, GameWorks). Thankfully Microsoft was quick to kill their other fancy plans https://www.theregister.co.uk/1998/11/11/microsoft_said_drop... Microsoft did the same thing to Creative with Vista killing DirectAudio, out of fear that one company had a grip on a positional-audio monopoly on their platform.

misnome|5 years ago

> ISPC: https://ispc.github.io/ An open-source compiler for an SPMD language that generates efficient SIMD code.

I've been learning ispc lately and it does seem like a wonderful solution: you avoid having to build separate implementations for every instruction set, and you avoid per-compiler massaging to get the vectorisation opportunities recognised. The case for having a domain-specific language variant, and the account of why it was written (https://pharr.org/matt/blog/2018/04/30/ispc-all.html is a good read), is persuasive.

However, outside of the projects in the above list - it doesn't seem to have very wide usage. Commits are still coming in and some issues get responses, so it doesn't seem dead, but many issues sit untouched or untriaged. There isn't much discussion about using it, or people asking for advice; the mailing list gets about a message a month.

Is it just an extremely specialised domain? Is CUDA/OpenCL a more efficient solution for most cases where one would otherwise consider it? Are there too many ASM/intrinsics experts out there to bother learning it?

Twinklebear|5 years ago

ISPC is really awesome, but you're right that it is much less known than CUDA/OpenCL. Part of that might just be a lack of marketing effort and focus, and the team working on it is far smaller than the one behind CUDA. There has been some wider adoption, like Unreal Engine 4 using it now: https://devmesh.intel.com/projects/intel-ispc-in-unreal-engi... which is super cool, so hopefully we'll see more of that.

As far as support from other languages goes, I wrote this wrapper for using ISPC from Rust: https://github.com/Twinklebear/ispc-rs (but that's just me again), and there has been work on a WebAssembly+SIMD backend, which is really exciting. Intel also has an ISPC-based texture compressor (https://github.com/GameTechDev/ISPCTextureCompressor), which I think has some popularity.

However, the domain is pretty specialized, and I think the fraction of people who really care about CPU performance and are willing to port or write part of their code in another language is smaller still. It's also possible that many of those who would do so already have their own hand-written intrinsics wrappers. Migrating to ISPC would cut a lot of maintenance effort on such projects, but when they already have momentum in the other direction it can be hard to switch. On the CPU, I think ISPC is easier and better than OpenCL for performance and for tight integration with the "host" language, since you can directly share pointers and even call back and forth between the "host" and "kernel".

KMag|5 years ago

At work, I had a project involving a DSL for Monte Carlo simulations. The DSL was an internal DSL in Scala, our interpreter was in Scala, and we transpiled to ISPC (for servers/VMs that didn't expose a GPU) and OpenCL.

I generally liked ISPC, but I really didn't like that it tried to look as close as possible to C but departed from C in unnecessary ways. With Monte Carlo simulations, we deal with a lot of probabilities represented as doubles in the range [0.0, 1.0]. The biggest pain is that operations between a double and any integral type cast the double to the integral type, whereas in C, the integral type gets implicitly cast to a double. I understand the implicit casting rules were changed to give the fastest speed rather than minimize worst-case rounding error. I could understand getting rid of implicit casts, or maybe I could understand changing rules to improve accuracy and know that the user could easily use a profiler to discover any performance problems this caused. However, in our case, uint32_t * double = (uint32_t) 0, which then would get implicitly cast back to a double if being assigned to a variable. My interne was beating his head against the wall for the better part of an afternoon before I gave him a bit of debugging help. All of his probabilities were coming out 0% and 100% for his component.

I actually emailed the authors with a bug report when I found the implicit casting rules differed so radically from C and were in the direction away from accuracy. (Note there's no rounding error when converting uint32_t to a 64-bit IEEE-754 double.) They were very nice, and pointed us to where this behavior was documented.

If you're going out of your way to make your language look like C and interoperate seamlessly with C, you should have really strong justifications for the places where you radically depart from C's semantics.

adev_|5 years ago

> However, outside of the projects in the above list - it doesn't seem to have very wide usage.

ISPC is pretty popular in the HPC world.

z0mbie42|5 years ago

Hi, thank you for the pointers!

I try not to include C or C++ projects other than for educational purposes (like the Mandelbrot set), because one of my life's goals is to help the world transition away from C and C++ (other than for kernels...).

I believe that my role is to promote projects which are "building the new world", and thus that we need to abandon and port away from all forms of insecure code.

sk0g|5 years ago

So in an article about high/extreme performance systems, you're ignoring the vast majority of them because you don't agree with the tool used to achieve said performance? What..?

adev_|5 years ago

For the sake of god, please stop putting C and C++ in the same basket when talking about security.

It just shows you do not know what you are talking about.

Most security problems affecting C programs DO NOT affect C++ programs.

Stack smashing, VLA abuse, string null-termination problems, goto-based error handling, and double-free corruption do NOT affect C++; they are C-specific.

fermentation|5 years ago

Despising C combined with an interest in high performance: a unique mix.

z3phyr|5 years ago

In my opinion, we should instead focus on hardware and experiment more with different kinds of CPUs, memory, co-processors, etc. The key to newer software systems is newer kinds of hardware, for which you can write new experimental systems in languages of your own design.

The sky is the limit, and there is so much to do! Transactional memory, massively multicore computers, hardware built on predicate logic, neuromorphic computers, and whatnot.

We are still mostly stuck with the cpu and memory designs of old.

hedora|5 years ago

Some of the most secure software is written in C.

The language matters less than you’d think once you get past a certain correctness baseline.

bityard|5 years ago

I mean, have you even _seen_ the trail of CVEs that Java has left in its wake over the past few decades?