Arnsaste's comments
Arnsaste | 9 years ago | on: C++17 – better than you might think
Arnsaste | 9 years ago | on: Konrad Zuse and the digital revolution he started 75 years ago
Arnsaste | 10 years ago | on: Weep for Graphics Programming
Somebody in this thread mentioned CUDA, which is great, but it has the downside that you practically have to use C++ on the CPU side. You can also use e.g. Python, but then you are back to compiling CUDA code at runtime.
Sure, you could argue that we could simply create a new programming language for applications that use the graphics card (or use C++, like CUDA does). The problem with this is that few people would adopt it just because it makes CPU-GPU communication a bit easier. And there is a second, much larger problem: different graphics API vendors would create different programming languages, which would make it much harder for an application to support multiple graphics APIs, as many games do today with DirectX and OpenGL.
Perhaps there is another, better solution, but I can't see it right now.
If you listen to the C++ advocates, they all say: use the STL as much as possible, use Boost if there is a solution for your problem there, and so on. It's true, you will probably quickly get a solution that works and looks pretty, but if you do this a hundred or a thousand times, your code suddenly needs several hours to compile.
I am not against the STL or Boost, but there is a cost associated with using them that is often ignored and that, IMHO, is a real productivity killer in codebases that are a bit bigger.