item 29958901

dandotway | 4 years ago

Features that seem like a good idea at the time often don't stand the test of time 20-30 years in the future. In the mid-90s Object-Oriented Programming was super-hyped so a bunch of other languages bolted on OO, such as Fortran and Ada. But now we have Go/Rust/Zig rejecting brittle OO taxonomies because you always end up having a DuckBilledPlatypus that "is a" Mammal and "is a" EggLayer.

A great strength of C is that if you want more features you just go to a subset of C++, no need to add them to C. C++ is the big, ambitious, kitchen-sink language. When C++ exists we don't need to bloat C.

Fortran was originally carefully designed so that people who aren't compiler experts can generate very fast (and easily parallelized) code working with arrays the intuitive and obvious way. But later Fortran added OO and pointers making it much harder to auto-parallelize and avoid aliasing slowdown. Now that GPUs are rising it turns out that the original Fortran model of everything-is-array-or-scalar works really well for automatically offloading to the GPU. GPUs don't like method-lookup tables, nor do they like lambdas which are equivalent to stateful Objects with a single Apply method.

Scientists are moving to CUDA now, which on the GPU side deletes all these features that Fortran was bloated with. Nvidia now offers proprietary CUDA Fortran, which is much more in the spirit of original Fortran, deleting OO and pointers for code that runs on the GPU. If the ISO standards committee hadn't ruined ISO Fortran for scientific computing by bloating it with trendy features, we could all be running ISO Fortran automatically on CPUs and GPUs with identical code (or just a few pragmas), and not be locked into proprietary Nvidia CUDA.

But GPUs are now mainly used for crypto greed instead of science for finding cancer cures or making more aerodynamic aircraft so maybe it all doesn't matter anyway.

bee_rider | 4 years ago

Yeah. I think I'm much less informed on this topic, but my initial thought on reading the "Rationale" section was that this sort of feature would only be helpful in cases where C offered almost no advantages over C++.

svnpenn | 4 years ago

> A great strength of C is that if you want more features you just go to a subset of C++, no need to add them to C. C++ is the big, ambitious, kitchen-sink language. When C++ exists we don't need to bloat C.

This is a rationalization, and a bad one. When your solution is "just pull in another programming language", you have a problem.

dandotway | 4 years ago

"Another programming language" cannot even meaningfully exist if all programming languages are forced to have the same feature set. Should Python get C-like low-level pointer manipulation so that Python users don't need to "pull in another programming language" like C to do pointer manipulation?

C doesn't need "defer" because C programmers have managed since the 1970s to implement operating systems, compilers, interpreters, editors, etc., just fine without it. Those who want a bigger C can use C++, this pond is big enough for two fish.

rossy | 4 years ago

On the other hand, some features turn out to be a very good idea and do stand the test of time. Designated initializers and compound literals, introduced in C99, are perfect examples of C features that stuck and became very widespread, while keeping the spirit of the language. C shouldn't be set in stone.

The fact that goto-based solutions and a non-standard GCC extension are common methods of resource cleanup in C today seems to suggest that a standardized language construct for resource cleanup would be appreciated.

> A great strength of C is that if you want more features you just go to a subset of C++, no need to add them to C.

What is C for then? Cleanup of function-scoped resources is a major concern in every large C codebase I've seen.

eps | 4 years ago

It's not a major concern, though.

If one has trouble writing correct cleanup code conventionally (with "goto out" and a single function exit), then allowing them to use defer will only lead to more obscure issues.

And if defer is meant to make code slimmer, it still doesn't belong in C, because it leads to implicit execution and memory/stack allocation.

C is an explicit and verbose language. What you see is what you get. This is the spirit of the language. Unlike, say, C++, where "a + b" may actually produce kilobytes of machine code because + just happened to be overloaded.

mst | 4 years ago

> GPUs don't like method-lookup tables, nor do they like lambdas which are equivalent to stateful Objects with a single Apply method.

Since I tend towards read-only instance data, I often live my life considering an object to mostly be a bag of closures with a shared outer scope.