vyodaiken | 4 years ago

I doubt that very much, and one of the points of the article is that there is a shortage of data to back up claims like yours.

In any case, for people who want to write operating systems or cryptography or embedded systems or arithmetic libraries or ... in C, this is not a relevant point.

not2b | 4 years ago

You doubt it because you aren't a compiler developer, so you aren't aware of the history. Try taking some of the classic Fortran scientific libraries, such as LAPACK, recoding them in C, and then seeing what happens when you generate code with a compiler that does none of the optimizations the article complains about and exploits no undefined behavior. Then you'll figure out why.

vyodaiken | 4 years ago

You still don't have any data.

Gibbon1 | 4 years ago

My understanding is that the reason FORTRAN is faster than C isn't stupid stuff like noalias and the like; it's that FORTRAN has arrays and C doesn't.

not2b | 4 years ago

You're right, sort of. Because C doesn't have proper arrays, you make up for it by passing a pointer to the first element plus the length. So for C to do as well as Fortran on matrix operations, the compiler developers need help. One source of help is type-based aliasing: given a pointer to double and a pointer to int, the compiler can assume that writing pdouble[k] doesn't alter anything reachable as pint[m].

qsort | 4 years ago

Why would that put C at a disadvantage performance-wise? Array decay is, with hindsight, an unfortunate feature, but optimizing access to contiguous memory is very low-hanging fruit as far as compiler optimizations go.