Trying to avoid adding warnings is a very silly form of twisted logic.
1. People enabled warnings and -Werror because they want high quality code.
2. Standard can't add warnings because people use -Werror
This means that not adding warnings is directly against the original reason to use -Werror in the first place! We are now avoiding warning people about dangerous things because they requested to be warned about dangerous things!
The argument also doesn't make much sense because all 3 big compilers are already adding tons of warnings with each new release. Upgrading to a new compiler version and seeing screens full of warnings scroll by when compiling code that was warning-free in the previous compiler version is quite normal. Why does this affect the C committee's decision making, and why is it suddenly a problem?
> Most compilers warn, but this is standards-conforming ISO C code that is required to not be rejected
Bollocks. That is a constraint violation, ISO C requires a diagnostic for it, and ISO C allows that diagnostic to be an error. The constraint is in the section "Simple assignment", which contains "One of the following shall hold:" followed by a list detailing when assignments are valid. Pointers to different structure types on the LHS vs the RHS are nowhere in that list.
The full quote explains his thought: "Yes, two entirely unrelated pointer types can be set to one another in standards conforming C. Most compilers warn, but this is standards-conforming ISO C code that is required to not be rejected unless you crank up the -Werror -Wall -Wpedantic etc. etc. etc."
Unless you make warnings errors (-Werror), it probably will warn, but will not reject the code (fail to compile).
What's even stupider is that I'd be willing to bet it's also UB. If it is, it means an ISO-C-compliant compiler is allowed to generate code that does absolutely anything; which makes the worries over adding a warning kind of ridiculous.
It's been a while since I called myself a C expert, but that's not an assignment - it's an initialization.
Those are not the same things in C; I think if you go look at your C standard for the constraints on initialization, they are different from and weaker than those for assignment.
This is dangerous of course because dogs are larger than cats, but I think he is correct. While every compiler I tried has this warning enabled for assignments or comparisons by default, I don't think the standard explicitly forbids it. Could be wrong though.
And this is how backwards compatibility comes to kill innovation. It's a reasonable stance to keep supporting your long-established users, but it comes at the cost of ceding the future to the competition (cough Rust cough)
IMHO that's how it should be for a programming language.
If a programming language evolves to the point that previous programs written in that language no longer compile then it's no longer the same language.
So let's keep C as C and, as you point out, new ideas and concepts that would break things can be implemented in new languages.
C does evolve but it's also taken the pragmatic approach not to break the huge existing code base it has. In many cases these programs have been running for decades, they work.
Well, precisely: if people want modern/innovative/fast-evolving languages, they can use Rust, Go, Elixir, etc.
I actually started using C for my side projects two years ago, precisely because I want very long-term backward compatibility (that is, being able to leave a program for years without maintaining it, then make a small edit and build it with minimum pain). C is perfect for that, and I agree with the sentiment that backward compatibility is its most important feature.
I pray daily that more of my fellow programmers may find the means of freeing themselves from the curse of compatibility.
-- Edsger Dijkstra, 1972 Turing Award lecture
> And this is how backwards compatibility comes to kill innovation.
With all of the novel and esoteric programming languages that exist, I wonder why there hasn't been an "I can't believe it's not C/C++" language that breaks these things, but isn't taken seriously enough to diverge completely from the ISO language standards. (For bonus points: with standardized GCC extensions, and something like embedded asm but for compiled languages, as in ISO-standard C.)
C is a knife. You expect knives to cut you, so you handle them carefully.
Except that C is sometimes a knife with another knife hidden in the grip, and if you don't handle it just right, the hidden knife will also cut you. (Thinking of libraries/other people's code.)
This is what you get when a person who is mostly a C++ coder (as per his bio) writes C. I cringe when I see things like
struct Bark* p_dog = p_cat;
instead of
struct Bark *p_dog = p_cat;
That weird affectation of C++ programmers putting the asterisk on the type and not on the declarator, where it belongs, makes my eyes bleed. I somewhat understand the reasoning, but I think it's a gross violation of the Law of Least Astonishment.
Backwards compatibility for code is important, as is progress in language evolution. I have a question regarding "C has no ABI that could be affected by this, C doesn’t even respect qualifiers, how are we breaking things?!"
We have language-standard switches for changes like this, such as '-std=c2x' or '-std=c89' with GNU's GCC. I understand and accept the need to avoid breakage. Furthermore, C is inherently weakly typed, contrary to C++, which is strongly typed; that is a basic language feature you probably should not change. But the option to set the language standard exists exactly for this situation, to gate changes that will affect users. So why can't it be used here?
That is not a criticism. I'm sure they have their rationale for it and know more than I do.
PS: Some changes will break the ABI; in those cases we'll likely see a preprocessor macro or something similar, which is more complicated. The GCC people used one for some changes to std::string, if I remember correctly.
I was wondering this myself. Those million-line codebases where they are worried about new warnings breaking things should already be using a compiler that doesn't know about the new standard, or one that lets them pin which standard their code follows. I guess I don't understand the problem. Does something like MISRA assume you are using the latest standard? I understand there are regulations involved.
More seriously, what I'd like to see in C (as a long-time programmer in C) is less freedom around undefined behavior. I used to feel that the biggest mistakes made in C were around pointer bugs, but you can be careful and get things like that (mostly) right. Undefined behavior is a lot harder to see and avoid without a very deep understanding of lots of small details.
The basic idea is to replace various common UB scenarios with "defined, but unspecified", in order to move the compiler's interpretation of the program closer to the programmer's.
It would be very nice if there was a special C compiler that could just warn about all occurrences of undefined behavior in a program. It doesn't even need to be able to generate code, it could just be a front-end that points out the places where undefined behavior is either being invoked, or could be invoked depending on the input to the program.
Please tell me if I misunderstood but this is what I thought I was reading here:
- The author is someone quite young (undergrad age) who is serving on a C language committee (that I assume is mostly made up of people who are over 40, probably mostly over 50).
- The author not only is donating his own time to C language committee work, but also clearly knows what he's talking about regarding C.
The article came across to me as thinly-disguised frustration/anger that the committee had no interest in making C "safer".
My takeaway was that the article fits in with all the articles one sees arguing that Rust is not just of academic/hobbyist interest but a serious contender as a replacement in many industry contexts.
My understanding of the role is that it's their job to literally edit the standard, that is, they take the papers that have been accepted, and apply them to the standard's text to produce the next draft of the standard.
-Werror is a bad idea for open source projects or anything that will be compiled by people who do not know how to fix things when their shiny new compiler adds a fancy warning.
-Werror=... for specific warnings might be OK in some cases.
Console homebrew. iOS jailbreaking. Android rooting. Those are only some of the freedom-enabling things this and other "insecurity" allows. It's not all bad --- and IMHO it's necessary to have these "small cracks", as it keeps the balance of power from going too far in the direction of the increasingly authoritarian corporations.
I always keep this quote in mind: "Freedom is not worth having if it does not include the freedom to make mistakes."
This has nothing to do with any of that. Absolutely nobody is proposing that it shouldn't be possible to write code that reads from and writes to arbitrary registers and memory addresses, even though this obviously makes complete memory safety impossible. (I mean, there are legitimate use cases for sandboxing and VMs and what have you without escape hatches, but there are also legitimate use cases for not-that.)
This is about providing better compiler diagnostics. Such diagnostics can't catch every mistake, as long as we require the aforementioned ability to perform arbitrary operations, but they can catch a lot more mistakes than they're catching now.
Trust me, none of the memory errors in those examples come from the omission of a `const` in a pointer. It's not like adding `const` to a pointer will write-protect that memory so that magically exploits become impossible. All those insecurities that lead to exploits are buffer overruns of some sort, and nothing would change about them with this sort of little language change.
Or Zig? What other potential C replacement are there?
Why cast the malloc? This isn't C++.
It fixes most of the C mistakes, while still giving programmers tight control over the system.
Since making mistakes is human nature, and C will never stop humans from making mistakes, C will keep humans natural forever!