There's a bit of an impedance mismatch with Contracts in C, because C++ contracts exist partially to rectify the fact that <cassert> is broken in C++.
Let's say you have a header lib.h:
#include <assert.h>

inline int foo(int i) {
    assert(i > 0);
    // ...
}
In C, this function is unspecified behavior that will probably work if the compiler is remotely sane.
In C++, including this in two C++ translation units that set the NDEBUG flag differently creates an ODR violation. The C++ solution to this problem was a system where each translation unit enforces its own pre- and post-conditions (potentially 4x evaluations), and contracts act as carefully crafted exceptions to the vast number of complicated rules added on top. An example is how observable behavior is a workaround for C++ refusing to adopt C's fix for time-traveling UB. Lisa Lippincott did a great talk on this last year: https://youtu.be/yhhSW-FSWkE
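To make the NDEBUG dependence concrete, here's a minimal sketch (mine, not from the thread) relying on the standard guarantee that assert() is redefined according to the current state of NDEBUG each time <assert.h> is included. This is exactly why two translation units compiled with different NDEBUG settings see two different bodies for the same inline function:

```c
#include <stdio.h>

/* With NDEBUG defined, assert(expr) expands to ((void)0):
 * the expression is not even evaluated. */
#define NDEBUG
#include <assert.h>

static int with_ndebug(void) {
    int evaluated = 0;
    assert((evaluated = 1) && 0);  /* compiled out entirely */
    return evaluated;              /* stays 0 */
}

/* <assert.h> redefines assert() on every inclusion, so after
 * undefining NDEBUG we get the real, evaluating check back. */
#undef NDEBUG
#include <assert.h>

static int without_ndebug(void) {
    int evaluated = 0;
    assert((evaluated = 1));       /* real check: expression runs */
    return evaluated;              /* now 1 */
}
```

Two TUs each doing one of these would, in C++, be two different definitions of the "same" inline function, hence the ODR problem.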
There's not much left of Contracts once you strip away the stuff that doesn't make sense in C. I don't think you'd miss anything by simply adding hygienic macros to assert.h as the author here does, except for the 4x caller/callee verification overhead that they enforce manually. I don't think that should be enforced in the standard, though. I find hidden multiple evaluation wildly unintuitive, especially if some silly programmer accidentally writes an effectful condition.
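As a sketch of what such assert.h-style macros could look like (the names contract_pre/contract_post and the example function are mine, purely illustrative): each condition is written once, evaluated once, in the callee, with no hidden re-evaluation.

```c
#include <assert.h>

/* Hypothetical contract macros layered on plain assert():
 * each condition is evaluated exactly once, in the callee. */
#define contract_pre(cond)  assert(cond)
#define contract_post(cond) assert(cond)

/* Illustrative function: integer floor of the square root. */
static int isqrt_floor(int n) {
    contract_pre(n >= 0);       /* caller must pass a non-negative value */
    int r = 0;
    while ((r + 1) * (r + 1) <= n)
        r++;
    contract_post(r * r <= n);  /* result never overshoots the input */
    return r;
}
```

Compiling with -DNDEBUG strips both checks, which is the same trade-off (and the same TU-consistency caveat) as plain assert().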
I think the main point of pre- and post-conditions is that the compiler can see them and prove that they match and will never be triggered. There will probably be a compiler flag for outputting all non-proven pre-/post-conditions; there is already -fanalyzer.
I think these conditions should be part of the type signature, different to what was suggested in the otherwise good talk you cited.
One of the biggest problems I find whenever contracts are mentioned is that nobody seems to have a really clear definition of what exactly a contract 'is' or 'should be' (with the exception of languages where contracts are a formal part of the language, that is).
I find the general concept incredibly useful, and apply it in the more general sense to my own code, but there's always a bit of "what do I actually want contracts to mean / do here" back-and-forth before they're useful.
PS: I do like how D does contracts; though I admit I haven't used D much yet, to my great regret, so I can't offer my experience of how well contracts actually work in D.
No wonder it looks less than awesome to you. A contract is just a hack. Ideally, it should not exist because the type system already covers the programmer's intent. Languages that have shitty types which cannot express very much must work around the problem with contracts.
> Digital Mars C++ has had contracts since, oh, the early 1990s?
I think that implementations trying out their own experimental features is normal and expected. Ideally, standards would be pull-based instead of push-based.
The real question is what prevented this feature from being proposed to the standardization committee.
Ada/SPARK has contracts, too, which can be proven at compile time. In fact, Ada alone suffices; it has pre- and post-conditions. I think Ada has more libraries than Eiffel does. Does anyone even write Eiffel? I am really curious whether it is still alive anywhere, in some form.
I agree, truly, but if we can add features that improve C codebases without rewriting them, that's a win. You can just ignore them if you don't like them (as I will), while the people who do benefit can use them.
Java 24 and C# 9 resemble little of their first versions. C++ might as well not even be the same language at this point. Why are we so conservative with C but then so happily liberal with every other language?
Are you really being honest with yourself that you would renounce the C90 and C99 additions to stay with the original language as it was introduced in the K&R C book?
Do I understand correctly that there's nothing preventing someone from adding a postcondition check X and then just not implementing it inside the function? Wouldn't this just mean that the UB is now triggered by the post()'s `unreachable()` instead of whatever UB would otherwise happen as a consequence of not actually implementing check X (say, dereferencing a null pointer)? So it's just for speed optimisations then?
From reading about contracts for C before, I assumed it would be like what cake (https://github.com/thradams/cake) does, which actually enforces pointer (non-)nullability at compile time, as well as resource ownership and a bunch of other stuff. Very cool project, check it out if you haven't seen it yet :)
The author writes that contract_assume invokes undefined behaviour when the assertion fails:
#define contract_assume(COND, ...) do { if (!(COND)) unreachable(); } while (false)
But this means that the compiler is allowed to e.g. reorder the condition check and never output the message. (Or invoke nasal demons, of course).
This doesn't make much sense. I get that you want the compiler to maybe do nothing different, or to panic after the assertion has failed, but only after actually triggering the assertion, and the notion of "after" doesn't really exist with undefined behaviour. The whole program is simply invalid.
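To illustrate the distinction, here is a hedged sketch (the contract_assume shape follows the article's macro; contract_check and checked_div are my own names). The assume variant makes failure undefined, so the compiler may delete the check; the check variant makes failure defined behaviour, so the diagnostic is guaranteed to happen before the process dies:

```c
#include <stdio.h>
#include <stdlib.h>

/* Failure is UB: the compiler may assume the condition always holds
 * and remove the branch entirely. __builtin_unreachable() is the
 * pre-C23 GCC/Clang spelling of C23's unreachable(). */
#define contract_assume(cond) \
    do { if (!(cond)) __builtin_unreachable(); } while (0)

/* Failure is defined: report, then abort. The message and the abort
 * are ordinary observable behaviour and cannot be reordered away. */
#define contract_check(cond) \
    do { if (!(cond)) { \
        fprintf(stderr, "contract failed: %s\n", #cond); \
        abort(); \
    } } while (0)

static int checked_div(int a, int b) {
    contract_check(b != 0);  /* defined trap on violation */
    return a / b;
}
```

With contract_assume(b != 0) instead, a call with b == 0 would be UB before any message could be printed, which is exactly the reordering problem described above.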
To the brain of a compiler writer UB means "the standard doesn't specify what should happen, therefore I can optimize with the assumption that UB never happens." I disagree that this is how UB should be interpreted, but this fight is long lost.
With that interpretation of UB, all `unreachable()` means is that the compiler is allowed to optimize as if this point in the code will never be reached. The unreachable macro is standard in C23, but all major compilers provide an equivalent (GCC and Clang have `__builtin_unreachable()`, MSVC has `__assume(0)`) for all versions of the language.
So a statement like `if (x > 3) unreachable()` serves both as documentation of the accepted values and as a constraint the optimizer can understand: if x is an unsigned int, it will optimize under the assumption that the only possible values are 0, 1, 2, and 3.
Of course in a debug build a sane compiler would have `unreachable()` trigger an assert fail, but they're not required to, and in release they most definitely won't do so, so you can't rely on it as a runtime check.
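A minimal sketch of that pattern (function and table are my own illustration; `__builtin_unreachable()` is the pre-C23 GCC/Clang spelling of `unreachable()`):

```c
/* Pre-C23 spelling for GCC/Clang; C23 provides unreachable()
 * in <stddef.h>. */
#define my_unreachable() __builtin_unreachable()

static unsigned select_quadrant(unsigned x) {
    if (x > 3)
        my_unreachable();  /* promise to the optimizer: x is in 0..3 */
    /* Given that promise, the optimizer may index the table with no
     * range check and fold the branch above away entirely. */
    static const unsigned table[4] = { 10, 20, 30, 40 };
    return table[x];
}
```

If the promise is ever broken at runtime (x == 7, say), the behaviour is undefined: an out-of-bounds read here, or anything else, which is why this is an optimization hint and not a runtime check.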
> Here unreachable() is the new macro from C23 (and C++23) that makes the behaviour undefined whenever the branch of the invocation is reached.
I cannot, in good conscience, use a technology that adds even more undefined behavior. Instead it reinforces my drive to avoid C whenever I can and use OCaml or Rust instead.
I think it is good to explicitly invoke UB. It makes it much more obvious in the code where it is intended and where it is not. It's a way to specify that this point in the code is never reached, that the code can't deal with it, and that I don't even care what the compiler does in this case.
It's also a good way to tell the compiler that the programmer intends for this case to never happen, so that the static analyzer can point out paths through the code where it actually does.
Given the examples, the author wants to ensure that 0 is not a possible input value, and NULL is not a possible output value.
This could be achieved with a simple inline wrapper function that checks pre- and post-conditions and calls abort() accordingly, without all of this extra ceremony.
But regardless of the mechanism, you're left with another, far more serious problem: you've now introduced `panic` to C.
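A minimal sketch of that wrapper approach, assuming a hypothetical library function with the contract "input is not 0, result is not NULL" (all names here are illustrative, not from the article):

```c
#include <stdlib.h>

/* Hypothetical underlying implementation being wrapped. */
static int *lookup_impl(int key) {
    static int slot;
    slot = key * 2;
    return &slot;
}

/* Inline wrapper enforcing the contract with plain abort():
 * no macros, no UB, just ordinary defined-behaviour checks. */
static inline int *lookup(int key) {
    if (key == 0)      /* precondition: 0 is not a valid input */
        abort();
    int *p = lookup_impl(key);
    if (p == NULL)     /* postcondition: result is never NULL */
        abort();
    return p;
}
```

Callers use `lookup()` and never see the unchecked `lookup_impl()`; the cost is one branch per check and, as the next paragraph argues, an abort() the caller cannot intercept.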
And panics are bad. Panics are landmines just waiting for some unfortunate circumstance to crash your app unexpectedly, which you can't control because control over error handling has now been wrested from you.
It's why unwrap() in Rust is a terrible idea.
It's why golang's bifurcated error mechanisms are a mess (and why, surprise surprise, the recommendation is to never use panic).
WalterBright|5 months ago
https://www.digitalmars.com/ctg/contract.html
guerrilla|5 months ago
1. https://frama-c.com/
__d|5 months ago
But if I want to use Eiffel, I’ll use Eiffel (or Sather).
I’d rather C remained C.
Maybe that’s just me?
pjmlp|5 months ago
C especially was designed with lots of security defects, and had it not been for UNIX being available for free, it would probably never have taken off.
sirwhinesalot|5 months ago
Make `f(int n, int a[n])` actually do what it looks like it does. Sigh.
veltas|5 months ago
https://godbolt.org/z/8dfKMrGqv
uecker|5 months ago
What new clang feature are you talking about?
WalterBright|5 months ago
You're welcome!
miropalmu|5 months ago
https://en.cppreference.com/w/cpp/container/span.html
Or if you want multidimensional span:
https://en.cppreference.com/w/cpp/container/mdspan.html