(no title)
gqgs|11 months ago
This issue appears to present an intrinsically unsolvable problem, implying that a formally verified system could still contain bugs due to potential issues in the verification software.
While this perspective doesn't necessarily render formal verification impractical, it does introduce certain caveats that, in my experience, are not frequently addressed in discussions about these systems.
alexisread|11 months ago
I can't recall exactly, but I think CakeML (https://cakeml.org/jfp14.pdf) had a precursor lisp prover, written in a verified lisp, so would be amenable to this approach.
EDIT: think it was this one https://www.schemeworkshop.org/2016/verified-lisp-2016-schem...
newAccount2025|11 months ago
It’s like worrying about compiler bugs instead of semantic programming errors. If you are getting to the point where you have machine checked proofs, you are already doing great. It’s way more likely that you have a specification bug or modeling error than an error in the solver.
If you are worried about it, many people have solutions that are very compelling. There are LCF-style theorem provers like HOL Light with tiny kernels. But for any real-world system, none of this matters; use any automated solvers you like, it will be fine.
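To illustrate the LCF idea mentioned above: the only way to construct a value of the theorem type is through a small set of kernel inference rules, so soundness rests on a tiny amount of code. A minimal sketch in Python (a hypothetical toy logic with only implication; real kernels like HOL Light's are in OCaml and far richer):

```python
# LCF-style kernel sketch: theorems can only be built by kernel rules.
_KERNEL = object()  # private capability token

class Thm:
    """A proved theorem. Only the kernel rules below may construct one."""
    def __init__(self, concl, _token=None):
        if _token is not _KERNEL:
            raise ValueError("theorems may only be built by kernel rules")
        self.concl = concl  # conclusion, e.g. ("->", p, q)

def assume(p):
    # Rule: |- p -> p, for any formula p.
    return Thm(("->", p, p), _token=_KERNEL)

def modus_ponens(ab, a):
    # Rule: from |- a -> b and |- a, derive |- b.
    op, lhs, rhs = ab.concl
    assert op == "->" and lhs == a.concl, "modus ponens: antecedent mismatch"
    return Thm(rhs, _token=_KERNEL)
```

Everything outside the kernel (tactics, automation, decision procedures) can be arbitrarily buggy: the worst it can do is fail to produce a theorem, never produce a false one.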
ninetyninenine|11 months ago
We call these things axioms. The verifier must be assumed to be true.
For all the axioms in mathematics... we don't know if any of those axioms are actually fully true. They seem to be true because we haven't witnessed a case where they aren't true. This is similar to how we verify programs with unit tests. For the verifier it's the same thing: you treat it like an axiom and "verify" it with tests.
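Concretely, "verifying the verifier with tests" means running the checker on inputs whose status we know independently. A small sketch, using a brute-force propositional tautology checker as a stand-in for the verifier:

```python
# Treating the checker itself as something to be tested: exercise it
# on formulas whose truth status we know by hand.
from itertools import product

def is_tautology(f, names):
    """True iff formula f (a function from an assignment dict to bool)
    holds under every truth assignment to the variables in names."""
    return all(f(dict(zip(names, vals)))
               for vals in product([False, True], repeat=len(names)))

# Known-true and known-false cases give us axiom-level confidence.
assert is_tautology(lambda v: v["p"] or not v["p"], ["p"])        # law of excluded middle
assert not is_tautology(lambda v: v["p"] and v["q"], ["p", "q"])  # contingent, not a tautology
```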
gqgs|11 months ago
Axioms are true by their own definition. Therefore, discovering an axiom to be false is a concept that is inherently illogical.
Discovering that a formal verification system produced an incorrect output due to a bug in its implementation is a perfectly well-defined concept and doesn't lead to any logical contradictions; unless you axiomatically define the output of formal verification systems to be necessarily correct.
I believe that definition doesn't make sense in the general case, considering the number of vectors that can introduce errors into software.
trenchgun|11 months ago
The issue is writing the formal specification. Formal verification is, in a nutshell, just proving the equivalence of two different formal constructs: one that is called the spec and one that is called the system (or program). Writing specs is hard.
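A toy illustration of the spec/system distinction: the spec is an obviously correct but naive definition, the system is an optimized program, and verification means showing they agree on all inputs. Here we can only spot-check agreement by exhaustive testing over a small domain (the names `spec_max`/`program_max` are made up for this sketch):

```python
from itertools import product

def spec_max(xs):
    # Spec: an element m of xs such that every element is <= m.
    for m in xs:
        if all(x <= m for x in xs):
            return m

def program_max(xs):
    # System: efficient single-pass implementation.
    m = xs[0]
    for x in xs[1:]:
        if x > m:
            m = x
    return m

# Exhaustively check agreement on all short lists over {0, 1, 2}.
assert all(spec_max(list(xs)) == program_max(list(xs))
           for n in range(1, 4)
           for xs in product(range(3), repeat=n))
```

A proof assistant would replace the exhaustive check with a proof covering all inputs, but the hard part is the same in both cases: convincing yourself that `spec_max` actually captures what you meant by "maximum".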
csbartus|11 months ago
In category theory / ologs, a formal method for knowledge representation, the result is always mathematically sound, yet ologs are prefixed with "According to the author's world view ..."
On the other hand, truth, even when it's not absolute, is very expensive.
Lately AWS advocates a middle ground, the lightweight formal methods, which are much cheaper than formal methods yet deliver good enough correctness for their business case.
In the same way, MIT CSAIL's Daniel Jackson advocates a semi-formal approach to designing likely-correct apps (Concept design).
It seems there is a push for better correctness in software, without the aim of perfectionism.
solidsnack9000|11 months ago
What are those?
As far as I can tell, people don't deploy verified or high assurance systems without testing or without online monitoring and independent fail safes.