top | item 43550402

gqgs | 11 months ago

A key concern I've consistently had regarding formal verification systems is: how does one confirm the accuracy of the verifier itself?

This issue appears to present an intrinsically unsolvable problem, implying that a formally verified system could still contain bugs due to potential issues in the verification software.

While this perspective doesn't necessarily render formal verification impractical, it does introduce certain caveats that, in my experience, are not frequently addressed in discussions about these systems.

alexisread|11 months ago

I'd guess the best approach is similar to security - minimal TCB (trusted computing base) that you can verify by hand, and then construct your proof checker on top, and have it verify itself. Proof and type checkers have a lot of overlap, so you can gain coverage via the language types.

I can't recall exactly, but I think CakeML (https://cakeml.org/jfp14.pdf) had a precursor Lisp prover, written in a verified Lisp, so it would be amenable to this approach.

EDIT: I think it was this one: https://www.schemeworkshop.org/2016/verified-lisp-2016-schem...
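The minimal-TCB idea can be made concrete with a toy. The sketch below is my own illustration (not CakeML's or any real system's kernel): a complete checker for the implicational fragment of propositional logic, small enough to audit line by line.

```python
# A hand-auditable proof-checker kernel for implicational propositional
# logic. Illustrative only; formulas are tuples: ("var", "p") or ("->", a, b).

def is_axiom(f):
    """Recognize instances of the K and S axiom schemas."""
    if f[0] != "->":
        return False
    a, b = f[1], f[2]
    # K: A -> (B -> A)
    if b[0] == "->" and b[2] == a:
        return True
    # S: (A -> (B -> C)) -> ((A -> B) -> (A -> C))
    if (a[0] == "->" and a[2][0] == "->" and
            b[0] == "->" and b[1][0] == "->" and b[2][0] == "->" and
            b[1][1] == a[1] and b[2][1] == a[1] and
            b[1][2] == a[2][1] and b[2][2] == a[2][2]):
        return True
    return False

def check(proof):
    """Accept iff each step is an axiom instance or follows by
    modus ponens from two earlier steps."""
    proved = []
    for step in proof:
        ok = is_axiom(step) or any(
            prev == ("->", q, step) for prev in proved for q in proved)
        if not ok:
            return False
        proved.append(step)
    return True
```

Even `p -> p` takes a five-step S/K derivation through this kernel; that verbosity is the price of keeping `is_axiom` and `check` small enough to verify by hand.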

guerrilla|11 months ago

This is correct. That's why you have core type theories. Very small, easy to understand and implement in a canonical way. Other languages can be compiled to them.
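As a toy stand-in for such a core (the encoding and names below are mine, not from any particular proof assistant), a simply typed lambda calculus checker fits in a handful of lines; a richer surface language would elaborate down to terms like these.

```python
# A core type checker for the simply typed lambda calculus -- illustrative
# sketch of a small trusted core. Terms: ("var", x), ("lam", x, type, body),
# ("app", f, a). Types: "int" or ("->", dom, cod).

def typecheck(term, env):
    kind = term[0]
    if kind == "var":
        return env[term[1]]                      # look up the binder's type
    if kind == "lam":
        _, x, ty, body = term
        body_ty = typecheck(body, {**env, x: ty})
        return ("->", ty, body_ty)
    if kind == "app":
        f_ty = typecheck(term[1], env)
        a_ty = typecheck(term[2], env)
        if f_ty[0] == "->" and f_ty[1] == a_ty:  # domain must match argument
            return f_ty[2]
        raise TypeError("ill-typed application")
    raise ValueError("unknown term")
```

Everything outside these few lines (parsers, tactics, surface syntax) can be untrusted: a bad elaborator can only produce a term this checker rejects, not an ill-typed one it accepts.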

newAccount2025|11 months ago

A deeply considered take: this is theoretically interesting but in practice matters very little.

It’s like worrying about compiler bugs instead of semantic programming errors. If you are getting to the point where you have machine checked proofs, you are already doing great. It’s way more likely that you have a specification bug or modeling error than an error in the solver.

If you are worried about it, many people have solutions that are very compelling. There are LCF style theorem provers like HOL Light with tiny kernels. But for any real world systems, none of this matters, use any automated solvers you like, it will be fine.
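The LCF discipline mentioned here can be sketched in a few lines. This is a Python illustration of the idea, not HOL Light's actual kernel (which is OCaml, where the abstract type is genuinely sealed; Python's privacy is only advisory): theorems are values of an abstract type that only the kernel's inference rules can construct.

```python
# LCF-style kernel sketch: soundness rests on the few functions below,
# because nothing else can forge a Theorem value. Illustrative only.

class Theorem:
    _key = object()                      # capability held by the kernel

    def __init__(self, concl, key):
        if key is not Theorem._key:      # outsiders can't build theorems
            raise PermissionError("theorems come only from inference rules")
        self.concl = concl               # the conclusion, e.g. ("->", a, b)

# The trusted inference rules -- the whole TCB in this toy:
def axiom(formula):
    """Admit a formula as an axiom (a deliberately blunt toy rule)."""
    return Theorem(formula, Theorem._key)

def modus_ponens(ab, a):
    """From |- A -> B and |- A, conclude |- B."""
    if ab.concl[0] == "->" and ab.concl[1] == a.concl:
        return Theorem(ab.concl[2], Theorem._key)
    raise ValueError("modus ponens does not apply")
```

Tactics and automation of any complexity can then sit on top: however buggy they are, every `Theorem` they hand back was necessarily produced by the small trusted rules.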

ninetyninenine|11 months ago

This is not just a problem with formal verification systems. It's a problem intrinsic to math, science and reality.

We call these things axioms. The verifier must be assumed to be true.

For all the axioms in mathematics... we don't know if any of those axioms are actually fully true. They seem to be true because we haven't witnessed a case where they aren't true. This is similar to how we verify programs with unit tests. For the verifier it's the same thing, you treat it like an axiom you "verify" it with tests.
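Taking the "verify it with tests" idea literally: a verifier can at least be run against inputs with known verdicts. The checker below is a made-up toy (a modus-ponens chain validator); the shape of the known-answer tests is the point.

```python
# Known-answer testing for a toy verifier. Illustrative sketch: a sound
# checker must accept known-valid derivations AND reject known-invalid
# ones -- a verifier that always answers True fails the second half.

def check_mp_chain(premises, steps, goal):
    """Accept iff every step follows by modus ponens from what is
    already known, and the goal is eventually derived."""
    known = set(premises)
    for step in steps:
        if not any(("->", p, step) in known for p in known):
            return False
        known.add(step)
    return goal in known

# Positive known-answer test: p, p->q |- q
assert check_mp_chain({"p", ("->", "p", "q")}, ["q"], "q")
# Negative known-answer test: q does not follow from p alone
assert not check_mp_chain({"p"}, ["q"], "q")
```

Like unit tests anywhere, this only samples the verifier's behavior; it raises confidence in the "axiom" rather than proving it.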

gqgs|11 months ago

I believe there is a distinction between those concepts.

Axioms are true by their own definition. Therefore, discovering an axiom to be false is a concept that is inherently illogical.

Discovering that a formal verification system produced an incorrect output due to a bug in its implementation is a perfectly well-defined concept and doesn't lead to any logical contradictions; unless you axiomatically define the output of formal verification systems to be necessarily correct.

I believe that definition doesn't make sense in the general case, considering the number of vectors that can introduce errors into software.

trenchgun|11 months ago

Verifiers can be based on a small proven kernel. That is not really the issue.

The issue is writing the formal specification. Formal verification is, in a nutshell, just proving the equivalence of two different formal constructs: one that is called the spec and one that is called the system (or program). Writing specs is hard.
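That gap can be shown in miniature (a contrived example of mine; exhaustive checking over tiny inputs stands in for a real prover): a spec that forgets the permutation clause happily "verifies" a sort that drops duplicates.

```python
# Verification proves "program meets spec" -- so a wrong spec verifies a
# wrong program. Illustrative toy, with brute-force checking as the "prover".
from itertools import permutations

def program(xs):               # a buggy "sort" that drops duplicates
    return sorted(set(xs))

def weak_spec(xs, ys):         # forgot to require ys be a permutation of xs
    return all(a <= b for a, b in zip(ys, ys[1:]))

def full_spec(xs, ys):         # sorted AND a rearrangement of the input
    return weak_spec(xs, ys) and sorted(xs) == sorted(ys)

# Exhaustively check all prefix permutations of a small duplicate-heavy list.
inputs = [list(p) for n in range(4) for p in permutations([1, 1, 2][:n])]
print(all(weak_spec(xs, program(xs)) for xs in inputs))   # True: "verified"!
print(all(full_spec(xs, program(xs)) for xs in inputs))   # False: caught
```

The checker did its job perfectly both times; only the spec changed. That's the sense in which the hard, error-prone artifact is usually the spec, not the verifier.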

csbartus|11 months ago

Perhaps there is no such thing as absolute truth.

In ologs, a category-theoretic formal method for knowledge representation, the result is always mathematically sound, yet ologs are prefixed with "According to the author's world view ..."

On the other hand, truth, even if it's not absolute, is very expensive.

Lately AWS has advocated a middle ground, lightweight formal methods, which are much cheaper than full formal methods yet deliver good-enough correctness for their business case.

In the same way, MIT CSAIL's Daniel Jackson advocates a semi-formal approach to designing likely-correct apps (concept design).

It seems there is a push for better correctness in software, without the aim of perfectionism.

solidsnack9000|11 months ago

...it does introduce certain caveats that, in my experience, are not frequently addressed in discussions about these systems.

What are those?

As far as I can tell, people don't deploy verified or high assurance systems without testing or without online monitoring and independent fail safes.

Ar-Curunir|11 months ago

There are many things people try to do here: multiple implementations of the verifier, formally verified implementations of the verifier in other verification tools, by-hand verification, etc.

deterministic|11 months ago

This is a solved problem. See CakeML for example.