(no title)
quanto | 3 months ago
My question is the same, albeit more technically refined. How do you prove the correctness of a numerical algorithm (operating on a quantized continuum) using type-theoretic/category-theoretic tools such as theorem provers like Coq? There are documented tragedies where numerical rounding error in the control logic of a missile cost lives. I have proved mathematical theorems before (Curry-Howard!), but they were driven by mathematical objects (e.g. sets, groups), not continuous numbers.
dwohnitmok|3 months ago
This sounds flippant, but I'm being entirely earnest. It's a significantly larger pain because floating point numbers have some messy behavior, but the essential steps remain the same. I've proved theorems about floating point numbers, not reals. Although, again, it's a huge pain, and when I can get away with it I'd prefer to prove things with real numbers and assume they magically transfer to floating point. But if the situation demands it and you have the time and energy, it's perfectly fine to use Coq/Rocq or any other theorem prover to prove things directly about floating point arithmetic.
The article itself is talking about an approach sufficiently low level that you would be proving things about floating point numbers; you'd have to be, since it's all assembly!
But even at a higher level you can have theorems about floating point numbers. E.g. https://flocq.gitlabpages.inria.fr/
There's nothing category-theoretic or even type-theoretic about the entities you are trying to prove things about with the theorem prover. Type theory is merely the "implementation language" of the prover. (And even if there were, there's nothing tying type theory or category theory to the real numbers rather than to floats.)
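To make the flavor concrete, here is a Python sketch (not Coq, and the function name is mine) of the kind of statement a library like Flocq lets you prove formally: for IEEE 754 binary64 with round-to-nearest, each basic operation is correctly rounded, so fl(a + b) = (a + b)(1 + d) with |d| <= 2^-53, barring overflow and underflow. The check below only tests a few cases empirically; the formal proof covers all of them.

```python
# Empirical check of the bound Flocq proves formally for binary64:
# fl(a + b) = (a + b)(1 + d) with |d| <= 2**-53 (round-to-nearest).
from fractions import Fraction

U = Fraction(1, 2**53)  # unit roundoff for IEEE 754 binary64

def check_relative_error(a: float, b: float) -> bool:
    exact = Fraction(a) + Fraction(b)  # floats convert to Fraction exactly
    computed = Fraction(a + b)         # the rounded floating-point sum
    if exact == 0:
        return computed == 0
    return abs(computed - exact) <= U * abs(exact)

assert all(check_relative_error(a, b)
           for a, b in [(0.1, 0.2), (1e16, 1.0), (3.14, -2.71)])
```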
quanto|3 months ago
True for some approaches, but numerical analysis does account for machine epsilon and truncation errors.
I am aware that Inria works with Coq as your link shows. However, the link itself does not answer my question. As a concrete example, how would you prove an implementation of a Kalman filter is correct?
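To be concrete about machine epsilon in IEEE 754 binary64, a quick Python check (illustrative only):

```python
# Machine epsilon for binary64: the spacing between 1.0 and the next
# representable float, i.e. 2**-52.
import sys

eps = sys.float_info.epsilon
assert eps == 2.0 ** -52
assert 1.0 + eps > 1.0       # eps is a representable increment to 1.0
assert 1.0 + eps / 4 == 1.0  # perturbations below eps/2 round back to 1.0
```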
grumpymuppet|3 months ago
IEEE 754 does a good job explaining the representation, but as near as I can tell it doesn't define all the operations and possible error codes.
Is it just assumed "closest representable number to the real value" always?
What about all the various error codes?
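For what it's worth, a sketch of how those exceptional cases surface in practice: by default IEEE 754 exceptions signal through special values rather than error codes. A few Python examples (note that Python itself raises an exception on float division by zero rather than returning infinity):

```python
# IEEE 754 propagates exceptional cases through special values rather than
# error codes: overflow produces infinities, invalid operations produce NaN.
import math

inf = float("inf")
assert math.isinf(1e308 * 10)        # overflow -> +inf
assert 1.0 / inf == 0.0              # inf behaves like a limit
assert math.isnan(inf - inf)         # invalid operation -> NaN
assert float("nan") != float("nan")  # NaN is unequal even to itself
```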
OneDeuxTriSeiGo|3 months ago
Generally you are proving two things:
1. That the model/specification makes sense, i.e. that certain properties in the model hold and that it does what you expect.
2. That the SUV/SUT (system under verification/test) corresponds to the model. This encompasses a lot but really what you are doing here is establishing how your system interacts with the world, with what accuracy it does so, etc. And from there you are walking along the internal logic of your system and mapping your representations of the data and the algorithms you are using into some projection from the model with a specified error bound.
So you are inherently dealing with the discrete nature of the system the entire time, but you can reason about each discrete value as some distribution of possible values that you carry through the system, with each step either
- introducing some additional amount of error/variability or
- tightening the bound of error/variability but trapping outside values into predictable edge cases.
Then it's a matter of reasoning about those edge cases and whether they break the usefulness of the system compared against the idealised model.
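A minimal sketch of that style of reasoning (hypothetical names, and simplified: a real analysis would round the interval endpoints outward to stay conservative):

```python
# Carry an interval (a conservative stand-in for the "distribution of
# possible values") through each step, widening it for new error and
# trapping out-of-range values into a predictable edge case.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def widen(self, eps: float) -> "Interval":
        # introduce additional error/variability (e.g. quantization, noise)
        return Interval(self.lo - eps, self.hi + eps)

    def clamp(self, lo: float, hi: float) -> "Interval":
        # tighten the bound, trapping outside values at a known edge
        return Interval(max(self.lo, lo), min(self.hi, hi))

reading = Interval(9.9, 10.1).widen(0.05)  # measurement + quantization error
total = (reading + Interval(5.0, 5.0)).clamp(0.0, 20.0)
assert total.lo <= 15.0 <= total.hi
```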
auggierose|3 months ago
See https://www.cl.cam.ac.uk/~jrh13/papers/thesis.html for your question; John Harrison got a job with Intel on the strength of this work after their Pentium FDIV floating point disaster.
But in short: theorem proving is not about equalities, it is about inequalities. And theorems about numerical algorithms are a great example of this.
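For instance, the standard model of floating point arithmetic that such proofs build on is itself an inequality (u is the unit roundoff, 2^-53 for binary64), and the classic results downstream, like the bound for recursively summing n terms, are too:

```latex
\mathrm{fl}(a \mathbin{\mathrm{op}} b) = (a \mathbin{\mathrm{op}} b)(1 + \delta),
\qquad |\delta| \le u,
```

```latex
\left| \mathrm{fl}\!\left(\sum_{i=1}^{n} x_i\right) - \sum_{i=1}^{n} x_i \right|
\le (n-1)\, u \sum_{i=1}^{n} |x_i| + O(u^2).
```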
tensegrist|3 months ago
curious about this
codenaught|3 months ago
https://www-users.cse.umn.edu/~arnold/disasters/Patriot-dhar...
https://www.gao.gov/assets/imtec-92-26.pdf