codekilla | 2 years ago
[1]https://www.idris-lang.org [2]https://homotopytypetheory.org/2012/01/22/univalence-versus-... [3]https://redprl.org
AnimalMuppet | 2 years ago
Assume that regular software has, on average, one bug every 50 lines. (All numbers made up on the spot, or your money back.) Let's suppose that Idris can reduce that to absolutely zero. And let's suppose that totally-working software is worth twice as much as the buggy-but-still-mostly-working slop we get today.
But Idris is harder to write. Not just a bit harder: I'd guess it's maybe 10 times as hard to write as JavaScript. So we'd get better software, but only 1/10 as much of it. Take your ten favorite web applications or phone apps. You would only have one of them, but that one would never crash. Most people won't make that trade. Most companies that produce software won't make it either, because they know their customers won't.
Well, you say, what about safety-critical software? What about, say, airplane flight control software? Surely in that environment, producing correct software matters more than producing it quickly, right?
Yes, but now you're also in the world of real-time embedded systems. Speed matters, but so does provably correct timing. Can you prove that your software meets the timing requirements in all cases if you wrote it in Idris? I believe that is, at best, an unsolved problem. So what they do instead is write in carefully chosen subsets of C++ or Rust, with a careful eye on the timing (and with the help of tools).
jfoutz | 2 years ago
So given that context, it doesn't sound too tough to add a cost to the type for each operation, function call, whatever, and have the type checker count up the cost of each call. So you'd have real proof that you're under some threshold. I wouldn't put the Agda runtime on a flight control computer. But I think I could write a compiler, now, for something like a microcontroller, that would count cycles up (or spend down a time budget, doesn't matter).
A more sophisticated processor would be way, way harder to model, and more resource efficient. But if you modeled it as "everything's a cache miss" and don't mind a bunch of no-ops all the time, that would be a pretty straightforward adaptation of the microcontroller approach.
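The bookkeeping jfoutz describes can be sketched concretely. In Idris or Agda the cost could live in the types (e.g. a type-level natural that the checker sums at compile time); the sketch below is only a runtime Python analogue of that idea, and every name in it (`Costed`, `add`, `check`, the 1-cycle cost) is invented for illustration, not taken from any real tool.

```python
# Runtime analogue of cost-indexed computation: each primitive carries a
# worst-case cycle cost, sequencing sums the costs, and a final check
# compares the total against a budget. A dependently typed language could
# do this same arithmetic at compile time and reject over-budget programs.

class BudgetExceeded(Exception):
    pass

class Costed:
    """A value paired with the worst-case cycle cost of producing it."""
    def __init__(self, value, cost):
        self.value = value
        self.cost = cost

    def and_then(self, f):
        # Sequencing: costs add, exactly as a type-level sum would.
        nxt = f(self.value)
        return Costed(nxt.value, self.cost + nxt.cost)

def add(x, y):
    # Pretend an integer add is 1 cycle on our hypothetical microcontroller.
    return Costed(x + y, cost=1)

def check(comp, budget):
    # The "type checker" step: reject programs over the cycle budget.
    if comp.cost > budget:
        raise BudgetExceeded(f"{comp.cost} cycles > budget {budget}")
    return comp.value

prog = add(1, 2).and_then(lambda s: add(s, 3))
print(check(prog, budget=10))  # 6, within budget
print(prog.cost)               # 2 cycles total
```

The "everything's a cache miss" model from the comment above corresponds to assigning each primitive its pessimistic worst-case cost, which keeps the sum a sound (if loose) upper bound.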
__MatrixMan__ | 2 years ago
How much software do you think we need? 10x less sounds about right to me.
haltist | 2 years ago
1: https://arxiv.org/abs/2210.08232
2: https://chat.openai.com/share/aadb7a0a-08a4-4951-b877-cb2f61...