d3ckard | 2 months ago
It’s the only kind of program that can actually be reasoned about. Also, it’s not exactly Turing complete in the classic sense.
Makes my little finitist heart get warm and fuzzy.
mikepurvis | 2 months ago
Also it's giving me flashbacks to LwIP, which was a nightmare to debug when it would exhaust its preallocated buffer structures.
kevin_thibedeau | 2 months ago
LwIP's buffers get passed around across interrupt handler boundaries, in and out of various queues. That's what makes it hard to reason about. The allocation strategy is still sound when you can't risk using a heap.
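For concreteness, here's a minimal sketch of that failure mode (send_frame and the surrounding glue are hypothetical; pbuf_alloc/pbuf_take/pbuf_free are the actual LwIP calls). When the preallocated PBUF_POOL runs dry, pbuf_alloc returns NULL, and every call site has to decide what happens next:

    #include "lwip/pbuf.h"

    /* Hypothetical send path; buffers come from the static
       PBUF_POOL, sized by PBUF_POOL_SIZE in lwipopts.h. */
    int send_frame(const void *data, u16_t len)
    {
        struct pbuf *p = pbuf_alloc(PBUF_RAW, len, PBUF_POOL);
        if (p == NULL) {
            /* Pool exhausted: no heap to fall back on, so
               drop the frame, retry later, or apply backpressure. */
            return -1;
        }
        pbuf_take(p, data, len);  /* copy payload into the chain */
        /* ... hand p to the driver/netif here ... */
        pbuf_free(p);             /* release our reference */
        return 0;
    }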
d3ckard | 2 months ago
We used to have very little memory, so we developed many tricks to handle it.
Now we have all the memory we need, but the tricks remained. They are now more harmful than helpful.
Interestingly, embedded programming has a reputation for stability, and AFAIK game development is also moving more and more toward avoiding dynamic allocation.
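For reference, a minimal sketch of one such trick, the per-frame arena (bump) allocator that game code often uses instead of the general heap; all names here are made up:

    #include <stddef.h>
    #include <stdint.h>

    static uint8_t frame_buf[1 << 20];  /* 1 MiB reserved up front */
    static size_t  frame_used;

    /* Bump allocation: O(1), no fragmentation, one failure path. */
    static void *frame_alloc(size_t size)
    {
        size = (size + 15) & ~(size_t)15;          /* 16-byte align */
        if (size > sizeof frame_buf - frame_used)
            return NULL;                           /* arena full */
        void *p = &frame_buf[frame_used];
        frame_used += size;
        return p;
    }

    /* Everything allocated during a frame is freed at once. */
    static void frame_reset(void) { frame_used = 0; }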
kibwen | 2 months ago
Theoretically infinite memory isn't really the problem with reasoning about Turing-complete programs. In practice, the inability to guarantee that any program will halt still applies to any system with enough memory to do anything more than serve as an interesting toy.
I mean, I think this should be self-evident: our computers already do have finite memory. Giving a program slightly less memory to work with doesn't really change anything; you're still probably giving that statically-allocated program more memory than entire machines had in the 80s, and it's not like the limitations of computers in the 80s made us any better at reasoning about programs in general.
d3ckard | 2 months ago
Static allocation requires you to explicitly handle overflows, but by centralizing them you probably don't need as many handlers.
Technically, all of this can be done in a language with dynamic allocation as well. It’s just that you can’t force the behavior.
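A sketch of what that centralization can look like (names hypothetical): a static free-list pool where exhaustion surfaces at exactly one point instead of at every allocation site:

    #include <stddef.h>

    #define POOL_SIZE 64

    struct conn { struct conn *next; /* ... payload ... */ };

    static struct conn pool[POOL_SIZE];
    static struct conn *free_list;

    void pool_init(void)
    {
        for (size_t i = 0; i + 1 < POOL_SIZE; i++)
            pool[i].next = &pool[i + 1];
        pool[POOL_SIZE - 1].next = NULL;
        free_list = &pool[0];
    }

    struct conn *conn_alloc(void)
    {
        struct conn *c = free_list;
        if (c == NULL)
            return NULL;        /* the single overflow path */
        free_list = c->next;
        return c;
    }

    void conn_free(struct conn *c)
    {
        c->next = free_list;
        free_list = c;
    }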
IshKebab | 2 months ago
What do you mean? There are loads of formal reasoning tools that use dynamic allocation, e.g. Lean.
d3ckard | 2 months ago
It’s actually quite tricky, though. The allocation still happens and it isn’t bounded, so you could plausibly argue both ways.
muvlon | 2 months ago
No. That is one restriction that lets you theoretically escape the halting problem, but not the only one. Total functional programming languages, for example, do it by restricting recursion to a weaker form.
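For instance, in Lean (mentioned upthread), a plain def must be provably terminating. Structural recursion is accepted, while a naive Collatz recursion has to be explicitly opted out with `partial` (example names are mine):

    -- Accepted: structural recursion on n guarantees termination.
    def sumTo : Nat → Nat
      | 0     => 0
      | n + 1 => (n + 1) + sumTo n

    -- Lean cannot prove this terminates, so it only compiles
    -- when marked `partial`, i.e. outside the total fragment.
    partial def collatzSteps (n : Nat) : Nat :=
      if n <= 1 then 0
      else if n % 2 == 0 then 1 + collatzSteps (n / 2)
      else 1 + collatzSteps (3 * n + 1)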
Also, more generally, we can reason about plenty of programs written in entirely Turing-complete languages/styles. People keep misreading the halting problem as saying that we can never successfully do termination analysis on any program. We can, on many practical programs, including ones that do dynamic allocation.
Conversely, there are programs that use only a statically bounded amount of memory for which this analysis is entirely out of reach. For example, you can write one that checks the Collatz conjecture for the first 2^1000 integers using only about a page of memory.
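A sketch of that program, with the bound shrunk from 2^1000 to 2^20 so native integers can't overflow (the full version needs bignum arithmetic but still only a fixed, page-scale footprint). A few words of state, and yet whether the inner loop always exits is exactly the Collatz conjecture:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        for (uint64_t n = 1; n <= (uint64_t)1 << 20; n++) {
            uint64_t x = n;
            while (x != 1)                /* does this always exit? */
                x = (x % 2 == 0) ? x / 2 : 3 * x + 1;
        }
        puts("Collatz holds below 2^20");
        return 0;
    }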