virchau13 | 3 years ago
I personally find that both are good tools, but to know which you should be using, you need to think about which viewing angle you want: which components of state do you want to inspect, and which moments in time do you want to track? If the answer is "these specific state components / unknown times" (e.g. "when is my program accessing invalid memory?"), that's where print debugging comes in, with specializations depending on which particular state components you're looking at (strace or eBPF for IO, valgrind for invalid memory accesses, etc.). If it's "unknown components / these specific times" (e.g. "where is my program setting HTTP response headers when a request occurs?"), then a debugger is a good idea.
However, what you said does have truth to it. Good programming practices generally center around the management of program state* (using types, specifications, whatever). The less well your program is designed in those terms, the more likely it is that you have no idea which state components you want to inspect, meaning you reach for a debugger first. And if it is designed well, you won't need a debugger nearly as often. But sometimes we have no choice, whether that's due to an inherently hard problem space or lots of bad code that we didn't write, and then a debugger is necessary.
* I don't actually know if there are programming practices that center around managing moments in time. I can't even think how that would work, but I would be very interested to know if there are any :)
scroot | 3 years ago
There was a live, 3D, peer-to-peer interactive environment back in the early 2000s called Croquet that made use of the concept of "pseudo-time." It was built on top of Squeak Smalltalk (there are some descendants today, including croquet.io). The part that handled time management in a way you might find interesting was called TeaTime [1], built on the ideas in David Reed's thesis about pseudo-time [2]. If you are not familiar with these, you might want to check them out!
[1] https://dl.acm.org/doi/10.1145/1094855.1094861
[2] http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-20...
ThatGeoGuy | 3 years ago
I don't disagree! The state-vs.-time framing of the mental model sounds mostly on the mark, though it might be missing something small. E.g. in dynamic languages my REPL is more of a debugger than any actual debugger could be.
> * I don't actually know if there are programming practices that center around managing moments in time. I can't even think how that would work, but I would be very interested to know if there are any :)
I think this depends on how you abstract control flow. The actor model comes to mind, as Smalltalk really doesn't have this notion of "time" in the same way. You debug live in such a system, and "time" is more a matter of which abstraction sent or received something (and as such may have failed). Similar arguments could be made for conditions / restarts in Lisp, perhaps even more strongly, since you can manage errors through conditions and then restart code (going back in time, so to speak) in a sort of live-debugging way. Not sure it measures up to quite the degree you were asking about, but that's the first thing that comes to mind.
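To make the restarts point concrete: here's a rough Python approximation of the idea (Common Lisp's condition system is far richer, and the names here are invented for illustration). The key property is that the low-level frame is still live when the recovery decision is made, so execution resumes at the point of the error instead of unwinding:

```python
# Toy sketch of Lisp-style restarts: low-level code signals a problem
# but offers named recovery strategies; a higher-level policy picks one,
# and execution continues from the point of the error.

def parse_entry(text, restarts):
    try:
        return int(text)
    except ValueError:
        # Ask the surrounding context which restart to take, then
        # continue from *here* -- the failing frame is still alive.
        choice = restarts["on_bad_entry"](text)
        if choice == "use_zero":
            return 0
        if choice == "skip":
            return None
        raise

def parse_all(entries):
    # The high-level caller supplies the policy; in real Lisp this could
    # even be an interactive debugger prompting a human mid-execution.
    restarts = {"on_bad_entry": lambda bad: "use_zero"}
    results = [parse_entry(e, restarts) for e in entries]
    return [r for r in results if r is not None]

print(parse_all(["1", "oops", "3"]))  # -> [1, 0, 3]
```

In Lisp the "policy" can be a person at the debugger choosing a restart interactively, which is exactly the live-debugging flavor described above; Python exceptions, by contrast, unwind the stack before any handler runs.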
Another thing is the movement towards async-await / cooperative coordination in programs. Even if you ignore concurrency, designing code so that coroutines cooperatively yield to one another (e.g. generators in Python) helps sort this out too. Basically, you make "time" in a program a function of control flow, by forcing an abstraction where you explicitly yield control. This relates to both the actor model and conditions / restarts in Lisp, so I feel like I'm pulling on the same thread there.
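A minimal sketch of that "time as a function of control flow" idea, using plain Python generators and a hypothetical round-robin scheduler: every observable moment in the program is an explicit yield point, so the interleaving is fully determined by the scheduler rather than by a clock.

```python
# Each task advances only when the scheduler resumes it, so "time"
# in this program is just the sequence of explicit yield points.

def worker(name, steps):
    for i in range(steps):
        # Control (and "time") is handed back to the scheduler here.
        yield f"{name} step {i}"

def run_round_robin(tasks):
    log = []
    while tasks:
        task = tasks.pop(0)
        try:
            log.append(next(task))   # resume the task until its next yield
            tasks.append(task)       # re-enqueue the still-running task
        except StopIteration:
            pass                     # task finished; drop it
    return log

print(run_round_robin([worker("a", 2), worker("b", 1)]))
# -> ['a step 0', 'b step 0', 'a step 1']
```

Because every context switch is an explicit `yield`, you can inspect or log the whole interleaving deterministically, which is the debugging-friendly property being described.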