pg | 6 years ago
I'm not sure how true that statement is, but my experience so far suggests that it is not only true a lot of the time, but that its truth is part of a more general pattern extending even to writing, engineering, architecture, and design.
As for the question of catching errors at compile time, it may be that there are multiple styles of programming, perhaps suited to different types of applications. But at least some programming is "exploratory programming" where initially it's not defined whether code is correct because you don't even know what you're trying to do yet. You're like an architect sketching possible building designs. Most programming I do seems to be of this type, and I find that what I want most of all is a flexible language in which I can sketch ideas fast. The constraints that make it possible to catch lots of errors at compile time (e.g. having to declare the type of everything) tend to get in the way when doing this.
Lisp turned out to be good for exploratory programming, and in Bel I've tried to stick close to Lisp's roots in this respect. I wasn't even tempted by schemes (no pun intended) for hygienic macros, for example. Better to own the fact that you're generating code in its full, dangerous glory.
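To make the hygiene trade-off concrete, here is a minimal sketch, written in Python as a stand-in for Lisp, of the variable-capture hazard that hygienic macro systems exist to prevent. The `swap_template` "macro" and all names in it are made up for illustration; the generated code is ordinary text, in its full, dangerous glory.

```python
# A naive "macro": expands to code that uses a temp variable named 'tmp'.
def swap_template(a, b):
    return f"tmp = {a}; {a} = {b}; {b} = tmp"

# The expansion works as long as the caller avoids the name 'tmp'...
env = {"x": 1, "y": 2}
exec(swap_template("x", "y"), env)
assert (env["x"], env["y"]) == (2, 1)

# ...but if the caller's own variable is named 'tmp', the expansion
# captures it and the swap silently goes wrong: both end up equal.
env = {"tmp": 99, "x": 1}
exec(swap_template("tmp", "x"), env)
assert (env["tmp"], env["x"]) == (1, 1)  # a correct swap would give (1, 99)
```

A hygienic macro system would rename the expansion's `tmp` behind the scenes so it cannot collide with the caller's; owning the raw expansion means owning this class of bug too.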
More generally, I've tried to stick close to the Lisp custom of doing everything with lists, at least initially, without thinking or even knowing what types of things you're using lists to represent.
jblow | 6 years ago
I agree that there is such a thing as writing a program that is only intended to be kind of close to correct, and that this is actually a very powerful real-world technique when problems get complicated. But the assertion that type annotations hinder this process seems dubious to me. In fact they help me a great deal in this process, because I don't have to think about what I am doing very much, and I will bang into the guardrails if I make a mistake. The fact that the guardrails are there gives me a great deal of confidence, and I can drive much less carefully.
People have expressed to me that functions without declared types are more powerful and more leveragable, but I have never experienced this to be true, and I don’t really understand how it could be true (especially in 2019 when there are lots of static languages with generics).
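The generics point can be sketched in Python's gradual typing (a hedged example; the thread isn't tied to any one language, and `first` is an invented name): one annotated function stays fully polymorphic, while a checker such as mypy can still reject misuse before the program runs.

```python
# A generic function: statically checkable, yet works over any element type.
from typing import TypeVar

T = TypeVar("T")

def first(items: list[T]) -> T:
    """Return the first element; T is inferred at each call site."""
    return items[0]

assert first([3, 1, 2]) == 3      # T = int
assert first(["a", "b"]) == "a"   # T = str
# A call like first(42) would be rejected at check time: int is not a list.
```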
KajMagnus | 6 years ago
> language in which I can sketch ideas fast
Can I ask, what are your programs about?
And sketching ideas? Is it ... maybe building software models of how the world works, then feeding in the right data and simulating the future?
jstimpfle | 6 years ago
I agree with this idea to a degree. However, there are limits to this "relation". Imagine a sophisticated program that compresses program sources. Say it operates at the AST level rather than the byte level, since the former comes a little closer to capturing software complexity, as you mentioned somewhere else. Now, I know that's almost the definition of a LISP program, but maybe we can agree that 1) macros can easily become hard to understand as they grow, and 2) there are many rituals in code that shouldn't be abstracted out of the local flow (i.e. compressed), because they give the necessary context for the programmer's understanding. The programmer would never (I assume) be able to mechanically apply the transformations in their head if there were literally hundreds of these macros, most of them weird, subtle, and/or unintuitive. Think of how gzip, for example, finds many surprising ways to cut out a few bytes by collapsing completely unrelated things that merely share a few characters.
In other words, I think we should abstract things that are intuitively understandable to the programmer. Let's call this property "to have meaning". What carries meaning varies from one programmer to the next, but I'm sure for most it's not "gzip compressions".
One important ingredient in a useful measure of "meaning" is likelihood of change. If two pieces of code that could be folded by a compressor are likely to change and diverge into distinct pieces of code, that is a hint that they carry different meanings, i.e. they are not really the same to the programmer. How do we decide if they are likely to diverge? There is a simple test: "could either of these pieces be rewritten in a way that makes it very distinct from the other, with the program still making sense?". As an aspiring programmer trying to apply the rule of DRY (don't repeat yourself), at some point I noticed that this test is the best way to decide whether two superficially identical pieces of code should be folded.
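That divergence test can be sketched in a few lines of Python (the function names and the coinciding 1.08 factor are invented for illustration): two bodies that are textually identical today but answer to unrelated forces.

```python
# Two constants that happen to coincide, but change for unrelated reasons.
TAX_RATE = 1.08      # business rule; changes when the law does
CALIBRATION = 1.08   # hardware quirk; changes when the sensor does

def price_with_tax(price):
    return price * TAX_RATE

def reading_calibrated(raw):
    return raw * CALIBRATION

# Textually foldable, semantically distinct: either body could be rewritten
# very differently (tiered tax brackets, a nonlinear calibration curve) and
# the program would still make sense, so a compressor should not fold them.
assert price_with_tax(50) == reading_calibrated(50)
```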
I noticed that this approach defines a good compressor that doesn't require large parts of the source to be recompressed as soon as one unimportant detail changes. Folding meaning in this sense, and folding only that, leads to maintainable software.
A little further along this line, we can see that as we strip away meaning as a means of distinction, we can compress programs more. The same can be done with performance concerns instead of "meaning": if you start by ignoring runtime efficiency, you will end up with a shorter program, since its parts have fewer distinctive features and so compress better. But if the compressed form is what the programmer actually wrote down, the program will be almost impossible to optimize after the fact, because large parts essentially have to be uncompressed first.
One last thought that I have about this is that maybe you have a genetic, bottom-up approach to software, and I've taken a top-down standpoint.
UncleEntity | 6 years ago
Isn't that the definition of a compiler?
iikoolpp | 6 years ago
Busting a gigantic nut when I see a blank file