
Designing Software in the Large

96 points | davidfstr | 6 months ago | dafoster.net

39 comments


Warwolt|6 months ago

I found "A Philosophy of Software Design" to be a well-intentioned but somewhat frustrating book to read.

It seemingly develops a theory of software architecture that gets at some reasonable ideas, but does so without any reference _at all_ to the already rich theories for describing and modeling things.

I find software design highly related to scientific theory development and modeling, and related to mathematical theories like model theory, which give precise accounts of what it means to describe something.

Just take the notion of "complexity". Reducing that to _just_ cognitive load seems to be a very poor analysis, when simple/complex ought to deal with the "size" of a structure, not how easy it is to understand.

The result of this poor theoretical grounding is that what the author of A Philosophy of Software Design presents feels very ad-hoc to me, and I feel like the summary presented in this article similarly feels ad-hoc.

legorobot|6 months ago

> Just take the notion of "complexity". Reducing that to _just_ cognitive load seems to be a very poor analysis, when simple/complex ought to deal with the "size" of a structure, not how easy it is to understand.

Preface: I'm likely nitpicking here; the use of "_just_" is enough for me to mostly agree with your take.

Isn't the idea that the bulk of complexity IS in the understanding of how a system works, both how it should work and how it does work? Take the Quake fast inverse square root code, which is simple in "size" but quite complex in how it achieves its outcome. I'd argue it requires comments, tests, and/or clarifications to make sense of what it's actually doing.
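For illustration (not from the book): here's the famous bit-level trick translated into Python. The comments and function name are mine; the original is C from Quake III.

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) using the Quake III bit-level trick."""
    # Reinterpret the float's bits as a 32-bit integer.
    i = struct.unpack('<i', struct.pack('<f', x))[0]
    # The "magic" constant turns that integer into a good first guess.
    i = 0x5f3759df - (i >> 1)
    y = struct.unpack('<f', struct.pack('<i', i))[0]
    # One Newton-Raphson iteration refines the guess.
    return y * (1.5 - 0.5 * x * y * y)
```

A handful of arithmetic lines, yet nearly impossible to understand without the comments: tiny in "size", huge in cognitive load.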

How do we measure that complexity? No idea :) But I like to believe that's why the book takes a philosophical approach to the discussion.

I agree the arguments in the book largely "make sense" to me, but I found it a little hand-wavy: it rarely proves its points with concrete examples. I don't recall there being any metrics or measurements of improvement either, which makes it a philosophical discussion to me and not a scientific exercise.

bccdee|6 months ago

> related to mathematical theories like model theory, which give precise accounts of what it means to describe something

Perhaps too precise? APoSD is about the practical challenges of large groups of people creating and maintaining extensive written descriptions of logic. Mathematical formalisms may be able to capture some aspects of that, but I'm not sure they do so in a way that would lend real insight.

"How can I express a piece of complicated logic in an intuitive and easy-to-understand way" is fundamentally closer to writer's craft than mathematics. I don't think a book like this would ever be mathematically grounded, any more than a book about technical writing would be. Model theory would struggle to explain how to write a clear, legible paragraph.

commandlinefan|6 months ago

I haven't read it myself, but I probably will, because I have a lot of hope for this topic (there must be a better way to do this!).

I worry that it doesn't much matter if it's perfect or mediocre, though, because there's a huge contingent of project managers who mock _any_ efforts to improve code and refuse to even acknowledge that there's any point to doing so - and they're still the ones running the asylum.

sfpotter|6 months ago

The author is describing less a theory and more a framework or system of heuristics based on extensive practical experience. There's no need for rigor if it's practical and useful. I think your desire for grounding in something "scientific" or "mathematical" is maybe missing the forest for the trees a bit. I say this as someone with loads of practical software development experience and loads of math experience: I just don't find that rigor does much to help describe or guide the art of software. I do think Ousterhout's book is invaluable.

andai|6 months ago

That's very interesting. Can you recommend any resources for learning more about this?

Also, have you considered writing on this subject yourself? I get the feeling that your perspective here would be valuable to others.

brabel|6 months ago

I've written code for a couple of decades. The diagrams in this post are absolutely great. If you're just starting out, try to remember what they say and you'll do really well.

debug_forever|6 months ago

The complexity in our team's code bases has only gotten worse with AI-integrated agents. Maybe it's the prompts we're using, but it's an ironic twist that these tools that promise so much productivity today end up dumping more tech debt into our code.

It's funny reading the "key contributors to dependency-complexity" -- Duplication, Exceptions, Inheritance, Temporal Decomposition -- because those qualities seem like the standard for AI-generated code.

Jtsummers|6 months ago

> but it's an ironic twist that these tools that promise so much productivity today end up dumping more tech debt into our code.

Because long-term productivity was never about the generated lines of code. You can increase your system features through expansion, or by a combination of expansion and contraction.

Generating new code without spending time to follow through with the contraction step, or alternatively contracting first as a way of enabling the new expansion, will always make the code more complex and harder to sustain and continue to improve wrt the feature set.

davidfstr|6 months ago

I have to take special effort to tamp down on duplication in AI generated code.

For me it's not uncommon for AI to draft an initial solution in X minutes, which I then spend 3*X minutes refactoring. Here's a specific example for a recent feature I coded: https://www.youtube.com/watch?v=E25R2JgQb5c

stereolambda|6 months ago

The actual hard question is probably making even 10% of such wisdom and good intentions survive when the program is bombarded by contributor patches, or people taking Jira tickets. TFA talks about it in the context of strategy and tactics.

Organizationally enforcing strategy would be the issue. And also that the people most interested in making rules for others in an organization may not be the ones best qualified to program. Automatic tools (linters) by necessity focus on very surface-level, local stuff.

That's how you get the argument for the small teams productivity camp.

01HNNWZ0MV43FF|6 months ago

It would be cool to see a linter, or a new language, that makes good architecture easy and bad architecture hard.

Like making state machines easier than channels. (Rust is sort-of good at state machines compared to C++ but it has one huge issue because of the ownership model, which makes good SMs a little clumsy)

Or making it slightly inconvenient to do I/O buried in the middle of business logic.
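As a sketch of what "state machines made easy" might look like (my own toy example, not an existing linter or language feature), transitions could be pushed into data, where a tool can check them:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    CONNECTING = auto()
    CONNECTED = auto()

# Transitions as data: a tool could verify coverage and reachability,
# which is much harder when the state machine hides in if-statements.
TRANSITIONS = {
    (State.IDLE, "connect"): State.CONNECTING,
    (State.CONNECTING, "ack"): State.CONNECTED,
    (State.CONNECTED, "close"): State.IDLE,
}

def step(state: State, event: str) -> State:
    nxt = TRANSITIONS.get((state, event))
    if nxt is None:
        raise ValueError(f"illegal transition {event!r} from {state}")
    return nxt
```

The point isn't this particular table; it's that a representation like this makes the "good architecture" checkable, where scattered conditionals aren't.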

deterministic|6 months ago

The largest successful software system we have is the internet.

So perhaps we should ask ourselves: What can we learn from the internet architecture?

And no, that does not automatically mean micro-services. The core idea of the internet is to agree on APIs (protocols like HTTP) and leave the rest as implementation details. You can do the same with modules, libraries, classes, files, etc.
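A minimal sketch of that idea at module scale (the Storage interface and its methods are made up for illustration): agree on the "protocol", and keep everything else swappable behind it.

```python
from typing import Optional, Protocol

class Storage(Protocol):
    """The agreed-upon interface; callers depend only on this."""
    def get(self, key: str) -> Optional[str]: ...
    def put(self, key: str, value: str) -> None: ...

class MemoryStorage:
    """One implementation detail; could be replaced by disk, a
    database, or a network service without touching callers."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

    def put(self, key: str, value: str) -> None:
        self._data[key] = value
```

Callers written against Storage never learn which implementation they got, just as an HTTP client never learns what server code handled the request.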

cadamsdotcom|6 months ago

Decent bunch of maintainability ideas.

LLMs are trained on data from humans. That means these “code ergonomics” apply equally to coding with AI. So this advice will continue to be good, and building with it in mind will continue to pay off.

CyberDildonics|6 months ago

Good on him for designing software in the large on the regular and on the daily. I saw him give a talk once in the round. Without him I would be in a bad way.

rwoerz|6 months ago

{ "permissions": { "allow": [ "Human(*)" ] } }

?

ontigola|6 months ago

Books by programming theorists often miss a crucial distinction when they define "complexity" as "anything related to the structure of a software system that makes it hard to understand and modify the system": the complexity of a supermarket is not the same as that of a telecom company. The primary factor in complexity is the functionality and requirements to implement, followed by non-functional requirements, the restrictions of the IT environment, and only then the structure of the software itself. At this point, time becomes a crucial factor. You may end up with a creature that, after passing user testing and certification, has transformed into an unrecognizable monster despite your initial best intentions regarding length and clarity.

sfn42|6 months ago

When code gets out of hand and outgrows its initial design, I tend to refactor it to a new design that better accommodates the requirements.

The problem I usually encounter is that whoever was there before me didn't do this. They just made a solution, then added quick fixes until it became a monster, and then left. Now I first have to figure out what the code actually does, try to distinguish what it needs to do from what it happens to do, usually write tests (because the people who write the kind of code I end up fixing usually don't write good tests, if any at all), and then refactor the code into something more reasonable.
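That "write tests first" step is essentially characterization testing: pin down what the code currently does before touching it. A toy sketch, where legacy_price is a hypothetical stand-in for the inherited mess:

```python
def legacy_price(qty: int, vip: bool) -> int:
    # Hypothetical stand-in for messy inherited pricing logic.
    total = qty * 10
    if vip and qty > 3:
        total -= total // 10  # undocumented VIP bulk discount
    return total

# Characterization tests assert what the code DOES,
# not what anyone thinks it should do.
def test_pins_current_behavior():
    assert legacy_price(1, False) == 10
    assert legacy_price(5, True) == 45
```

With the current behavior pinned, the refactor can proceed without silently changing what the system does, even when the requirements were never written down.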

This can easily take me days or weeks, whereas the person responsible could likely have cleaned it up much quicker, because they wouldn't first have to figure out requirements by reading messy code. Then again, if they were competent and cared about the quality of their work, they wouldn't have left me that mess in the first place, so I tend to just write it off as incompetence, fix it, and move on.