top | item 29595210

dognotdog | 4 years ago

One can verify and sign off on computations that approximate the physics or chemistry occurring in a structure or machine, because a well-established chain of procedures exists for moving from crude formulaic approximations to micro- or, if necessary, nano-scale simulations of electrical, mechanical, and chemical processes, and we know what to look for.

I don't think the same is true for software "engineering": every form of process, from agile methods down to code checking, can apparently be subverted and cargo-culted. Certainly there is room to remedy some shortcomings, but SWE is definitely the engineering discipline least grounded in physical fact.

The physics behind simulating the buckling of a structure is always the same; we can merely choose more or less crude approximations of it. SWE in general seems far more diverse: I can implement that simulation in assembly or in some scripting language, attach various bits and pieces to it to manage users and data, and deploy it across the cloud if need be. But there isn't a singular, time-invariant optimal path to achieving that, and what is true today may not be true tomorrow. One can work from basic principles, like the Agile Manifesto, but how can you quantify, or even certify, this shifting landscape?
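(To make the "crude formulaic approximation" end of that chain concrete: here is a minimal sketch of the standard Euler critical buckling load for an ideal pinned column. The material and geometry numbers are purely illustrative, not from the discussion.)

```python
import math

def euler_critical_load(E, I, L, K=1.0):
    """Euler's critical buckling load P_cr = pi^2 * E * I / (K * L)^2.

    E: Young's modulus (Pa), I: second moment of area (m^4),
    L: unsupported length (m), K: effective length factor
    (1.0 for pinned-pinned end conditions).
    """
    return math.pi ** 2 * E * I / (K * L) ** 2

# Steel column with illustrative numbers: roughly 1.75 MN.
P_cr = euler_critical_load(E=200e9, I=8.0e-6, L=3.0)
```

The same formula could of course be written in assembly, a spreadsheet, or a scripting language; the physics does not change, only the implementation.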

grandchild | 4 years ago

Having studied both mechanical and software engineering at uni, I feel that you _can_ draw the parallel between the two. It's just that in mechanical engineering we've converged a lot more over time, out of convention and the need for accountability much more than out of necessity. For example, for mechanical calculations we have converged on mostly the same algebraic notation (never mind minor differences here and there, such as in vector notation). Having an obscene number of different notations, some so different that they are largely unintelligible to half the engineers out there, would be unthinkable in ME, but it is the norm in SE.

The _physics_ of a buckling structure may always be the same. But already the modelling techniques are far from a settled consensus: do you solve it analytically? Do you use FEM? BEM? Then there are a bunch of simulation techniques, e.g. for numerical integration, that you could choose between, much as you could use functional or imperative programming or OOP or whatever else.
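As a toy illustration of that analytical-versus-numerical choice (my example, not part of the original comment): the tip deflection of a cantilever under an end load has the closed form δ = PL³/3EI, but it can equally be obtained by numerically integrating the curvature M(x)/EI twice, here with a simple trapezoid rule.

```python
def tip_deflection_analytic(P, E, I, L):
    """Closed-form tip deflection of an end-loaded cantilever."""
    return P * L ** 3 / (3 * E * I)

def tip_deflection_numeric(P, E, I, L, n=1000):
    """Same quantity via trapezoid-rule double integration of curvature."""
    h = L / n
    xs = [i * h for i in range(n + 1)]
    # Bending moment M(x) = P * (L - x); curvature = M / (E * I).
    curvature = [P * (L - x) / (E * I) for x in xs]
    # First integration: slope theta(x).
    theta = [0.0]
    for i in range(n):
        theta.append(theta[-1] + 0.5 * (curvature[i] + curvature[i + 1]) * h)
    # Second integration: deflection at the free end.
    delta = 0.0
    for i in range(n):
        delta += 0.5 * (theta[i] + theta[i + 1]) * h
    return delta
```

Both routes give the same number (to integration error), yet the code, the tooling, and the failure modes of each route differ, which is exactly the kind of diversity being discussed.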

So if the _software_ branch were to behave more like the engineering discipline in general, we'd have a _much_ tighter space of languages that would be at all acceptable for any work deemed critical, such as medical, administrative, or aeronautical software.

mirker | 4 years ago

I agree you can make software rigorous, as in ME. The hard part is that debugging or proving properties about a program is much more difficult than writing the program, and those costs are currently hard to amortize across multiple projects. Real-time systems have some of these facets (e.g., spacecraft software).

For example, a memory allocator can be studied in the usual algorithmic sense, or in terms of how it affects the stability of the system under randomized load. Can you prove the system remains stable? Yeah. Is it worth it when you can just reboot machines and add some heuristics? No.
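A sketch of what such a randomized-load study might look like in miniature (the allocator and its invariant here are toy inventions of mine, not any real system): hammer a fixed-block allocator with random alloc/free calls and assert a conservation invariant at every step. This is testing an invariant, not proving it, which is precisely the cheaper heuristic trade-off.

```python
import random

class FixedBlockAllocator:
    """Toy fixed-size-block allocator backed by a free list."""
    def __init__(self, n_blocks):
        self.free = list(range(n_blocks))
        self.used = set()

    def alloc(self):
        if not self.free:
            return None  # out of blocks
        b = self.free.pop()
        self.used.add(b)
        return b

    def free_block(self, b):
        assert b in self.used, "double free or invalid free"
        self.used.remove(b)
        self.free.append(b)

def randomized_load_test(n_blocks=64, steps=10_000, seed=0):
    """Random alloc/free workload; checks the invariant at every step."""
    rng = random.Random(seed)
    a = FixedBlockAllocator(n_blocks)
    held = []
    for _ in range(steps):
        if held and rng.random() < 0.5:
            a.free_block(held.pop(rng.randrange(len(held))))
        else:
            b = a.alloc()
            if b is not None:
                held.append(b)
        # Invariant: every block is in exactly one of free or used.
        assert len(a.free) + len(a.used) == n_blocks
        assert not (set(a.free) & a.used)
    return True
```

Turning these runtime assertions into a machine-checked proof that they hold for _all_ workloads is the expensive step the comment is pointing at.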

Currently, the big areas getting any attention for verification of functionality are embedded applications and OS kernels, and even there the depth of verification is limited to common bug categories.