top | item 34575104


andyonthewings | 3 years ago

It's a common argument that computers are getting powerful enough that we don't need to care much about performance and/or efficiency in terms of CPU/memory/storage, etc. In some limited cases the argument is valid, but most of the time it's not.

First of all, while desktops and mobile phones may be more powerful now, we're also getting more and more lower-spec devices, like smartwatches. Even when smartwatches are powerful enough one day, there will eventually be even smaller computers: smart contact lenses, nanorobots that run through blood vessels, etc.

Secondly, efficiency is still desirable for powerful computers. A small percentage of cost reduction can be big money saved for large corps. A small percentage of energy saved across all (or even just a portion of) the computers in the world can be a big win for the environment.

Lastly, we also run VMs and containers everywhere. Notice how we've already come up with all kinds of ways to minimise VM and container size and footprint, in order to run more of them, start them faster, and transfer images more quickly.


wtetzner|3 years ago

It depends on what kind of efficiency you care about more. Static linking can allow optimizations across library boundaries.

josephg|3 years ago

Yep. For hot call sites, these optimizations & inlining opportunities make a massive difference to performance. Static linking also allows for faster application startup time. (Though I don't have an intuition for exactly how slow dynamic linking is).

The only argument for dynamic linking being more efficient is that each dynamic library can be shared between all programs that use it. But that's not a net win in all cases. When you dynamically link a library, the entire library is loaded into RAM. When you link statically, dead code elimination means that only the library code you actually use needs to be loaded.

But honestly, none of these arguments are strong. Dyld is fast enough on modern computers that we don't notice it. And RAM is cheap enough these days that sharing libraries between applications for efficiency feels a bit pointless.

The real arguments are these:

- Dynamic linking puts more power in the hands of the distribution (eg Debian) to change the library a program depends on. This is used for security updates (eg OpenSSL) or for UI changes on Apple platforms. Linking dynamically is also quicker than linking statically, so compile times improve for large programs.

- Static libraries put power in the hands of the application developer to control exactly how their software runs. VMs and Docker are essentially wildly complicated ways to force static linking, and the fact that they're so popular is evidence of how important this sort of control is to software engineers. (And of course, statically linked binaries are usually simpler to deploy, because they make fewer assumptions about their runtime environment.)

Kranar|3 years ago

If what you care about is efficiency, then stick to static linking. Dynamic linking inhibits so many optimizations.

astrange|3 years ago

This is not the answer the performance engineers at Apple will give you, otherwise they would've done that.

pjmlp|3 years ago

Better not to do plugins with static linking, given the waste of hardware resources on communication and process management.

Both approaches have pluses and minuses regarding efficiency.

mike_hearn|3 years ago

You can have both. JIT-compiled languages are dynamically linked, yet optimization is still done across module boundaries.

acomjean|3 years ago

So are you saying use Rust, which is more efficient at the expense of maybe using more memory? Or Swift, which may save some memory by being dynamically linked but is perhaps a little slower?

Honestly, a ton of code is utility code that isn't run that often, and nobody wants to spend the time or expense to rewrite that Perl/Python/bash script.

Efficiency matters much more for some code than for other code.

saghm|3 years ago

> First of all, while desktops and mobile phones may be more powerful now, we're also getting more and more lower-spec devices, like smartwatches. Even when smartwatches are powerful enough one day, there will eventually be even smaller computers: smart contact lenses, nanorobots that run through blood vessels, etc.

So my desktop/laptop distro should dynamically link everything by default just so it's slightly less work to port to a smartwatch? That sounds like a reasonable argument for making dynamic linking possible, but it doesn't seem compelling as the default for the entire world.