top | item 29464097

vog | 4 years ago

It took a while for me to fully appreciate the OpenSSH approach to portability:

They develop OpenSSH primarily for OpenBSD itself, using all of OpenBSD's facilities, including non-portable ones, its crypto, and whatnot.

Then, a separate team manages the "portable" version of OpenSSH, which adds stubs and does everything else needed to make OpenSSH compile on as many operating systems as possible.

I'm aware that OpenSSH is not the only project using that approach to portability. Nevertheless, I think it is fair to say this is an unusual approach used only on a minority of projects.

I was always puzzled about why they do this. It always struck me as "just" a side effect of project politics and historically grown project structures.

But over the years I started to see some interesting benefits of that approach as well. I'm still not convinced by this model, but I have to admit that, more generally speaking, the OpenBSD project does many things against the mainstream, and quite often they turn out to be right.

junon | 4 years ago

This is how many large OSS libraries that aim for portability operate. I don't see how this is novel.

mrpippy | 4 years ago

Can you name one? OpenBSD-affiliated projects like OpenSSH are the only ones I know of that do fully separate releases where one is just for one OS, and the other is the “portable” one

pjmlp | 4 years ago

Not really, that is how the games industry has worked since forever, and it's one of the reasons why AAA game studios couldn't care less about 3D API portability.

The game is developed with one specific platform in mind, and if it actually gets a publishing deal, the publisher onboards studios whose main skill is porting games to platform XYZ.

pwdisswordfish9 | 4 years ago

You mention being puzzled. Could you elaborate on why? What do you see as the natural way of doing it instead?

vog | 4 years ago

To me the "natural way" has always been to write portable code in the first place. From time to time, you'll find that parts of it are not portable, so you fix them, and along the way you learn something new about portability and apply it to future improvements of your code as well. Over time, you'll find fewer and fewer portability issues as you get better and better at writing portable code in the first place.

I'm not saying that this is the best way to do it, but to me this was always the obvious thing to do. As a somewhat extreme example, I'd never write a graphical user interface in the pure Win32 API and expect it to be made even remotely portable later by some additional layer. I'd rather use Qt (or GTK, or Dear ImGui, or whatever) for native UIs, even for programs that are (for now) meant to run only on Windows.

To me personally, this has the additional benefit that I can do most of the development and testing in a non-hostile environment (e.g. Debian), then run a cross compiler (e.g. via MXE) and only do the final testing on Windows (well, usually first under Wine, then in some Windows VM); at that last stage, surprises are extremely rare.