top | item 29478675


vog | 4 years ago

To me the "natural way" has always been to write portable code in the first place. From time to time, you'll find that parts of it are not portable, so you fix it, and along the way you learn something new about portability and apply it to future improvements on your code as well. Over time, you'll find fewer and fewer portability issues as you get better and better at writing portable code in the first place.

I'm not saying that this is the best way to do this, but to me this was always the obvious thing to do. As a somewhat extreme example, I'd never write a graphical user interface in pure Win32 API and expect to make it even remotely portable later via some additional layer. I'd rather use Qt (or GTK, or Dear ImGui, or whatever) for native UIs even for programs that are (for now) meant to be run only on Windows.

To me personally, this has the additional benefit that I can do most of the development and testing in a non-hostile environment (e.g. Debian), then run a cross compiler (e.g. via MXE) and only do the final testing on Windows (well, usually first Wine, then some Windows VM), but at that last stage surprises are extremely rare.



lazide | 4 years ago

Be aware - that is an extreme minority of projects in the real world.

Historically, most of the time the development team knows only one platform-specific language or way of doing things, and doesn't have the background or experience to even recognize that what they are doing in a given place isn't portable.

Languages and cross-platform toolkits have developed a lot, but I would still question whether most development folks would even recognize that something like an endianness issue is a potential problem on some random project.

If someone is doing HTML/JS/web dev, they'd never need to worry, though, I guess (barring something really weird).