IME, there's one big thing that often keeps my programs from being unaffected by byte order: wanting to quickly splat data structures into and out of files, pipes, and sockets, without having to encode or decode each element one-by-one. The only real way to make this endian-independent is to have byte-swapping accessors for everything when it's ultimately produced or consumed, but adding all the code for that is very tedious in most languages. One can argue that handling endianness is the responsible thing to do, but it just doesn't seem worthwhile when I practically know that no one will ever run my code on a big-endian processor.
This is functionally identical to the author's example - the file has a defined byte order and you have a choice of doing byte swapping or just explicitly writing out the bytes in the defined order. The author is saying your goal of avoiding "having to encode or decode each element one-by-one" is a misguided optimization.
I think the article's author would say that loading data "without having to encode or decode each element" is premature optimization and more likely to have bugs. I tend to agree.
If you are using C/C++ for any new app, there is a possibility you are writing code that has a performance requirement.
- mmap/io_uring/drivers and other "zero-copy" code implementations require consideration of byte order.
- filesystems, databases, and network applications can be high-throughput and will certainly benefit from being zero-copy (with benefits anywhere from +1% to +2000% in performance).
This is absolutely not "premature optimization." If you're a C/C++ engineer, you should know off the top of your head how many cycles syscalls & memcpys cost. (Spoiler: They're slow.) You should evaluate your performance requirements and decide if you need to eliminate that overhead. For certain applications, if you do not meet the performance requirements, you cannot ship.
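To make that concrete, here's a minimal C sketch of endian-stable zero-copy access; the record layout and the names (le32_at, record_id, record_len) are made up. You view fields directly in a mapped buffer and decode each one in place:

    #include <stddef.h>
    #include <stdint.h>

    /* Decode a little-endian u32 in place. On little-endian targets,
       compilers typically collapse the shifts into a single plain load,
       so there's no byte-order branch and no extra copy. */
    static inline uint32_t le32_at(const uint8_t *p) {
        return (uint32_t)p[0]
             | (uint32_t)p[1] << 8
             | (uint32_t)p[2] << 16
             | (uint32_t)p[3] << 24;
    }

    /* Hypothetical record: little-endian u32 id at offset 0, u32 len at 4. */
    uint32_t record_id(const uint8_t *mapped, size_t rec_off)  { return le32_at(mapped + rec_off); }
    uint32_t record_len(const uint8_t *mapped, size_t rec_off) { return le32_at(mapped + rec_off + 4); }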
Once upon a time I became the de facto admin for a VxWorks box, because my code was to be the bottleneck on a task with a minimum throughput defined in the requirements and we weren't hitting the numbers. I ended up having to KVM into it and run benchmarks in vivo, which meant learning a command line I'd never seen before.
People were understandably concerned that we had fucked up in the feasibility phase of the project. Lots of people get themselves in trouble this way, and if we didn't finish our work on time during maintenance windows, this was a 9-figure piece of hardware sitting idle while our app picked its nose crunching data.
But I was on my longest hot streak of accurate perf estimates in my career, and this one was not going to be my Icarus moment. It ended up being tweaks needed from the compiler writer and from Wind River (a DMA problem). I had to spend a lot of social capital on all of this, especially the Wind River conference call, which took ten minutes for them to come around to my suggestion for a fix that they shipped us in a week - after months and months of begging for that call.
A memcpy should not be slow. It should be nearly as fast as generic memory copying can be. Most of the time you shouldn't even hit the actual function, but instead a bit of code generated by the compiler that does exactly the copy you need.
My uses of mmap have only ever been for memoization, where I didn't care about byte order and instead just assumed the files wouldn't be portable between any two computers.
If you are going zero copy, you either need to give up on any kind of portability, or delve deep into compiler flags to standardize struct layout.
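A sketch of the layout-pinning half in C (the record itself is hypothetical; GCC/Clang syntax shown, MSVC would use #pragma pack). Byte order still has to be defined by the format on top of this:

    #include <stdint.h>

    /* Remove padding so the in-memory layout matches the on-disk one
       byte for byte; fixed-width types remove size ambiguity. */
    struct __attribute__((packed)) record {
        uint8_t  kind;
        uint32_t id;      /* format-defined: little-endian */
        uint16_t flags;   /* format-defined: little-endian */
        uint8_t  version;
    };

    /* Without the packing this would be 12 bytes on most ABIs.
       Catch layout drift at compile time, before it corrupts data. */
    _Static_assert(sizeof(struct record) == 8, "layout must match the on-disk format");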
Yeah, you deal with order when marshaling stuff on the wire. I haven't dealt with it much for years, but when I was doing embedded software it used to be in my face a lot.
Unless you're dealing with binary data, in which case byte order matters very much, and if you forget to convert it you're causing a world of pain for someone.
He even has an example where he just pushes the problem off to someone else: "if the people at Adobe wrote proper code to encode and decode their files". Yeah, hope they weren't ignoring byte order issues.
The article's point is that the machine's byte order doesn't matter. The byte order of a data stream of course matters, but they show a way to load a binary data stream without worrying about the machine's byte order.
The key insight is that people shouldn't try to optimize the case where the data stream's byte order happens to match the machine's byte order. That's both premature optimization and a recipe for bugs. Just don't worry about that case.
Load binary data one byte at a time and use shifts and ORs to compose the larger unit based on the data's byte order. That's 100% portable without any #ifdefs for the machine's byte order.
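In C, that idiom looks something like this (a sketch for a 4-byte field; the function names are mine):

    #include <stdint.h>

    /* Compose a 32-bit value from a little-endian byte stream.
       Identical results on little- and big-endian hosts; no #ifdef. */
    uint32_t load_le32(const unsigned char *b) {
        return (uint32_t)b[0]
             | (uint32_t)b[1] << 8
             | (uint32_t)b[2] << 16
             | (uint32_t)b[3] << 24;
    }

    /* For a big-endian stream, only the byte positions change. */
    uint32_t load_be32(const unsigned char *b) {
        return (uint32_t)b[3]
             | (uint32_t)b[2] << 8
             | (uint32_t)b[1] << 16
             | (uint32_t)b[0] << 24;
    }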
Really, except for networking (including, say, Bluetooth), nobody is big endian anymore. So how about just not leaking that thing from the network layer.
And do not define any data format to be big endian anymore. Define it as little endian (do not leave it undefined) and everyone will be happy.
Given a reader (file, network, buffers can all be turned into readers), you can call readInt. It takes the type you want, and the endianness of the encoding. It's easy to write, self-documents, and it's highly efficient.
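A sketch of that shape in C, using FILE* as the reader (a socket via fdopen, a buffer via fmemopen); read_u32 and the enum are made-up names:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    enum byte_order { ORDER_LE, ORDER_BE };

    /* Read 4 bytes and compose them per the encoding's declared byte
       order - the machine's byte order never enters the picture. */
    bool read_u32(FILE *in, enum byte_order order, uint32_t *out) {
        unsigned char b[4];
        if (fread(b, 1, sizeof b, in) != sizeof b)
            return false;
        if (order == ORDER_LE)
            *out = (uint32_t)b[0] | (uint32_t)b[1] << 8
                 | (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
        else
            *out = (uint32_t)b[3] | (uint32_t)b[2] << 8
                 | (uint32_t)b[1] << 16 | (uint32_t)b[0] << 24;
        return true;
    }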
If we're talking about a single int, the way you do it doesn't matter, just wrap it up in a readInt function.
But if we're talking about a struct or an array, if you're byte-order aware you can do things like memcpy the whole thing around that you couldn't do by assembling it out of individual readInt calls.
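For illustration, a sketch of that bulk path, assuming you've already established that the format's byte order matches the host's (this is exactly the case the article says not to special-case):

    #include <stdint.h>
    #include <string.h>

    /* Move n little-endian u32s into host u32s in one copy instead of
       n composed reads. Only valid once you've verified the host is
       little-endian (or swapped in a separate pass); that check is the
       portability cost of this optimization. */
    void load_u32_array_native(uint32_t *dst, const unsigned char *src, size_t n) {
        memcpy(dst, src, n * sizeof *dst);
    }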
As a games coder I was glad when the xbox 360 / ps3 era came to an end; getting big endian clients talking to little endian servers was an endless source of bugs.
The other case where it matters is SIMD instructions where you're serializing or deserializing multiple fields at once, but the SIMD operations are usually architecture specific to begin with and so if you shuffle bytes into and out of the native packed formats it will be specific to the endianness of the native packed format, and then you can forget about byte order outside of those shuffle transformations.
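On x86, for instance, such a shuffle can be a per-lane byte reversal with SSSE3 (a sketch; bswap32x4 is a made-up name, compile with -mssse3):

    #include <tmmintrin.h>  /* SSSE3 */

    /* Swap the bytes of four 32-bit lanes at once, e.g. big-endian
       data on a little-endian host. All byte-order knowledge lives in
       this one transformation; the rest of the pipeline never sees it. */
    __m128i bswap32x4(__m128i v) {
        const __m128i rev = _mm_set_epi8(12, 13, 14, 15,  8,  9, 10, 11,
                                          4,  5,  6,  7,  0,  1,  2,  3);
        return _mm_shuffle_epi8(v, rev);
    }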
What he said: if you read bytes in some defined byte order, you compose them into values yourself correctly - no byte swapping, just reading byte for byte and converting the bytes to the number value you need. The architecture's byte order stays implicit as long as you use ordinary arithmetic (shifts and ORs) to do the conversion.
Rust, for example, has from_be_bytes(), from_le_bytes() and from_ne_bytes() methods on the number primitives u16, i16, u32, and so on. They all take a byte array of the correct length and interpret it as big, little, or native endian, converting it to the number.
The first two methods work fine on all architectures, and that's what this article is about.
The third method, however, is architecture-dependent and should not be used for network data, because it would behave differently on different machines, which is exactly what you don't want. In fact, let me cite this part from the documentation. It's very polite but true.
> As the target platform’s native endianness is used, portable code likely wants to use from_be_bytes or from_le_bytes, as appropriate instead.
I don't like these ambiguous titles. From the title I thought I was going to read that byte order doesn't matter, when in fact the title should be "a computer's byte order is irrelevant to high-level languages". At least state the fallacy in unambiguous terms in the first sentence. In any case, it was an interesting read.
> If you wrote it on a PC and tried to read it on a Mac, though, it wouldn't work unless back on the PC you checked a button that said you wanted the file to be readable on a Mac. (Why wouldn't you? Seriously, why wouldn't you?)
As a non-SWE, whenever I see checkboxes to enable options that maximize compatibility, I often assume there’s an implicit trade-off, so if it isn’t checked by default, I don’t enable such things unless strictly necessary. I don’t have any solid reason for this, it’s just my intuition. After all, if there were no good reasons not to enable Mac compatibility, why wouldn’t it be the default?
Be aware that if you actually want to do as the article prescribes, don't just copy and paste - you shouldn't take anything at face value in C: https://news.ycombinator.com/item?id=31718292.
The byte order matters in all cases where there is I/O, be it files, network streams, inter-chip communication, and so on. For data that stays on the same processor, or for files that are only accessed by processors of the same endianness, there really is no issue, even when doing bit manipulation.
No, same deal. The article argues that you should write portable code based on the ordered bytes in an external format, as that's guaranteed to be a machine-independent thing (i.e. it's stored on disk in exactly one way). The same is true for image files, 2-byte wchar files, zip files, yada yada.
It's true as far as it goes, but (1) it leans very heavily on the compiler understanding what you're doing and "un-portabilifying" your code when the native byte order matches the file format and (2) it presumes you're working with pickled "file" formats you "stream" in via bytes and not e.g. on memory mapped regions (e.g. network packets!) that want naturally to be inspected/modified in place.
It's fine advice though for the 90% of use cases. The author is correct that people tend to tie themselves into knots needlessly over this stuff.
pmarreck|1 year ago
if it's little endian (on the wire), the process would be like:
and in big endian (again, on the wire, architecture endianness irrelevant) it would be the same thing with the indices reversed, where "value" is the 4 bytes read in off the wire?
paulddraper|1 year ago
Uh...
Compared to doing nothing, yes it's "slow."
chasil|1 year ago
"htonl, htons, ntohl, ntohs - convert values between host and network byte order"
The cheapest big-endian modern device is a Raspberry Pi running a NetBSD "eb" release, for those who want to test their code.
https://wiki.netbsd.org/ports/evbarm/
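For reference, the classic use of those functions looks something like this (helper names are mine):

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>  /* htonl, ntohl */

    /* Convert exactly at the I/O boundary, in both directions. */
    void put_len_prefix(unsigned char *buf, uint32_t len) {
        uint32_t wire = htonl(len);        /* host -> network (big-endian) */
        memcpy(buf, &wire, sizeof wire);
    }

    uint32_t get_len_prefix(const unsigned char *buf) {
        uint32_t wire;
        memcpy(&wire, buf, sizeof wire);
        return ntohl(wire);                /* network -> host */
    }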
genpfault|1 year ago
Original thread w/104 comments:
https://news.ycombinator.com/item?id=3796378
butterisgood|1 year ago
So it's not even all networking... and "network byte order" will mess you up.
ddingus|1 year ago
Two areas I find it does matter: assembly language, where bytes are parsed or sorted or transformed in some way by code that writes words; and binary file representations written on a little-endian machine and read by a big-endian machine.
wmf|1 year ago
Also, a lot of comments in this thread have nothing to do with the article and appear to be responses to some invisible strawman.