netbsdusers|3 months ago
Windows only targets little-endian systems, which makes life easier (and in any case Microsoft trusts MSVC to do the right thing), so Windows drivers make heavy use of bitfields (just look at the driver samples on Microsoft's GitHub page.)
Linux is a little afraid to rely on GCC/Clang doing the right thing, and in any case bitfields are underpowered for a system that targets multiple endiannesses. So Linux uses systems of macros where Windows C code would use bitfields. The usual pattern is a set of macros for shifting and masking. This is considerably uglier and easier to make a mess of. It would be a real quality-of-life improvement if this were not so.
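To make the contrast concrete, here is a minimal sketch of the shift-and-mask pattern. The register layout and the REG_MODE_* names are invented for illustration; real Linux drivers define similar per-device macros (and newer code often builds them with GENMASK/FIELD_GET from linux/bitfield.h):

    /* Hypothetical register with a 4-bit "mode" field at bits 4..7.
       A bitfield version would just be: struct { uint32_t pad:4, mode:4; ... } */
    #include <assert.h>
    #include <stdint.h>

    #define REG_MODE_SHIFT 4
    #define REG_MODE_MASK  (0xFu << REG_MODE_SHIFT)

    static inline uint32_t reg_set_mode(uint32_t reg, uint32_t mode)
    {
        /* clear the field, then OR in the new value, keeping other bits */
        return (reg & ~REG_MODE_MASK) | ((mode << REG_MODE_SHIFT) & REG_MODE_MASK);
    }

    static inline uint32_t reg_get_mode(uint32_t reg)
    {
        return (reg & REG_MODE_MASK) >> REG_MODE_SHIFT;
    }

    int main(void)
    {
        uint32_t reg = 0xABCD1203u;
        reg = reg_set_mode(reg, 0x7);
        assert(reg_get_mode(reg) == 0x7);
        /* bits outside the field are untouched */
        assert((reg & ~REG_MODE_MASK) == (0xABCD1203u & ~REG_MODE_MASK));
        return 0;
    }

Every field access needs its own shift/mask pair, and getting a mask or shift constant wrong fails silently, which is exactly the mess the comment describes.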
You can also look at Managarm (which benefits from C++ here) for another approach to making this less fraught: https://github.com/managarm/managarm/blob/a698f585e14c0183df...
thyristan|3 months ago
char * input = receive_data();
struct deserialized * decoded = (struct deserialized *)input; // zero cost deserialization
Of course this can only work if an implementation tells you exactly what the memory layout of 'struct deserialized' and all the data types in it are.
Btw, ordering is somewhat more defined than packing: the usual forward/reverse/little/big-endian shenanigans are allowed within each field, but the relative order of the fields themselves is always preserved by the C standard.
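For comparison, here is a sketch of the portable alternative to the cast: decoding each field from fixed wire offsets with explicit shifts. The struct, offsets, and little-endian wire format are assumptions for the example, not from the comment above:

    /* Explicit decode: works regardless of host endianness, padding,
       and alignment, at the cost of not being "zero cost". */
    #include <assert.h>
    #include <stdint.h>

    struct deserialized {
        uint16_t tag;
        uint32_t value;
    };

    static struct deserialized decode(const unsigned char *in)
    {
        struct deserialized d;
        /* wire format assumed little-endian: tag at offset 0, value at 2 */
        d.tag   = (uint16_t)(in[0] | (in[1] << 8));
        d.value = (uint32_t)in[2] | ((uint32_t)in[3] << 8)
                | ((uint32_t)in[4] << 16) | ((uint32_t)in[5] << 24);
        return d;
    }

    int main(void)
    {
        const unsigned char wire[6] = { 0x34, 0x12, 0x78, 0x56, 0x34, 0x12 };
        struct deserialized d = decode(wire);
        assert(d.tag == 0x1234);
        assert(d.value == 0x12345678);
        return 0;
    }

The cast version additionally requires that the buffer be suitably aligned and that the access not violate strict aliasing, which is why implementations that promise an exact struct layout usually pair it with packed attributes and aligned allocation.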
mort96|3 months ago
(Mini rant: CPU people seem to think that you can avoid endianness issues by just supporting both little and big endian, not realizing the mess they're creating higher up the stack. The OS's ABI needs to be either big endian or little endian. Switchable endianness at runtime solves nothing and causes a horrendous mess.)