I was surprised to see `fmt/format.h` on that list, but I do have to admit that the objections seem reasonable. Perhaps because he(?) mentioned wanting to use -O0. Template code is almost useless without optimization. If -O0 is needed then I am surprised that all of the STL doesn't get pitched.
Ok, I was also surprised to see co-routines on the nice list, but I don't have direct experience there. I normally see complaints about them. I would like them to be good because some code is easier to express that way.
> I was surprised to see `fmt/format.h` on that list, but I do have to admit that the objections seem reasonable
The author talks about the code bloat, because of "an API that encourages custom formatter specification to live in a template". But at the end he mentions the standard solution to this problem:
> A preferable interface (I use, but also others AFAIK) is to check the type in a template (no choice there), and dispatch the formatting routine to somewhere that lives in a single translation unit.
So what prevents you from doing this with <format>? As I understand, the implementations of parse() and format() of std::formatter don't depend on the template parameters and can delegate to non-template functions residing in one CPP file. You can also provide additional wformat_parse_context/wformat_context overloads if you need wchar_t support.
{fmt} doesn't encourage "custom formatter specification to live in a template". On the contrary, if you look at the docs in https://fmt.dev/latest/api.html#formatting-user-defined-type..., none of the examples is parameterized. One even demonstrates how to define your formatting code in a source file. And if your formatters are so big that they meaningfully impact build speed you are doing something wrong. fmt/core.h is heavily optimized for build speed so you can just use it as a type-safe replacement for *printf. That said, implementations of std::format (especially Microsoft's) may not be as optimized for build speed yet. This will likely improve now that the ABI can be stabilized.
it's partly because of the engine code. there's even bigger stuff, especially if it's a company with any legacy codebase that's 10-20 years old or whatever (e.g. EA / Frostbite.) one i worked on took hours to compile the first time on a machine with 128gb of ram and a threadripper. the onboarding doc suggests getting some coffee at that point haha
a big part of working on them as a generalist ends up being the ability to know how to even navigate something like that (especially since they're often haphazardly documented)
(part of it is that most of the games "fork" the engine rather than using it as a standalone thing)
it's probably not everyone on the team building that whole thing each time, but yea. hundreds of solutions and millions of LOC isn't unusual
*i just did a quick check with unreal's source, it's ~20 million LoC (assuming I didn't mess up the filtering somehow)
I'm a junior in uni, and I hate it when I say "Yeah we learned this technique in the C class, but it's UB in C++ so please rewrite that" while reviewing friends' code that does type-punning with unions.
So I'm also very happy with the 'std::bit_cast' in general.
BTW how about std::is_constant_evaluated()?
I assumed it would help folks who do heavy physics simulations, but it doesn't seem to be listed in the article.
TBF, I have yet to see a C++ compiler where the union type punning trick doesn't work; there would be a lot of broken code if real-world compilers changed the current behaviour, no matter what the standard says.
Of course now that std::bit_cast exists it's the safe thing to do (but then there's still C code that's compiled in C++ mode which was even recommended by Microsoft because the Visual Studio team couldn't be bothered to keep their C compiler in shape until a little while ago).
I hate this about C++! In C you can initialize them in any order, and this allowed us to write nbdkit plugins in a very natural way, where the order is not related to the order the fields appear in the struct (that has to be maintained for ABI reasons), and not all fields need to appear (the others are initialized with 0/NULL). For C++ we have to do this mess: https://bugzilla.redhat.com/show_bug.cgi?id=1418328#c3 Anyway my question is .. why is this, C++ people?
There's also no chaining of designated initializers, and no array indexing ... all those limitations taken together, and the C++ designated initialization feature is pretty much useless except for the most trivial structs - while in C99, designated initialization really shines with complex, nested structs.
The funny thing is that none of those limitations would be required. Clang had supported full C99 designated init in C++ mode just fine for many years before C++20 appeared.
C++ isn’t C and has different structure semantics. Members are initialized in the order defined, which means you can write
    struct foo {
        int a = 0;
        int b = a + 1;
    };
If the compiler just did the initialization in the order of declaration, regardless of the order in the initialization list, this would not do what you expect:
    struct obj {
        int a;
        int b;
    };

    int ival = 0;
    auto o = obj{ .b = ++ival, .a = ival };
o.a would not equal o.b.
I would like to have the initialization syntax of C because then one could reorder elements (say for packing reasons) and the designated initialization would “just work”…except it wouldn’t.
C++ designated initialization does buy you two things: 1- documentation, but more importantly 2- if you do reorder a struct or class data members the compiler will warn you that your initialization lists are now invalid rather than silently failing. I don’t know how to even find them all in a large code base any other way!
Destructors will execute in the reverse of declaration order, so if initialization order doesn't match declaration order, and some members depend on each other somehow, things will break. At the very least, it could be surprising. Not a problem in C, where destructors don't exist.
I think, as usual, this was the compromise that the committee was able to agree on over all objections. There is still the possibility that the rules are relaxed if there is agreement, but somebody has to do the work to push it through standardization.
I also thought that the behaviour as standardized was useless, but recently I started writing more minimalist code eschewing constructors where aggregate initialisation would suffice, and I haven't really missed the ability to reorder initializers or skip them.
Initialization in C++ is already a mess. Making one of the core behaviours (members are initialized in declaration order) work subtly different for this case would make it even more difficult for the programmer to build a correct mental model.
From what I can tell, the snippet you posted would compile fine in C++20 mode.
> Personally, I find code that leverages ranges harder to read, not easier, because lambdas inlined in functions introduce new scopes that have a strong non-linearizing effect on the code. This isn’t a criticism of ranges per se, but certainly is a stylistic preference.
Does anyone know what “non-linearizing” means here?
I assume “code outside the lambda runs first, then code inside the lambda maybe runs later, maybe runs multiple times, maybe doesn’t run at all”.
It can especially create problems when the lambda captures a variable by reference which gets mutated and/or deallocated before the lambda runs, and the developer didn’t plan for mutation or deallocation.
Or (a problem with lambdas, but not “non-linearizing”), if the lambda captures a variable by value (copies the value) and mutates it, and the developer expected the mutation to persist outside the lambda.
This was my first encounter with the three-way comparison operator (<=>). Can someone give a practical use case? There must be one for it to be included in the spec, but I'm not seeing it.
But the short answer is that all the other comparison operators are automatically generated from that one if it is defined, so it makes the code simpler. And for many types, <=> isn't much more complicated than the others.
> Signed overflow/underflow remain UB (and it’s understandable that changing this behavior would have dramatic consequences)
I think that the dramatic consequences are only understandable if you succumb to mimetic contagion.
The consequences are real but not dramatic and possibly not even measurable in many workloads.
It just means that you’ll have an extra sign extension (one of the cheapest ops the CPU has) in a subset of your loops, namely the ones that had a 32 bit signed induction variable and the compiler could reason about that variable but only if it also could assume no wrapping. That’s a lot of caveats.
Most loops will be unaffected by making signed integer overflow defined. Anything that’s not in a loop will almost certainly be unaffected by this change. If you use size_t as your indices then you’ll definitely be unaffected.
So yeah. “Dramatic consequences”. I wish folks stopped exaggerating. There’s nothing dramatic here. It’s a fraction of a percent of perf maybe.
> a 32 bit signed induction variable and the compiler could reason about that variable but only if it also could assume no wrapping.
(Amateur C programmer silly question) I think I understand it as if we increment the variable (i+10) and use it in an if condition. With UB the compiler could skip that code altogether and assume it will never be reached?
Is it just me, or is the worst part of coroutines the lack of tooling around them? Whenever I get a crash in a coroutine, the "stacktrace" is totally useless and doesn't actually show where the crash happened, just some boilerplate code around executing some continuation which doesn't refer to real code that you wrote.
more or less agree, although this issue isn't even really unique to C++. in practice it's still worth it imo, since debugging callback heavy stuff isn't exactly fun either
Lately I've been under the extreme temptation to rewrite my game engine in Rust.
I crave the ergonomics of Rust development. I use Rust at my job (not game dev) and it sucks to switch back to C++ for my side projects.
But I resist for the moment, because I fear it won't be easy as I predict and it would delay my projects.
I already started using this list of features and refactored most of my code for C++20. I hope C++ will continue on that path and catch up with Rust. But there are still so many things missing.
In the meantime I refactor my C++ projects little by little to be "Rust ready": hierarchical ownership, data oriented with minimalist OOP. So the day I can't resist any more I will be able to quickly rewrite it in Rust.
Rust doesn't allow dynamic libraries in general, so it isn't going to work where (right or wrong) the code is based on plugins. You can work around this with C API interfaces, but that limits you if both sides are Rust (unsafe for what should be safe, as I understand it).
In my corner of the C++ world though, I am so, so excited for <format> in 6 years or however long it will take us to move to C++20.
Is that really 'an average' for a modern AAA game? Damn. That's an order of magnitude bigger than I'd imagined.
Just C++: 11,375,669
Total (of everything): 31,379,114
That’s fairly representative of just the tooling side of things for a AAA engine. That’s not counting the logic of the game itself.