> why all these groups of people decided to start from scratch rather than put up with the C++ committee
In which the C++ committee continues to not acknowledge that its problem is being a committee, in the most ridiculously bureaucratic sense of that word.
If the only way to get my contributions accepted into a project involves writing a paper about it, sure, I can do that. If it involves writing a paper about it, and then having endless meetings about it that could have been emails, some of which I have to physically travel to, I can't be bothered. I've left actual paying jobs over that, I'm not doing it for free.
And sure, I'm an individual, and most of the people they're talking about here are representatives of companies. But the effort-to-results ratios still exist, and C++ has managed to tip them to the point that making an entire new language is less effort than proposing a C++ change.
The D programming language came out of my inability to influence C++. Amusingly, D has had a lot more influence on C++'s direction than I was ever able to exert directly.
It is, and must be, more difficult to change something established than to start something new. When one person starts something new, it is easy to make choices that fit a single vision. Once something is popular, you cannot easily make changes that everyone will agree on.
C++ has painful experience of what happens when you don't carefully consider all proposed changes and so miss something. Export templates are an obvious example, but there are others that seemed good until painful experience later showed why they weren't. Templates would look very different if the committee had known then what people would end up doing with them.
> If the only way to get my contributions accepted into a project involves writing a paper about it, sure, I can do that. If it involves writing a paper about it, and then having endless meetings about it that could have been emails, some of which I have to physically travel to, I can't be bothered. I've left actual paying jobs over that, I'm not doing it for free.
If most people wrote papers so perfect in their construction that no one would ever need to ask questions about their content, every relevant question being answered by reading the paper, then there would be no need to shepherd them through meetings. But in my limited experience, most papers aren't like that. In the numerics study group, we had one paper at the most recent meeting that was so vague we eventually decided we had no idea what it was actually proposing, so answering the question "would we like to move forward with this idea" was impossible. And with the author not being present... well, that's more or less the end of the road for that idea.
These replacements don't object to the committee per se; rather, they reject the committee's core goal of preserving backwards compatibility over everything else.
Making it easier to add more features to C++ can't fix the problem of being unable to simplify the language by removing unsafe and legacy features.
If they wanted C++ and only hated the committee process, they'd have forked the language and worked on compiler extensions (the way WHATWG bypassed the W3C process). Instead they all went for a clean slate with some level of interoperability.
> One of the concerns is that C and C++ are being discouraged for new projects by several branches of the US government[1], which makes memory safety important to address.
The biggest memory safety problem for C is array overflows. I proposed a simple, backwards compatible change to C years ago, and it has received zero traction. Note that we have 20 years of experience in D of how well it works.
P.S. Modules would also make a big improvement to C, proved by implementing them in the ImportC C compiler. Modules don't take away anything from C's utility.
But that change is by definition not backwards compatible - neither ABI nor source level.
Using an ifdef to maintain source level compatibility doesn't work as two pieces of code will see the same function using different ABIs.
That said I agree entirely - the conflation of array and pointer is the biggest flaw, it's what "necessitated" the null termination error that people are so fond of calling the biggest mistake.
Fully agree with #embed. I don't really need the feature often enough to justify more than that. But it feels fine to use the preprocessor for it. It's annoying enough for the people who need it to maintain some build-system workaround, so a simple, straightforward implementation seems better than some over-specified solution for every possible use case (imagine the horror of some template construct with locale/encoding specification added to it).
Also, even if C keeps diverging from C++ having these basic constructs stay compatible is worth quite a lot imho.
One problem is that C++ wants (eventually, at least as an ambition) to deprecate the preprocessor. So it's embarrassing to add features that people need to a system you claim you're deprecating. Which is it?
I think C++ would have been better off with a closer equivalent to include_bytes! (Rust's compiler intrinsic masquerading as a macro, which gives back an immutable reference to an array with your data in it) - but the C++ language doesn't really have a way to easily do that, and you can imagine wrestling with a mechanism to do that might miss C++ 26, which is really embarrassing when this is a feature your language ought to have had from the outset. So settling on #embed for C++ 26 means it's done.
I was concerned that maybe include_bytes! prevents the compiler from realising it doesn't need this data at runtime (e.g. you include_bytes! some data but just to calculate a compile time constant checksum from it) but nope, the compiler can see it doesn't need the array at runtime and remove it from the final binary just as a C++ compiler would with #embed.
> having these basic constructs stay compatible is worth quite a lot
Also, I think C++ can still use the same preprocessor as C at this point (it's been a while since I've had to deal with that)? If you're going to diverge the preprocessor you should get more benefit out of doing so than "not having #embed". For that matter, having important features like #embed only available via preprocessor also helps undermine the pointy-haired trolls who (allegedly?) keep trying to deprecate the preprocessor entirely in favor of some proprietary build system.
I would love to see a world in which a system is designed and built from multiple languages, so that the "right" tool could be used for each part. Does this even make sense? The modern distributed web seems to be leading us there. Slowly, slowly.
I would also love to see a world where all C, C++ dependencies magically port themselves to Rust without FFI or a first-cut rewrite that a hobbyist did. 10-20 years maybe?
Unless:
> One of the concerns is that C and C++ are being discouraged for new projects by several branches of the US government[1], which makes memory safety important to address.
Reading these posts really does make it seem like C and C++ are a derided, ancient construct of better days when we trusted software engineers and didn't write code for connected systems. It's just not possible to go back to those times.
While I'm extremely interested in Rust, the ecosystem for my entire industry is based on C++ with no change in sight, and built on C operating systems. Because, to date, we write code that executes on a machine that is not taking input from a user, and so does not have the brand of security concerns that make Rust attractive (for the most part). Here, static analyzers get us what we need at the 80/20 level.
It only makes things slightly better, but Windows, Android, macOS, iOS, mbed, and plenty of others have enough C++ in them, even in kernel space.
And yes, it will either take decades to purge them from IT ecosystems, or they finally get something like #pragma enable-bounds-checking, #pragma no-implicit-conversions (yes, there are compiler-specific ways to get things like this), and similar, so that they can stay in the game of safe computing.
I should preface this by saying I'm a beginner at C++.
I really like your idea of building a language from multiple parts.
Or multiple DSLs.
Maybe you could have a DSL for scheduling the code, a DSL for memory management, a DSL for multithreading, and a DSL for security or access control, and the program is woven together with those policies.
One of my ideas lately has been how to bootstrap a language fast. The most popular languages are multi-paradigm languages. What if the standard library could be written against an Interface Description Language, then ported to and inherited by any language through interoperability?
Could you transpile a language's standard library to another language? You would need to implement the low level functionality that the standard library uses for compatibility.
I started writing my own multithreaded interpreter and compiler that targets its own imaginary assembly language.
I really enjoy Java's standard library for data structures and threading.
Regarding the article, I hope they resolve coroutines completely. I want to use them with threads similar to an Nginx/nodejs event loop.
I tried to get the C++ coroutine code on GCC 10.3.1 working from an answer to my Stack Overflow post, but I couldn't get it to compile. I get an error that co_return cannot convert int into int&&.
> I would also love to see a world where all C, C++ dependencies magically port themselves to Rust without FFI or a first-cut rewrite that a hobbyist did. 10-20 years maybe?
Still hoping for compile time introspection/reflection for class serialization. Whichever language implements it first (C++ or other) I'm all in on. I come from a scientific background, where running code on data gathering machines, and writing it out, then reading it back in later for analysis is 90% of what I do.
I’ve done this with libclang: parsing C++ with clang.cindex in Python, walking the AST for structs with the right annotation, and generating code to serialize/deserialize, all integrated into the build system so the dependency links are there. Obviously having it built into the language would be way better, but when something is 90% of what I do, I'll take whatever steps are necessary.
Have you checked out the PFR library (perfect flat reflection)? I've coupled this with the magic-enum library to good effect.
PFR can be rewritten in very little code, assuming c++14(?); magic-enum is long enough that you should just use it.
I generally have one TU for just serialization, and don't let PFR and magic-enum "pollute" the rest of my code. This keeps compile times reasonable. (The other trick is to uniquely name the per-type serializer: C++'s overload resolution is O(n^2).) I then write a single-definition forwarding wrapper (a template) that desugars down to the per-type-name serializers. It strikes a good balance between hand maintenance, automatic serialization support, and compile-time cost.
Rust basically supports this with pretty low complexity via serde, and I think most mature languages have at least something for this, although in some it has to be hacked on.
Reflection is definitely a big topic of discussion, but I'm not sure whether it will make it in time for the finalization of the C++23 spec. I think this is the most recent iteration of the proposal:
I'm curious how you could use https://celtera.github.io/avendish for this. I've developed it to enable very easily creating media processors with the little reflection we can do currently in C++20; in my mind data gathering would not be too dissimilar a task.
It makes me really sad to read the objections to pack indexing, as this library needs it a LOT (and currently, doing it with std::get<> or similar is pretty bad and does not scale at all past 200 elements in terms of build time, compiler memory usage, and debug build size).
Compile-time introspection and reflection have been implemented in GHC Haskell as the Generic class. Basically the compiler synthesizes a representation of your data type in terms of basic operations like :+: or :*: (for sum types and product types) and you can easily operate on them. Is that what you mean by compile-time introspection?
It's already being used (for many years, in fact) to implement JSON serialization and deserialization in aeson without depending on Template Haskell (which is kind of like macros).
> I spent an ungodly amount of time over the past couple of years exploring ways to get views::enumerate (a view that yields an index + an element for each element of a range) to produce a struct with named members (index & value), as this is more ergonomic and safer than std::get<0>(*it). Alas, despite my best efforts and hundreds of hours invested, this proved almost unworkable.
Isn’t this a damning indictment of the language and everything that is wrong with it? How can something so simple be so hard?
CppFront is just like Carbon and Val with a completely different syntax; translating to C++ is just an implementation detail. He just markets it in a different way given his position at ISO, most likely so as not to raise too many waves.
Herb is one of the best things that ever happened to C++. Not only is he wicked smart, but his ability to persuade is most impressive. As if he needed more, he's also a very nice gentleman.
It can, but then most don't use it, even though Windows and Android have proven you can ship that into production with hardly any noticeable performance loss.
kllrnohj|3 years ago
So no, they are not failing to acknowledge that. It's literally the point of the quote you're responding to.
WalterBright|3 years ago
https://www.digitalmars.com/articles/C-biggest-mistake.html
It'd improve C++ as well.
I really do not understand why C adds other things, but not this, as this would engender an enormous improvement to C.
1. https://media.defense.gov/2022/Nov/10/2003112742/-1/-1/0/CSI...
samsquire|3 years ago
https://GitHub.com/samsquire/multiversion-concurrency-contro...
I like Python's standard library, it works
https://stackoverflow.com/questions/74520133/how-can-i-pass-...
FpUser|3 years ago
10-20 years sounds like a pipe dream.
10000truths|3 years ago
https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2022/p12...
slavik81|3 years ago
This can be done by compilers without any change to the language standard.