After going through several love-hate cycles with C++ over my career, I must say I have a kind of admiration for what the C++ authors are trying to achieve.
It took me a while to understand why every iteration of C++ brings so many (often Turing-complete, or close to it) side-effect horrors. The reason is that C++ is more than a language: it is part of a genuinely philosophical quest about what language design can be.
It is about trying to bridge the language of the machines and the language of human abstractions as closely as possible. In itself it does not necessarily lead to the best language possible, but it explores an interesting limit.
One can use higher-level languages like Haskell or Prolog, or slightly lower-level ones like Ruby or Python, and write nice abstractions. By doing so, however, the programmer typically loses track of what the machine implementation will be.
C++ strives to be a language where you can still feel what the machine will actually implement while you code high-level abstractions.
That is their goal; that is their quest. In pursuing it, many side effects pop up, but the effort is commendable.
That so many people still use it is impressive but, I think, irrelevant to the priorities they typically set.
> The reason is that C++ is more than a language: it is part of a genuinely philosophical quest about what language design can be.
> It is about trying to bridge the language of the machines and the language of human abstractions as closely as possible. In itself it does not necessarily lead to the best language possible, but it explores an interesting limit.
> C++ strives to be a language where you can still feel what the machine will actually implement while you code high-level abstractions.
This has been false ever since the earliest days of ANSI C. The C and C++ standards define an abstract machine that is quite far from any machine that exists today. Type-based aliasing rules, to name one important example, are something that has almost nothing to do with anything that exists in the hardware.
It's quite enlightening to read the description of LLVM IR [1] and observe how far it is from anything a machine does. In fact, LLVM IR is quite a bit lower level than C is, as memory is untyped in LLVM IR without metadata: this is not at all the case in C.
In reality, C++ is an attempt to build a high-level language on top of the particular abstract virtual machine specification that happened to be the accidental byproduct of a consensus process among hardware/compiler vendors in 1989. It turns out that this has been a very helpful endeavor for a lot of people, but I don't think we should claim that it's anything more than that. There's nothing "philosophically" interesting about the C89 virtual machine.
I once watched a CppCon talk where the presenter showed different bits of code and asked the audience whether each was a zero-cost abstraction or not. Even in that audience of C++ enthusiasts, there was no consensus.
Sure, it's possible to do high-level things while knowing what the machine will end up doing, but if you have to be a guru to be able to do so... then can you really say that?
I'd say "C++ strives to be a language where expert C++ compiler writers can feel what the machine will actually implement while you code high-level abstractions (provided you're using the compiler you wrote yourself)"
I am very disappointed with constexpr in C++14 and C++17.
The work shown here is impressive, but the point is that it should not be. D has shown that variable initialization at compile time can be painless.
The main limitations I find are:
a) Lack of support in the standard library (e.g., you have to write your own sort, vector class, etc.)
b) Terrible Terrible Terrible compilation times.
c) Lack of dynamic memory allocation within constexpr.
constexpr functions are not evaluated while parsing (i.e., the code is interpreted by the compiler rather than compiled and then executed). My use case, which was to avoid a 1-second preprocessing step when loading a library, took more than 20 minutes to compile and used more than 60 GB of RAM (on GCC 7; Clang did not even manage to compile it).
Stuff like initializing a bitset can easily eat all your RAM (e.g., this fails to compile):
#include <bitset>
int main() { std::bitset<1024*1024*1024> bs; }
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63728
A lot of the STL is already being constexpr-ized (as Stephan T. Lavavej has mentioned).
This code I wrote takes about 5 seconds on GCC and 1.5 seconds on Clang to compile for a 1000-node HTML template (about 28 KB). Definitely slow, but not really, really slow.
I have a feeling recursion slows down the compile time; I will attempt an iteration-based version eventually and see.
That negates the whole point of templates and constexpr -- they're purely functional subsets and explicitly designed to be so.
(You can argue that purely functional code is useless in real life and that C++ shouldn't strive to be purely functional, but that's another argument.)
Yes. We parse JSON at compile time with std.json (in the D standard library). It probably wasn't even designed with that goal in mind. From a D point of view, compile-time parsers are almost boring, since anyone can write them.
Sadly, the thing that stood out the most to me is the sentence "We attempt to make the compiler generate the most sensible error message", followed by an incomprehensible error message completely unrelated to the problem.
C++ should really get proper metaprogramming, with support for user-defined error messages. All this template stuff always seemed like an awful hack to me: Fighting the compiler instead of being the compiler.
Going from templates to constexpr is like moving from a war zone to a forest. It's still scary and dark, but at least you know what hit you.
static_printf is coming soon.
The error message looks incomprehensible, but the last couple of lines give you enough information: the error string and the line number. It looks much better on the console, since it's colorized.
Often, the easiest way to debug complex template code is to intentionally fail the compilation with minimal context, so you can read the compiler's output showing what the types and values are. Here's a simple example of a function I used to use:
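The function's definition did not survive in the text; a minimal reconstruction consistent with the compiler output quoted below would be the following (the always-failing `static_assert` is the whole trick — it forces GCC to print the deduced template arguments; note this file is meant to fail compilation):

```cpp
template <typename T, typename... Ts>
void show_types()
{ static_assert((T*)nullptr, "Type log"); }   // line 3: always fails

int main()
{
  show_types<int, float, bool>();             // line 7: the instantiation
}
```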
When trying to compile this, using something like `g++ show-type.cpp`, you'll get this output:
show-type.cpp: In instantiation of ‘void show_types() [with T = int; Ts = {float, bool}]’:
show-type.cpp:7:32: required from here
show-type.cpp:3:3: error: static assertion failed: Type log
{ static_assert((T*)nullptr, "Type log"); }
The same can be done for non-type template args, of course.
In D, where compile-time function execution is the norm, you use the equivalent of a "pragma printf" (D's `pragma(msg, ...)`); e.g., you embed compiler messages into your compile-time branching to debug.
[1]: https://llvm.org/docs/LangRef.html
mhh__ | 8 years ago
rep_movsd | 8 years ago
Considering that one of the greatest C++ programmers of all time (Andrei Alexandrescu) is now a D person, that's proof enough.
But for some reason, this sort of C++ shenanigan is intensely satisfying to me.
I guess it's a narrow complexity fetish.
rep_movsd | 8 years ago
This is as scary as a bunny painted in camouflage.
tlb | 8 years ago
How does one debug complex constexpr code? I assume there's no printf.
It'd be really cool if it supported some kind of interpolation, like Jinja templates, so it could generate dynamic page templates at compile time.
Jeaye | 8 years ago
rep_movsd | 8 years ago
#define constexpr
This makes all the code plain code that executes at runtime; then you can put in your couts and use your debugger.
It's orders of magnitude simpler than dealing with template metaprogramming.
constexpr_printf is the feature I want most of all. There is actually a patch for GCC that implements this.
I don't know how Jinja works, but the idea here is that you do your templating at runtime, while your templates get compiled down to a tree-like data structure at compile time.
kbenson | 8 years ago
By malformed, are we talking about incorrectly closed tags, or actually invalid HTML? HTML doesn't require all tags to be closed...
rep_movsd | 8 years ago
The HTML specification has a grammar and defines what's allowed -- a subset/derivation of XHTML/SGML and their ilk.
There are a bunch of test template files in the test/ folder that demonstrate what kinds of errors are caught.