After reading http://blogs.msdn.com/b/vcblog/archive/2015/09/25/rejuvenati... I'm not really surprised by this kind of issue. It's impressive that VC2013 was as compliant as it was, when all template parsing was based on string interpolation instead of an AST (and simultaneously distressing that a billion-dollar company would release such a product so recently).
Is the submitted article talking about VC2015? Although I suppose they will need to keep supporting the older toolchains.
It's a pity it's not stated which version (maybe it means 'all versions'), nor which fixes a user can apply. E.g. for the 'Binding Rvalues to Lvalue References' case: using the /W4 flag will yield a C4239 warning, and turning on 'Treat warnings as errors' will prevent using the code altogether. For things like the '__VA_ARGS__ chaos' and 'Eliminated Types', on the other hand, there is no fix (that I know of), but I'd argue such constructs have no place in modern C++ (or maybe even any C++, for that matter) anyway; they should be avoided by users and as such should never turn into problems.
I think the standards for large languages like C++ should really be developed in parallel and released together with a reference implementation - that way others would have a baseline to compare behaviour with, and it'd discourage adding features that turn out to be ridiculously difficult to implement correctly (e.g. export templates) or diverging dialects. The reference implementation doesn't have to be optimised or produce optimised code, but should be written to follow the standard closely, favouring clarity instead of efficiency.
Requiring a reference implementation with a standard is a bad idea. First off, who's going to write it? The compiler vendors are the people who'd likely do the best job, and they already have to adapt their implementations for changes; now you're making them do it twice. Another, even bigger issue is that it actually doesn't tell you what's hard to implement. MSVC, gcc, and clang all take different approaches to their compilers, and what can be easy to implement in one could turn out to be ridiculously difficult in another (e.g., exposing a full AST via reflective mechanisms). You need a diversity of implementations, not just one, to know what's actually hard to implement and what's not.
It's interesting that you bring up export templates, because that fiasco of a feature actually already sort of had a reference implementation. The original Cfront compiler actually supported it (admittedly, it was very buggy). The problem with export was that, at the time of discussion (~1996), no one really understood just what the impact of all of the template complexity meant: they assumed that templates were basically just a typesafe variant of the C preprocessor (or at least that most common use could be boiled down to that description; its Turing-completeness was known by then).
So, in that vein, export template was seen as a "well, this is going to be tricky, but so is name resolution in general in C++" situation, and the overarching use case was compelling (you can hide the implementation of a regular function, why not templates?). Compiler vendors objected to the feature, not on the basis that it was essentially impossible to implement, but rather that major design points still needed to be worked out. It was eventually accepted into the standard largely with the understanding that everyone would implement something close to it, if not the exact current specification.
The only thing that would have prevented export template from being standardized would have been requiring a fairly robust implementation that could have discovered that export template really did infect every part of the code base in a horrible way, and it should be noted that reference implementations generally aren't that robust. Amaya, after all, didn't prevent HTML 4 or CSS 2 from having unimplementable sections.
At least parts of it are essentially developed this way: C++14's features were implemented in clang while the standard was still in development (if memory serves) and several of its main programmers are members of the standards committee. The module system that's been proposed (for C++1z?) has been implemented in clang as well. Since clang is open source, it could count as a reference implementation ;-)
Or am I confusing things? Also, I assume the `</b>` at the end is a typo?
Furthermore, I wasn't even aware you could access `a` the way it's being accessed in the same example; I always use `this->a`. Actually, I'd access `x` through `this->x` anyway, in case there's a local variable of the same name in scope. Although, I rarely write C++ anyway.
That #define comment behavior is interesting - I wonder if it was the cause of a nasty bug I had a looooong time ago when I was porting the driver for our Serial HIPPI NIC to the initial release of NT 4.0 (NDIS 4).
There was a crash bug that happened only every few million packets. Obviously, this was a race condition. After a lot of hair-pulling and waiting 5-30 min every time I wanted to test something, I decided to simply replace the actual locking code with a single global spinlock at every entry point to see if a serialized driver worked[1].
After that version still crashed, and a lot of trial and error, I finally found out that the spinlocks were never actually spinning. Microsoft's compiler was compiling loops like this as if they were no-ops:
#define SPINLOCK(lock) \
    for (;;) { \
        if (TESTLOCK(lock)) \
            break; \
    }
But it worked after I made this change:
- for (;;) {
+ while (1) {
Empty for loops were dropped... but only in macros. Now I'm wondering if it was getting commented out in some subtle way.
[1] There was some concern about hardware errors, as a similar PCI chip we were using failed to notice changes in the PCI GNT# if it changed REQ# in the same clock cycle.
This may have been a bit of a discouraging post with respect to MSVC, but do not worry — in a subsequent post we'll take a look at MSVC-specific constructs which, while not necessarily standard-compliant, are nevertheless interesting and often quite usable.
bah. Everything about the MSVC compiler is pretty much terrible.
Having some extra non standards compliant features in no way mitigates its terribleness.
The only good thing we can take from these articles is that yes, the folks at Microsoft have acknowledged how bad the situation is, and are actually trying to do something about it.
honestly? I'll believe it when I see it.
Breaking backwards compatibility is a nightmare because lots of libraries depend on the very particular way the MSVC compiler works, but that also means you can't actually fix bugs.
So long story short, there's probably never going to be a 'good' version of MSVC. It'll pretty much be stuck with its peculiarities forever.
Maybe one day we'll get a 'new' compiler that can sit along the legacy one and actually compile, you know, the C++ standard. I'm not holding my breath, but hey, we can daydream...
I'm not a big C++ user, nor terribly fond of the language in any case, but IMHO the "single-phase lookup" MSVC is doing for templates is conceptually much simpler to understand (and, apparently, to implement) than the standard two-phase lookup: instantiating a template is literally a textual substitution, which is intuitively what I'd expect and understand templates to be.
I understand that we get in a mess with different compilers not supporting the standard with cross platform code so MSVC should be changed.
But with respect to binding rvalues to lvalue references, why is the MSVC way not the standard? At first glance the MSVC way appears far more intuitive.
For instance if I have code:
update_X(X());
I would expect to be able to refactor it to this safely:
{
    X x;
    update_X(x);
}
http://blogs.msdn.com/b/somasegar/archive/2014/06/03/first-p...
Also (nowadays) MS admits that their implementation of __VA_ARGS__ isn't compliant:
https://connect.microsoft.com/VisualStudio/Feedback/Details/...