This guy is recounting struggles teaching beginners to program, not his own difficulties. There's no reason to assume he's a basic programmer. He's also not calling C++ a bad language, merely saying that it's frustrating teaching it to newbies - what's wrong with that?
Even though he wasn't ragging on C++ as a general-purpose programming language, it would be nice if the programming community could have honest conversations about its shortcomings without resorting to defensive, ad-hominem attacks.
I wouldn't describe it as "C++ lets you make mistakes." I'd instead say that "C++ exposes you to the artifacts and edges that are part of the raw computer system."
I say this because it's a lot less dismissive of the troubles of the programmer-to-be. I remember being new not so long ago, and being immensely frustrated by the dismissals of experienced C++ programmers who feel that programming is inherently all about knowing and dealing with the hard edges of the machine, and much less about the rather beautiful theoretical side most people would call Computer Science. I feel that's pretty much wrong; programming is a balance between wrangling the machine and building elegant, beautiful logical constructions.
Covering both sides separately is important, and builds a strong programmer. Teaching only one side, or not acknowledging the differences between these two very separate skills/processes confuses aspiring programmers tremendously, and turns them away from the tremendous power that can be had in languages like C++.
Wait, what? While this is great pragmatic advice, it's not very good for a newbie. A newer programmer should absolutely be able to go to, of all things, the documentation to get some kind of understanding of what's going on. These kinds of "quick reference" techniques are great once you know the language and need information quickly, but having to resort to non-standard tricks like the above to figure out what's going on is just proof that the docs are unapproachable to a new programmer.
At CQU (in Australia) I was lucky enough to be one of the guinea pigs of the IT course when it first started, and the introductory language was (Turbo) Pascal. The following year, if you wanted to go further with programming, other languages were offered, like C++, LISP, assembly, COBOL, etc.
The intakes the following year started straight into C++ and the drop-out rate was incredible. I know if I started with C++ I would have struggled and probably given up and dropped out too.
Programming should be easy and fun when you're learning; you shouldn't have to feel like an idiot.
I don't get this... Is he teaching a language he himself doesn't quite understand? (I don't know the author.) Or is he complaining that C++ is difficult and should be something it clearly isn't? Implicit conversions, size_t, the way hardware dictates how wraparound at zero works... that's all pretty basic stuff in the context of C++. If you know this stuff, you also know that you shouldn't teach it in any sort of introductory CS course. It's all white noise. Many other languages suffer from similar (sometimes self-inflicted) quirks. He should stick with Scheme.
I think a great deal of the pain in this example is self-inflicted. The hardware wrapping of an unsigned integer is fair game, but the multi-layer typedef and conversion to unsigned comparison are both warts of C++ that are totally avoidable. The multi-layer typedef is just the nature of enormous libraries with multitudinous compatibility concerns, so it's possible to forgive even that (and tools will help you there). But most reasonable languages won't let the assignment from unsigned int to int happen, and won't let the comparison happen either.
Can you name a language besides C++ and C in which comparing a signed integer to an unsigned integer will compile and run without warning while implicitly converting one to the other?
I read this article as a very verbose way of saying "beginners make mistakes". Just because this particular mistake might not exist in some other places (say in a language without unsigned types) doesn't mean the beginner won't make similarly horrible mistakes in that other language.
I suppose you could s/beginner/person/ on that statement as well, with lower probability of mistakes but still nonzero.
Folks... Let me ask you a related question:
Why are signed integers used so (extremely) often if the number is by definition always greater than or equal to zero? Why do so few use explicit unsigned integers? Is it just a lack of understanding of the difference between signed and unsigned integers and of what happens if either one is increased/decreased past its maximum/minimum value?
I think most people would understand the difference if asked. There are all sorts of reasons, but no great ones:
1) Tutorials usually use 'int' (self-referential)
2) 'unsigned' is one more thing to type.
3) It can be hidden behind a typedef.
4) Negative numbers can serve as error codes.
5) Wraparound can cause bugs for either.
6) Occasionally safer (if x - 3 < 0)
I've usually taken to using explicit width unsigned integers (uint32_t) unless there is a reason not to.
I avoid them like the plague, except in specific situations where I need the extra range or need to save the bytes. I expect my reasons will be unpopular.
Suppose you have an object with an interface like object.setValue(7). You'd like to assert that people are sending you valid data, but if the type is uint then you can't do that. If the type is int, then you can assert(value > 0, "Value must be greater than zero").
Now if anyone calls setValue(-1) you'll get an assertion that would have been skipped before.
I feel that pedants and more experienced C++ programmers will tell me I am sloppy or should be getting warnings on setValue(-1) but I've been burned before. My life is easier when I avoid unsigned types.
The behavior of int delta = unit1 - unit2 is also annoying. Not unexpected if you're paying attention, just annoying.
I might use them more if underflow / overflow caused an exception instead of undefined behavior.
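A hedged sketch of the setValue idea above (the class and names are illustrative, not from the thread): with a signed parameter the bad call trips an assert in a debug build, where an unsigned parameter would have silently wrapped -1 into a huge positive value.

```cpp
#include <cassert>

class Counter {  // illustrative name
public:
    // Signed parameter: a caller passing -1 hits the assert below.
    // An unsigned parameter would instead silently receive 4294967295.
    void setValue(int value) {
        assert(value > 0 && "Value must be greater than zero");
        value_ = value;
    }
    int value() const { return value_; }
private:
    int value_ = 0;
};
```

Now setValue(-1) aborts with a message in a debug build instead of sneaking through.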
I did not initially see the 'intro' in the title, and it took me a while to understand why the author put so much energy into this (relatively) questionable aspect of C++.
On the article itself, I don't believe anyone sincerely considers C++ to be a good introductory language. It is dragged down by a lot of low-level concerns which are only dead weight for a beginner. At this point in a programming curriculum, the focus should be on understanding algorithms, and low-level details that can be useful in other contexts only serve to trip up students.
As for the higher levels of C++, such as everything object-oriented, I believe it becomes an additional load on the understanding of the language (why are Strings initialized this way? How come I can call a function from a data type with the dot notation, but functions I define can only be called by themselves?).
I also find it weird that String::length returns a size_t. The way I understood this type was that it represented a memory size. When we call length, we want the number of characters, independently of the character size under the hood.
A lot of people think C++ was a good introductory language, or at least thought it was. C++ was the language of choice for introductory programming courses at both my high school and university. "Good" in this case was generally evaluated on the basis of "that's what people use out in the real world, so that's what we should teach" rather than actual suitability for instruction. The concept that it might be better to start out teaching one language, then switch languages later on once the basics are mastered is foreign to a lot of people. People who have never learned more than one or two programming languages tend to think that if your goal is to master C++, you should start with C++, and any time spent on any other language is wasted.
The non-value-preserving implicit conversions -- both the signed-unsigned conversions and the narrowing conversions -- are one of the biggest language warts in C++ (which it inherited from C). However, you can basically avoid these issues completely by enabling the compiler warnings for these conversions and treating the warnings as errors.
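For GCC and Clang, a command-line fragment along those lines (the exact flag selection is a suggestion; -Wconversion and -Wsign-conversion cover the narrowing and signed/unsigned cases, and -Werror promotes them to errors):

```shell
# Promote the implicit-conversion warnings to hard errors (GCC/Clang):
g++ -Wall -Wextra -Wconversion -Wsign-conversion -Werror main.cpp
```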
What is the best intro language? Perhaps the hallmarks of an intro language should be that it teaches a user about variables, function calls, conditionals, and loops. For that I think there is still little better than LOGO or BASIC. But to help the beginner feel like they are making a change, perhaps JavaScript is now the best.
My 'real' intro languages were Pascal and Prolog. Pascal was fine, but Prolog was very much a waste of time.
I think JS is as horrible as C++, since it has all the implicit casting-to-string rules, besides a lot of other quirks. (I am not sure which language I would advocate; probably either Python or C, one because of all of the nice features, the other because of the absence of all the features.)
I would say Python and Ruby, but JavaScript is getting more popular as a first language, I think mostly because it's easy to make something that looks like something.
Prolog's only a waste of time if you let it be. It's a good introduction to declarative programming and the logic implicit in all other programming.
Also, yes, I'll throw Python out there as the best beginning intro language we have now. It's syntax-light compared to the other languages teachers know, familiar to instructors coming from a Java or C++ background, and, best of all, wasn't designed as a teaching language, so it isn't crippled in bizarre ways.
However, I also think that teaching multiple languages is essential, and they should all be from different families or at least have different features, so Haskell, Prolog, Lisp, Forth, and, yes, C++ should all be in the program.
Define beginner. I came to C++ as what I'd consider a beginner, having had two years of VB plus some previous experience with BASIC and HTML. I think if you're a complete neophyte it's an awful language, but if you've already got some background C++ is a great way to learn how nasty programming at a low level can be, but how fantastic wielding power over the system is.
Given that C++ originated as a superset of C, it is perhaps not that surprising to see this quirk as a remnant.
C++ has a whole bunch of other issues, however. The inundation of poor educational resources on it and the fact that many treat it as "C with Classes" rather than its own language doesn't help matters much.
size_t is covered within the first 50 pages of C++ Primer.
Incorrect behavior is incorrect behavior. You want it to crash when you decrement past some value; instead it does something that looks like crazy voodoo to you. (From a Boolean algebra perspective this is completely expected behavior! You're just losing the top binary digit because you have nowhere to store it!)
We don't program to make programs crash. Yeah, it's slightly easier to debug when you have a crash, but never bet on crashing or segfaulting for "checking" whether your program works.
The example the author uses fails to capture a minor nuance of the real world: there is an implicit assumption that you can't keep removing pennies once you hit zero. Had the pseudo-code been this...
as long as the number of pennies is less than the number of letters in your name *and there are pennies to remove*.
...then the generated C++ conditional would've been:
} while (pennies > 0 && pennies < name.length());
This is more an example of why sloppily converting algorithms in English to machine-friendly code is terrible.
Well, that's the point, isn't it. The code is purposefully sloppy and should fail or hang, and the programmer can then fix it like you did. But with C++, it "works", but for the wrong reason, so it never gets fixed and may cause problems someday.
I fully agree. As someone who has taught programming, the one step that seems to take people from program copiers to programmers is the ability to read and understand error messages and warnings. The number of times the answer is staring people in the face is quite surprising. If you are actively ignoring warnings, is it any wonder things don't work as expected? Whether -Wall should be the default is another issue, but I hardly blame C++ for this one.
Actually, you should cast name.length() to int; otherwise you leave the error in.
Casting a negative number to unsigned (size_t) would get you a large positive number.
yan | 12 years ago:
The entire thread is worth reading.
fredsanford | 12 years ago:
Yes, C++ will let you make mistakes. It's not your nanny; it's a programming tool for... GASP ...programmers...
Let's see if we can get ACTOR resurrected...
derleth | 12 years ago:
Yes, assembly will let you make mistakes. It's not your nanny; it's a programming tool for... GASP ...programmers...
So, therefore, C++ is equal to or less than BaSUCK and yer a silly-nanny like yer old man.
miloshadzic | 12 years ago:
I would say Python because I think that it offers the best mix between learnability and usefulness.