
Real cost of C++ exception vs error-code checking

71 points | zeugma | 15 years ago | lazarenko.me | reply

62 comments

[+] pilif|15 years ago|reply
I assume he was proving a point: in the full error-code version of his program, main() was not in fact checking the return value of foo().

Even in simple example code like this you can forget a check. In this case the result would be undefined if any call to divide() failed.

I'd much rather have my program blow up with a readable stack trace pointing to where it happened than it working with a basically random value and then maybe blowing up somewhere totally unrelated or worse, destroying user data.
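A minimal sketch of the failure mode described above (names are illustrative, not the article's actual code): an error-code divide() whose return value is trivially easy to ignore.

```cpp
// Error-code style: failure is reported only via the return value,
// which nothing forces the caller to look at.
bool divide(double a, double b, double* out) {
    if (b == 0.0) return false;   // signal failure through the return code
    *out = a / b;
    return true;
}

// The easy-to-write, easy-to-miss bug the parent describes:
//   double result = 0.0;
//   divide(x, y, &result);   // return value silently dropped; on failure
//                            // 'result' keeps its old value and the program
//                            // carries on with garbage
```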

[+] davidsiems|15 years ago|reply
You can accomplish this by using asserts in your code.

You don't need exceptions to get a callstack and you can assert that values are valid and force a crash / callstack dump when they're not.

On top of that, you can compile the asserts out for release builds if you're confident they won't be hit.
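Roughly what the assert-based approach looks like (a sketch, not the article's code): validate the precondition where it matters, crash with a core dump and usable call stack when it fails, and let -DNDEBUG strip the check from release builds.

```cpp
#include <cassert>

// Failed assert aborts immediately, so the crash points at the real
// culprit instead of at some unrelated later use of a garbage value.
// Compiling with -DNDEBUG removes the check entirely.
double divide_checked(double a, double b) {
    assert(b != 0.0 && "divide_checked: zero denominator");
    return a / b;
}
```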

[+] McP|15 years ago|reply
I prefer Raymond Chen's take on exceptions: http://blogs.msdn.com/b/oldnewthing/archive/2005/01/14/35294...
[+] AshleysBrain|15 years ago|reply
This reads to me like 'writing good code is hard, and writing even better code is even harder'. With modern C++0x smart pointers (like unique_ptr) and RAII, you can get very high performance and (more) easily exception-safe code. Maybe a lot of C++ exception criticism comes from people still trying to code like C in C++?
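A minimal RAII sketch of the point above (the instance counter is only there to make the cleanup observable): the destructor runs on both the normal and the exceptional path, so nothing leaks even if the work throws.

```cpp
#include <memory>
#include <stdexcept>

struct Resource {
    static int live;              // counts live instances, to observe cleanup
    Resource()  { ++live; }
    ~Resource() { --live; }
};
int Resource::live = 0;

void process(bool fail) {
    std::unique_ptr<Resource> r(new Resource);  // C++0x-era smart pointer
    if (fail)
        throw std::runtime_error("boom");       // ~unique_ptr still frees 'r'
    // ... use *r on the normal path ...
}   // 'r' is released here either way
```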
[+] bostonpete|15 years ago|reply
"since you have to check every single line of code (indeed, every sub-expression) and think about what exceptions it might raise and how your code will react to it"

Yes, that's my concern with exceptions as well. It seems like the Java model (if I remember it right -- it's been a decade), which requires a method to either handle an exception that a sub-method throws or explicitly allow it to be thrown, would be preferable and help people avoid accidentally ignoring an exception.

I'd like to see support for exceptions like this in C++0x, but I haven't bothered to check whether it's there...
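C++0x didn't get checked exceptions, but for error codes GCC's warn_unused_result attribute (the ancestor of C++17's [[nodiscard]]) gets partway to the "handle it or declare it" rule: the compiler warns whenever a caller drops the result. A hedged sketch with illustrative names:

```cpp
// The compiler now flags any call site that ignores the error code.
__attribute__((warn_unused_result))
bool divide(double a, double b, double* out) {
    if (b == 0.0) return false;
    *out = a / b;
    return true;
}

// divide(1.0, 0.0, &x);                  // warning: ignoring return value
// if (!divide(1.0, 0.0, &x)) { /*...*/ } // fine: the result is checked
```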

[+] Xurinos|15 years ago|reply
I didn't see any timings in his code.
[+] Quarrelsome|15 years ago|reply
Nice article but as far as I'm concerned you don't need to _prove_ this as it is a logical fallacy to start with.

If an exception is being thrown, then something is wrong; if nothing is wrong, then you implemented your exceptions incorrectly, because exceptions shouldn't occur in normal program flow.

So to recap: you're writing a crap ton more code just so you can return your error code _slightly_ faster than an exception would arrive. You're optimising your failure cases, which (in the _vast_ majority of cases) is UTTERLY ABSURD.

[+] shin_lao|15 years ago|reply
It's not slightly faster, it can be an order of magnitude faster.

Example: a listen loop which handles disconnections through exceptions. This isn't stupid but it's not very efficient.

[+] aplusbi|15 years ago|reply
I think you partially missed the point - it's not that throwing exceptions is slow, it's that even having them in your code is [allegedly] slow.

According to the article there are two methods used to implement exceptions in C++ - one that has higher overhead when you throw an exception (zero-cost) and one that has higher overhead when you call a function that might throw an exception (setjmp/longjmp).

Unfortunately the author didn't go over the latter method, which would have been more interesting.

[+] JoeAltmaier|15 years ago|reply
Perhaps the worst problem with checking-vs-exceptions is, either solution dominates your code structure, obscuring the algorithm logic.

The holy grail would be some method of ensuring the code cannot fail e.g. weirdly constrained argument semantics. Thus separating algorithm from constraints instead of shuffling them together on the page like a deck of cards.

[+] johnny531|15 years ago|reply
If only c++ had some sort of static type system which could be leveraged to provide compile-time checks...

But seriously, this is a large part of the power of c++'s type system. Taking the article's example, if the argument types were of (user class) 'non_zero_float', there's no possibility for error.

You still have to check that your input is non-zero at some point, but you've now focused it into one place (the 'non_zero_float' class ctor), and other chunks of your program depending on those type semantics no longer need to worry about it.
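A sketch of the 'non_zero_float' idea the comment describes (the class is the commenter's hypothetical, so details here are guesses): the zero check happens exactly once, in the constructor, and the type system then carries the invariant everywhere else.

```cpp
#include <stdexcept>

// Invariant: value_ is never zero. Enforced in one place, the ctor.
class non_zero_float {
public:
    explicit non_zero_float(float v) : value_(v) {
        if (v == 0.0f)
            throw std::invalid_argument("non_zero_float: zero not allowed");
    }
    float get() const { return value_; }
private:
    float value_;
};

// divide() needs no error path at all: the type rules zero out.
float divide(float num, non_zero_float den) {
    return num / den.get();
}
```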

[+] adrianN|15 years ago|reply
You can't prevent exceptions when you do IO or dynamically allocate memory.
[+] AshleysBrain|15 years ago|reply
Today with terabyte harddrives, gigabytes of RAM and broadband connections, when is the binary size a more important factor than both execution speed and ease of development? Especially when the binary size difference is probably not huge?

Shouldn't the advice of this article just be "use exceptions"?

[+] pmjordan|15 years ago|reply
"when is the binary size a more important factor than both execution speed and ease of development?"

Binary size (or maybe more accurately in this case, binary code layout) can be highly relevant for speed due to the instruction cache.

As for ease of development, there are issues with C++ exceptions regarding this as well: some C++ libraries aren't exception safe, and practically no C libraries are. This is something you need to worry about whenever you pass a function pointer into a library, as there might be an exception-unsafe function higher up in the stack. Propagating an exception up through it is potentially extremely dangerous.

That said, using exceptions can still be a good idea, especially if your code doesn't need to be portable or if you know the platforms in advance, and you are careful about passing around function pointers. All you need to do is ensure that any of your code that might be called from third-party code with questionable exception semantics won't throw or propagate any exceptions, e.g. by installing a catch-all exception handler in it.
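The catch-all-at-the-boundary pattern mentioned above might look like this (the callback shape is hypothetical): a callback that may be invoked from exception-unsafe C code must let nothing unwind through it, so it translates any exception into an error code at the language boundary.

```cpp
#include <stdexcept>

// May be called back from C code that cannot be unwound through safely,
// so every exception is stopped here and converted to a return code.
extern "C" int my_callback(void* userdata) {
    try {
        if (userdata == nullptr)
            throw std::runtime_error("no user data");  // stand-in for real work
        return 0;    // success
    } catch (...) {
        return -1;   // swallow / translate at the boundary
    }
}
```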

[+] scott_s|15 years ago|reply
Not all development targets desktops or laptops.
[+] Jach|15 years ago|reply
Space is time.

From the Gentoo wiki: "-Os is very useful for large applications, like Firefox, as it will reduce load time, memory usage, cache misses, disk usage etc. Code compiled with -Os can be faster than -O2 or -O3 because of this. It's also recommended for older computers with a low amount of RAM, disk space or cache on the CPU. But beware that -Os is not as well tested as -O2 and might trigger compiler bugs."

I believe Apple compiles a lot (or all?) of their stuff with -Os.

Anyway C++ exceptions are awful. ;)

[+] yason|15 years ago|reply
How many hundreds of megabytes or gigabytes is our operating system installation?

Size matters a lot because hard drives are stinkin' snails when compared to CPU and RAM. All that stuff needs to be loaded from somewhere, and while SSDs have changed the scheme a bit, there's still a major gap between storage and memory.

[+] zwieback|15 years ago|reply
On my current platform (STM32L micro) we'll have 256K flash, 48K RAM, no hard drive. It's very reasonable to use C++ on such a processor but exception handling might not be something you want to pay for.
[+] pilif|15 years ago|reply
Symbian uses their own kind-of-exceptions (called TRAP, I think) and I've heard that the decision not to use C++ exceptions was founded on binary size constraints.
[+] svlla|15 years ago|reply
Cache sizes have not seen gains proportional to RAM or HDs.
[+] gersh|15 years ago|reply
A few issues. Does he actually benchmark? Things don't always work in practice the way you would think.

If you are going for ultra-high performance, do you even have error-checking? Do you write it in assembler?