item 361153

How can C Programs be so Reliable?

103 points | ltratt | 17 years ago | tratt.net | reply

104 comments

[+] jd|17 years ago|reply
People often write in higher level languages because they want lots of bad code fast. Almost all business applications are CRUD apps (create/retrieve/update/destroy) with some business logic, and they're generally written in C#. The app may crash when you click the wrong button, but the app is cheap to develop and the programmers are easily replaceable.

Of course I'm generalizing, and a lot of C# programmers write great and reliable code. The point is that they often don't -need- to write great code, because lousy code is good enough for all business purposes (except when the software is your product, which is rarely the case).

The second issue is that people who choose C for their projects tend to (a) understand low-level concepts, (b) care about speed / memory usage / reliability / dependencies, (c) don't consider development time that important. That you create more reliable software that way is obvious - the same programmers would create reliable software in C# (or similar language). But there aren't many situations in which development speed doesn't matter, speed and memory usage don't matter, dependencies don't matter but, for some reason, reliability is very important.

[+] scott_s|17 years ago|reply
I come from the systems research world. I've seen people choose C not for the reasons you mention, but just because it's what they know best. This is not always the best thing to do.

One instance is a colleague who needed to do data post-processing, and just did it in C because it was most familiar. A language like Perl or Python would have been a better choice, and saved him time in the long run, since string manipulation in C is tedious and error prone.

[+] luckystrike|17 years ago|reply

  People often write in higher level languages because they want lots of bad code fast. 
I assume you simply meant that programmers who want to churn out something quickly prefer high-level languages. (The majority of them would never write 'bad' code intentionally.)

Your second assertion is spot on. If one needs to develop a high-performing solution optimized for speed and memory consumption, with high reliability, and it also needs to be reasonably cross-platform, one gravitates towards C. Add to this the established heritage/ecosystem, the tools, and the availability of a (relatively) big pool of programmers, and C tends to be a good choice.

The only con is a relatively longer development cycle. (True only for mortals like me though; some people stream out C code faster than I write an email. :-) )

[+] brlewis|17 years ago|reply
Yes, lots of bad code fast is half the reason I use Scheme almost exclusively now after 12 years of C. The other half is converting bad code to good code fast.
[+] greyman|17 years ago|reply
Interesting perspective. I was programming a long time in C, C++ and C#, and I don't feel that programming in C# is easier in some fundamental way, or that C# programmers are easily replaceable. Yes, the mode of thinking is different in each language, but the issue that you have to think carefully is basically the same. ;-)
[+] lhorn|17 years ago|reply
I am not sure why you are bringing enterprise software into the discussion. Despite being the most popular form of employment for "programmers", it has always been the absolutely lowest form of life in a software ecosystem and, therefore, should be ignored and left out of any intelligent discussion about programming.

In the 90s it was Visual Basic, now it's Java and C#, but it has absolutely nothing to do with what most of us consider to be discussion worthy.

[+] scott_s|17 years ago|reply
The author has a unique perspective since he - somehow - skipped learning C until now. He programmed in assembly before, and from his other work he is clearly familiar with dynamic and more modern languages.

Consequently, his perspective on C is that of someone who is new to the language, yet also understands both the fundamentals of what his code will compile down to, and the higher-level facilities that later languages and programming models abstracted away.

[+] qwph|17 years ago|reply
I'm still not convinced that there's anything particularly magical about C here. Reliable programs are written by people who:

* understand the problem domain

* know the implementation language and its supporting library

* pay attention to detail

Admittedly, some languages fit some problem domains better than others, but 90% of the time, picking the language you're personally most familiar with will be as good a choice as any.

[+] michaelneale|17 years ago|reply
that was my experience when I went from ASM to functional languages and then to C when wrangling hardware. I am still surprised at how reliable C apps can be (linux kernel) - hats off to the geniuses involved in making things stable.
[+] wheels|17 years ago|reply
This misses two points I consider important:

- A whole lot of C code is old. Old, actively maintained code tends to be more reliable than new code.

- C tends to be employed in relatively predictable sub-systems. Something like a device driver has a relatively predictable set of states compared to a GUI application. There aren't that many paths. C code for GUIs, in my experience, tends to be at least as buggy as code in other languages.

[+] janm|17 years ago|reply
I disagree with both these points:

In some cases old, actively maintained code is more reliable. However, I have seen many cases where old, actively maintained code (depending highly on the quality of the maintainers) has lost reliability because the original principles of the codebase have been lost.

Device drivers (especially in multiprocessor systems) have very many unpredictable paths. In my experience writing device drivers and GUI code (as well as lots of code in the middle), device drivers are more likely to have the unforeseen codepath. With GUI code, you can constrain concurrency so that possible codepaths are also reduced.

[+] VinzO|17 years ago|reply
You seem to forget that in embedded systems, C is still the most used language. So most of the new systems developed these days have new C code. And embedded systems are everywhere.

Also, a big part of embedded systems have to be reliable for years in hostile environments without external intervention, so I wouldn't say these are easily predictable subsystems.

[+] jodrellblank|17 years ago|reply
Considering it's the first (alpha) release of the first significant C program he's written which is for a new (and therefore little used) programming language, and the page of Converge tools has no mention of testing tools, and the page itself has error messages on it, the claim that it's "not riddled with bugs" seems a touch, erm, cheerful.
[+] jwilliams|17 years ago|reply
The thing about C is that you are generally very aware of the side effects. Aside from the libraries you use, the (data) structures are yours. When you set something to NULL, you know damn well what that means to you.

As the author alludes to as well - in C you're made more aware of the error conditions you can handle and the ones you can't. So you can code to a level of robustness... Exceptions in Java are all well and good, but I haven't seen many implementations that do anything with IOException except for cascade it.

This works really well for programs/modules that can fit in the head of a single programmer. When you go beyond that it gets pretty messy - which is where some of the advantages of metaphors like OO start to help... Course, there is the argument that modules should never get that big, but that's another debate.

[+] Hexstream|17 years ago|reply
"As the author alludes to as well - in C you're made more aware of the error conditions you can handle and the ones you can't."

You mean like when a function silently returns -1 to indicate failure and then you wonder why your program returned a wrong result (if you're lucky enough to even notice)? In the bigger part of most of my programs I want a big, flashy, loud, total failure by default if anything goes wrong.

[+] mleonhard|17 years ago|reply
> What I realised is that neither exception-based approach is appropriate when one wishes to make software as robust as possible.

The author misses the main point of exceptions: they let us separate data processing code from error handling code. This is why we are more productive in languages that have exceptions and our code is easier to maintain.

C requires us to handle errors throughout our program, tightly coupling the data processing code with the error handling code. With apologies to Mr. Spencer, I would declare that:

"Those who code in languages that lack exceptions are doomed to reimplement them, poorly."

[+] demallien|17 years ago|reply
I have to disagree. From my perspective it is an error to think that error handling can be handled separately from the main code path. The classic example that I use to demonstrate this is to point to all of those tedious discussions that programmers have as to whether something is an exception or just normal behaviour of a system. Straight away for me that's a red flag that a non-real distinction is being made.

I honestly don't see any advantage to exceptions over C-style return codes, with one important ...euh... exception: the boilerplate for exception handlers can be handled well by modern IDEs. All the other supposed advantages seem to be just waffling to me.

Take the whole 'Oh, but exceptions make handling errors the default!' kind of argument (several examples on this thread already). Yes, sure, you do have to write exception handlers for all errors in a language such as Java. But my experience is that if I'm writing use-once-and-throw-away code in C, I'll just not use the return code. In Java I'll just stick a whopping great big try/catch around the whole app and be done with it. If I'm trying to write stable code that's going to be around for a while in C, I check the error codes returned by a function every single time, which gives me about as much work as when I am using Java and actually handling different exceptions correctly.

All of which means, for me at least, that exceptions don't add anything to a language, but they do make the language just a little bit harder to learn (remembering exactly how any given language has implemented exceptions, and which resources can still be safely used when is a pain, as each language tends to have subtle differences that can bite you).

[+] msluyter|17 years ago|reply
After having read a lot of these types of language debates, the only conclusions I can safely arrive at are 1) that every language has its advocates and 2) some people are highly productive in their preferred language, much more so than the average programmer would be in their preferred language. But the real question, imho, is how do average programmers compare? How will the same app written in C++ and Java compare, when written by non-superstars? The question merits some empirical research.
[+] alecco|17 years ago|reply
The problem with C is that they left too many things completely in the wild. Strings and memory management are all laissez-faire and everybody does whatever they think is right.

A typical performance issue is calling malloc/free all the time; it can be avoided, but there are no standard ways.

I hope some great features of C++ get some day backported to C. But don't bet money on that :(

[+] scott_s|17 years ago|reply
C is, as the author points out, a portable assembly language. If a portable assembly language is not what you need - for example, you want better string support and memory management - then don't use C.
[+] holygoat|17 years ago|reply
The later C standards add additional features. However, I'd argue that there is no such thing as a "great feature" in C++: almost everything in the language is a mistake, either intrinsically or in combination with other 'features'.

It's a blind pig with fifteen legs, trying to put on its own lipstick while riding a unicycle.

[+] liuliu|17 years ago|reply
C assumes the programmer doesn't make mistakes. As for memory management, I don't think it should be a language feature, because there are some well-implemented libraries out there.
[+] alecco|17 years ago|reply
Programs compiled from C don't change much when run, while dynamic applications vary significantly on every run due to the large runtime environment (e.g. GC). This is bliss for debugging. I've never seen anything like gdb for any other language. You can inspect backtraces, set and change things, watch expressions, and even step through compiled code one line at a time to see what's breaking.

Also, C programs are usually made to run many times and stay alive, so the developer's attitude tends to be more careful.

[+] maurycy|17 years ago|reply
As a person who spent a few years doing C programming, and then got trapped in the Ruby realm, I must say that the main difference is focus. If you have to focus on memory management and strings, you think much more about the code you write.

Theoretically, high level programming should enable you to focus on the abstractions much more, but somehow it doesn't work this way.

[+] jmah|17 years ago|reply
See also: No Silver Bullet
[+] axod|17 years ago|reply
On the downside, most bugs ever discovered are most likely in C code. Especially memory leaks, buffer overflows, etc.
[+] sharkfish|17 years ago|reply
  If, as in the case of extsmail, one wants to be robust against errors, one has to handle all possible error paths oneself.

That's not really a bad thing, as the author points out.

One thing I've always felt a sense of dread about in C# and Java (last Java I did was back in 2001) is that I never truly knew what errors were lurking with their exception handling. It would be really nice if all error possibilities were listed in the documentation so I could pick precisely what to handle.

[+] jd|17 years ago|reply
It's even worse. A top-level function can throw the union of the exceptions thrown by the functions it calls. Therefore, by changing a low-level function's exception signature, the exception signature of all higher-level functions changes too. That always worried me, but it doesn't seem to be a big deal in practice.
[+] jwilliams|17 years ago|reply
> It would be really nice if all error possibilities were listed in the documentation so I could pick precisely what to handle.

Yeah, that's a problem - it's a problem of style and implementation rather than a "Java" problem... but it's a problem nonetheless.

Many exceptions in Java are simply == "it didn't work". Often there is no distinguishing between: (a) it didn't work because you supplied something invalid (e.g. Integer.parseInt("notanint")), (b) it didn't work, but you can probably recover if you want, and (c) it didn't work, and there is nothing you can do about it.

[+] lst|17 years ago|reply
Do they? Mature C code: yes. Modern C code: no; it's often affected by the use of very bad algorithms...