
Ending the Era of Patronizing Language Design

56 points | raganwald | 16 years ago | blog.objectmentor.com

57 comments

[+] okmjuhb|16 years ago|reply
This article has so much crazy that I almost don't know how to respond to it.

C++ coddles programmers and Ruby doesn't. Really? That sounds like a sensible thing to say?

C++ avoided reflection because it would be misused by programmers? Only a profound misunderstanding of the design goals of C++, or having never actually used it, could inspire such a claim.

ActiveRecord is a Rails innovation that took decades to realize? What? Why would you think this?

[+] Artemidoros|16 years ago|reply
So, why aren’t more people crashing and burning?

because:

a) You're awesome, as long as you're using Ruby no matter how ridiculous you look when seen without rainbow colored glasses - http://www.infoq.com/presentations/ford-large-rails

b) The "you're doing it wrong crowd" will shout you down the moment you admit having troubles - http://glu.ttono.us/articles/2007/04/15/on-twitter-rails-and...

c) Most failed Ruby (which in all fairness usually means Rails) projects never gain any publicity.

d) Ruby projects coming close to the gargantuan scope of some of the bigger Java projects are (thankfully) exceedingly rare (if not non-existent).

Lastly I had no clue that Saeed al-Sahaf is now working for Object Mentor, awesome!

Disclaimer: I worked for 3 years as a Ruby developer, including on one of the bigger (> 50,000 LOC) Rails projects out there, and I encountered some crashing and burning myself (I know I did it wrong).

[+] gfodor|16 years ago|reply
The author acts as though languages like Ruby, which defer to the programmer not to screw himself, are a new thing. Just because the "enterprise" has been trapped with C++, C#, and Java for the last two decades says less about language designers in general (as the author posits here) and more about the thinking of large organizations in choosing languages to build their software in.
[+] btilly|16 years ago|reply
He admits that he doesn't know Ruby deeply, but then he goes on to assert that Ruby people don't seem to create messes that paint themselves into corners.

From what I've read plenty of people have done just that in Ruby because of monkeypatching.

[+] Tamerlin|16 years ago|reply
My first Ruby programming experience was with a company sunk by Rubyists, their asinine architecture, and constant monkey-patching.

Rubyists most certainly DO hang themselves with monkeypatching, quite often. You just don't tend to hear about the ones that do that, because the sites they build that way don't survive long enough for anyone to notice, assuming that they even launch them successfully, which is rare.

In fact, I think that because Ruby is a much more forgiving language than, say, C++ (the same goes for Python), the average caliber of Ruby developers is FAR lower than the caliber of, say, C++ and Lisp developers. My experience with Ruby and Python developers hasn't been positive -- in fact, working at a Python shop gave me an intense distaste for the language, and my experience at a Ruby shop led to similar sentiments among a lot of the senior developers who worked there.

[+] swombat|16 years ago|reply
I've been coding in Ruby/Rails for 3 years now and been active in the community, and I haven't yet done that or encountered anyone who has. People are warned that monkey-patching is dangerous, they learn how to monkey-patch responsibly, and they act accordingly. If something does go wrong with monkey-patching (which, as I've said, seems not to happen), the person who did the patching is aware of why things are going wrong and doesn't blame it on the language. I can't remember reading a single query in, say, #rubyonrails where the problem was due to monkey-patching.
[+] angelbob|16 years ago|reply
No, he's asserting that they're aware that's a problem and that they need to be careful of it.

Some succeed, some fail.

Rather like using pointers, in most ways, which can also cause some pretty unmaintainable messes, but don't if you're really, really careful :-)

[+] rue|16 years ago|reply
> He admits that he doesn't know Ruby deeply, [...] From what I've read plenty of people have done just that [...]

Your argument seems to have a slight inconsistency.

[+] jamesbritt|16 years ago|reply
"From what I've read plenty of people have done just that in Ruby because of monkeypatching."

Tends to be the people who think of modifying open classes as "monkeypatching" who fuck it up because they still see it as some sort of quirk in the language rather than just another part of the language ecosystem.
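As a sketch of what "modifying open classes" means in practice (the method name here is made up for illustration):

```ruby
# Ruby classes are open: any class, including core classes like
# String, can be reopened later to add or change behavior.
class String
  # Hypothetical helper, now available on every String in the process.
  def shout
    upcase + "!"
  end
end

puts "hello".shout  # => HELLO!
```

The power and the danger are the same thing: the change is global, so two libraries patching the same method can silently stomp on each other.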

[+] ccc3|16 years ago|reply
Another problem I see with creating artificial barriers is the demoralizing effect it can have on someone trying to learn a language. Whenever I'm prevented from doing something because some committee thought it was too complex for me, I begin questioning whether I'm using the right tool. Who knows what other random rules are going to jump out and bite me later.

I would much rather do something the wrong way and be burned by it (and probably learn a lot in the process), than be prohibited outright from using certain techniques.

[+] Rickasaurus|16 years ago|reply
I like the idea of having the freedom to do everything but also having intelligent defaults to nudge the programmers in the right direction. Optional immutability is a great example of this. When you make something the default, you send the message "this is usually the right way to go about things".
[+] prodigal_erik|16 years ago|reply
When immutability is optional, you can no longer rely on it to reason about the program. Specifically, you have to verify that any two pieces of code you're worried about actually don't interact, because you no longer have a guarantee that they can't. I think a lot of restrictions are like this.

Edit: To quote a great comment on the article,

> Invariants make software more reliable. Weaker languages have stronger invariants. There’s less stuff to test.
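Ruby's `freeze` is one concrete form of optional immutability, and it illustrates both the nudge and its limits (a sketch; the hash contents are made up):

```ruby
config = { retries: 3 }.freeze

begin
  config[:retries] = 5        # raises: the hash itself is frozen
rescue FrozenError => e
  puts "blocked: #{e.class}"
end

# But freezing is shallow and opt-in: nothing forces other code to
# freeze its data, so you can't rely on immutability when reasoning
# about the program as a whole.
nested = { list: [1, 2] }.freeze
nested[:list] << 3            # succeeds; the inner array was never frozen
p nested[:list]               # => [1, 2, 3]
```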

[+] barrkel|16 years ago|reply
From the outset, I'm aware that programmers who haven't tried to design programming languages will likely disagree with me; but I'll express what I've learned anyway.

The truth, from the language designer and compiler author's perspective: programmers should be protected from their own worst instincts, and programming languages that do this well are of a higher quality than languages which don't. Patronizing, to a point, is good. The flip side is that features whose use should be encouraged should be included in the box, turned on by default, made easy to use, etc.

Ruby is patronizing because it's memory safe. C and C++ programmers don't like this kind of patronization. Ruby, and Python, are unlike C and C++ in this way: they are rigid and uncompromising in their type safety, disallowing unsafe casts and operations.

Michael makes some category errors in his rant. For example, he says:

"[...] we’ve made the assumption that some language features are just too powerful for the typical developer -- too prone to misuse. C++ avoided reflection"

The problem with this comparison is that C++ avoided reflection in its quest for power (or what C++ users thought was power), rather than out of an avoidance of power: C++ users wanted to hew to a philosophy of paying only for what you need. The problem with that philosophy is that if you make certain features optional, or pay-as-you-go, then the community in general cannot assume that everyone is using the available features. Instead, third-party library authors must assume that only the lowest-common-denominator set of features is available if they want to maximize usage.

Java and C# have avoided multiple inheritance for good reason; and Michael's rant is not a good reason for them to reintroduce that feature, because it remains problematic.

"The newer breed of languages puts responsibility back on the developer"

This is simply untrue, and bizarrely myopic, in my view. The developer always has responsibility, but the responsibility has shifted to different places, as the emphasis on abstraction level in programming languages has shifted. To take Michael's thesis at face value, you would need to believe that C mollycoddled the user and didn't bite their fingers off when they didn't take care of their pointers, or carefully handle their strings, or foolishly used fixed-size buffers for dynamic input.

Of course C put responsibility in the hands of the developer for these things. But guess what? Ruby, Python, C#, Java etc. all take away responsibility from the developer for these things! Michael says that dynamic languages like Ruby hand over the power of monkey-patching etc. to the developer, and that this is a new development; but to get equivalent power of dynamic behaviour overriding in a C application, you'd be using very abstract structures, likely looking like a Ruby or Python interpreter behind the scenes, where you would have a similar degree of responsibility. But not only that; you'd also be responsible for the dynamic runtime, as well as the memory management and all the other unsafe stuff that comes with C.

[+] j_baker|16 years ago|reply
Interesting points, but I think you're bordering on straw manning the author. I'll grant you that maybe C++ is about as bad an example as you can get of putting responsibility in the hands of the programmer.

In fact, it was behind a lot of Java's decision to take that power out of the developers' hands. I would argue that Java was an overreaction. It corrected the safety issues, but also removed a lot of the ability to create abstractions. Want a global variable? Those are bad, don't use them. Want multiple inheritance? That's bad, don't use it. Want operator overloading? That's bad, don't use it.

The problem is that the fact that those things are bad in some cases (even arguably most cases), doesn't mean they should be forbidden. I know what I'm doing. If I want to use a global variable I should be able to. I know what I'm writing. The language designer doesn't.

In other words, to make a long story short: I don't think he was trying to say that responsibility needs to be in the developers' hands in all cases. But I think that he makes a valid case that newer languages are right in moving some of that responsibility back to the programmer.
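Operator overloading, one of the abstractions Java forbids, is just an ordinary method definition in Ruby (a toy example; the `Money` class is hypothetical):

```ruby
# A small value type where overloading + reads naturally.
class Money
  attr_reader :cents

  def initialize(cents)
    @cents = cents
  end

  # The operator + is just a method named "+".
  def +(other)
    Money.new(cents + other.cents)
  end

  def ==(other)
    other.is_a?(Money) && cents == other.cents
  end
end

p (Money.new(150) + Money.new(250)).cents  # => 400
```

Whether that reads as elegance or as an invitation to abuse is exactly the disagreement in this thread.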

[+] JulianMorrison|16 years ago|reply
For what it's worth, some "patronizing" limits actually enable optimizations that are otherwise not possible.

For example, if anything might be a pointer and you can point to an offset inside a struct, then you can't have deterministically perfect garbage collection and you can't have garbage collection that moves things, for example to compact the heap. Java solves this by only allowing references to "the top of" objects, and stopping you from casting pointers. Consequently, Java's runtime knows what's garbage, and it can move things around with impunity.

Or consider Clojure, where immutable data means that you can grab a snapshot of a value without read-locking.

[+] Zarkonnen|16 years ago|reply
The problem with saying "you can do things in whatever way you want in Ruby" is that the moment you have more than one developer it collides rather drastically with the "principle of least astonishment". This is not to say that making everything out of, e.g., Java boilerplate is the solution. The key is writing well-behaved and consistent code, which you can do in most languages.
[+] gruseom|16 years ago|reply
This argument is common, but I think it's bogus. There's plenty of astonishing Java out there.

More rigid languages don't lead to less astonishment under complexity. A given line of code may contain less astonishment, because there are fewer possibilities for what it might be doing, but what matters is global understanding, not local. The meaning of a program is not the sum of the meaning of its parts. Otherwise we'd all be writing in assembler; it's easy to say what this is doing, locally:

  mov eax, [ebp + 8]
> The key is writing well-behaved and consistent code, which you can do in most languages.

But the set of problems for which a well-behaved and consistent program is readily writeable is not the same for all languages.

[+] raganwald|16 years ago|reply
The position I take is that while you can write well-behaved and consistent code in any language, there is no language that forces you to do so. So picking a language on the basis that it will force/encourage people to write well-behaved and consistent code--even if they don't want to/don't know how to/don't give a toss because they're a cheap contractor six time zones away--is a broken choice.
[+] jamesbritt|16 years ago|reply

    The problem with saying "you can do things in whatever 
    way you want in Ruby" is that the moment you have more
    than one developer it collides rather drastically with 
    the "principle of least astonishment".
I've solved that problem by actually talking with the other developers and agreeing to some common practices.

Not every problem requires a technical solution. Social solutions often work just fine.

[+] InclinedPlane|16 years ago|reply
I think the best comment on this comes from Glenn Vanderburg:

http://www.vanderburg.org/Blog/Software/Development/sharp_an...

The money quote here is this:

"Weak developers will move heaven and earth to do the wrong thing. You can’t limit the damage they do by locking up the sharp tools. They’ll just swing the blunt tools harder."

Which is so very, very true in my experience.

Designing a language around the idea of protecting weak developers from bad choices is a recipe for failure and mediocrity. Instead, look toward designing in a way that guides experienced or at least thoughtful developers toward greater success.

tl;dr: Don't make bumper cars for people who can't be trusted to drive; make Nomex suits and roll-cages for race car drivers.

[+] jemfinch|16 years ago|reply
It's not about being patronizing. It's about recognizing our human limitations: "As a slow-witted human being I have a very small head and I had better learn to live with it and to respect my limitations and give them full credit, rather than try to ignore them, for the latter vain effort will be punished by failure." (Dijkstra, EWD249)
[+] pmccool|16 years ago|reply
I understood Dijkstra to be arguing for simple languages, being more susceptible to formal proof, easier to understand, etc.

Languages like, say, C# are complicated _and_ patronising. That's the sort of language I thought the article was comparing Ruby with.

[+] jgg|16 years ago|reply
I know everyone's read it 100 times already, but:

Like the creators of sitcoms or junk food or package tours, Java's designers were consciously designing a product for people not as smart as them. Historically, languages designed for other people to use have been bad: Cobol, PL/I, Pascal, Ada, C++. The good languages have been those that were designed for their own creators: C, Perl, Smalltalk, Lisp.

From here: http://www.paulgraham.com/javacover.html

[+] kenjackson|16 years ago|reply
Fun quote, but factually incorrect. For example, who was the first user of C++? Bjarne himself. He needed the power of Simula, but at the time Simula implementations didn't scale. He took the ideas he found useful in the language and put some into C, as he was building large scale C/BCPL applications.

To me that's the definition of building a language for yourself. You actually write a language that you need to solve a specific problem you have.

And I find his categorization of good vs bad languages somewhat absurd. What actually makes Ada, C++, and Pascal bad? His lack of understanding? What makes C, Perl, and Lisp good? The inverse?

While I have my own personal preferences with respect to languages, I fully believe a large part of it is familiarity. I've NEVER met anyone who could argue why a particular language was truly bad. They usually are just passionately arguing religion, and I find that tiring. It was cute when I was in high school, but those debates are getting tired.

[+] chipsy|16 years ago|reply
You have to patronize at least a little bit to write something that is more than a glorified assembly language - otherwise you have no basis to build your other abstractions on.

With C, for example, there's a predefined model for the callstack based around having a fixed set of arguments for function calls and singular return values.

Forth, on the other hand, treats words as simple nested subroutines, keeps data on a global stack, and lets the data carry over from one word to the next. No explicit arguments or return values are necessary.

C can be characterized as "safe," while Forth is "flexible," depending on your use case, but in terms of pure performance both models have strengths and weaknesses.
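The Forth model described above can be sketched in a few lines of Ruby (the word set is made up; real Forth has many more): each word transforms one shared data stack, with no declared arguments or return values.

```ruby
# Minimal Forth-style evaluator: every word reads and writes a
# single shared data stack, so results flow implicitly between words.
WORDS = {
  "dup" => ->(s) { s.push(s.last) },
  "+"   => ->(s) { b = s.pop; a = s.pop; s.push(a + b) },
  "*"   => ->(s) { b = s.pop; a = s.pop; s.push(a * b) },
}

def run(program)
  program.split.each_with_object([]) do |token, stack|
    word = WORDS[token]
    word ? word.call(stack) : stack.push(Integer(token))  # literals push
  end
end

p run("3 dup * 4 +")  # => [13]   i.e. 3*3 + 4
```

Contrast with C, where every call site must name its arguments and a function returns exactly one value through a fixed calling convention.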