I find there are roughly two categories of languages that are worthwhile (based on my opinion, of course):
1. Easy to write/produce: languages/frameworks that are easy to write in because they are highly expressive (Haskell/Scala) or very opinionated (Rails).
2. Easy to read/maintain: verbose languages with excellent tools... cough... Java/C#.
As for reading code, I don't know what it is about crappy verbose languages, but I have yet to see Java/C# code where I couldn't figure out what was going on. Sure, I have more experience with these languages, and the tools (particularly code browsing with an IDE) make it so much easier... but most people have that experience and those tools too.
The reality is that language dilettantes think writing code is painful (as the author mentions in the first paragraph), but the real bitch is maintaining it.
I feel like there must be some diminishing returns on making a language too expressive, implicit, and/or convenient but I don't have any real evidence to prove such.
I would add a third category, which is not exclusive with the other two: languages that provide good tools for abstraction.
The pitfall with #1 is leaky and obscure abstractions. It's easy to write code that has performance problems or requires a lot of understanding of moving parts not actually related to the problem at hand. Where's the code responsible for putting the current state on a web page? All I see is a bunch of monad transformations and I don't know what they're for! Sure, I can figure out what's going on eventually, but I'll have to read a lot of CS papers first.
The pitfall with #2 is the lack of ability to write a suitable abstraction for the problem. Instead, the problem has to be fit to the language. You end up with either something relatively simple but inflexible, or a large amount of incidental complexity. Why do I need to implement AbstractThingPutterOnPageGenerator and generate a ThingPutterOnPage before I can put a thing on the page? Couldn't this just be called putThingOnPage() and use some optional args when the default behavior doesn't cut it? Sure, I can figure out what's going on eventually, but I'll have to read a lot of code first.
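The contrast between the two pitfalls can be sketched concretely. The names below (putThingOnPage and the factory types) are invented for illustration, riffing on the exaggerated example in the comment above, not taken from any real framework; TypeScript is used only as a convenient notation:

```typescript
// Style #2: ceremony-first. Every behavior needs a named type.
interface ThingPutter {
  put(thing: string): string;
}

abstract class AbstractThingPutterGenerator {
  abstract generate(): ThingPutter;
}

class DefaultThingPutterGenerator extends AbstractThingPutterGenerator {
  generate(): ThingPutter {
    return { put: (thing) => `<div>${thing}</div>` };
  }
}

// The alternative: one function, with optional args for the rare cases.
function putThingOnPage(thing: string, opts: { tag?: string } = {}): string {
  const tag = opts.tag ?? "div";
  return `<${tag}>${thing}</${tag}>`;
}

// Both paths produce the same markup; the second says so with far fewer moving parts.
const viaFactory = new DefaultThingPutterGenerator().generate().put("hello");
const viaFunction = putThingOnPage("hello");
console.log(viaFactory === viaFunction); // true
```

The factory version isn't wrong, it's just incidental complexity: three declarations stand between the reader and "put a div on the page".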
I think Lisp has always been strong in the third category, and that Clojure is a Lisp especially suited to real-world use right now. The heavy emphasis on defining code in terms of generic operations on generic data structures is a particular strength. For something more mainstream, Python does pretty well here. That's largely cultural though; Python has a very comparable feature set to Ruby, but Ruby's community doesn't have "explicit is better than implicit", the lack of which can lead to code which is impenetrable rather than merely dense.
> I feel like there must be some diminishing returns on making a language too expressive, implicit, and/or convenient but I don't have any real evidence to prove such.
I think it is understood that the more expressive your language is, the more difficult it is to build tools for it. For example, Common Lisp-style (non-hygienic) macros are hard to support in a debugger (by which I mean, hard to allow the developer to step through their code as they wrote it, rather than stepping through the final expanded form). Dynamic dispatch makes it difficult for tools to answer "who calls this function?" and "which function does this call invoke?" (not impossible, given some forms of static typing, but more difficult in general).
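A minimal sketch of the dynamic-dispatch half of this (hypothetical Widget/draw names, TypeScript used only for illustration): at a single call site, a static tool can at best narrow the possible targets to the known implementors; the actual target is chosen at runtime by the type flowing in:

```typescript
interface Widget {
  draw(): string;
}

class Button implements Widget {
  draw(): string { return "button"; }
}

class Slider implements Widget {
  draw(): string { return "slider"; }
}

function render(w: Widget): string {
  // One call site, several possible targets: the "which function does this
  // call invoke?" answer is a set of methods, not a single edge in the call graph.
  return w.draw();
}

const widgets: Widget[] = [new Button(), new Slider()];
console.log(widgets.map(render)); // the two draw() bodies, chosen at runtime
```

With static types the tool can at least bound the target set to Widget's implementors; in a fully dynamic language even that bound disappears, which matches the "not impossible... but more difficult in general" caveat above.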
There's the cognitive load of the language itself. There's the cognitive load of the libraries. There's the cognitive load of the algorithm. And there's the cognitive load of the actual code (number of lines times how hard each line is to read - smart coding conventions help quite a bit here).
But it's not that simple, because people are different. Different people have different cognitive load when presented with the same language. I think this is one of the reasons Haskell is so polarizing - it either has a low cognitive load for you, or a very high one. And if it has a very high one, you're not likely to spend the time and effort to get to the point where it has a low cognitive load for you.
> I feel like there must be some diminishing returns on making a language too expressive, implicit, and/or convenient but I don't have any real evidence to prove such.
I think that going too far in any of those directions probably increases total cognitive load, by making some other component worse.
I admittedly don't have to read Java or C# code often; the few times I had to, though, it was a fair bit of pain -- I would much rather figure out someone's Perl code than deal with either of these.
The problem was not with the languages themselves -- they are just fine, and I actually quite like C# -- but it seems that a lot of third-party library authors for these languages really go all out on design patterns, abstracting everything, etc., in the process making the simplest things quite impenetrable. Could have been just my luck, though.
There are a couple of implicit assumptions in the final paragraphs that I think should be made explicit. One was that we can evaluate these languages based on this experiment with a single programmer. Another was the not-clearly-defined term strong work ethic, by which I think he means someone who will strive to make the program work properly, not have horrible kludges, will avoid known problematic aspects of the language, etc.
The problem with these assumptions is that you don't run into situations like that often. You're far more likely to run into a team of people of mixed abilities, and with some languages, one or two of them will be able to inflict horrors on the whole codebase.
> Another was the not-clearly-defined term strong work ethic, by which I think he means someone who will strive to make the program work properly, not have horrible kludges, will avoid known problematic aspects of the language, etc.
Indeed. Using a different definition of "strong work ethic", I've met programmers who had too strong a "work ethic" - using it as an excuse or crutch to scoff at improvements to code readability or maintainability. After all, if you just power through it with enough overtime, you can wade through even the worst codebases, so why bother cleaning it up? To make things easier? And you want to take a break to step back, think on the problems, and discuss options instead of just sitting down and coding more? Sounds like you're just looking for excuses to be lazy - put down the coffee and get back to work!
Needless to say, this can lead to a lot of firefighting and damaged morale.
Ivory tower academics can get too caught up on theory to practice effectively. On the other hand, that's probably still preferable to the COBOL-only programmer that doesn't understand why things have changed since the 1970s - after all, COBOL can do anything your newfangled languages can! Better than either: Give me a practical polyglot. Preferably one who hates whatever terrible language we're going to be using, with a laundry list of issues that language has to back up that hate. Why such a hater? Because that hate sounds like the impassioned voice of experience with these problems (and how to mitigate or avoid them, even if one of those options - switching languages - isn't on the table.)
The 'single programmer' bit is the most damning, IMO. It assumes that a programming language is objectively good or bad, when languages are instead a medium for creation, where the personal affinities of the creator are the most important factor.
Precisely: language/library/framework decisions affect the whole team. If you choose something weird, you'll quickly be cursed under your teammates' breaths -- if not screamed at to your face -- because eventually they're going to need to work on your code, too.
That's not to say you can't bring new technologies into a company. I've done it several times. You just need to understand that it's a big undertaking to get an entire team to buy in and learn that new tech.
Of course for hobby projects, go hog wild. That's how I pick up new languages.
Temper the soldier rather than steel, and a club becomes a sword. A fairer example might be to compare the Nix and Guix package manager codebases, which aim to implement the same model of declarative system wide dependency management. The former is written by a university team in C++ and Perl, the latter by GNUcolytes in Guile Scheme and a touch of C.
My takeaway here is that it only pays to be a programming language dilettante if you are actually building a programming language, especially a dedicated language for a new platform. Otherwise, you're mostly going to be subject to whatever's already in use on your platform of choice, up to minor tweaks in that language over time.
Unfortunately, the real value of anything is almost entirely due to extrinsic factors. Air? Very valuable if you're underwater, on the moon etc.
Which human language? The one spoken by the people you need to communicate with is most valuable.
The first iPhone? Very valuable then; not today.
But some people love intrinsic value. And it's what they create that ends up having real value. They would say that intrinsic value is the only "real" value. They aren't very practical.
> They would say that intrinsic value is the only "real" value.
Thing is, this is a testable hypothesis (at least in theory): measure whether those "intrinsically valued" languages make a true impact on software cost. Often, it is the very same people who tout this intrinsic value who deliberately shy away from testing this hypothesis empirically.
It's interesting that when Java's original designers analyzed customer needs vs. features offered by academic languages, they discovered that most value in those languages wasn't in the linguistic features but in the extra-linguistic features, so they deliberately put all the good stuff in the VM, and packaged it in a language designed to be as unthreatening and as familiar as possible. It was designed to be a wolf in sheep's clothing:
It was clear from talking to customers that they all needed GC, JIT, dynamic linkage, threading, etc, but these things always came wrapped in languages that scared them. -- James Gosling[1]
[1]: https://www.youtube.com/watch?v=Dq2WQuWVrgQ
Give me Scala and a real-world problem vs someone using PHP or Javascript, and I will beat them on the initial write, and destroy them on the maintenance. I wouldn't use Scala professionally if I didn't believe this.
In the short term practical concerns can be more important than PLT ones - in five years' time I'm sure Idris will be a better language than Scala, but for some tasks it isn't yet - apart from anything else, you need a strong library/tool ecosystem before a language is truly useful. But that's a temporary state of affairs. If you were making this kind of judgement 20 years ago, and chose a popular language like Perl or TCL or C++ over a theoretically-nice language like OCaml or Haskell, how would you be feeling about that decision today?
Give me Java and a real-world problem vs someone using Scala, and I will beat them on the initial write, and destroy them on the maintenance. :)
> in five years' time I'm sure Idris will be a better language than Scala
Idris? In the entire history of computing there has been a single[1] complete, non-trivial (though still rather small) real-world program (CompCert) written in a dependently typed language. Even though the program is small and the programmer (Xavier Leroy) is one of the leading experts in dependent-type-based verification, the effort was big (and that's an understatement), and the termination proofs proved too hard/tedious for him, so he just used a simple counter for termination, with a runtime exception if it ran out. Idris is a very interesting experiment, I'll give you that. But I don't see how anyone can be sure that it would work (although you didn't say it can work, only that it would "be a better language than Scala", so I'm not sure what your success metrics are).
[1]: Approximately, though I don't know of any other.
Only if the new thing legitimately and very clearly solved a real problem that presented a credible barrier to work. That's a very high bar to clear when creating a new programming language.
This article isn't exactly wrong. Certainly, running on your target platform and having library support for the things you're trying to do are critical features for getting anything done, and a great language that lacks these things is the wrong tool for the job. That doesn't mean criticism of bad design choices in, say, Javascript is mistaken, or as the author describes it, "troubling". It just means you probably have to use Javascript anyway[0].
It also leaves out another reason for learning languages and using them for pet projects: it makes you a better programmer. The more good languages you know, and idioms from those languages, the more likely you are to recognize when an ad-hoc implementation of one of those idioms is the right solution to a problem in the language you're actually using.
[0] Though possibly only as a compilation target.
When doing serious development and hitting these issues, the workaround isn't to continue using a broken language. The workaround is to use a different language. The first question 'Does this language run on the target system that I need it to?' isn't a yes or no question.
Even something as blatantly broken as the pre-ES6 scoping rules in JavaScript isn't the fundamental problem it's made out to be. It hasn't been stopping people from making great things with the language.
No, it hasn't been stopping them, but I guarantee you it's been slowing them down, at least a little. If nothing else, it makes the language a little bit harder to learn than it needed to be. I'll wager it also causes actual bugs that people have to spend time tracking down. It's true that those bugs can be avoided by proper discipline, but the brain cells required for enforcing that discipline could have been used for something else.
ETA: I agree with the author that a certain pragmatism is useful in selecting a language for a particular project, but I still think it's important to raise people's consciousness about warts in language designs. Doing so improves the odds that the next language someone designs to scratch their personal itch, but that happens to catch on for some reason, will have fewer such warts.
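For readers who haven't hit the wart in question, the classic pre-ES6 trap is that `var` is function-scoped, so every closure created in a loop shares a single variable. The sketch below uses TypeScript syntax, but the `var` behavior is exactly the old JavaScript one:

```typescript
// Pre-ES6 behavior: `var` is function-scoped, so all three
// callbacks close over the same `i`, which ends the loop at 3.
function makeCallbacksWithVar(): Array<() => number> {
  const callbacks: Array<() => number> = [];
  for (var i = 0; i < 3; i++) {
    callbacks.push(() => i);
  }
  return callbacks;
}

// ES6 fix: `let` gives each loop iteration its own binding of `i`.
function makeCallbacksWithLet(): Array<() => number> {
  const callbacks: Array<() => number> = [];
  for (let i = 0; i < 3; i++) {
    callbacks.push(() => i);
  }
  return callbacks;
}

console.log(makeCallbacksWithVar().map((f) => f())); // [3, 3, 3]
console.log(makeCallbacksWithLet().map((f) => f())); // [0, 1, 2]
```

The pre-ES6 workaround was an immediately-invoked function wrapping the loop body -- exactly the kind of discipline-consuming boilerplate being described.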
Over 90% of the first 100k LOC I wrote was ES5 JavaScript or CoffeeScript. I don't actually recall even a single instance when I was bitten by a bug related to lexical scope on the job. Maybe the problem is people expecting JavaScript to work like Java or some other block-scoped language.
Async bugs, on the other hand, were nightmarish at times.
Matlab? Custom functions were (or still are?) one function per file. I had a bunch of function files in a project directory; my mind was literally scattered.
Then I moved on to languages that allow binding a function to a variable. I had far fewer files. Simpler.
FP with anonymous functions further frees my mind from naming things, so there's zero chance of mistyped function names. Easier maintenance? Sure.
Those didn't stop me from getting work done; however, I prefer not to waste time on weaker programming languages, although coding in those languages did broaden my mind (yeah, now I know they suck).
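The point about first-class and anonymous functions can be illustrated with a small sketch (TypeScript, with invented helper names): small helpers become local bindings instead of one file each, and inline lambdas need no name at all:

```typescript
// First-class functions: helpers are ordinary values bound to variables,
// not one file per function as in old-style MATLAB.
type Transform = (x: number) => number;

const square: Transform = (x) => x * x;
const increment: Transform = (x) => x + 1;

// Anonymous functions need no name at all, so there is no name to mistype:
const result = [1, 2, 3].map((x) => x * 2).filter((x) => x > 2);

console.log(result); // [4, 6]
console.log(increment(square(3))); // 10
```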
Maybe, but what's actually important is investment cost versus the ratio of supply to demand.
There are lots of Java jobs, yes, but there are also lots of Java programmers. I'm a Python dev (now), and while I have to search a little longer for jobs, I still get good pay, since I'm also rarer.
kazinator | 9 years ago:
> Temper the soldier rather than steel, and a club becomes a sword.
WTF, I haven't heard that one before. Did you make it up?
When I google it, your above comment is the third hit, and the previous two aren't relevant.
guelo | 9 years ago:
> If you were making this kind of judgement 20 years ago, and chose a popular language like Perl or TCL or C++ over a theoretically-nice language like OCaml or Haskell, how would you be feeling about that decision today?
How about PHP or Javascript?
angersock | 9 years ago:
Every developer should be forced, I believe, to read Arthur C. Clarke's story Superiority ( https://en.wikipedia.org/wiki/Superiority_%28short_story%29 ) and to reflect on its application to their profession.
EDIT: Story can be found here... http://www.mayofamily.com/RLM/txt_Clarke_Superiority.html
xigency | 9 years ago:
Take a look at this example - http://blog.fogcreek.com/the-origin-of-wasabi/
A language that compiles to PHP and ASP, what a relief.
And for the contemporary result - http://blog.fogcreek.com/killing-off-wasabi-part-1/
When the platform catches up, then you can go back to mainstream development with a useful language.
angersock | 9 years ago:
Is it harder to learn a smaller thing with some tricks than a larger thing with few tricks?
Those are two questions I often ask myself when thinking about the evolution of the JS ecosystem.
sgt101 | 9 years ago:
It also misses the point: it's not the job at hand that matters, it's the 10,000 jobs of keeping the thing alive that matter.