This highlights how "good" programmers are simply those who are most able to think like a computer. The article's language further reinforces this (perhaps intentionally), e.g. the reference to "compiling" functions.
I find this constraint interesting - I don't know whether computational thinking is a strength or a weakness. Increasingly, it seems that computational models occur naturally, and therefore the ability to think in such a manner would have interdisciplinary value.
If we deem it a weakness, then programming becomes a UX problem rather than a language one. The lack of change both within and across programming paradigms would suggest that many don't believe this to be a fundamental issue.
I'd place my skill bars higher: "thinking like a computer" seems fundamental. I don't think one can be even a mediocre programmer without it, yet on its own it's not enough to make you a good programmer.
A good programmer also needs to be able to effectively communicate through code to both others and themselves, know how to design maintainable programs, how to preemptively avoid bugs by making their code harder to misuse, and too many other skills to list here -- and not all of those naturally flow from knowing how to think like a computer.
This actually touches on Alan Kay's statements about programming with "what" instead of "how". It has made me seriously consider logic programming for the applications I write.
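Not from the thread, but a minimal sketch of the "what" instead of "how" distinction, in the spirit of logic programming. All the names and facts below are made up for illustration: we state a relation declaratively over a toy fact base rather than prescribing step-by-step control flow.

```python
# A toy fact base: we state *what* is true, not *how* to traverse it.
# All names are hypothetical.
parent = {
    ("alice", "bob"),
    ("bob", "carol"),
    ("bob", "dave"),
}

def grandparent():
    """Rule: X is a grandparent of Z if X is a parent of some Y
    and Y is a parent of Z -- a direct transcription of the 'what'."""
    return {(x, z)
            for (x, y) in parent
            for (y2, z) in parent
            if y == y2}
```

In a real logic language (Prolog, Datalog) the engine would also derive the rule in other directions, e.g. answering "whose grandchild is carol?"; the comprehension above only captures the declarative flavor.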
Computers don't think; they reason formally. Thinking like a computer is just a matter of learning to reason formally and exactly instead of employing fuzzy mental abstractions.
This is hard primarily because we're wired for fuzzy mental abstractions: they make loads of tasks (natural language, making breakfast, manual labor) far easier than genuinely formal approaches would, and we have a lower layer of the brain that can be trained to follow exact procedures quite well anyway.
With more research along these lines, we may reach a day when a programming job interview consists of interacting with some codebase for 10 minutes, after which the model spits out our normalized scores on syntax comprehension, algorithmic thinking, library familiarity, etc.
I wonder whether, by the time we get that far, the computer will be able to write the code for us anyway and programmers will be redundant. It's just a matter of when, not if. Understanding human cognition well enough to evaluate it is a good indicator that the singularity is near.
I would love to see this kind of study done comparing different programming paradigms (e.g. functional vs. procedural vs. OO vs. logic). Some concrete cognitive data on whether Lisp-like languages actually make programmers more productive would be quite fascinating, I think.
I’m not aware of many eye-tracking studies specifically, but there has been a lot of research done on the general subject of how we read and understand code. Search for papers on “program comprehension”.
To address (some of) your specific point, I suspect part of the appeal of functional programming languages is that they naturally promote describing the data flow of the code. In contrast, imperative languages inherently dwell on control flow, even though it’s often an unimportant implementation detail. It turns out that even when we’re reading imperative code, we’re trying to figure out the underlying data flow anyway, so why not cut out the middle step?
I also suspect part of what holds back functional programming languages from wider acceptance is that when you’re modelling something where time/order matters, the control flow is not an unimportant implementation detail. You have to work harder to describe it in a functional language where you get a lot for free with an imperative programming language.
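Not from the original comment, but a minimal sketch of the contrast described above (function names are my own invention): the imperative version makes the reader track loop state and mutation to recover the data flow, while the functional version states the data flow directly.

```python
def total_of_even_squares_imperative(xs):
    # Control-flow style: explicit loop, branch, and mutable accumulator.
    total = 0
    for x in xs:
        if x % 2 == 0:
            total += x * x
    return total

def total_of_even_squares_dataflow(xs):
    # Data-flow style: filter, transform, reduce -- stated directly.
    return sum(x * x for x in xs if x % 2 == 0)
```

Both compute the same result (e.g. `[1, 2, 3, 4]` gives `4 + 16 = 20`); the difference is only in how much incidental control flow the reader has to reconstruct.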
My "Brain-Clearing Whiteboard" has a scrawl on it about investigating "refactorability" as an empirical code metric. I'm taking this Hacker News convo as raw material for figuring out what that would constitute, but broadly speaking my hypothesis is that "more refactorable is better".
I see how this is interesting, but I can't get over the author saying "When contrasting this with Eric's video", when "Eric's video" shows a different program than the novice programmer's. Maybe this is me being too pedantic.
Not just a different program, a better one, and easier to understand.
It especially annoyed me when he praised Eric for going back to understand the function first, when that function isn't even present in the novice's program.
In light of recent mass-surveillance uproars, the possibility of remote eye-tracking technology is what really scares me. While its technological usage could be limited or masked, such an attention detector and analyzer could give enormous power to those who hold the behavioral models.
keefe | 12 years ago
EDIT: should have RTFA before commenting; the gaze-tracking stuff is very cool.
Chris_Newton | 12 years ago
https://news.ycombinator.com/item?id=4940952
ansgri | 12 years ago
Very interesting research, though.
garysweaver | 12 years ago
Really? How about "adequately understood the function and did not have to reread it"? We are humans, not compilers. ;)