item 11495590

The camel doesn't have two humps: Programming “aptitude test” canned

77 points | Smaug123 | 10 years ago | retractionwatch.com

133 comments

[+] optforfon|10 years ago|reply
The title seems to intentionally misrepresent the retraction to push an agenda (or the author has some issues with basic logic)

The retraction is that the strong/definitive proof that "the camel has two humps" was fabricated/exaggerated.

The converse, that it has one hump, hasn't been proven. From the quote, Richard Bornat doesn't say that they re-crunched the numbers and now there is proof of one hump.

The camel has gone back to having an unknown number of humps

[+] braythwayt|10 years ago|reply
In the absence of that paper, we had plenty of reasonable hypotheses about the ability to program. Like “Put 10,000 hours into it, and you’ll master it.”

Or “It follows a distribution similar to other skills involving mathematics.”

The paper presented an unusual hypothesis: that there was this sharp distinction between those who can program and those who never will, and that this distinction was somewhat orthogonal to other measures of scholarship. So for example, one might be a good architect but never get good at programming no matter how much one tried.

You’re right that we haven’t disproven this hypothesis, but given its novelty, the burden of proof is on "the camel has two humps." With the paper retracted, we do not presume it has an unknown number of humps; we presume that skill at writing programs is going to be similar to other skills we observe.

As this article observes, unusual outcomes in attempting to teach programming may just as easily be explained by the fact that in a young field, we may not know how to teach programming.

If we’re going to chase ‘humps,’ perhaps we should look for unusual distributions in the skill of teaching programming. We may be terrible at it, and perhaps the skill we are really observing is the skill of learning to program in spite of didactic and structural obstacles.

[+] _pmf_|10 years ago|reply
> The camel has gone back to having an unknown number of humps

Some sources even go as far as saying there is no camel involved at all, and there may never have been one.

[+] scotty79|10 years ago|reply
> The camel has gone back to having an unknown number of humps

The camel has gone back to being assumed a typical Gaussian camel, like the many camels we don't know much about.

[+] venomsnake|10 years ago|reply
Politics. Science is always better accepted when it conforms, or is spun to conform, to the current zeitgeist. So it is with bad science, no science, and pulled-out-of-someone's-ass (pseudo)science.

And right now the zeitgeist in tech is to increase diversity.

[+] MustardTiger|10 years ago|reply
>The title seems to intentionally misrepresent the retraction to push an agenda (or the author has some issues with basic logic)

Almost certainly the former. Notice the dismissal of scientific evidence because they dislike the implications. Every study on the subject has shown clear racial IQ differences. That is not "pseudo-science". Arguing about why that observation exists is fine, but pretending the observation itself must be wrong because it hurts one's feelings is not reasonable.

>The retraction is that the strong/definitive proof that "the camel has two humps" was fabricated/exaggerated.

It isn't even that. The bimodal distribution is very clearly there, the "retraction" is just to the claim that this is an innate characteristic. Something which was widely repeated, but which the actual paper never even claimed.

>The camel has gone back to having an unknown number of humps

It still has two humps, we just don't know why. But we never knew that in the first place.

[+] panic|10 years ago|reply
This retraction shows we have to be extra critical of things that reinforce our beliefs. Working as a programmer, it can be easy to believe that some people just aren't cut out for it. That doesn't mean a publication written using official-sounding words and methods that happens to match this belief is true at all.
[+] splintercell|10 years ago|reply
But the retraction is super terrible. Imagine someone writes an article which makes a lot of sense, now that person writes a retraction which simply says "oh I was in great mental and psychological trouble when I wrote that".

No, if you want to retract it, then make a case why this is not true. We don't believe that article because we trust the author.

[+] YeGoblynQueenne|10 years ago|reply
So, this discussion reminds me a lot of discussions in logic programming circles about why so many programmers can't learn to program in Prolog, no matter what. There are papers dedicated to the question, people who had done research on the question back in the '90s when Prolog was still popular (yeah, it was, google it). The verdict: no verdict. Some few special and magical people seem to be able to learn to program in Prolog, but the majority can't.

Well then, why not set the bar for who can program and who can't on who can code in Prolog, rather than who can write FizzBuzz? If the point is to sort out the men from the goats, or whatever uncle Joel said we're trying to sort out, surely using Prolog as the sorting tool would leave many, many fewer "real programmers" than FizzBuzz. Hell, we could even combine the two: FizzBuzz in Prolog! See how well you can do at that, internets!

And why stop at Prolog? There's languages that are way, way harder to program in! Hey, if you're a Real Programmer you should be able to write FizzBuzz in, dunno, Ook. Brainfuck. Leboge? Mondrian? Perl, even?

Or maybe, I don't know, we can start asking people to show us what they can do instead of trying to find things they can't? Does that make sense to anyone else? I'd personally love it if people who want to hire me took the time to check out my github and the git repository I'm serving off my public server, in order to get an idea of the things I have already done (and so demonstrably can do) rather than trying to catch me with my pants down in front of a whiteboard.

Look. I write a lot of code. I go over a lot of code I've written. I'm an idiot 80% of the time, but the other 20% of the time I figure it out, fix my bugs and go to town. It works, OK? If interviews can't appreciate this balance then they're useless.

[+] _Codemonkeyism|10 years ago|reply
1. Interesting.

2. I still wonder why around 30% of the people with programming experience who applied to a job when I was recruiting couldn't solve FizzBuzz[1] or string reverse. All of them had either a university MSc or years of programming experience.

The article hints at using the wrong teaching methods.

[1] Given the written instructions: 'Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.'
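For reference, a minimal sketch of both screening questions in Python (the thread doesn't name a language, so that choice is mine):

```python
def fizzbuzz(n: int) -> str:
    """Return the FizzBuzz line for a single number, per the instructions above."""
    if n % 15 == 0:          # multiple of both three and five
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def reverse_string(s: str) -> str:
    """Reverse a string using Python's slice notation."""
    return s[::-1]

# Print the numbers from 1 to 100, substituting Fizz/Buzz/FizzBuzz.
for i in range(1, 101):
    print(fizzbuzz(i))
```

The point of both exercises is that they take a few minutes on a keyboard; whether they measure anything under whiteboard conditions is what the rest of the thread argues about.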

[+] YeGoblynQueenne|10 years ago|reply
How do you ask people to "solve FizzBuzz" and how do you know they've solved it?

I'm asking because if you do a whiteboard test then it's easy enough to get things wrong on both sides of the interview. Humans are just not as accurate as compilers, not even for the simplest of things.

Besides that there's always nerves, misunderstandings, cultural differences etc. In one place I interviewed they asked me to write a factorial method in Java. I assumed they wanted a recursive method, because, factorial, right? Classic recursive example. So I gave them a recursive factorial in Whiteboard Java™. I turned around from the whiteboard to see the interviewers staring at me, their mouths hanging open. And not in a good way. It turns out all they wanted was to see if I could write a simple loop. Instead of marvelling at my amazing recursion powers, they got the impression I was needlessly flashy and probably had my head up my arse. Also, seeing them staring I completely panicked and I totally screwed the iterative version: the loop went up instead of down, I was adding where I should be multiplying... a shambles.

So they didn't hire me.

Shit happens in interviews and it's not because "Johnny can't program". It's because "Johnny can't pass a programming interview" (or at least many of them).
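For what it's worth, both factorial versions from the anecdote fit in a few lines each; a Python sketch (the interview was in Java, but the shape is the same):

```python
def factorial_recursive(n: int) -> int:
    # Classic textbook recursion: n! = n * (n-1)!, with 0! = 1! = 1.
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n: int) -> int:
    # The simple loop the interviewers reportedly wanted:
    # count up from 2 and multiply as you go.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```

Neither is harder than the other once written down, which is the point: the failure mode was the interview dynamic, not the math.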

[+] eveningcoffee|10 years ago|reply
[0] has quite an interesting discussion about the FizzBuzz exercise.

It feels a little like a window into the programmer's mind. Some of them are disturbed by the simple repetition and come up with obfuscated or defective solutions, etc.

[0] http://c2.com/cgi/wiki?FizzBuzzTest

[+] trhway|10 years ago|reply
>2. I still wonder why around 30% of the people with programming experience who applied to a job when I was recruiting couldn't solve FizzBuzz[1] or string reverse. All of them had either a university MSc or years of programming experience.

Take a world 100m running champion and ask him to run 30m while making a 360-degree rotation to the left on every third step and a 360-degree rotation to the right on every fifth step... That would be quite a sight, like a drunken duck...

[+] Zigurd|10 years ago|reply
It's perspective. By the end of interviewing 6 or 7 people, YOU know those questions and all the paths to answers cold. You not only know the questions and answers, you know all the ways thinking about those problems is connected to other problems. So, WHO could possibly not know that? Possibly you, before you put together your list of interview tech questions.
[+] p0nce|10 years ago|reply
The problem is that FizzBuzz and string reversing are a kind of trick question: either you have already done them or you haven't. Good people will fail at trick questions very easily in interviews. Since they don't represent real work and aren't statistically significant, why are we using them? They are merely adding random noise to the interview process.
[+] smoyer|10 years ago|reply
I have a really good way of separating candidates into those who will be successful and those who won't - During the interview I ask how they became interested in computer science, programming, etc as well as asking them to provide information on "pet projects" or other significant work they've accomplished.

If someone describes staying up all night as a 13 year-old because they had to finish the code they were working on, that's someone who was passionate about programming before they found out it's one of the better paying professions.

If someone points to finished projects (especially side projects or OSS), then they also have the ability to focus. If someone points to a bunch of barely started projects, they might have great ideas but in a business you also need to execute.

So ... passion and focus, plus a reasonable amount of domain knowledge will make a great employee.

[+] panic|10 years ago|reply
How do you know your method works? Have you compared it side-by-side with other methods, including a control where you decide to hire or not completely at random?
[+] pjc50|10 years ago|reply
> If someone describes staying up all night as a 13 year-old

So, you're explicitly selecting by class background and upbringing? This particular metric also tends to select against women, if you have any applying.

[+] dasboth|10 years ago|reply
"If someone points to a bunch of barely started projects, they might have great ideas but in a business you also need to execute."

I'm not disagreeing, because I agree that side projects show passion, but I'm not sure a lack of side projects necessarily points to a worse employee. Someone could be a great developer in an enterprise environment, from where they obviously wouldn't be able to show you code snippets, but have no side projects because they might have other hobbies.

[+] URSpider94|10 years ago|reply
I think that people are completely misinterpreting the article and the retraction in this thread, whether willingly or not.

I don't think anyone is seriously questioning the fact that some people have more aptitude or intelligence than others. That's a pretty commonly accepted fact. However, the general belief is that intelligence follows a roughly Gaussian (bell curve) distribution, and that it's roughly continuous. Mechanistically, one can imagine that intelligence is a convolution of genetic and other factors that are so thoroughly mixed that there are no discrete steps.

The theory behind a two-humped camel model of aptitude or intelligence would then be that some magic x-factor is so significant that it splits the underlying distribution in two, into the "cans" and the "can nots".

While the existence of such an x-factor would be a really interesting finding, it wouldn't change the underlying fact that some people have more innate ability to program than others, just like some people have more innate ability to be doctors or lawyers or concert pianists than others. It would just tend to make that difference much starker.

What it also wouldn't change is the fact that innate ability doesn't always correlate with success in a given career, or in life. Many factors other than innate ability, such as drive and the ability to collaborate with others, affect whether someone is going to be successful at their job. Also, people can learn. Even someone without innate talent can become very skilled, if they work really hard at it.

So, even if the study were valid, which it's not, the finding itself is more of an academic interest than an excuse to start treating coding job interviews as a "sword in the stone" test of whether you've got "it" or not.

[+] PaulHoule|10 years ago|reply
This is one of the few times I've seen SSRIs blamed for jackass behavior.
[+] drunken-serval|10 years ago|reply
I'm going to second what possibility said. Bipolar depression is a very different animal from standard clinical depression. Mania is ugly, it turns someone you know into someone you don't.
[+] possibility|10 years ago|reply
Bipolar depression rapidly gives way to mania if you try to treat it like unipolar depression with standard antidepressants. The chances of this happening are greatly increased if you've never been diagnosed as bipolar. It happened to someone I know, and it was pretty ugly. The jackass behavior is an unfortunate part of the medical condition. It's common for the shame felt after coming down from a manic episode to throw you right back into depression. You can search for "ssri mania" if you want to learn more, it's a well-known phenomenon.
[+] PaulHoule|10 years ago|reply
Psychologists would reject out of hand any kind of test that doesn't come out as more or less normally distributed. A common complaint about oddball tests such as the Myers-Briggs test or Hubbard's OCA is that they have not been validated to "behave well".

This may well be due to the fact that psychologists have been using parametric statistics instead of nonparametric statistics. In a lot of cases us nonparametric types have given up a little statistical power for "cheap and cheerful" methods that are harder to get wrong.

Psychs and medicals find it very expensive to treat thousands and thousands of patients correctly and carefully observe the results. This is why the Cochrane reviews of the case for almost any drug or other treatment are so depressing -- maybe they need all the statistical power they can get.

[+] aidenn0|10 years ago|reply
I think you have that wrong. One major criticism of Myers-Briggs is that it measures things that are normally distributed, and assigns them to categories.

For an easy example of why (if true) this would be a valid criticism, consider IQ, which is normally distributed. I will give you an IQ test and assign you to one of two categories: smart or dumb. If you are below the median, you're dumb; if you're above the median, you're smart.

It should be obvious why this makes little sense; the large number of people very close to the median are divided in two and put in the same categories as those with either very low or very high IQs.

To assign people to one of two categories, one would want to see a distinct bimodal distribution, then there would be a small number of people for whom which distribution they belong to is ambiguous, and the majority could be confidently assigned to one or the other.
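That intuition is easy to check numerically. Here is a rough sketch; the populations, the cut point, and the width of the "ambiguous" band are invented for illustration:

```python
import random

random.seed(0)

# A unimodal IQ-like population vs. an artificially bimodal one
# built from two well-separated narrow bells.
unimodal = [random.gauss(100, 15) for _ in range(10_000)]
bimodal = [random.gauss(70, 5) if random.random() < 0.5 else random.gauss(130, 5)
           for _ in range(10_000)]

def near_cut_fraction(data, cut=100.0, width=10.0):
    """Fraction of the population within `width` of the cut point,
    i.e. the people a median split would categorize almost arbitrarily."""
    return sum(abs(x - cut) < width for x in data) / len(data)

print(near_cut_fraction(unimodal))  # large: the split is arbitrary for many
print(near_cut_fraction(bimodal))   # near zero: almost everyone is clearly on one side
```

With the unimodal data, roughly half the population sits close enough to the cut that the label is effectively noise; with a genuinely bimodal distribution, almost nobody does, which is exactly the condition under which a two-category assignment is defensible.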

[+] ebrenes|10 years ago|reply
So, this article talks about the retraction of the 2006 paper, which the authors admitted had some failings.

However, I fail to see much discussion about the subsequent paper [1] that addressed these failings and featured improved experiments that seem to still uphold the hypothesis.

[1] http://www.eis.mdx.ac.uk/research/PhDArea/saeed/SD_PPIG_2009...

[+] aidenn0|10 years ago|reply
I always thought the data from "the camel has two humps" merely demonstrated the lack of good pedagogy in the field of programming.
[+] cb18|10 years ago|reply
>All teachers of programming find that their results display a 'double hump'. It is as if there are two populations: those who can [program], and those who cannot [program], each with its own independent bell curve. Almost all research into programming teaching and learning have concentrated on teaching: change the language, change the application area, use an IDE and work on motivation. None of it works, and the double hump persists. We have a test which picks out the population that can program, before the course begins. We can pick apart the double hump. You probably don't believe this, but you will after you hear the talk. We don't know exactly how/why it works, but we have some good theories.

>Despite the enormous changes which have taken place since electronic computing was invented in the 1950s, some things remain stubbornly the same. In particular, most people can't learn to program: between 30% and 60% of every university computer science department's intake fail the first programming course. Experienced teachers are weary but never oblivious of this fact; brighteyed beginners who believe that the old ones must have been doing it wrong learn the truth from bitter experience; and so it has been for almost two generations, ever since the subject began in the 1960s.

http://blog.codinghorror.com/separating-programming-sheep-fr...

Emphasis: "All teachers of programming find that their results display a 'double hump'. It is as if there are two populations"

Emphasis: "Experienced teachers are weary but never oblivious of this fact; brighteyed beginners who believe that the old ones must have been doing it wrong learn the truth from bitter experience"

So the disparity has presented itself in all known historical forms of pedagogy. Some have learned while others haven't, some have even learned with entirely self-directed pedagogy.

When an array of teachers has been giving it their all for decades but the disparity remains, should we really blame the teachers?

[+] regularfry|10 years ago|reply
(2014)
[+] JoBrad|10 years ago|reply
And yet, people in these comments are still clinging to it. Internet memes are the new afterlife.