top | item 6583580

Dear Startups: stop asking me math puzzles to figure out if I can code

872 points | brryant | 12 years ago | countaleph.wordpress.com

342 comments

[+] tommorris|12 years ago|reply
Here's the test I've used in the past:

Before the interview, I ask them to write some code to access an HTTP endpoint that contains exchange rate data (USD, EUR, GBP, JPY etc.) in XML and to parse and load said data into a relational database. Then to build a very simple HTML form based front-end that lets you input a currency and convert it into another currency.
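(For illustration, a minimal sketch of the parse-and-load half of this exercise. The XML schema, table layout, and rates here are invented, and a string literal stands in for the HTTP response body so the sketch stays self-contained; a real feed would differ.)

```python
import sqlite3
import xml.etree.ElementTree as ET

# Stand-in for the body returned by the hypothetical HTTP endpoint.
SAMPLE_XML = """<rates base="USD">
    <rate currency="EUR">0.92</rate>
    <rate currency="GBP">0.79</rate>
    <rate currency="JPY">149.5</rate>
</rates>"""

def load_rates(xml_text, conn):
    """Parse the rate feed and upsert each currency into a relational table."""
    root = ET.fromstring(xml_text)
    conn.execute("CREATE TABLE IF NOT EXISTS rates"
                 " (currency TEXT PRIMARY KEY, usd_rate REAL)")
    for node in root.findall("rate"):
        conn.execute("INSERT OR REPLACE INTO rates VALUES (?, ?)",
                     (node.get("currency"), float(node.text)))
    conn.commit()

def convert(conn, amount, src, dst):
    """Convert an amount between two currencies via the stored USD base rates."""
    rates = dict(conn.execute("SELECT currency, usd_rate FROM rates"))
    rates["USD"] = 1.0
    return amount / rates[src] * rates[dst]

conn = sqlite3.connect(":memory:")
load_rates(SAMPLE_XML, conn)
```

The HTML form front-end would then just call `convert` with the two submitted currency codes.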

I ask them to send me either a link to a repository (Git, SVN, etc.) or a zipball/tarball. If the job specifies a particular language, then I obviously expect it to be in that language. If not, so long as it isn't in something crazy like Brainfuck, they have free rein.

If the code works and is basically sane, that goes a long way to get them shortlisted.

During the interview, I'll pull the code they sent up on a projector and ask them to self-review it. If they can figure out things that need improving in their code, that weighs heavily in their favour. Usually these are things like comments/documentation, tests, or improving the structure or reusability. If it's really good, I'll throw a hypothetical idea for refactoring at them and see how they think.

The reason this works is that, despite Hacker News/Paul Graham dogma to the contrary, "smartness" isn't the only thing that matters in programmers. It's actually fairly low down the list. When hiring programmers, I want people who are actually able to do the daily practical job of writing code, modest and self-critical enough to spot their own mistakes, and socially capable enough to actually communicate their decisions and mistakes to the people they work with.

I interviewed a guy who was intellectually very smart and understood a lot about CS theory. I asked him why the PHP code he sent me didn't have any comments. "I don't believe in comments because they slow the PHP interpreter down." Sorry, he can be smarter than Einstein but I ain't letting him near production code.

[+] rdtsc|12 years ago|reply
Two possible reasons:

1) I think a lot of start-ups want to hire "smart" people, because they expect the new person to eventually wear many hats: Objective-C, Java, Android, CSS, server-side concurrency, monitoring. And we've all seen the Hunter and Schmidt reference that tokenadult usually posts when talk about interviewing comes around; it does seem that a general mental ability test (like an IQ test) combined with a work sample predicts the future performance of an employee. Except that one can't just straight up give an IQ test to job applicants (there is a court case about that). So we are left with the work sample (which many forget to give, as is the point of the author). But instead many focus on the GMA and create proxies for it -- cute little puzzles about blenders, round manhole covers, and other such silly things.

2) Those interviewing don't know the technical stuff and are afraid you'd out-bullshit them. "How does an Ajax request work?" Well, if the interviewers themselves don't quite know the details, they might not be able to evaluate the answer properly. They could have it written down, but some technical questions have many different levels of depth that a candidate might descend to, so a quick answer that doesn't match what's written down might seem wrong when really the candidate is just more advanced. Puzzles seem like a generic and "easier" option to handle.

[+] thraw|12 years ago|reply
> 1) I think a lot of start-ups want to hire "smart" people.

What does it really mean to be smart? Lately I cannot stop thinking about it. I have always been considered a 'smart person'. I am a self-taught freelance developer now - it used to be my hobby and somehow (mostly because I needed location-independent work quickly) it became my profession. I get by because everybody thinks I am smart, but I feel like an impostor, because the more I think about myself the more I realize that intelligence is not some general ability to solve problems - it's more a set of very different skills that correlate to a much lesser extent than people usually think. You can be really good at something that people use to judge your abilities and at the same time really bad at something else that is actually required to get the job done.

I studied sociology and briefly worked as a data analyst. It seems to me that this kind of work requires... ehm... a different intelligence than programming. You need to be good at connecting the dots, noticing things, seeing patterns. This is my kind of thinking and I have always been good at it - doubting everything, seeing mere assumptions where other people saw 'truths', permanently creating hypotheses and alternative theories, trying to spot logical fallacies in prevailing theories... basically trying to spot things.

Programming is very different (at least it seems to me - so different that it's even difficult to describe). I guess it's about creating stuff, not just observing stuff. You need to build very complex and abstract mental models, keep them in your head and be able to operate with them - and this is the part of intelligence that I seem to be lacking. It just does not feel natural. I try to solve some problem and I am thinking... if this condition and that condition but at the same time not that condition... and bang!, suddenly I am lost and I don't even remember what I am doing. I cannot keep it in my head. I totally get what the OP was saying about passing anonymous functions in JavaScript - I had the same experience. The first time I encountered something as simple as JavaScript closures it took me two hours to get it. And the day after that I had to repeat the whole mental process to get there again because I somehow lost it overnight. This is simply not how my brain works and I think I am really bad at this. Yet people pay me for this... which is just depressing (and you understand why I write this under a throwaway).

I remember our 'statistical analysis 101' professor always telling us 'remember, you are not really testing hypotheses, you are only testing indicators!' - if HR department picks wrong indicators for the skills that they actually need both company and employee are going to be unhappy - and I think this is very common because our understanding of the indicators for different kinds of 'being smart' is still poor.

[+] morgante|12 years ago|reply
I think (1) is probably the primary reason, along with the fun factor I mentioned above.

But honestly, why can't we just start giving IQ tests? Or at least asking for SAT scores? We shouldn't have to run through training puzzles just to prove we're smart enough to build a site.

[+] unknown|12 years ago|reply

[deleted]

[+] ek|12 years ago|reply
> Spoiler alert: to solve this problem, you need to know how to enumerate the rationals.

This problem was addressed nicely in this functional pearl by Jeremy Gibbons, et al.: http://www.cs.ox.ac.uk/jeremy.gibbons/publications/rationals... . As interesting as the result is, however, it's a pretty well-made point that research-level ideas from the programming languages community are not really software engineering interview material in the vast majority of cases.

This is yet another example of "rockstar developer"-itis, wherein startups are given to believe that they need the best of the best when in fact they do not. This particular example is especially egregious because they asked her about something that requires enumerating the rationals when what they really wanted was an iOS code monkey. Then they fired her, based on their own shoddy interview.

[+] Xylakant|12 years ago|reply
I actually like asking math questions on interviews. It shows how people approach a problem. Asking code questions in an arbitrary interview setting shows just about nothing - no access to a reference doc, somebody peering over your shoulder. Heck, I couldn't code my way out of a wet paperback in that setting.

Certainly, asking only math questions is stupid as well; people should know at least a little about the stuff they're supposed to work with. But teaching an actual language to a smart person eager to learn is a breeze compared to teaching problem solving to someone who memorized the reference manual.

[+] RogerL|12 years ago|reply
Did you read the OP? The point of the article was that one can be great at that sort of thing and yet quite terrible as a production programmer. You are not measuring the skill you want with math problems, you are measuring a proxy.

Furthermore, I've known plenty of smart math people that just never seemed to be able to program (well). I think they are different skill sets, certainly with plenty of overlap, but plenty of differences as well, and those differences matter. I laughed out loud at the "variables, variables everywhere!" answer in the OP - I've had to deal with so much code written that way. Some people, very smart people, just don't 'get' design in that way. I worked with a guy that used to run around the office, asking brain teasers, sharing tidbits of knowledge, but he couldn't execute a basic project - couldn't plan what to do, couldn't do things in a rational order, couldn't experiment and gather data, couldn't incrementally design, develop, refactor, nor do a big-bang waterfall kind of design, and so on. He was not sorely missed when let go. Smart as a whip, and useless (for programming).

I've worked at several companies with staff mathematicians. Sooner or later they got their hands on a compiler. Oh my. No, let me do that. You tell me what is wrong with my Kalman filter, but I'll take care of the implementation, thank you.

It's easy to kvetch at somebody else's answer without offering an alternative (I do agree whiteboard programming is disastrous). So, instead of asking math, why not ask them to write a simple routine, but then start asking real-world questions about the code they would face: how would you make this API interface robust? What kind of documentation would you write? How would you handle errors? Is this code exception safe? Thread safe? How would you make it either/both of those? Suppose your problem size was n=100MM; how might you need to change this (say they have a data structure that loaded everything into memory)? Ask them some problems they will see in production - what is the network delay, or whatever your problem case is. You still get to see how they approach problems, but in the context of the actual decisions they will be making while programming for you.

Anyway, that is what I try to do. I am revising my thoughts even on that, because I find people flopping on the 'code the simple problem (and, it is simple)' yet doing great on all the engineering questions, and doing fine if we hire them.

[+] uniclaude|12 years ago|reply
Asking math questions might exclude some good candidates. I know more than a couple very productive programmers that did not go through a formal CS education. Asking math questions to those candidates could even scare them for no reason. I hired people who had no idea about Lagrange Multipliers but were able to ship code in various languages and even learn new paradigms when necessary.

There are not only smart people and people who recite reference manuals. Being a programmer often means fixing bugs in messed-up codebases, building web apps using the technology du jour, or moving data from one place to another, and asking math questions does not help a lot in finding people able to do this. This blog post resonates with some people I have met.

I have been programming for a while, and went through a CS education, but my experience with hiring made me realize that being good with maths and being a productive programmer are not necessarily two things that always come together.

[+] auggierose|12 years ago|reply
I think you don't get the point. Learning a new language after you've programmed for 5 years in a variety of paradigms (assembler, object-oriented, purely functional) will take you between a couple of hours and a week. But if you have NOT had those 5 years of varied programming experience, then the new language is not your problem. Learning the concepts is, and that will take time.
[+] driverdan|12 years ago|reply
What kind of math questions do you ask? Are they relevant to the job? Is it knowledge that the employee will need for their day-to-day tasks?

Asking math questions that are unrelated to the employee's tasks has strong bias for recent graduates. Ask a 40 year old a question about an equation they haven't used since they were 19 and they won't remember.

I was once asked to do binary math on a phone interview. I hadn't had to do binary math since I was in school 10 years prior. I was pretty much guaranteed to fail. This was not a good test. If I needed it I could easily relearn binary math in a few hours at most.

[+] kenster07|12 years ago|reply
"...a smart person eager to learn is a breeze compared to teaching problem solving to someone who memorized the reference manual."

I would guess that there is actually significant overlap between these two groups.

[+] mcphilip|12 years ago|reply
After much experimentation giving interviews for server side positions, I've come to favor questions that involve routine real world problems that can be handled in increasingly sophisticated ways.

One example I use is getting the candidate to write CRUD, list, and search controller actions for a simple category data structure. Given a basic category data model (e.g. Name, Parent), the candidate starts with the CRUD actions.

CRUD actions aren't meant to be difficult and serve as a basic screener to verify the candidate has working knowledge of the basics. The only edge case I look for the candidate to ask about is whether orphaning child nodes is allowed (i.e. updating a parent node, deleting a node with children).

List action(s) start getting more interesting, since recursion comes into play. A basic implementation of an action that can load the tree given an arbitrary category as a starting point is expected. If the candidate has some prior experience, a discussion of what performance concerns they may have with loading the category tree is a follow-up question. The tree-loading algorithm is then expected to be revised to handle an optional max depth parameter. An edge case I look to be considered is how to signify in the action response that a category has one or more child nodes that weren't loaded due to a depth restriction.
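A sketch of the depth-limited list action described above. An in-memory list of (id, name, parent_id) rows stands in for the database, and the row layout and `truncated` flag are illustrative assumptions, not part of the original exercise:

```python
# Illustrative rows in a Name/Parent category model.
CATEGORIES = [
    (1, "Books", None),
    (2, "Fiction", 1),
    (3, "Sci-Fi", 2),
    (4, "Non-Fiction", 1),
]

def load_tree(root_id, max_depth=None):
    """Recursively build the subtree rooted at root_id. When max_depth cuts
    the recursion off, mark the node so callers know children were omitted."""
    row = next(r for r in CATEGORIES if r[0] == root_id)
    child_ids = [r[0] for r in CATEGORIES if r[2] == root_id]
    node = {"id": root_id, "name": row[1], "children": [], "truncated": False}
    if max_depth is not None and max_depth <= 0:
        node["truncated"] = bool(child_ids)  # signal unloaded children
        return node
    next_depth = None if max_depth is None else max_depth - 1
    node["children"] = [load_tree(c, next_depth) for c in child_ids]
    return node

tree = load_tree(1, max_depth=1)
```

Here `tree` contains "Fiction" with `truncated` set, since its "Sci-Fi" child fell below the depth limit.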

The search action implementation has a degree of difficulty scaled to the candidate's experience level. All candidates have to write an action that returns a collection of categories matching a search string. Those with previous experience are asked about a paging solution. Senior-level candidates are asked to return matching categories in a format that indicates all ancestors (for instance: a "Category 1 -> Category 1.1 -> Category 1.1.1" result for search string "1.1.1").
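A sketch of the senior-level search variant: each match comes back prefixed with its full ancestor chain. The (id, name, parent_id) row layout is again an illustrative stand-in for a Name/Parent model:

```python
ROWS = [
    (1, "Category 1", None),
    (2, "Category 1.1", 1),
    (3, "Category 1.1.1", 2),
    (4, "Category 2", None),
]
BY_ID = {r[0]: r for r in ROWS}

def ancestor_path(cat_id):
    """Walk parent links up to the root, then join the names top-down."""
    names = []
    while cat_id is not None:
        _, name, parent = BY_ID[cat_id]
        names.append(name)
        cat_id = parent
    return " -> ".join(reversed(names))

def search(term):
    """Return matching categories, each shown with all of its ancestors."""
    return [ancestor_path(cid) for cid, name, _ in ROWS
            if term.lower() in name.lower()]

# search("1.1.1") -> ["Category 1 -> Category 1.1 -> Category 1.1.1"]
```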

For an added degree of difficulty, candidates can be asked to recommend data model tweaks and algorithms supporting tree versioning requirements necessary to allow for loading the category tree's state at a given point in time.

The candidate's performance on this exercise seems to give some insight into their level of experience and their ability to implement algorithms for a common real-world example, without having to ask trivia or logic problems.

[+] dpiers|12 years ago|reply
Hiring engineers is hard, and companies haven't really figured it out yet. Even the best companies rely on puzzles and gimmicks that often have little to do with day-to-day programming.

At one company I interviewed with, I was asked to implement a queue using two stacks. At that time in my programming career, I had worked with C, C++, Obj-C, Lua, Python, JavaScript, SQL, and a handful of DSLs developing games, game development tools, and web applications. Want to know what I had never done? Written a queue using two stacks. My immediate response to the question was, "Why would you want to do that?"

If you really want to know if someone has the capacity to pull their weight as an engineer, ask them about what they've built. Even if they are fresh out of college, the best engineers will have projects they can talk about and explain. Ask how they approached/solved specific problems. Ask what they're most proud of building. Ask what was most frustrating.

Those are the kind of questions that will provide insight into a person's problem solving capabilities and offer a decent picture of what they're capable of doing.

[+] cynicalkane|12 years ago|reply
If you're wondering, the "two stacks queue" is an easy way to write a simple immutable queue for a functional programming language. You use one functional stack for enqueueing and one for dequeueing. When the dequeue stack is exhausted the enqueue stack is "flipped" into the dequeue. Amortized runtime is O(1) per operation.
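For the curious, a minimal sketch of the structure (in Python rather than a functional language, but the "flip" is the same idea):

```python
class TwoStackQueue:
    """A FIFO queue built from two LIFO stacks."""
    def __init__(self):
        self._in, self._out = [], []

    def enqueue(self, x):
        self._in.append(x)

    def dequeue(self):
        if not self._out:
            # "Flip" the enqueue stack into the dequeue stack; popping
            # reverses the order, so the oldest element ends up on top.
            while self._in:
                self._out.append(self._in.pop())
        return self._out.pop()

q = TwoStackQueue()
for n in (1, 2, 3):
    q.enqueue(n)
# q.dequeue() now yields 1, 2, 3 in FIFO order
```

Each element is pushed and popped at most twice, which is where the amortized O(1) per operation comes from.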
[+] angrycoder|12 years ago|reply
Exactly. "Tell me about your last project" can easily turn into a 2 hour conversation with any developer who has built anything of substance. It will tell you everything you need to know about them.
[+] austinl|12 years ago|reply
I definitely agree with a model that places emphasis on past work/projects.

"To find out if they can get stuff done, I just ask what they’ve done. If someone can actually get stuff done they should have done so by now. It’s hard to be a good programmer without some previous experience and these days anyone can get some experience by starting or contributing to a free software project." http://www.aaronsw.com/weblog/hiring

[+] Techasura|12 years ago|reply
The best way to go is to ask them about their projects, hear how they explain them and how enthusiastic they are about building new things, and maybe ask them to build one in 2 days with the help of Google. Then I think the quality would just come out naturally, and you can decide whether to hire this guy or not.
[+] vasilipupkin|12 years ago|reply
The problem is, you often want to know what they can build for you in the future. And old projects where someone worked on something very specific for 5 years and exercised a small subset of a particular language or general algorithmic tool kit may not be predictive of their ability to work on YOUR project.
[+] x0054|12 years ago|reply
Here is an interesting idea that I had reading this. As a startup, what if you were to create a simple computer language that looked at least somewhat different from most other computer languages? Alternatively, just use one of the many really obscure programming languages out there; just make sure the applicant does not know it ahead of time. Give the applicant a 10-20 page reference manual for the language and ask them to make a simple program of some sort. Have them read the manual and write the program, ideally while not looking over their shoulder, so they can relax. In the manual you give them, omit one critical function or API reference, but make sure that info is available online (make it available if you made up the language). Then see what happens.

This would test a programmer's ability to learn a new language.

[+] morgante|12 years ago|reply
It is rather unfortunate how little correlation most tech interviews have with their respective jobs. It's largely a lose-lose situation for everyone. Developers who could easily build great systems but aren't experts in graph theory get passed over while brilliant mathematicians who can't necessarily code get hired. Result? Companies simultaneously having to fire employees while facing a supposed talent crunch. Given that this hurts everyone, how did we even get into this situation?

Probably because the only person who doesn't lose from this is the interviewer: they get to have fun. Honestly, when you spend all day buried in code, it's fun to play with puzzles for a change.

Perhaps it's time we started optimizing interviews for hiring success rather than interviewer happiness.

[+] Beltiras|12 years ago|reply
Funny. Just made a hire and this story made me think of it.

The position I was filling is a part-time position for a CS major, sort of like an internship. I devote time to develop his/her skills, s/he would get real-world experience, and a little money to help with cost of living. If everything works out, a position could open up for full employment.

I had a pretty good idea what I was looking for: someone who had a good grasp of theory but no experience coding, preferably enrolled at uni. I had 5 applicants, but the only candidate I interviewed is enrolled in Math-CS.

I basically tried to gauge whether he had deep interests, and asked him to code a bit to solve a simple exercise (find me the article with the highest hitcount from the day a week ago; I gave him 10 minutes).

He failed the coding test but I made the hire regardless, because of 2 things out of the 4 hours we spent together. When I asked him who he considered the father of CS, he rattled off von Neumann, Dijkstra and Knuth. You can make that argument, I suppose, but he knew who the influential people were. The other thing was: even though he failed the coding test, he failed it by not reading the code examples quite right - he was using my code to try to help himself solve the problem. I'm sure he'll work out.

We as a field should employ internships a lot more than we do, get the college kids and undergrads working on real-world problems a lot more than we do.

[+] lotsofcows|12 years ago|reply
I agree. But for a different reason: I'm shit at maths puzzles.

I just don't have the experience or tools or interest for them.

And yet, somehow, in 20 years of business geekery I've never come across a problem I can't solve.

Maybe when writing Tetris for J2ME I would have saved myself 10 minutes of googling if I'd had the experience to realise that right-angle matrix translations don't require fp maths, and maybe when writing financial indicators I'd have saved myself half a day if I hadn't had to look up integrals, but this sort of stuff is definitely in the minority as far as my experience goes.
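The Tetris observation in concrete form: a 90-degree rotation is just an integer index swap on the grid, no trigonometry or floating point needed. A sketch (not the actual J2ME code):

```python
def rotate_cw(grid):
    """Rotate a square grid 90 degrees clockwise using only integer indexing:
    the cell at (r, c) of the result comes from (n-1-c, r) of the original."""
    n = len(grid)
    return [[grid[n - 1 - c][r] for c in range(n)] for r in range(n)]

# An illustrative 3x3 piece bitmap.
piece = [
    [1, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
]
```

Four applications of `rotate_cw` bring the piece back to its original orientation.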

[+] yeukhon|12 years ago|reply
I don't play Sudoku and I don't solve puzzles. I just want to deal with systems and with security, and none of that requires me to understand the tricks to solve a puzzle. Sure, I could learn some cool algorithms, but no thanks - I don't want to solve puzzles. Exactly.
[+] jph|12 years ago|reply
> Breadth-first search from both ends.

I believe this is deeply valuable. For some roles, I would much prefer to hire someone who can quickly see the value of breadth-first search from both ends.

If he/she doesn't happen to know the syntax of Ruby, or Java, etc. it's less important to me.
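For readers unfamiliar with the technique: a sketch of breadth-first search from both ends (bidirectional BFS) on an unweighted graph, expanding the smaller frontier one full level at a time. The adjacency-dict representation is an illustrative choice:

```python
def bidirectional_bfs(graph, start, goal):
    """Length of the shortest start-goal path in an unweighted graph, or None."""
    dist_s, dist_g = {start: 0}, {goal: 0}
    frontier_s, frontier_g = [start], [goal]
    while frontier_s and frontier_g:
        common = dist_s.keys() & dist_g.keys()
        if common:  # the two searches have met
            return min(dist_s[w] + dist_g[w] for w in common)
        # Expand the smaller frontier to keep the two searches balanced.
        if len(frontier_s) > len(frontier_g):
            frontier_s, frontier_g = frontier_g, frontier_s
            dist_s, dist_g = dist_g, dist_s
        nxt = []
        for node in frontier_s:
            for nbr in graph.get(node, ()):
                if nbr not in dist_s:
                    dist_s[nbr] = dist_s[node] + 1
                    nxt.append(nbr)
        frontier_s = nxt
    # One side exhausted its component; check for a late meeting.
    common = dist_s.keys() & dist_g.keys()
    return min((dist_s[w] + dist_g[w] for w in common), default=None)

graph = {  # small undirected example: a - b - c - d
    "a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"],
}
```

The payoff: with branching factor b and distance d, each side explores roughly O(b^(d/2)) nodes instead of one side exploring O(b^d).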

[+] tylerkahn|12 years ago|reply
I mean this as a genuine question:

http://sixarm.com/

This is your company, correct? (I stalked your Github profile)

Seems like the types of problems you're solving are exactly those which require far more domain experience with Ruby/HTML5/Javascript/whatever than the ability to see the value of various graph-searching techniques.

Would you hire this guy even though he's said quite plainly that he lacks the experience with these technologies (and has difficulty picking up new ones owing to that lack of experience)?

[+] dagw|12 years ago|reply
I agree, as such. Given the choice between someone who's really smart and can solve hard problems but is a mediocre programmer, or someone who's a good programmer but sucks at problem solving, I'd choose the first one. That being said, if you hire someone in the first category, don't expect them to be happy and competent at writing an iPhone CRUD app two weeks after seeing Objective-C for the first time.

I think that was the thrust of the article. If you're hiring based solely on someone's math and algorithm skills with zero concern for their coding skills, you cannot turn around and be angry when their coding skills aren't what you needed. It's not math vs. programming as such, but more generally about tuning your interviews to find the skills you actually need (as opposed to the skills you think you need).

[+] uniclaude|12 years ago|reply
This is great. Now what would you do if, after three months, your new hire still cannot ship code that benefits your business? Fire the person? Keep teaching?

I also believe algorithmic knowledge is important, and tend to give algorithm questions to my candidates, but it matters more for those who will write databases and game engines than for those who will write CRUD apps.

[+] Swannie|12 years ago|reply
Sure, it's valuable. But deeply?

If the person wrote the code in such a way that no one else could understand that it was a breadth-first search, is that valuable? If they didn't even leave comments when they wrote the code? If they didn't realise that you're using some slightly arcane feature of your language to keep a library of useful tree operations, and were expected to use the pre-existing function through macros/generics? Or were expected never to use such a technique because half the team wouldn't understand the code?

Hey, I know the theory of how to bake large quantities of cakes, breads, cookies, etc. I learned it by watching TV and from cookery books. But it turns out that I'm actually a pretty crap baker, because I lack the experience to know when I'm over-working or under-working dough, how to adjust cooking time for an unknown oven, etc. And I make a huge mess when I'm doing it, and take a lot longer than anyone with basically any experience of doing it at all.

[+] jroseattle|12 years ago|reply
I'm dealing with this now, having been interviewing for different engineering roles over the past two months. It hasn't been as bad as straight-up conceptual math problems, but there have been plenty of questions whose validity I have doubted.

Interviewer: "How can we optimize the character replacement in a string such that we use no extra memory?"

Me: "We do this and that and this. But should we consider in what situations we would need this optimization?"

Interviewer: "What? Why?"
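For what it's worth, the answer that kind of question usually fishes for is a straight in-place scan. A sketch in Python, where a `bytearray` stands in for the mutable buffer a C answer would use (Python strings themselves are immutable):

```python
def replace_in_place(buf, old, new):
    """Overwrite every occurrence of byte `old` with `new`, using O(1)
    extra memory: no copy of the buffer is ever made."""
    for i in range(len(buf)):
        if buf[i] == old:
            buf[i] = new
    return buf

buf = bytearray(b"mississippi")
replace_in_place(buf, ord("s"), ord("z"))
# buf is now b"mizzizzippi"
```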

I can now use this as a filter as I interview organizations. Optimizing algorithms by creating your own core data structure classes (instead of using the built-in ones) is great in certain circumstances, but an absolute waste of time in many others. And if you're not going to ask me about those times when making those improvements is important, then you're not asking questions for a programmer -- you're asking questions for a theoretician who can recall syntax.

It's poor practice, and I've seen it everywhere.

[+] tokenadult|12 years ago|reply
There are many discussions here on HN about company hiring procedures. Company hiring procedures and their effectiveness is a heavily researched topic, but most hiring managers and most job applicants haven't looked up much of the research. After reading the blog post kindly submitted here and some of its comments, and then reading most of the comments here on HN that came in while I was asleep in my time zone, it looks like it's time to recycle some electrons from a FAQ I'm building about company hiring procedures.

The review article by Frank L. Schmidt and John E. Hunter, "The Validity and Utility of Selection Models in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings,"[1] Psychological Bulletin, Vol. 124, No. 2, 262-274, sums up, current to 1998, a meta-analysis of much of the huge peer-reviewed professional literature in industrial and organizational psychology devoted to business hiring procedures. There are many kinds of hiring criteria, such as in-person interviews, telephone interviews, resume reviews for job experience, checks for academic credentials, personality tests, and so on. There is much published research on how job applicants perform after they are hired, across a wide variety of occupations.[2]

EXECUTIVE SUMMARY: If you are hiring for any kind of job in the United States, with its legal rules about hiring, prefer a work-sample test as your hiring procedure. If you are hiring in most other parts of the world, use a work-sample test in combination with a general mental ability test.

The overall summary of the industrial psychology research in reliable secondary sources is that two kinds of job screening procedures work reasonably well. One is a general mental ability (GMA) test (an IQ-like test, such as the Wonderlic personnel screening test). Another is a work-sample test, where the applicant does an actual task or group of tasks like what the applicant will do on the job if hired. (But the calculated validity of each of the two best kinds of procedures, standing alone, is only 0.54 for work sample tests and 0.51 for general mental ability tests.) Each of these kinds of tests has about the same validity in screening applicants for jobs, with the general mental ability test better predicting success for applicants who will be trained into a new job. Neither is perfect (both miss some good performers on the job, and select some bad performers on the job), but both are better than any other single-factor hiring procedure that has been tested in rigorous research, across a wide variety of occupations. So if you are hiring for your company, it's a good idea to think about how to build a work-sample test into all of your hiring processes.

Because of a Supreme Court decision in the United States (the decision does not apply in other countries, which have different statutes about employment), it is legally risky to give job applicants general mental ability tests such as a straight-up IQ test (as was commonplace in my parents' generation) as a routine part of hiring procedures. The Griggs v. Duke Power, 401 U.S. 424 (1971) case[3] interpreted a federal statute about employment discrimination and held that a general intelligence test used in hiring that could have a "disparate impact" on applicants of some protected classes must "bear a demonstrable relationship to successful performance of the jobs for which it was used." In other words, a company that wants to use a test like the Wonderlic, or like the SAT, or like the current WAIS or Stanford-Binet IQ tests, in a hiring procedure had best conduct a specific validation study of the test related to performance on the job in question. Some companies do the validation study, and use IQ-like tests in hiring. Other companies use IQ-like tests in hiring and hope that no one sues (which is not what I would advise any company). Note that a brain-teaser-type test used in a hiring procedure could be challenged as illegal if it can be shown to have disparate impact on some job applicants. A company defending a brain-teaser test for hiring would have to defend it by showing it is supported by a validation study demonstrating that the test is related to successful performance on the job. Such validation studies can be quite expensive. (Companies outside the United States are regulated by different laws. One other big difference between the United States and other countries is the relative ease with which workers may be fired in the United States, allowing companies to correct hiring mistakes by terminating the employment of the workers they hired mistakenly. The more legal protections a worker has from being fired, the more reluctant companies will be about hiring in the first place.)

The social background to the legal environment in the United States is explained in various books about hiring procedures,[4] and some of the social background appears to be changing in the most recent few decades, with the prospect for further changes.[5]

Previous discussion on HN pointed out that the Schmidt & Hunter (1998) article showed that multi-factor procedures work better than single-factor procedures, a summary of that article we can find in the current professional literature, for example "Reasons for being selective when choosing personnel selection procedures"[6] (2010) by Cornelius J. König, Ute-Christine Klehe, Matthias Berchtold, and Martin Kleinmann:

"Choosing personnel selection procedures could be so simple: Grab your copy of Schmidt and Hunter (1998) and read their Table 1 (again). This should remind you to use a general mental ability (GMA) test in combination with an integrity test, a structured interview, a work sample test, and/or a conscientiousness measure."

But the 2010 article notes, looking at actual practice of companies around the world, "However, this idea does not seem to capture what is actually happening in organizations, as practitioners worldwide often use procedures with low predictive validity and regularly ignore procedures that are more valid (e.g., Di Milia, 2004; Lievens & De Paepe, 2004; Ryan, McFarland, Baron, & Page, 1999; Scholarios & Lockyer, 1999; Schuler, Hell, Trapmann, Schaar, & Boramir, 2007; Taylor, Keelty, & McDonnell, 2002). For example, the highly valid work sample tests are hardly used in the US, and the potentially rather useless procedure of graphology (Dean, 1992; Neter & Ben-Shakhar, 1989) is applied somewhere between occasionally and often in France (Ryan et al., 1999). In Germany, the use of GMA tests is reported to be low and to be decreasing (i.e., only 30% of the companies surveyed by Schuler et al., 2007, now use them)."

[1]

http://mavweb.mnsu.edu/howard/Schmidt%20and%20Hunter%201998%...

[2]

http://www.siop.org/workplace/employment%20testing/testtypes...

[3]

http://scholar.google.com/scholar_case?case=8655598674229196...

[4]

http://books.google.com/books?hl=en&lr=&id=SRv-GZkw6TEC

[5]

http://intl-pss.sagepub.com/content/17/10/913.full

http://www.economics.harvard.edu/faculty/fryer/files/Fryer_R...

[6]

http://geb.uni-giessen.de/geb/volltexte/2012/8532/pdf/prepri...

[+] mattjaynes|12 years ago|reply
I can confirm this works in my own experience of trial-and-error interviewing over 2000 applicants and hiring over 200 for my clients and myself (since ~2006).

Nearly 100% of the applicants were remote, so I think that helped me from falling into traps of poor "traditional" hiring practices.

The point of hiring these remote folks was to help accelerate whatever team I was on. It can be a great way to scale your existing team very quickly if you know how to do it.

For example, I took over an iPhone app dev team and it was taking them 4 months to produce an app. These apps were all very similar in functionality, but the developers were spending a ton of time slicing images, testing, and other tasks that remote workers could easily do. So I hired some remote staff (via oDesk) to do most of that supporting work and we got the app production time down consistently to 1 month. That was a huge ROI for the business since the total cost for all remote staff was the same as for 1 additional on-site engineer.

There's nothing magic about hiring well, but I've watched others try to hire remote staff and the vast majority of them try once, fail, and give up on it.

They will approach the hiring process in a traditional way (personally interview them to watch how they handle puzzles, etc). It's a grueling process and then they still get really poor hires and conclude that "outsourcing doesn't work".

It's most helpful to think of the process as panning for gold. (Naturally, I'm not saying that some people are more valuable than others innately, just that you're looking for those who are most valuable at performing your given tasks.)

So, to find gold, you must filter, filter, filter. That's the exact process for finding applicants that are high performers. Most of your applicants will be pretty terrible at the job you're hiring for, so the filtering process is critical for success.

  - filter out the very worst applicants with a small easy question
  - filter out the remaining applicants:
     - pick a real-life production task you've recently completed
     - ensure that the task is *exactly* what they'd be doing in the job
     - have them perform the task
     - compare their task results to your task results
  - hire more than you need of the top performers
  - filter out (gently fire) the ones that aren't as good
  - repeat as needed until you have gold
When I see others attempt this, the most common problem is that they essentially go down to the river and just grab whatever pebbles they see in their first handful and hope there's gold in it (hire without filtering). Or they go down and carefully pick the prettiest pebbles hoping they will be gold (wrong filter / puzzle interviewing). But the only way to really find gold is to seriously invest in a filtering process that will yield actual gold. That means filtering based on their ability to do the actual tasks they'll be doing on the job.

The great thing about hiring remote folks is that I care 0% how they get the task done. I don't care if they've automated it, or have their mom do it for them, or whatever. If they provide the results I need, I'm happy, period.

There are plenty of other smaller caveats and gotchas to watch out for, but I'll try to cover those in a blog post sometime.

If you're a startup and want to go faster, try this out by off-loading some of the grunt work from your staff. It can be a big competitive advantage if you can do it right.

[+] bradleyjg|12 years ago|reply
It seems awfully rude to cut and paste this gigantic comment, which is now virtually pinned to the top of a discussion thread of a blog post to which it barely responds.

There's nothing in this that indicates it is at all customized for the article it is attached to. Especially given that you and the author seem to have some things in common (connection to small-town Minnesota & love of mathematics) it seems like you could have done a lot better than just re-posting virtually the same comment for the 11th time.

You have to go about half way down the page to find a comment that actually engages with the article. The way the HN system works out in practice the "winning" top level comment determines the entire structure of the ensuing discussion. I think that means that top level commentators have a particular responsibility to comment well.

[+] Alex3917|12 years ago|reply
I really don't see how this is relevant to this post, since the research you're citing is about hiring people for routinized or semi-routinized labor, like at a car manufacturing plant.

C.f. https://news.ycombinator.com/item?id=4614430

[+] pjungwir|12 years ago|reply
After reading the OP I immediately thought of your standard hiring advice and wondered if the folks asking math questions aren't on the right track after all. (Thank you for your persistence posting it, btw.) At least half of my daily work requires learning new things on-the-fly, and I bet in 5 years most of my technical knowledge is turned over anyway--except the math and CS fundamentals. In programming aptitude matters way more than current knowledge. I think IBM discovered the same thing in the 60s when they were hiring chess players and crossworders. So if math puzzles serve as a legal(ish) proxy for an IQ test, aren't they just what interviewers should ask? Of course I'd rather have someone who knows Ajax and 3NF if I can get it, and not lose any time to up-front training, but that may not be realistic, and those things can be learned if the person is smart. Particularly for an entry-level hire, I'd rather hire someone who can solve graph theory problems than knows the command-line options for git.
[+] ap22213|12 years ago|reply
Could someone explain what 'mean validity' means in [1]?

Also, how good of a GMA score is needed? That is, how good of a score would be needed for me to stop looking at other candidates?

[+] ElDiablo666|12 years ago|reply
You really shouldn't recommend a "mental acuity" test since those are complete and utter nonsense. I think it's a better screening criterion in the sense that you wouldn't want anyone working for you who believes that IQ tests are worthwhile.
[+] andrewflnr|12 years ago|reply
Where is the line between a coding test and a "work-sample" test? Does a coding test become a work-sample test just by virtue of resembling a task you would do on the job? Is it a function of scale of the task?
[+] gdy|12 years ago|reply
> Because of a Supreme Court decision in the United States (the decision does not apply in other countries...)

It's funny that you think this clarification is needed :) (No offence intended)
[+] DigitalSea|12 years ago|reply
I failed mathematics in school; for the life of me I can't grasp it beyond the basics. But give me a laptop and a copy of Sublime and I'll code anything you want. I can code, yet I would fail any mathematical test given to me. This kind of approach has always bothered me: there are a lot of good developers out there who are bad at maths but possess strong problem-solving and highly analytical skills.

Being a developer is 80% Google and 20% actual coding knowledge. We are hackers at the end of the day, not miniature Einsteins with encyclopaedias for brains.

[+] gboudrias|12 years ago|reply
I've always felt like I was supposed to like maths, but I learned (basic) programming before I learned algebra, and I couldn't figure out how I would be using it.

After years of being a programmer, I still can't.

[+] onezeno|12 years ago|reply
Actually...

“Never memorize something that you can look up.”

― Albert Einstein

[+] mrcactu5|12 years ago|reply
It looks like Emma's math prowess is working against her. It's ironic the app developers - who need her help the most - are pushing her away.

OK, so there is a difference between computer science and programming. That's why there are two different Stack Exchanges:

  cs.stackexchange.com
  stackoverflow.com
And we can make even finer distinctions if we wanted to.

It's actually really fucking INCREDIBLE that:

* you can know tons of CS without being able to build a decent app

* you can build a decent facebook clone without having any idea how it works

I feel really bad for Emma. I was a math major, but app developers won't even look at me b/c I'm not a full-stack whatever. So now I'm a Data Scientist at an advertising firm in Puerto Rico.

[+] michaelpinto|12 years ago|reply
After reading this I have a dumb question: The person behind the post is a CS major but only played a little bit with the C programming language in college — is this pretty common these days?
[+] lucasnemeth|12 years ago|reply
I believe there is some kind of inferiority complex: we don't believe software engineering is actually worth it. It is probably the result of the academic mindset taught at colleges, where the applied fields are seen as less important than the "pure" ones. But good software engineering (that is, writing complex systems with many requirements that are maintainable and scalable, with nice APIs, etc.) is very, very hard. And we know it! If we applied our hiring methods to writers, we would be asking them to improvise a rap rhyme when we wanted to hire a novelist.
[+] deluxaran|12 years ago|reply
My opinion is that most interview processes are pretty old (20-30 years), and back then a good programmer was usually also a pretty good mathematician. Now most of the people who run interviews just reuse the same old patterns, maybe because some of them don't know any better, or because that is what they found in the books they have read.

I tend to hate interviews that ask me to solve math and logic brainteasers because I don't see what they reveal about my knowledge of programming.

[+] biot|12 years ago|reply
Math puzzles are great if the problem is easily understood, the solution achievable without a math degree, and you ask them to solve it by writing code.

For example: "This database contains 100,000 problems with standardized parameters. The problem format is defined in the file spec.txt, which you can grab from our code repository. Write the code to solve these problems efficiently, submitting each solution to a remote service by POSTing to a REST API, the documentation for which you can find here. Bonus points for parallel execution. Feel free to use any editor/IDE and reference online documentation, Stack Overflow, etc. that you want. If anything's not clear or you need a hand with something, just ask as you would if you were an employee already. Ready to get started?"
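A skeleton solution to a task like this might look as follows. Everything here is invented for illustration: the real spec.txt, database, problem format, endpoint URL, and field names would come from the employer's repo and API docs. The (made-up) spec assumed below is "sum the parameters of each problem".

```python
# Hypothetical sketch of a solution to the interview task described above.
# The problem format, endpoint, and JSON field names are all placeholders.
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

API_URL = "https://example.com/api/solutions"  # placeholder endpoint

def solve(problem):
    """Solve one problem per the (invented) spec: sum its parameters."""
    return {"id": problem["id"], "answer": sum(problem["params"])}

def post_solution(solution):
    """POST one solution as JSON to the (hypothetical) REST API."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(solution).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

def solve_all(problems, workers=8):
    """Solve problems concurrently -- the 'bonus points' part of the task."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(solve, problems))
```

The interesting interview signal isn't the arithmetic; it's whether the candidate separates solving from submission, handles the I/O sensibly, and asks clarifying questions about the spec before coding.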

The great thing is that once you've identified a candidate, you can do remote screen sharing and have them write code before they even have to come into the office. I've interviewed a fair number of remote people this way and it's excellent for weeding out the people who can talk the talk but can't program worth a damn. And it limits bias because you don't care about much beyond their communication ability plus their technical ability.

[+] keithgabryelski|12 years ago|reply
My observation is that a lot of interviews come down to "stump the chump" questions: a question that probes a single narrow thing the interviewee may or may not have under their belt, which is then used to gauge the entirety of the interviewee's ability. Math puzzles and logic puzzles are in the same category: they require domain knowledge that probably doesn't translate to any job I've ever worked on.

That aside, one must have a way to measure the abilities of a candidate -- and asking the same set of questions to many people allows you to compare the answers as apples to apples.

I generally don't restrict my people from asking any particular question, but I will ask them to consider what a failed answer really means for the specific job (questions are generally adjusted then).

As an aside, some questions of mine that aren't specifically about coding:

* Do you code outside of work? (A love of coding translates to good coders.)

* Send me a link to some code you've written that you are proud of. (Let's see what you've got.)

* Tell me about a problem you had where your solution wasn't correct. (How have you dealt with failure?)