top | item 14075652


demallien | 9 years ago

The problem with testing algorithms is that it in no way tests intelligence. I would think that 9 out of 10 programmers that know an algorithm would not be able to derive the algorithm from first principles. So you are just testing esoteric knowledge - it's qualitatively no different from asking someone questions about a specific framework / API.

You could make the argument that algorithms tend to be studied more by smarter people, but if that's what you're going for you may as well ask them about their hobbies, and hire the person that is into playing chess, or doing astronomy (or whatever intellectual pursuit you care to name).

If on the other hand you are interested in a person's ability to code, ask them to do so. The last time I had to hire someone, I wrote a small application with one module that was deliberately written in an obfuscated style. I asked candidates to bring that module under control - rewrite it in a readable code style. To do this, successful candidates needed to identify what the current code was doing by examining the public interfaces in a debugger, document what the calls seemed to do, prepare unit tests, and then rewrite the module in a readable style. It took about a day for most candidates to do.
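The workflow described here - probe the public interface, pin down observed behavior as tests, then refactor - is essentially characterization testing. A minimal sketch of what a candidate's safety net might look like; the module and function (`mystery_normalize`) are hypothetical stand-ins, not the actual exercise:

```python
import unittest

# Hypothetical stand-in for the obfuscated module under test; in the real
# exercise the candidate would first probe this in a debugger to learn
# what it does.
def mystery_normalize(s):
    # Observed behavior: collapses runs of whitespace and lowercases.
    return " ".join(s.split()).lower()

class CharacterizationTests(unittest.TestCase):
    """Record the current behavior before rewriting the module, so the
    readable rewrite can be checked against it."""

    def test_collapses_internal_whitespace(self):
        self.assertEqual(mystery_normalize("Hello   World"), "hello world")

    def test_strips_leading_and_trailing_whitespace(self):
        self.assertEqual(mystery_normalize("  a  b "), "a b")
```

Run with `python -m unittest`; once these pass against both the obfuscated original and the rewrite, the rewrite can be swapped in with some confidence.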

At the end of that, you get to see a candidate's ability to read code, use a debugger, write unit tests, write documentation, and write well structured code, which is pretty good coverage of the typical tasks in a developer's day. I feel this gives a much more realistic assessment of a candidate's capabilities than asking questions about a more or less randomly chosen algorithm.


flukus | 9 years ago

> It took about a day for most candidates to do.

This is an issue as well. If you aren't Google then a day is too much of an investment for a single job opportunity, especially if you're already employed.

Jach | 9 years ago

I don't think someone with an IQ of 80 could program Dijkstra's algorithm given a mathematical description of it with diagrams. I'm even skeptical of programming binary search. They might understand an intuitive explanation involving a phone book, but I don't think they could program it. And even if they could, I think someone with an IQ of 120 would do it much faster, though both solutions would likely have the integer overflow bug that was even in Java's implementation for a long time. So I think algorithms do test intelligence, just not as well as an actual IQ test. It can easily be gamed by sheer memorization, whereas good IQ tests can't.

I agree that other things like ability to play chess would probably test just as well as algorithms. If the industry switched to testing candidates to see if they can solve chess problems, or play a computer self-limited to some specific Elo, you can bet that everyone who was serious about getting a job in the industry would start playing a lot of chess, and those with higher IQs would on average play better chess.
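For reference, the overflow bug being alluded to is in the midpoint calculation. A sketch of binary search in Python (where integers don't overflow, but the safe form of the midpoint is shown as it would be written in a fixed-width-integer language like Java or C):

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        # In a fixed-width language, (low + high) // 2 can overflow when
        # low + high exceeds the integer maximum; low + (high - low) // 2
        # computes the same midpoint safely. This is the bug that sat in
        # Java's library binary search for years.
        mid = low + (high - low) // 2
        if a[mid] < target:
            low = mid + 1
        elif a[mid] > target:
            high = mid - 1
        else:
            return mid
    return -1
```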

When I have to give an interview that has to include an algorithms section, I make candidates type code. Whether that's on a phone screen with a shared online text editor or in person with their laptop / an interview laptop, I want them to type stuff, not just rely on whiteboard pseudo-code and diagramming. As a vim user I make allowances for the fact that their editing environment may not be what they're used to, but even if I were forced to use Notepad I could still bang out a function to test the even/oddness of a number (my own fizzbuzz) pretty quickly. So I at least make sure to test coding, even if poorly.
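The even/odd check being described as a personal fizzbuzz might look something like this (a trivial sketch; the commenter doesn't specify a language):

```python
def is_even(n):
    """Return True if n is even. Note n % 2 in Python is always
    non-negative for a positive modulus, so this works for negative
    n as well."""
    return n % 2 == 0
```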

I agree work-sample tests are the best, but as another commenter noted, if they take a lot of time for the applicant you're going to get people who refuse, just as some refuse to play the algorithms game. Especially if people have a github repo, especially if some of the projects they've worked on have had more than themselves as committers, and especially if they're currently employed as a developer at some other company that does general software. Unless you're trying to build a top team, which most projects don't need, you're wasting a lot of time trying to rank beyond "would work out ok" and "would not work out at all".

I have a section in my phone screen that tests for regex knowledge; I'm primarily just testing whether they know the concept, or whether, when faced with a problem that regexes can solve (which actually does happen from time to time), they reach for writing some custom parser instead. If they vaguely remember there's a way to specify a pattern and find matches, that's a pass. If they know grep's / their language of choice's regex syntax and can give a full solution, great, I'll rank them slightly higher than someone who just knows the concept, but all I really care about is the concept. If they don't know the concept, that's a strong sign (to me) they won't work out.
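To illustrate the kind of task the regex question is getting at - the log format and pattern here are made up for illustration, not taken from the commenter's actual screen:

```python
import re

log = "GET /index.html 200\nPOST /login 401\nGET /img/a.png 404"

# One pattern replaces a hand-rolled line/field parser: it captures the
# HTTP method, the path, and the three-digit status code on each line.
pattern = re.compile(r"^(GET|POST) (\S+) (\d{3})$", re.MULTILINE)

# Collect (path, status) for every 4xx response.
errors = [(m.group(2), m.group(3))
          for m in pattern.finditer(log)
          if m.group(3).startswith("4")]
# errors -> [('/login', '401'), ('/img/a.png', '404')]
```

Knowing this tool exists is the concept being screened for; someone who doesn't will end up writing the split-and-index parser by hand.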

I tried to do a semi work-sample test with an intern candidate a few months ago instead of a different test, based on experience with a prior intern who struggled on something I thought was basic, which left me wondering why I didn't catch that in the phone screen. Basically, I gave them some stripped-down code from several files that looks a lot like what we have in production (JS, using Backbone), explained the overall mapping from bits of code to what could be shown on the screen, and essentially asked them to add a new component (already written) to one part of the screen by modifying / filling in the functions in a few places. It required them to read and understand some alien code, see what they could ignore, understand what was asked, and then do it (initialize something, pass it around, up to I think 3 indirect function calls of nesting, call a couple things on it). The candidate got through it; I'm not sure the old intern would have...