lbatx|6 years ago
The assignment was to read a file containing a list of numbers (some formatted incorrectly, so some very simple parsing logic was involved), call an API using each correctly formatted number as a parameter, and store what the API returned to a file. I am to this day stunned that 60% of people who passed a phone screen could not solve this task. Note that we gave them the actual input file, so it wasn't a matter of an edge case tripping them up, or of the sample file and the test file containing different edge cases.
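For concreteness, here is a minimal sketch of that task in Python. The real endpoint, parameter names, and file formats aren't given in the comment, so everything here is a placeholder, and the network call is stubbed out with a fake function where `urllib.request` would go:

```python
import json

def parse_numbers(lines):
    """Keep only lines that parse as integers; skip malformed entries."""
    out = []
    for line in lines:
        try:
            out.append(int(line.strip()))
        except ValueError:
            continue  # incorrectly formatted line, skip it
    return out

def run(lines, fetch):
    """Call the API (via `fetch`) once per valid number, collect results."""
    return [fetch(n) for n in parse_numbers(lines)]

# Stub standing in for the real HTTP(S) call (e.g. urllib.request.urlopen
# against whatever endpoint the assignment provided):
fake_api = lambda n: {"input": n, "result": n * 2}

lines = ["42", "seven", " 7 ", "3.5", "100"]
results = run(lines, fake_api)

# Store what the "API" returned to a file.
with open("results.json", "w") as f:
    json.dump(results, f)
```

In a real solution `lines` would come from `open(...)` and `fetch` would do the HTTP call, but the shape of the task (parse, call, persist) is all there.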
My point here is that it may be possible to get the same screening value with much less investment from the candidate.
collyw|6 years ago
I am sure half of these things are never thought through. In Python, setting up a new project and downloading its dependencies can mean installing a load of other crap, and that alone often takes more than two hours. Some libraries are incompatible with others.
If you are assuming the test will take two hours, make sure it has minimal dependencies on third-party stuff.
lbatx|6 years ago
I'm sorry you've been burned, but that doesn't mean there aren't tests that actually take < 2 hours. I can't speak to every language, but what modern toolset can't open an input file, make an http(s) call, and write to a file?
I also don't understand why we shouldn't figure out how long something takes before administering it. Several team members took the test, and their times ranged from 15 minutes to an hour and a half, depending on language and experience level. I will say that if someone couldn't do it in 2 hours, they wouldn't have been a good fit for the team. If several team members took it, of course we're going to make an assumption about how long it takes.
Furthermore, since we didn't prescribe a specific language, there's no reason why someone wouldn't have all of the tools pre-installed. Even so, if you had to install your favorite development environment, you'd have been fine. That also wouldn't have been part of the two hour time frame (which wasn't a limit, BTW, just how long it ended up taking competent developers).
boyter|6 years ago
It consists of a small chunk of code in Java/C#/Go or similar that contains some obvious mistakes and some not-so-obvious ones. The candidate is asked to point out any issues they see in the code.
It takes the candidate about 15 minutes to do and us about 15 minutes to review the response, which I feel respects the time on both sides.
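A hypothetical sketch of what such a snippet might look like (in Python here, though the comment mentions Java/C#/Go), with one obvious and one not-so-obvious planted issue called out in comments:

```python
def average(nums):
    # Obvious planted issue: raises ZeroDivisionError on an empty list.
    return sum(nums) / len(nums)

def append_item(item, items=[]):
    # Not-so-obvious planted issue: the mutable default list is created
    # once and shared across calls, so results leak between invocations.
    items.append(item)
    return items

first = append_item("a")
second = append_item("b")
print(second)  # the shared default means this is ["a", "b"], not ["b"]
```

A candidate who spots only the division-by-zero is at a different level than one who also catches the shared mutable default, which is the kind of spread a review exercise like this can surface quickly.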
lbatx|6 years ago
It feels like it tests something different than writing code, though both may be a proxy for "quality candidate".
lbatx|6 years ago
Applications: 5000 candidates; ~20% pass rate (vs unknown)
Phone Screen: 1000 candidates; ~50% pass rate (vs 40%)
Technical Test: 500 candidates; ~40% pass rate (vs 25%)
On-site interview & reference checks: 200 candidates; ~50% pass rate (vs 40%)
Offer: 100 candidates; ~80% hire rate (vs 60%)
Hired: 80
So by some arguments you could say Firebase was 2.5x as selective (40 offers vs 100). With a funnel like this, even small changes to the percentages end up having a larger overall effect.
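The funnel arithmetic above can be checked by multiplying the stage rates through; the "vs" figures in parentheses are Firebase's reported rates from the same 1000-candidate phone-screen pool:

```python
# Funnel from the comment above: each stage's pass rate applied in turn.
rates = [0.20, 0.50, 0.40, 0.50, 0.80]  # apps, phone, test, on-site, offer
n = 5000
for r in rates:
    n *= r
print(round(n))  # 80 hired

# Firebase's parenthesized rates, phone screen through offer:
offers = 1000 * 0.40 * 0.25 * 0.40
print(round(offers))  # 40 offers, vs 100 at the same stage above
```

Multiplying five rates together is also why small per-stage changes compound: nudging each rate even slightly shifts the final head count disproportionately.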
Unfortunately, we don't have the Applications number from the blog post, though he says "we considered a great deal more applicants than that [1000] on paper." I suppose a "great deal" could be anywhere from double to 10x...