I like this approach far, far more than coding tests!
> It’s fun. It’s fun in the same way an escape room is fun. It’s fun because of the dopamine you get when the test suite blinks green.
Keep in mind that for lots of people in a job interview setting (even excellent candidates) this is not fun, this is stressful. It may be fun on the job or at home, but the stress of a job interview both eliminates any "fun" aspect and makes even simple tasks more difficult. Just mentioning that to encourage some amount of empathy and understanding for the candidate.
This approach then selects for candidates who are relaxed: the ones with five more interviews at advanced stages, likely to receive several offers, as opposed to candidates who badly need a job.
The former kind of candidate may be more desirable for the employer :-/
Yeah, I've had a live bug squash interview (SSH into a server where a script is not functioning) and I didn't find it particularly fun. I did manage to get through it and the interviewer seemed impressed, but it's still stressful to do it with someone silently judging you. I'd much prefer a take-home assignment done at my own pace, but I guess that might be susceptible to "cheating" (whatever you consider that to be).
Isn't that what companies would look for, though? They want people who can work under stress. If you had two otherwise equal programmers, one who was stressed and one who was not, any company would pick the unstressed one without thinking.
i prefer the bug squash question. i have 20 years of experience. i will probably do well.
a fresh grad might fail. do i care? no. leetcode has benefited fresh grads and is being used to discriminate against those with anxiety and other disorders and those who are older.
i think it is refreshing to try a different interview method, one that benefits those with experience.
The discomfort and stress is a given in any interview. What I think is nice about this is it doesn’t require you to emanate fully-formed, nicely designed code from your brain on the spot. Instead you just have to navigate the codebase, read it and understand it.
It’s a better demonstration of knowing how software works imo. But I am biased because this sort of thing is one of my strengths.
At one place there was a bug squash interview like this. Rough idea was we wrote a very small version of a system we had in our app (some data-syncing-then-displaying service), with a handful of bugs and a very simple feature request.
It was very helpful for sanity checking if a person was able to be in front of a computer. There's a bit of a challenge because I think there's a pretty low ceiling of performance (strace on a Python program is fun, but our bugs are not that deep!), but we could use it to also talk about new features on this mini-system and whatnot.
General feeling on it was "this is a great filter", because ultimately we needed people to just pass above a minimum skill bar, not be maximally good at coding. And having the exercise really helped to communicate the day-to-day (which can be tedious!)
I've done the equivalent of this by asking them to describe an interesting bug they've encountered in past lives: how it came up, how they hunted it, how they fixed it. By listening to them describe the work, asking questions, and following their thought processes, you can come to a fairly good hire/no-hire decision from this single walk-through.
I know, it's short and humane, so not a good fit for current Sillycon Valley culture.
I straight up lie and make something up for these "tell me about a time..." type questions
Unless it was literally a week ago, I work on too many things to ever remember anything, and for me it's just a fact of the work that I'll be fixing bugs or whatever the question asks. Nothing stands out, because I don't consider any of them to be some sort of special occasion worth keeping track of.
> It’s easy for the candidate to self-assess their own progress. When the candidate isn’t doing well on it, they probably already knew it without needing to be told as much by the recruiter. This is a much better candidate experience than the whiplash of thinking you solved a question perfectly only to realize that the interviewer was looking for something else entirely.
Not to say that I think this is a bad type of question overall, but IMO, this is an anti-feature. The candidate does not need to accurately self-assess their performance on the day. They need to have their confidence preserved so they don’t tilt and tank the signal for the entire interview round.
I wonder if this is an improvement over a conventional whiteboarding question.
It seems to me that debugging in particular often depends on the developer realizing some edge case or scenario where circumstances line up to produce a bug. In that case, wouldn't a "bug squash" end up being similar to a "gotcha style" whiteboarding question?
The author, quite correctly, says that the interview should be scored not on whether the candidate can immediately vomit up a solution but on whether they can demonstrate an organized thought process. However, isn't that true of whiteboard questions as well?
Overall, it seems to me that the real problem with conventional whiteboard questions is when the interviewer focuses on whether the candidate gets the right answer rather than on whether they demonstrate the ability to problem solve. While it sounds like the author is a good interviewer who focuses on assessing the candidate's thought process, it's not clear to me that giving debugging questions is actually what causes that.
I've only had one "find the bug" interview, and it was awful:
- Didn't set you up with a way to reliably run/test the code, nor a step-through-debugger which, jeez is it 1980? Like setting up a repo in a language you aren't familiar with (say getting your py-env exactly right) can take a whole hour if it's not your main language.
- Didn't have standardized questions, which is hugely problematic (some bugs are 2 orders of magnitude harder than others)
- It also just seems like there's a huge random element, like am I debugging code written by somebody who thinks at the same level of abstraction as me? Did they write comments I understand?
> - Didn't set you up with a way to reliably run/test the code, nor a step-through-debugger which, jeez is it 1980? Like setting up a repo in a language you aren't familiar with (say getting your py-env exactly right) can take a whole hour if it's not your main language.
How frequently do people interview in a language other than their "main" one(s)?
> - Didn't have standardized questions, which is hugely problematic (some bugs are 2 orders of magnitude harder than others)
How do you know they're not standardized? (You say in another comment where it was, and indeed that's where the blog post author works. It's described as a pretty standardized process by my reading of it.) You can pass the interview without actually solving the bug, but I get it's easier to blame the problem than admit you struggled with it.
Yea there's a huge element of randomness to these kinds of interviews. I wasn't a big fan of them at Stripe. You can get some signal like:
- do they methodically approach the problem?
- do they understand the bug?
- do they know how to use debugging tools?
- do they know advanced debugging tools?
- do they work well in an unfamiliar codebase?
But in practice I'd say most candidates check those boxes, at which point it becomes an arbitrary evaluation unless you pass/fail them based on whether they solved the bug.
> It also just seems like there's a huge random element, like am I debugging code written by somebody who thinks at the same level of abstraction as me? Did they write comments I understand?
Isn't that part of what professional software engineering is about? Unless you worked with the same group of people for a decade and have had the time to mind meld together professionally, _any_ random developer at a new company is nearly guaranteed to think in a different way and have their own philosophy of code comments.
Checking for mental flexibility and adaptation to varying approaches for others is a great subject for an interview as a software engineer.
My favourite thing to do for "coding" interviews is to give the candidate a piece of absolutely awful python code that a friend of mine came up with for use in interviews.
There are code smells, side effects, errors, confusing syntax, a whole slew of things in it. I give them the code, tell them I'll help with every bit of python syntax (and really emphasise that I don't mark people down _at all_ for any lack of familiarity with python / python syntax), and ask them to work through the code and do a code review.
There's some python specific quirks that I largely consider bonus points at best, but anyone with any appreciable familiarity with at least one language should be able to spot a number of things. As they call stuff out, I'll dig in (or not) as seems appropriate.
So far it seems to be working out great for me as an interviewer. Candidates seem to be way less stressed than they are with a "whiteboard coding" exercise. Good discussions are showing me their development skills, etc.
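I don't have their actual exercise, but a toy snippet in that spirit (every name here is invented for illustration) might pack a mutable default argument, a shadowed builtin, a bare except, and a hidden side effect on module state into a few lines:

```python
# A deliberately smelly snippet (hypothetical, not the real exercise).
# Review targets: mutable default argument, shadowed builtin, bare except,
# C-style indexing, and a hidden side effect on module-level state.

results = []  # module-level state mutated by the function below

def process(items, list=[]):          # smell: shadows builtin `list`, mutable default
    for i in range(0, len(items)):    # smell: index loop instead of direct iteration
        try:
            list.append(items[i] * 2)
        except:                       # smell: bare except swallows every error
            pass
    results.extend(list)              # smell: side effect on global state
    return list

# The mutable default makes repeated calls accumulate stale results:
first = process([1, 2])
second = process([3])
print(second)  # [2, 4, 6] because the default list carried over from the first call
```

The mutable-default bug in particular is a nice conversation starter, since `first` and `second` turn out to be the very same list object.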
Step 1: the candidate is shown the specification for a method and the results of running the test suite on an obfuscated version of the method.
All tests pass.
The test suite is minimal, and test coverage is abysmal.
Step 2: the candidate is asked to come up with more test cases, based on the specification.
The code is run against the updated test suite - most new tests will fail because the method's implementation has several known bugs.
Step 3: the un-obfuscated source is provided and the candidate is asked to correct any bugs they discover.
Step 4: the changed source is run against a full test suite.
I like this because the candidate's ability to think of test cases, debug, and fix existing code are all tested for.
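A minimal sketch of that flow, using a hypothetical `clamp` specification and a planted bug of my own invention (not from the original exercise):

```python
# Hypothetical illustration of the steps above: a spec, a minimal suite that
# passes despite a planted bug, and extra spec-derived cases that expose it.

def clamp(x, lo, hi):
    """Spec: return lo if x < lo, hi if x > hi, else x."""
    if x < lo:
        return lo
    if x > hi:
        return lo  # planted bug: should return hi
    return x

# Step 1: the minimal suite, which passes despite the bug.
assert clamp(5, 0, 10) == 5
assert clamp(-1, 0, 10) == 0

# Step 2: cases a candidate might derive from the specification.
new_cases = [
    ((11, 0, 10), 10),  # above the upper bound: fails against the buggy code
    ((0, 0, 10), 0),    # boundary: x == lo
    ((10, 0, 10), 10),  # boundary: x == hi
]
failures = [(args, want) for args, want in new_cases if clamp(*args) != want]
print(failures)  # [((11, 0, 10), 10)]
```

Step 3 is then swapping the `return lo` for `return hi` so the full suite goes green.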
One of my favorite interviews was like this, but it was a long time ago and they printed out the code and told me there was a problem. Here's a pencil. What's wrong? Most were a function call or two. In one case I remember someone was doing a loop that incremented the index of a SQL query in code and using each result, instead of querying the set and looping through it in the code.
The fun part was it was a discussion. Two people, paper and pencil, talking about the code. And the examples were from bugs they'd already squashed in a code base they inherited. It was a project that I ended up working on that had... quite a few interesting bugs like that.
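For illustration, here is a sketch of the query-in-a-loop bug described above against a hypothetical in-memory sqlite3 schema: the buggy shape issues one round trip per index, while the fix queries the set once and loops in code.

```python
import sqlite3

# Hypothetical schema, purely for illustration of the anti-pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"item{i}") for i in range(1, 6)])

# Buggy shape: one query per incremented index, stopping at the first gap.
names_slow = []
i = 1
while True:
    row = conn.execute("SELECT name FROM items WHERE id = ?", (i,)).fetchone()
    if row is None:
        break
    names_slow.append(row[0])
    i += 1

# Fix: fetch the whole set in one query, then iterate over it in code.
names_fast = [r[0] for r in conn.execute("SELECT name FROM items ORDER BY id")]

print(names_slow == names_fast)  # True, but the buggy version made 6 round trips
```

Against a real database the per-row version also breaks silently on any gap in the id sequence, which is half of why it makes such a good interview bug.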
I somewhat disagree that this is reflective of a person's everyday work. Usually we're only ever working on at most a handful of codebases at a time, and getting oriented in a completely new application (including getting it to build and run on your machine) doesn't happen every day.
I've done interviews like this and one major pitfall is the start-up time. It's possible to spend an unreasonable amount of time debugging cold-start issues with getting the repo set up, dependencies installed, and the app to build while the interview slips away. These things are representative of the first week on a new team or project, maybe. Figuring out why bundler can't seem to download the dependency in the gemfile isn't my idea of a good use of interview time.
I'm not too proud to admit I took the general concept from https://sadservers.com/ and turned it into something I could use for interactive debugging interviews.
I had golden images with some scenarios I wrote myself, and the images automatically shared a tmux session over http so I could follow along without requiring the candidates to screen share.
I did have to ask for IPs so I could ensure the machines were just available to me and the candidate, but it was otherwise pretty seamless!
Though now that https://sadservers.com/ has a paid service it might be worth looking into.
I was given a similar interview problem and thought it was one of the better interviews I've had. In my case the interviewer pulled up a web app and said this particular page is loading too slow, how would you approach the problem? We went from opening dev tools to look at the requests driving the page all the way through database optimizations and every layer in between.
This kind of interview didn't feel like a gotcha and was much much closer to real world work than the toy and/or algorithm problems that I have encountered. More companies should adopt these types of interview approaches.
I had a bug squash interview today. I found it nice, but also frustrating.
It was nice because I didn't need to practice and I knew exactly how to debug the thing.
It was frustrating because my personal laptop is from 7 years ago (from college), is slow, and the dependencies and editor don't work out of the box for a new repo. Additionally, I'd prefer to use IntelliJ like I do at work, but again, that's too heavy for my computer to handle, so I resorted to vscode and had to figure out how to use it. So the interview becomes debugging my environment instead of debugging the problem. Maybe that's a useful signal, but then it's not really bug squashing anymore.
So overall it still required learning, but there was no good way to test my setup in advance (how do you test against all possible repo structures?)
This is one of the interview questions at Stripe, and I agree that it was the most "fun" part of the interview
(It is still pretty stressful in an interview, but orders of magnitude less so than converting a BST to a doubly linked list, and converting it back)
If you can’t sit down with someone, have a simple tech conversation with them and be able to tell if they know what they are doing… if you need to “give someone a test”… the problem is likely not the candidate.
That biases heavily to candidates with good people skills and the skill to take over a conversation. Yes, you are not in fact immune to this even if you think you are, that’s why it’s so dangerous.
On the flipside, I've spent an hour with a candidate who seemed great, but could hardly _actually program_. It was a bizarre dichotomy. Dude was clearly smart as hell but was so caught up in building perfect abstractions in parts of our software that _didn't matter all that much_ using languages that nobody else knew or wanted to use.
I get it, your fun language is fun. But if the dev shop is 60% one language and 39% another, I don't really care about the small improvements. You need a strong reason that it actually _helps the business_
Sure, there is some correlation between talking and doing, but I've worked with people who talk like geniuses and code like crap. And also the opposite.
For some skills, there is just no substitute for actually having people demonstrate using them.
What you're describing is a low hiring bar. Low hiring bars make sense for companies that pay low to average salaries, or don't require particularly deep technical skills. But it's an awful strategy if you're trying to filter for exceptional talent and willing to pay top dollar.
I agree — this is very much the type of interview I like to give. However it took me a while to get good at steering the conversation and digging for details.
It’s really easy to let the interviewee talk through all of the great things they built from a product and business perspective, and assume they understand the technical perspective. There are candidates who are really good at talking in an impressive way, but manage to talk around any true implementation details or deep technical understanding.
It’s also really easy to come away with a bad impression of someone who has a tendency to give short answers and not elaborate on things, even when it turns out that person is really skilled when you dig in.
I imagine a big part of the reason for the test-style interviews we have today is that it’s easier to train an interviewer on how to give them.
We tried this approach. It did not work. Source: SendGrid while ramping up to become a public company. The interviewing experience involved was fairly extensive; I had probably given a couple hundred interviews in my career by then and I was not the most experienced by a long shot.
We tried to have a zero coding interview. All behavioral and experience questions, followed up by "how would you design a system to do blah." Got a candidate that did well. Hired. Turned out they could talk the talk but not walk the walk. It was wildly unexpected. They simply couldn't code well. Too long to deliver, too poorly written, usually didn't work right. We insisted on _some_ coding in interviews going forward.
So true. A test will only measure its set of criteria. What is a test for drive — desire to understand and learn? What is a test for how a person will respond to challenging and overwhelming prod scenarios? What is a test for if they will burn out in a month because they want to prove how rockstar of a programmer they are?
My most successful hiring has always been based on a conversation with 3-4 practical questions. I even worked in a company that had testing down to a science with all the psychometric nonsense and in the end, it just hired many sociopath-adjacents.
You're describing the parallelized web-scraper that they pointed to their own internal site? Yeah that was fun. Too bad everyone got the same question.
Had an interview question like this where the interviewers were very impressed. Said it was the fastest and cleanest they'd seen anyone do it, and in an unfamiliar framework no less. Unfortunately it wasn't enough for me to get the job. I had a couple of answers in the next section they didn't quite like.
I agree. Great question. Academic LeetCode exercises are really not what I consider useful in evaluating candidates.
However, I'm biased. I'm really, really good at finding and fixing bugs, and pretty much stink at LeetCode, so take my support with a grain of salt.
One problem is that, if the exercise gets known, expect an underground economy of solution cheats. Same with LeetCode, but that's sort of expected. Bug fix solutions hit harder.
> You’ll need at least one question per language that you want to allow people to interview in. ... “Was this performance poor because they’re bad at debugging, or just unfamiliar with this language’s tools?”
Not necessarily a con. Different languages/tech stacks have very different tools, e.g. debugging a GC issue is different from debugging a network issue is different from debugging an embedded issue. If you chose a language for a very good reason, then it's not a con to filter out those who aren't familiar with that language enough to debug an issue in it
> If you chose a language for a very good reason, then it's not a con to filter out those who aren't familiar with that language enough to debug an issue in it
The language is based on/chosen by the candidate. We want to map the candidate to whatever language in our repository of "this question in different langs" is closest, to control for the candidate's familiarity as much as possible.
If you're screening to only candidates who can code in the language your company primarily works with, you're missing good candidates. The best are going to be able to pick up a new language rapidly, particularly if your language is a mainstream one that's probably imperative or OO-ish, or both; adding a non-esoteric language to one's repertoire is just not that hard a thing to do. But in the interview, I don't want them struggling with syntactic bull, as it's not a useful signal; I want to know how they think, whether they've seen code before, and can they reason from problem statement to debugged bug.
I did a bug squash interview when I was interviewing for new grad positions and it was one of my favorite interviews (both as an interviewee and an interviewer)
- People were allowed to use their favorite IDEs - so you could see how proficient some people were
- Great engineers are really really good at debugging - it was great to see how people debugged and it helped me pick up a few things as well
- People couldn't leetcode their way out of this
> it's still stressful to do it with someone silently judging you
Suggestion here: leave the room and come back in 40 minutes. Let them debug, fix the problem, and present to you when they're done.
> I took the general concept from https://sadservers.com/ and turned it into something I could use for interactive debugging interviews
I've thought of adding programming debug scenarios (I even got sadbugs.com lol), may implement in the future.
I'd love to see what you've done, please feel free to connect :-)
In one interview like this, they dropped an archive of a medium-ish codebase onto my machine that had failing unit tests. My task was simply to fix the code so the tests passed.
Not only did I feel engaged with the interview because I could speak aloud as I debugged, I also found it fun!
> My most successful hiring has always been based on a conversation with 3-4 practical questions.
Could you say more about this? Were they technical or behavioral interviews?