kenschu's comments
kenschu | 1 year ago | on: The impact of AI on the technical interview process
Discourse here tends to be negative - but I think AI genuinely opens a door here. It lets us effectively vet talent asynchronously for the first time.
Our thesis is that live interviews, while imperfect, work. If an engineer sits down with a candidate and watches them work for an hour (really, you probably only need 5 minutes), you get a good read on their technical ability. All of these subtle hints come out during an interview (how does the candidate approach a problem? What's their debugging reflex when something goes wrong? etc.) - seeing enough of those signals gives you confidence in your hiring decision.
Well - LLMs can pick up on those signals too, meaning we can capture them asynchronously for the first time. And that's a big deal: if we can do that, then everyone gets the equivalent of a live interview. It doesn't matter what your YOE is or where you went to school - those who are technically gifted get a slot.
And that's what we've built - a test that carries the same signal as a live interview. If you can do that reliably, it doesn't just add a new interview format to the existing system - it might change how the recruiting process itself is structured.
kenschu | 1 year ago | on: Learning to Reason with LLMs
kenschu | 1 year ago | on: Ask HN: What are you working on (August 2024)?
kenschu | 1 year ago | on: Bug squash: An underrated interview question
I think there's a valid point that any hosted IDE != a candidate's local setup. But there's a compromise: we try to offer a variety of common IDEs, plus a few minutes of prep to download extensions, etc., before you're thrown into the thick of it.
kenschu | 1 year ago | on: Bug squash: An underrated interview question
We take these bug-squash/GitHub-based repos, serve them in VS Code Web/JetBrains/etc., and give you instant results.
Email is in profile if anyone's curious to see it live
kenschu | 1 year ago | on: Why Triplebyte Failed
We're (1) only running technical (coding) assessments, and (2) still letting live evaluators make the final hiring decisions.
kenschu | 1 year ago | on: Why Triplebyte Failed
I should have clarified - we're not just putting an LLM on the other side of the candidate and letting it drive decisions.
Instead, think of use cases like content generation (we don't have a problem library - we create custom problems/modules for each customer of ours). That's where I think you can improve signal a lot: by setting up a better situation in which to assess the candidate.
kenschu | 1 year ago | on: Why Triplebyte Failed
Our thesis is that LLMs unlock a lot in this space - that we can provide more signal to employers while giving candidates a better experience. There are a lot of open, difficult questions in doing this well - we're trying to figure it out.
(edit: we're not building an "AI recruiter" that asks you a time you failed or automates hiring decisions - we are extensively using LLMs to do things like problem/module generation, etc.)
I'd love to better understand the Triplebyte story. If you enjoyed their product (as a hiring manager, or as a candidate) or if you feel passionately about this space, I'd love to talk to you. Email is in my bio.
kenschu | 1 year ago | on: The business of takehome assessments
We're biased, but we think old-school take-home assessments (plus classic Leetcode tests, etc.) are completely broken. Beyond the reasons you all mention, they're completely unreliable today in the age of ChatGPT: way too easy to cheat.
We're seeing candidates copy take-home instructions into an LLM, paste the solution, and submit in <5 minutes. It's hard to write a problem that's (1) small enough in scope to solve in a short time, but (2) hard enough that LLMs can't step toward solving it.
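Catching that paste-and-submit pattern doesn't need anything fancy. A rough sketch - the `Event` shape, event names, and thresholds are all my own invented stand-ins, not an actual implementation - could flag it from a raw action log:

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str       # hypothetical kinds: "keystroke", "paste", "submit"
    t: float        # seconds since the assessment started
    chars: int = 0  # characters inserted (for "keystroke"/"paste")

def suspicion_flags(events, min_minutes=5, max_paste_ratio=0.5):
    """Flag sessions that look like a copy-the-LLM-answer run."""
    typed = sum(e.chars for e in events if e.kind == "keystroke")
    pasted = sum(e.chars for e in events if e.kind == "paste")
    submit_t = max((e.t for e in events if e.kind == "submit"), default=0.0)
    flags = []
    if submit_t < min_minutes * 60:      # finished implausibly fast
        flags.append("submitted_fast")
    total = typed + pasted
    if total and pasted / total > max_paste_ratio:  # solution mostly pasted in
        flags.append("mostly_pasted")
    return flags
```

For example, a session that pastes 900 characters, types 50, and submits at the 3-minute mark would trip both flags, while an unhurried mostly-typed session trips neither.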
At Ropes, we're using LLMs to evaluate how candidates code, not just the final solution. Hiring managers can step in and look at a timeline of the actions taken to reach the solution, instead of just the final answer. How do candidates debug? What edge cases do they consider? Etc. We think these answers hold real signal and can, for the first time, be answered asynchronously.
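To make the "timeline of actions" idea concrete, here's a minimal sketch of condensing a raw event log into reviewer-facing signals - the `(kind, seconds)` tuple shape and event names are assumptions for illustration, not the actual product schema:

```python
from collections import Counter

def signal_summary(events):
    """Condense a (kind, seconds_from_start) action log into
    coarse signals a hiring manager can scan at a glance."""
    counts = Counter(kind for kind, _ in events)
    first_run = min((t for k, t in events if k == "run_tests"), default=float("inf"))
    first_edit = min((t for k, t in events if k == "edit"), default=float("inf"))
    return {
        "edits": counts["edit"],
        "test_runs": counts["run_tests"],
        # reproducing the bug before touching code is a classic debugging reflex
        "ran_tests_before_editing": first_run < first_edit,
    }
```

An LLM (or a human) can then narrate a summary like this instead of grading only the final diff - the point being that the process data, not just the answer, carries the signal.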
We're trying to make this better for candidates too, e.g. (1) shorter assessments, (2) you can often use your own IDE, (3) you're not purely evaluated on test cases. But we're not perfect yet. If this sounds interesting or you have strong thoughts, I'd love to talk to you - email is in my bio.