kenschu's comments

kenschu | 1 year ago | on: The impact of AI on the technical interview process

Ha, no LLM back-and-forth interview! Just an async test, and the signals are implicit. I do think there's an advantage for candidates - personally I'd rather have the opportunity to prove my skills than be auto-denied because I didn't go to a shiny university, etc.

kenschu | 1 year ago | on: The impact of AI on the technical interview process

*disclaimer that I'm the founder of Ropes AI, & we're building a new way to evaluate engineering talent*

Discourse around this tends to be negative - but I think AI really opens the door here. It lets us effectively vet talent asynchronously for the first time.

Our thesis is that live interviews, while imperfect, work. If an engineer sits down with a candidate and watches them work for an hour (really you probably only need 5 minutes), you have a good read on their technical ability. There are all of these subtle hints that come out during an interview (how does the candidate approach a problem? What's their debugging reflex when something goes wrong? etc.) - seeing enough of those signals gives you confidence in your hiring decision.

Well - LLMs can do that too, meaning we can capture these subtle signals asynchronously for the first time. And that's a big deal - if we can do that, then everyone gets the equivalent of a live interview - it doesn't matter what your YOE is or where you went to school - those who are technically gifted earn the slot.

And that's what we've built - a test that carries the same signal as a live interview. If you can do that reliably, you're not just adding a new interview method to the existing system - you might change how the recruiting process itself is structured.

kenschu | 1 year ago | on: Bug squash: An underrated interview question

Agree! This is one of the core pains we set out to fix. Using a CDE to package everything nicely for the candidate goes a surprisingly long way for their experience.

I think there's a valid point that any hosted IDE != a candidate's local setup. But I think there's a compromise there - we try to offer a variety of common IDEs + a few minutes of prep to download extensions, etc. before you're thrown into the thick of it.

kenschu | 1 year ago | on: Bug squash: An underrated interview question

There's both culture and technical elements to consider in a potential hire. I don't think anyone would contest that vetting for the culture/drive of a candidate is important. But I do think the demonstration of skills is a necessary part of technical hiring, at least for non-senior positions.

kenschu | 1 year ago | on: Bug squash: An underrated interview question

This is exactly what we've built!

We take in these bug-squash/GitHub-based repos, serve them in VS Code Web/JetBrains/etc., and give you instant results.

Email is in profile if anyone's curious to see it live

kenschu | 1 year ago | on: Why Triplebyte Failed

There are a lot of "AI recruiters" floating around that ask you about a time you failed, etc. - we don't like that approach at all.

We're (1) only running technical (coding) assessments, and (2) still letting live evaluators make the final hiring decisions.

kenschu | 1 year ago | on: Why Triplebyte Failed

A line we have to tread carefully!

I should have clarified - we're not just putting an LLM on the other side of the candidate and letting it drive decisions.

Instead, think of use cases like content generation (we don't have a problem library - we create custom problems/modules for each customer of ours). That's where I think you can improve signal a lot: by setting up a better situation in which to assess the candidate.

kenschu | 1 year ago | on: Why Triplebyte Failed

I'm the founder of a new tech assessment co - I hear "you guys remind me of Triplebyte" at least 1x per week. Clearly they were onto something originally - there's a lot of lingering love among eng leaders.

Our thesis is that LLMs unlock a lot in this space - and that we can provide more signal to employers, while giving candidates a better experience. There are a lot of open/difficult questions in doing this well - we're trying to figure it out.

(edit: we're not building an "AI recruiter" that asks you about a time you failed or automates hiring decisions - we are extensively using LLMs to do things like problem/module generation, etc.)

I'd love to better understand the Triplebyte story. If you enjoyed their product (as a hiring manager, or as a candidate) or if you feel passionately about this space, I'd love to talk to you. Email is in my bio.

kenschu | 1 year ago | on: The business of takehome assessments

Founder of new tech assessment company / mentioned in article here.

We're biased, but we think the old form of take-home assessments (+ classic Leetcode tests, etc.) is completely broken. Beyond the reasons you all mention - they're unreliable today in the age of ChatGPT, etc. Way too easy to cheat.

We're seeing candidates copy take-home instructions into an LLM, paste the solution back, and submit in <5 minutes. It's hard to write a problem that's (1) small enough in scope to solve in a short time, but (2) hard enough that LLMs can't step towards solving it.

At Ropes - we're using LLMs to evaluate how candidates code, not just the final solution. So hiring managers can step in and look at a timeline of actions taken to reach the solution, instead of just the final answer. How do candidates debug? What edge cases do they consider? Etc. We think these questions hold real signal and can be answered asynchronously for the first time.
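To make the "timeline of actions" idea concrete: one minimal way to capture it is an append-only event log that a reviewer (or an LLM) can replay and summarize after the fact. This is a hypothetical sketch, not Ropes' actual implementation - the `Event`/`Timeline` names and event kinds are invented for illustration.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Event:
    """One candidate action captured during an assessment (hypothetical schema)."""
    kind: str     # e.g. "edit", "run_tests", "debug_start"
    detail: str   # free-form description of the action
    ts: float = field(default_factory=time.time)

class Timeline:
    """Append-only log of candidate actions, replayable by a reviewer."""

    def __init__(self) -> None:
        self.events: list[Event] = []

    def record(self, kind: str, detail: str) -> None:
        self.events.append(Event(kind, detail))

    def summary(self) -> dict[str, int]:
        """Count events per kind - a crude proxy for behaviors like
        how often the candidate ran the tests while iterating."""
        counts: dict[str, int] = {}
        for e in self.events:
            counts[e.kind] = counts.get(e.kind, 0) + 1
        return counts
```

A reviewer could then scan `timeline.events` in order to see *how* a candidate reached the solution (edit, run, fail, edit, run, pass) rather than only the final diff.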

We're trying to make this better for candidates too. E.g. (1) shorter assessments, (2) you can often use your own IDE, (3) you're not purely evaluated on test cases, etc. But we're not yet perfect. If this sounds interesting / you have strong thoughts I'd love to talk to you - email is in my bio.
