top | item 21732377

richmarr | 6 years ago

> I will say though, the problem is one of “standardization” across an organization where it’s too big for everyone to fit in a room.

I think you've got a lot of this right (disclaimer: we've built the product I think you're describing).

I don't think the most important problem is standardisation, though; it's observability/instrumentation, i.e. if you don't measure what's working, you can't improve things.

The very best tech companies measure quite a lot, and often look back at their hiring process in the event of a mis-hire to figure out what went wrong and how to avoid the same thing happening in future... but even then they only do that in exceptional cases, because it's done fairly manually. That means they have low statistical significance and a stuttering cycle of learning.

I believe they should be constantly looking at what's working well, for every hire. So that's what we built.

Once your hiring pipeline is trivially visible, a lot of these questions go away: you can see what's working well, try new things safely, and optimise with your eyes wide open.

One thing we did straight away was to deprioritise CVs and replace them with written scenario-based questions relevant to the job. Managed properly, that takes your sift stage from a predictive power of around r=0.3 to a performance we typically find above r=0.6. Far fewer early false negatives make your hiring funnel (a) less leaky, (b) more open to pools of talent previously ruled out by clumsy CV sifting, and (c) potentially shorter, as the improved sift accuracy allows companies to consider dropping their phone interview stage(s).
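To make the r=0.3 vs r=0.6 claim concrete, here's a toy simulation (my own sketch, not the product's actual model): treat true job performance and the sift score as correlated standard normals, accept the top quarter of candidates by score, and count how many genuinely strong candidates the sift rejects.

```python
import math
import random

def false_negative_rate(r, n=50_000, accept_frac=0.25, talent_frac=0.25, seed=0):
    """Estimate the share of strong candidates (top `talent_frac` by true
    performance) rejected by a sift whose score correlates with performance
    at level r. Toy model: performance and noise are standard normals."""
    rng = random.Random(seed)
    perf, score = [], []
    for _ in range(n):
        p = rng.gauss(0, 1)
        # Construct a score with correlation r against performance.
        s = r * p + math.sqrt(1 - r * r) * rng.gauss(0, 1)
        perf.append(p)
        score.append(s)
    score_cut = sorted(score, reverse=True)[int(n * accept_frac)]
    perf_cut = sorted(perf, reverse=True)[int(n * talent_frac)]
    strong = [i for i in range(n) if perf[i] > perf_cut]
    missed = [i for i in strong if score[i] <= score_cut]
    return len(missed) / len(strong)

print(false_negative_rate(0.3))  # weak sift, e.g. CV screening
print(false_negative_rate(0.6))  # stronger sift, e.g. scenario questions
```

Under these (admittedly simplistic) assumptions, the r=0.6 sift rejects noticeably fewer of the top-quartile performers than the r=0.3 one, which is the "less leaky funnel" effect described above.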

Our NPS rating for HR teams is currently running at 85, and MRR churn is under 1% so there's clearly some value to the approach.
