alexk | 3 years ago
* Error handling and code structure - whether the code handles errors gracefully and has a clear, modular structure, or whether it crashes on invalid input, or works but lives entirely in one function.
* Communication - whether all PR comments were acknowledged and addressed during the code review process.
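As a hypothetical illustration of the first criterion: the difference between "crashes on invalid input" and "handles errors gracefully" is often just a small validation layer. The function and messages below are invented for the sketch, not taken from any actual challenge submission.

```python
def parse_port(value: str) -> int:
    """Parse a TCP port, raising a descriptive error on invalid input
    instead of letting a bare int() traceback reach the user."""
    try:
        port = int(value)
    except ValueError:
        raise ValueError(f"port must be an integer, got {value!r}")
    if not 1 <= port <= 65535:
        raise ValueError(f"port must be in 1-65535, got {port}")
    return port
```

A submission that calls `int(sys.argv[1])` directly "works" on the happy path, but the reviewer's question is what happens on `"abc"` or `"70000"`.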
Others, such as whether the code sets up HTTPS properly and implements authentication, are more clear-cut.
However, you have a good point. I will chat with the team and see if we can reduce the number of things that are subject to personal interpretation and replace them with automated checks going forward.
tptacek | 3 years ago
A rubric written in advance that would allow a single person to vet a work sample response mostly cures the problem you have right now. The red flag is the vote.
alexk | 3 years ago
For some challenges we wrote a public linter and tester, so folks can self-test and iterate before they submit the code:
https://github.com/gravitational/fakeiot
I'll go back and revise these with the team, thanks for the hint.
wdella | 3 years ago
> A rubric written in advance that would allow a single person to vet a work sample response mostly cures the problem you have right now. The red flag is the vote.
I argue the opposite: Not having multiple human opinions and a hiring discussion/vote/consensus is a red flag.
The lone engineer vetting the submission may be reviewing it right before lunch, or may have had a bad week, turning a hire into a no-hire. [1] That's not a deal breaker in an iterated PR-review game, but it's rough in a single-round hiring game. Beyond that, multiple samples from a population get closer to the truth than any single sample.
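The sampling point can be made concrete with a toy simulation: if each reviewer's score is the candidate's "true" quality plus personal noise, a panel's average tracks the truth more closely than any one reviewer. All numbers here are illustrative assumptions, not data about any real process.

```python
import random
import statistics

random.seed(0)
TRUE_QUALITY = 7.0  # hypothetical candidate quality on a 0-10 scale

def reviewer_score() -> float:
    # each reviewer sees the true quality plus personal noise
    # (mood, timing, a bad week), modeled as Gaussian with sd = 2
    return TRUE_QUALITY + random.gauss(0, 2.0)

def panel_score(n_reviewers: int) -> float:
    # a panel averages its members' independent scores
    return statistics.mean(reviewer_score() for _ in range(n_reviewers))

def mean_abs_error(n_reviewers: int, trials: int = 2000) -> float:
    # average distance of the panel's score from the true quality
    return statistics.mean(
        abs(panel_score(n_reviewers) - TRUE_QUALITY) for _ in range(trials)
    )
```

Under this model a five-person panel's average lands noticeably closer to the true score than a single reviewer's, roughly in line with the usual 1/sqrt(n) shrinkage of sampling error.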
There is also a humanist element related to current employees: Giving peers a role and voice in hiring builds trust, camaraderie, and empathy for candidates. When a new hire lands, I want peers to be invested and excited to see them.
If you treat hiring as a mechanical process, you'll hire machines. Great software isn't built by machines... (yet)
[1] https://en.wikipedia.org/wiki/Hungry_judge_effect