A1aM0 | 3 days ago
You’re right that many strong engineers can’t legally share employer code, and “has OSS time” is not a universal signal. So I’m now thinking of this as a dual-path system:
1) Public evidence path (for people with OSS/public technical work), where existing contributions are treated as reusable evidence.
2) Structured assessment path (for people without public artifacts), using scoped tasks, pair debugging, and incident reasoning mapped to the same rubric.
So OSS should be an advantage when present, but never a requirement.
Also agree on AI confounders: raw public activity can’t be trusted at face value anymore. We need to weight traceable process signals (review back-and-forth, bug-to-fix chain, consistency over time) higher than easy-to-generate text/code volume.
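To make the weighting idea concrete, here's a minimal sketch. The signal names and weights are purely illustrative assumptions, not a calibrated rubric:

```python
# Hypothetical weighting: traceable process signals count for more than raw volume.
# Signal names and weights are illustrative, not a real calibration.

SIGNAL_WEIGHTS = {
    "review_back_and_forth": 0.35,  # sustained review dialogue is hard to fake
    "bug_to_fix_chain": 0.30,       # traceable diagnosis -> patch -> verification
    "consistency_over_time": 0.25,  # activity spread over months, not one burst
    "raw_output_volume": 0.10,      # easy to generate with AI, so weighted low
}

def credibility_score(signals: dict) -> float:
    """Combine normalized (0-1) signal values into one weighted score."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

candidate = {
    "review_back_and_forth": 0.8,
    "bug_to_fix_chain": 0.9,
    "consistency_over_time": 0.7,
    "raw_output_volume": 1.0,  # high volume alone barely moves the score
}
print(round(credibility_score(candidate), 3))  # -> 0.825
```

The point of the structure: a candidate who maxes out volume but has weak process signals can't score well, which is the inversion you want in the AI era.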
If you were hiring with this, what would be your minimum bar for “credible evidence”?
apothegm | 3 days ago
A1aM0 | 3 days ago
The thing I’m testing is not “new tests,” but a tighter system: derive the rubric from real team tasks, apply the same rubric to both public evidence and live assessments, and make every score traceable (with lower weight on easy-to-fake AI-era signals).
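As a sketch of "one rubric, two evidence paths, every score traceable" — dimension names and the data shape here are assumptions for illustration:

```python
# Illustrative sketch: one rubric scored from two evidence paths,
# with every score pointing back at a concrete artifact.
# Dimension names and fields are assumed for the example.
from dataclasses import dataclass, field

@dataclass
class Score:
    dimension: str   # rubric dimension, e.g. "debugging"
    value: float     # normalized 0-1
    evidence: str    # link/ID so the score traces to an artifact
    path: str        # "public" (OSS evidence) or "assessment" (live task)

@dataclass
class Candidate:
    name: str
    scores: list = field(default_factory=list)

    def rubric_total(self) -> float:
        # Same aggregation regardless of which path the evidence came from.
        return sum(s.value for s in self.scores) / max(len(self.scores), 1)

c = Candidate("anon")
c.scores.append(Score("debugging", 0.9, "gh.example/pr/123", "public"))
c.scores.append(Score("incident_reasoning", 0.7, "session-42", "assessment"))
print(round(c.rubric_total(), 2))  # -> 0.8
```

The key property is that both paths feed identical `Score` records, so comparing an OSS candidate against an assessment-only candidate is comparing like with like.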
If that doesn’t improve consistency/speed/quality vs current hiring loops, then it’s just old process with new branding.