cspa | 7 years ago
Adaptive testing (https://en.wikipedia.org/wiki/Computerized_adaptive_testing) is something we're looking into, and would be extremely interesting (and challenging!) to build.
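For readers unfamiliar with how computerized adaptive testing works, here is a minimal sketch of the core loop: repeatedly give the test taker the unused item whose difficulty is closest to the current ability estimate, then update the estimate from the response. This is not the CSPA's implementation; it assumes a simple Rasch (one-parameter IRT) model and a made-up item bank purely for illustration.

```python
import math
import random

def p_correct(theta, b):
    """Rasch model: probability that a test taker with ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def run_cat(item_bank, answer_fn, n_items=10, lr=0.5):
    """Minimal adaptive-test loop (illustrative, not the CSPA's algorithm):
    administer the remaining item with difficulty nearest the current
    ability estimate, then nudge the estimate toward the response."""
    theta = 0.0          # start at average ability
    used = set()
    for _ in range(min(n_items, len(item_bank))):
        # the item with difficulty ~ theta is the most informative one
        i = min((j for j in range(len(item_bank)) if j not in used),
                key=lambda j: abs(item_bank[j] - theta))
        used.add(i)
        b = item_bank[i]
        correct = answer_fn(b)
        # gradient step on the Rasch log-likelihood for this response
        theta += lr * ((1.0 if correct else 0.0) - p_correct(theta, b))
    return theta

# simulate a test taker whose true ability is 1.2 on this scale
random.seed(0)
true_theta = 1.2
bank = [-3 + 0.5 * k for k in range(13)]   # difficulties -3.0 .. 3.0
est = run_cat(bank, lambda b: random.random() < p_correct(true_theta, b),
              n_items=12)
print(round(est, 2))
```

The key property this demonstrates is why adaptive tests differentiate strong candidates better than fixed tests: item selection chases the taker's ability level instead of wasting questions that are far too easy or too hard for them.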
ABCLAW | 7 years ago
The algorithm takes Core plus the best other section, right? As it stands, the test's difficulty makes it pretty easy to hit two perfect scores, so you'll be competing against those pretty consistently.
How does the CSPA help my interviewing process if I'm a network guru and knock all those questions out of the park, but apply for an ML job where my knowledge is paltry? And how does my CSPA score help me differentiate myself, as a network guru, from the guy above when I also aced the ML section? This seems like an obvious area where the test could outperform other testing metrics.
As far as I can tell, the only value of the current scoring system is weeding out people who are just plain abysmal at everything. Eventually people are going to game your exam and post very close analogues on Google. At that point even that floor function is dead, with the added deadweight cost of the exam still being an application requirement at BigCo.
Maybe I'm just blinded, but I think accurate skill radar charts, which test takers and employers could use for self-improvement and prospect evaluation respectively, could be an absurdly large value add.
cspa | 7 years ago
You're right: so far most of the test takers are entry-level or changing careers. To accurately assess specialties like ops/IT, we'll need separate, dedicated subject tests (or adaptive tests).
That said, no one has gotten a perfect 1600 yet :). The highest so far is something like 1380.