Peeking at the source, it's just a zero-width div, which is not accommodating of people with disabilities. This might open you up to litigation if you disqualify a blind candidate for 'using AI' when they may simply have answered the question as their screen reader read it out.
For anyone who missed the (poorly explained) trick: the website uses CSS to insert a hidden equals sign, so the code reads differently on screen than it does when copied/pasted. That's how the author knows whether you solved it in your head or pasted it somewhere.
The thing I found particularly fascinating, given that the article is about discarding AI-assisted applicants, is that if you take a screenshot and ask ChatGPT, it works fine (of course, it can't see the extra equals sign).
So this is not really foolproof, and it also makes me think that feeding screenshots to an AI is probably better than copy-pasting.
Thanks, I was wondering how in the hell so many people could get the answer wrong, and what this hidden equals sign he kept talking about was.
Maybe the question could be flipped on its head to filter further: "50% of applicants get this question wrong -- why?" Someone more inquisitive like you might then inspect it, but that's probably more of a frontend question.
What most interviews get wrong is that there are usually just a few "bullet points" such that, if you see them, you instantly know the candidate at least has the technical chops.
Instead of creating a test that specifically aims for those bullet points, many technical assessments end up with convoluted scaffolding when actually, only those key bullet points really matter.
Like the OP, I can usually tell if a candidate has the technical chops in just a handful of really straightforward questions for a number of technical domains.
If you have a test that can identify a good candidate quickly then you have honestly struck gold and can genuinely use that to start your own company. I mean this with absolute sincerity.
One of the absolute hardest parts of my business is hiring qualified candidates, and it's demoralizing, time-consuming, and unbelievably expensive. The best that I've managed to do is the same as what pretty much every other business owner says: I can usually (not always) filter out the bad candidates (along with some false negatives), and have some degree of luck hiring good candidates (with some false positives).
But along that thought, I’ve always held that a human conversation is the best filter. I’ll ask what you’ve worked on recently, what you learned, what you messed up, and what you hated about the tool / language / framework.
I strongly believe your ability to articulate your situation corresponds with your ability to do the job.
People are focusing on the > vs the >=, but for me the key point is being able to hold logic and variables in your mind over several iterations of a loop.
I’ve used similar tests in interviews before (a function that behaves like atoi and candidates have to figure out what it’s doing) and the good candidates are able to go over the code and hold values in their head across multiple iterations of a loop.
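For context, here's a minimal sketch of the kind of atoi-like function I mean (a generic illustration, not the exact one I use in interviews):

```python
def mystery(s: str) -> int:
    """Candidates trace this by hand for inputs like "472" or "-35"."""
    sign = 1
    i = 0
    if s and s[0] == "-":
        sign = -1
        i = 1
    result = 0
    while i < len(s):
        # Each pass shifts the accumulated value left one decimal place
        # and folds in the next digit: "472" -> 4 -> 47 -> 472.
        result = result * 10 + (ord(s[i]) - ord("0"))
        i += 1
    return sign * result
```

A good candidate narrates the state (`i`, `result`) at each iteration; that running trace is exactly the skill being probed.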
Which is literally the point of the post. They put the = in >= at an opacity of 0 and a font-size of 1, which means it doesn't appear when the styles are applied. And their point is that candidates who copy/paste such trivial code into an AI/interpreter will get -11, because the tool sees the >=.
A gap in their process, though, is that (as you mentioned) various reading modes also strip styles and likewise reveal the =. Arguably, a proper reader mode should filter out opacity:0 content.
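For the curious, here is roughly how such a trick can be built. This is my own reconstruction from the opacity/font-size description, not the article's actual markup:

```html
<!-- Visually this renders as "n > 0", but selecting and copying the text
     yields "n >= 0", because the hidden span is still in the DOM. -->
<pre><code>while (n &gt;<span style="opacity: 0; font-size: 1px;">=</span> 0)</code></pre>
```

Reader modes and assistive tech that ignore the inline styles will surface the =, which is exactly the gap being discussed.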
Exactly the reason why you should NEVER copy-paste code from a website into your terminal, even if your terminal has paste protection (https://lwn.net/Articles/749992/).
Was that the trick? When you copy the text, it is also >=, which is why an online search or AI tools probably give the wrong answer, as the article asserts. If you correct the code, then at least Claude gives the right answer.
I think it’s important to test these systems. Let some % of candidates who get this wrong through to the next stage and see what happens. Does failing this test actually correlate with being a bad fit later?
If you want to ineffectively filter out most candidates, just auto-reject everything that doesn't arrive on a timestamp ending in 1.
> Let some % of candidates who get this wrong through to the next stage and see what happens.
This isn't a good methodology. To do your validation correctly, you'd want to hire some percentage of candidates who get it wrong and see what happens.
Your way, you're validating whether the test is informative as to passing rate in the next stage of your hiring process, not whether it's informative as to performance on the job.
(Related: the 'stage' model of hiring is a bad idea.)
It's not for getting an interview with the CTO, but a very early filter to weed out poor candidates. But yes, if that's the only question then it's not going to discover talent.
I like it: a test so bad, it just might work! I think the trick is not the equals sign; the trick is to keep it so simple and small that most qualified people won't try to short-circuit it.
If such a trivial test really did[1] filter out many candidates (beyond the test's technical limitation that some client devices will render the =, as the users leveraging reader mode mentioned), I wonder if there is a greater observation we could draw from it. Personally I would assume 100% of programmers could answer the question in seconds, and if people really were copy/pasting to an AI tool[2], I would assume they are so jaded and exhausted by clever trick questions, nine-round hiring funnels, leetcode, whiteboard code, and so on, that this is just another of countless applications and they simply don't care anymore.
[1] Yeah, I'm super cynical about stories like this, and know that many if not most are just invented shower thoughts manifested into a fictional reality.
[2] Alternatively, they're pasting into a normal editor -- even Notepad -- for a more legible programming font, where again the = appears.
You've built a filter that punishes verification at the hiring stage, then you're surprised when your team ships unverified code.
You get what you select for. He selected for "doesn't double-check." Congratulations, you've got a team of developers who don't double-check.
I got the right answer, but it was so easy that I went in doubting I had done it right.
Which I understand is my issue to work on, but if I were interviewing, I'd ask candidates to verbalize or write out their thought process to get a sense of who is overthinking or doubting themselves.
That doubt is valid. Anyone reading this blog post (or in an interview, given the prevalence of trick interview questions) would know there must be some kind of trick. So, after getting the answer without finding a trick, it would be totally reasonable to conclude you must have missed something. In this case, it turns out the trick was something that was INTENDED for you to miss if you solved the problem in your head. At the end of the day, the knowledge that "I may have missed something" is just part of day to day life as an engineer. You have to make your best effort and not get paralyzed by the possibility of failure. If you did it right, had a gut feeling that something was amiss, but submitted the right answer without too much hemming and hawing, I expect that's typical for a qualified engineer.
The only correct answer is... both answers (1 and -11).
That is, if you're really interested in pursuing the position.
Not only are you willing to take their tests, but you go beyond what is required, for your own benefit and edification.
That's why, when presented with the URL during the interview, you immediately load it, and right-click View Source into another tab, while simultaneously making small talk with the former CTO interviewer.
Even though you're a backender, you know enough frontend to navigate the interspersed style, HTML, and JavaScript, and so you solve both puzzles and weave the two answers into the conversation, while also deciding that this is probably not the gig for you. But who knows, let's see how they answer your questions now...
Somewhat off-topic: I had an interview for an Engineering Manager position with the Head of Engineering. They had some leetcode problem prepared, and I tried solving it and failed.
During the challenge, I used some Python slice syntax (`:-1`) (and maybe some other things) that they didn't know.
In the end, I failed the challenge as I didn't do it in the O(n) way...
These kinds of stupid challenges exemplify what's wrong with hiring these days: one interviewer, usually some "VP"/"Head of", decides what the "correct" way to write some code is, when they themselves sometimes couldn't write a line of code (since they've been "managers" for millennia).
PS: they actually did not know what `:-1` means... I rest my case.
Were they a python engineer? I interview folks all the time in languages I don’t understand, and I ask dumb questions throughout the interview. I’ve been a professional (non-python) programmer for over a decade now and I don’t really know what :-1 means, I can guess it’s something like slicing until the last character but idk for sure.
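For what it's worth, the guess is essentially right; `:-1` is slice notation rather than an operator, and a quick REPL check confirms it:

```python
s = "hello"
# s[:-1] is everything except the last character (negative indices count
# from the end of the sequence).
print(s[:-1])  # hell
# It works on lists too, and slicing never raises on short inputs:
print([1, 2, 3][:-1])  # [1, 2]
print(""[:-1])         # prints an empty line: slicing "" gives ""
```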
I read the problem without reader mode by accident and got it correct, then was mildly confused when I switched to reader mode (which I always use when a site is in light mode, as I prefer dark mode on everything) and saw the ">=". In normal circumstances I would've failed this.
If someone is visually impaired, it's short enough you can just read the problem text to them.
There are many candidates who simply can't hold values in their head across multiple iterations of a loop.
Better yet, have them listen to a 1h meeting and compare notes/action points.
But then the question is, how do you reach people who filter out the job ads?
Really, the better test would be to not discriminate on it before you know it's useful, but to store their answers to compare later.
And if in your doubt you decided to run it through the interpreter to get the "real" answer, whoops, you're rejected.
Safari's reader sees the =. Edge does not.