top | item 39209144

m1el | 2 years ago

I've had the displeasure of interviewing someone who used ChatGPT in a live setting. It was pretty obvious: I ask a short question, and I say that I expect a short answer on which I will expand further. The interviewee sits there in awkward silence for a few seconds, then starts answering in a monotone voice, with sentence structure only seen on Wikipedia. This repeats for each consecutive question.

Of course this will change in the future, with more interactive models, but people who use ChatGPT in interviews do a disservice to themselves and to the interviewer.

Maybe in the future everybody is going to use LLMs to externalize their thinking. But then why do I interview you? Why would I recommend you as a candidate for a position?

blharr|2 years ago

The idea that spotting cheating is obvious is a case of selection bias. You only notice when it's obvious.

Clearly, the person put 0 effort towards cheating (as most cheaters would, to be fair). But slightly adjusting the prompt, or just paraphrasing what ChatGPT is saying, would make the issue much harder to spot.

al_borland|2 years ago

Maybe I’m a slow reader, but reading, understanding, and paraphrasing the response seems like it would take enough time to be awkward and obvious as well.

I’m not sure why anyone would want a job they clearly aren’t qualified for.

ozim|2 years ago

If someone is smart enough to get away with it, that's enough for me to know; it doesn't bother me much. I don't mind.

Had an interview take-home assignment done by GPT, and it was easy to spot after seeing dozens of solutions. Downside for the guy was that it didn't work.

irrational|2 years ago

We will have to start studying people's eyes to see if they are moving as if reading text.

johnnyanmac|2 years ago

I mean, at some point if they go through so much effort to hide their cheating they probably have attained some mastery in the process. Kinda like how some friends in high school would try and sneak in note cards on a test but they probably spent so much time prepping them that they coulda gotten an A or B regardless.

It's also why it's kinda annoying to do live interview trivia questions. Can I immediately answer what a partial template specialization is? Probably not, I never used them. Can I google it in 2 minutes and summarize it as a way for (often C++) template classes to bind some of the template arguments to specific types or values? Well, I just did. Should that cost me the interview? That's pretty much what I do on the job.

twic|2 years ago

What's your evidence for that claim?

ptmcc|2 years ago

Yes of course! I'd be happy to answer your short question with a short answer. I look forward to expanding further on the answer, as you previously stated that you expect me to.

Jokes aside, something about LLM responses is very uncanny valley and obvious.

chewxy|2 years ago

The peppy, upbeat, ultra-American tone that the LLMs produce can be somewhat toned down with good prompting but ultimately, it does stink of the refinement test set.

foxyv|2 years ago

To be honest, I think in the future we will interview people on their ability to work with an LLM. This would be a separate skill from the other ones we are looking for. Maybe even have them do some fact checks on a given prompt and response as well as suggest new prompts that would give better results. There might even be an entire AI based section of an interview.

In the end, it's just a new way to "Google" the answer. After all, there isn't much difference between reading off an LLM response and just reading the Wikipedia page after a quick Google search, except for fewer advertisements.

SOLAR_FIELDS|2 years ago

I’ve already been allowed to use it in programming interviews where they’ve said it’s explicitly allowed to use ChatGPT. It’s led to some fun interactions, because I use it a lot and as such I’m quite good with it, and interviewers are often taken aback by how quickly I’m able to just destroy the question they put out with a good prompt.

I will say there are still some programming questions you can give that will stump the hell out of ChatGPT. In particular I took one online coding assessment where I used it and there was a question about plotting on a graph with code and calculating areas based on the points plotted that ChatGPT failed miserably at, but someone pretty good with math and geometry would find pretty tractable.

theamk|2 years ago

We didn't start testing people on Google usage when Googling became useful, so I don't see why LLMs would be different.

Instead, there would be tasks that can be completed using any tools available - Google, LLM, whatever. And candidates are rated on how well the task is done, and maybe asked a few questions to make sure they made decisions knowingly and not just copied the first answer off the internet.

This already exists and is called a "take-home programming assignment".

jacques_chester|2 years ago

I agree that this is the likely long term outcome. But for now folks want to think that everyone needs to have memorized every individual screw, nail, nut and bolt in the edifice of computer science.

outside415|2 years ago

Several friends and I have used ChatGPT in live interviews to supplement answers on topics we were only just learning, in order to bridge the gap on checkboxes the interviewer may have been looking for.

We've all gotten promotions by changing jobs in the last 6 months using this method.

You can be subtle about it if it’s already an area you kind of know.

al_borland|2 years ago

I like when a person admits they don’t know something in an interview. It shows they aren’t afraid to admit when they don’t have the answer instead of trying to lie their way through it and hoping they don’t get caught. Extra bonus points if they look the thing up later to show they are curious and want to close knowledge gaps when they become aware of them.

People who are unwilling to say, “I don’t know, let me look into that,” are not fun to work with. After a while it’s hard to know what is fact vs fiction, so everything is assumed to be a fabrication.

jacques_chester|2 years ago

So, assuming they didn't know and approve, you cheated.

smcin|2 years ago

Is this junior/intermediate software engineer, or what? What sort of questions? CS exam-type, definitions, whiteboarding, programming, LeetCode, numerical problems, algorithm, data structures...? Programming-language certifications? Riddles?

m1el|2 years ago

Oh, and to add insult to injury, I was using a collaborative editing tool. So I was able to see the person:

1) Select All (most likely followed by a copy)

2) Type the answer

3) Make an obvious mistake, typing the else block before the if

willsmith72|2 years ago

i have a really annoying habit of constantly double-clicking to highlight whatever i'm reading or looking at.

i've actually been called out for it in a systems design interview, under the presumption i was copying my notes into another window, but was glad they called me out so that i could explain myself

frabjoused|2 years ago

That was me interviewing someone yesterday. The telltale select all is so cringe.

8organicbits|2 years ago

> Maybe in the future everybody is going to use LLMs to externalize their thinking. But then why do I interview you?

It will become a skill. In 1900 you'd interview a computer (a person who does math) by asking them to do math on paper. Now you'd let them write some code or use software to do it. If the applicant didn't know how to use a (digital) computer, you'd negatively rate them.

I don't love it, but we may reach the point where your skill at coaxing an LLM to do the right thing becomes a desirable skill and you'd negatively rank LLM-illiterate applicants.

Looking at LLM quality, we're not at that point for most fields.

appleiigs|2 years ago

You're not asking the correct questions as an interviewer. You should be asking specific questions about projects they've worked on, or about them personally to get to know them. ChatGPT should not be able to answer. Pretend you're Harrison Ford in Blade Runner.

makeitdouble|2 years ago

You ask many kinds of questions.

A candidate can do very well on personal and web project experience questions, and suddenly blank when you ask them how an HTTP request is structured. Or what CORS is.

Then you dig further and discover a lot more things about them that wouldn't have surfaced otherwise, because you assumed they knew all of that.

My best advice would be to never skip the "dumb" and easy technical questions. You can do it very quickly, and warn ahead that they're dumb questions you ask everyone.

dmazzoni|2 years ago

You can't only ask those questions, because some people are extremely good at bullshitting.

I always start interviews by asking them to explain their own projects. However, sometimes I'll find someone who's great at explaining projects they supposedly worked on in great detail, but then when given a simple coding problem they can't even write a for loop in their language of choice.

mvdtnz|2 years ago

ChatGPT can easily be instructed to tell a tale about a project it has worked on. It will expand on fake details when pressed.

Kranar|2 years ago

As an experiment I gave ChatGPT my resume and background information and then pretended to interview it, just to see how well it would be able to conduct a mock interview. It did exceptionally well.

I'm not sure what specific questions you have in mind, but ChatGPT is almost certainly trained on a vast array of resumes and a diverse range of profiles, possibly even all of LinkedIn itself as well as other job boards. There is little to no reason why it wouldn't be able to make up an entire persona who is capable of passing most job interviews.

tasty_freeze|2 years ago

One red flag for me is when the interviewee gives "cork" answers -- the metaphor is that of a cork bobbing in the water. If you ask superficial questions about work they've done, they answer convincingly. But the further down into the details you go, the more resistance you get, and the cork keeps bobbing back up to the surface level.

fragmede|2 years ago

You want me to explain my role in the tortoise-flipping app that had a dating feature for lesbians?

nyc_data_geek1|2 years ago

Using LLMs isn't externalizing or outsourcing thinking. LLMs aren't performing that. People doing this are substituting thinking with a process whose output masquerades as thought after a fashion, but is basically word-cloud, probability-based pattern matching.

Sure, the point that superior tool use is a valid job skill makes some sense, but conceding your agency and higher reasoning to a machine which possesses none of these is to my mind not going to be beneficial to a business in the long run.

osigurdson|2 years ago

Perhaps interviews need to assume the person being interviewed is using an LLM and can be evaluated on how effective they are with it. Presumably this is what employers want. The challenge is interviewers are busy, would prefer to be doing other things and want to stick to their old playbook ("tell me how to invert a binary tree").

kfk|2 years ago

Another take is we don’t like being lied to. Lots of these ChatGPT job candidates don’t disclose they are using an AI during the interview.

wakawaka28|2 years ago

No, it's not what they want. If they wanted you to use a LLM then they would tell you that up front. It's also too new of a technology to be required anywhere. Hardly anyone I know is even trying LLMs to begin with. Then, what do you do if the interviewee gets garbage code out of the LLM and misses an error? An error that might be forgiven in a normal interview cannot be excused when you didn't even have to write the code. Technically, if the LLM did the coding for you, you might pass without even being able to read code. This is all like the same reason you can't use a laptop on an algebra exam... The tool might do 100% of the work and leave you having shown nothing of your own ability.

jliptzin|2 years ago

You should just openly let them use chatgpt (assuming they can use it on the job too). When I interview people I try to create the same environment as the one they’ll be working in. They can use chatgpt, google, stack overflow, etc. I don’t care how many tools they have to use, as long as the work output is good and done in a reasonable time. I really don’t understand the obsession with coding on whiteboards or other situations that will literally never come up on the job. There will never be a time my employees can’t use google or chatgpt. In any case, you can tell pretty quickly how much someone knows about a topic just based on the questions they’re asking chatgpt.

Terr_|2 years ago

Whoah, hold up: Why should we believe that success using an LLM to (possibly blindly) look up the answer to interview-questions will strongly correlate to success using an LLM to craft good code, properly tested, and their ability to debug it and fit it into an existing framework?

Heck, at that point you aren't even measuring whether the candidate understood the question, nor their ability to communicate about it with prospective coworkers.

If there are any questions where "repeat whatever ChatGPT says" seems like a fair and reasonable answer, that probably means it's a bad question that should be removed instead. Just like how "I'd just check the API docs" indicates you shouldn't be asking trivia about the order of parameters in a standard library method or whatever.

recursive|2 years ago

If they're just putting the question straight into GPT, then what benefit is the candidate bringing? I can use GPT myself, and for a lot cheaper than the cheapest candidate.

WalterBright|2 years ago

At Caltech, exams were typically open book, open note. The time limit on the test, however, prevented attempts to learn the material in the time allotted. Calculators were also allowed (though were useless on Caltech exams, as course material didn't care about your arithmetic skillz).

I suspect the way to deal with ChatGPT is to allow it. Expect the interviewee to use ChatGPT as a tool. Try out the interview questions beforehand with ChatGPT. Ask questions that ChatGPT won't be good at answering, like how a calculator is useless on a physics exam.

wakawaka28|2 years ago

Using ChatGPT as a tool makes as much sense as allowing a human assistant to take the exam with you.

In an open-book test, you have to know what you're looking for and roughly where to find it in the book. That implies some knowledge. With ChatGPT you could type the question verbatim and get a potentially right answer, without even understanding the answer at all. It is therefore unacceptable for use on any exam.

xarope|2 years ago

As a former tertiary educator (for a brief moment, before I decided academia wasn't my thing), that's how open book exams are set; the assumption is you have knowledge of the subject, and the books are there for you to verify and quote examples of/from.

NOT to browse through looking for a solution from step 0.

lmm|2 years ago

> But then why do I interview you? Why would I recommend you as a candidate for a position?

Presumably you have tasks that you want performed in exchange for money? (Or want to improve your position in the company hierarchy by having more people under you or whatever).

renewiltord|2 years ago

That sounds great, doesn't it? You got powerful negative signal.

lcnPylGDnU4H9OF|2 years ago

It sounds like the problem is really that this is the most obvious cheater. Someone better at manipulation and deception might do a better job cheating the interviewer such that they're hired but then be entirely inadequate in their new position.

JohnFen|2 years ago

> Of course this will change in the future, with more interactive models

I think that what will change is that doing interviews remotely will become rarer, in favor of in-person interviews.

scarlson|2 years ago

Why?

Interviewing as a process sucks enough as it is. It should just be a culture fit filter that takes you all of 15 minutes to say yes or no to.

Technical interviews are lame and filter for people that are good at technical interviews, not people that are good at the job.

batch12|2 years ago

I've had this happen too, with almost the same responses. It was even more obvious because I was able to see the reflection of their LCD backlight glowing across their face as they switched back and forth to answer the questions. I just directly asked if they were using an external resource to answer my questions. They said yes, as if it was normal. I thanked them for their time, as that was my last question.

outworlder|2 years ago

> The interviewee sits there in awkward silence for a few seconds, and starts answering in a monotone voice, with sentence structure only seen on Wikipedia. This repeats for each consecutive question.

That's a bit better than proxy interviews and people lip syncing, but not by much.

Espressosaurus|2 years ago

This seems readily fixable by doing the interview in-person.

smcin|2 years ago

How much can you mitigate this by interviewing them remotely but on video? Then you can see if they're typing and reading the answer (unless they have a friend doing that and feeding it to them through an earphone, as I hear happens).

esafak|2 years ago

Then you'd filter them by resume first to manage costs. Pick your poison.

m3kw9|2 years ago

Would get easier with an API that connects with stuff like Whisper and voice cloning, and a good prompt.

neilv|2 years ago

> but people who use ChatGPT on the interviews make a disservice to themselves and

I think most people have been thinking that the interviews are mostly BS with little relationship to the job, which you simply have to get through.

Many, many people will cheat to the extent that they think they can get away with it.

It's a bit like how many people cheat in school. (On classes they consider irrelevant, they might justify it that way. On relevant classes, they might justify it on the grounds that passing, or their GPA, matters more to their goals than learning that material at that time.)

I think people generally don't believe a "you're doing a disservice to yourself" argument. They choose the tradeoff or the gamble.

Personally, I don't tolerate cheating, and I have a low tolerance for interview BS. Neither is the dominant strategy for the current field.

duxup|2 years ago

I’ve wondered how much of the appeal of LLMs is for humans to BS other humans.

foxyv|2 years ago

Considering how much time is spent on manufacturing BS for consumption by bosses, professors, teachers, and advertising? I think this is going to automate at least half of the work office workers and students are doing now...