I'm currently setting up a small coding assessment for recruitment at my company. I think in 2023 it's not reasonable to restrict the use of tools like GPT-4 or Copilot. I wonder how Advent of Code will enforce this?
I've noticed there are situations where I'll bring in some code from my project and ask how to do something I know is almost impossible. (For example, accessing a session variable during my framework's boot code.)
ChatGPT will, more often than not, come up with a stupidly complicated solution that doesn't work, as in my example above. It takes an actual engineer to figure out why it is impossible and solve it correctly.
If I were hiring people, things like that would make great questions. It's also so simple: keep track of the times GPT-4 has no idea what it is doing, and use those as your questions.
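To make the session-at-boot example above concrete, here is a minimal, hypothetical sketch (not any real framework's API) of why such a request fails: a session only exists while a request is being served, so boot-time code has nothing to read.

```python
# Hypothetical minimal "framework" illustrating why session access fails at
# boot: the session only exists inside a request, never during startup.

class RequestContextError(RuntimeError):
    """Raised when session state is accessed outside of a request."""


class App:
    def __init__(self):
        # During boot, no request is in flight, so there is no session.
        self._current_session = None

    @property
    def session(self):
        if self._current_session is None:
            raise RequestContextError("no active request: session unavailable at boot")
        return self._current_session

    def handle_request(self, session_data):
        # A session only comes into existence while serving a request.
        self._current_session = dict(session_data)
        try:
            return self.session["user"]
        finally:
            self._current_session = None


app = App()

try:
    app.session  # boot-time access: guaranteed to fail
except RequestContextError as exc:
    print(exc)

print(app.handle_request({"user": "alice"}))  # works inside a request
```

No amount of clever generated code changes this: the information simply doesn't exist yet at boot, which is exactly the kind of constraint an engineer has to recognize.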
Similar experience here. But writing down the problem and extracting the relevant parts still helps me think about it, even when GPT-4 doesn't come up with a good answer.
If you follow the link, they expand on the policy. I don't think they will enforce it; hence the "please". They say you can use AI tools to assist (but discourage it); feeding the puzzle text into an AI and submitting its answer is what they consider a faux pas.
Restricting AI tools in development work is pointless. However, the person using them should still be smarter than the tools, and should use them to boost their own efficiency.
I ask people to describe to me verbally the code they are going to write, or to reason through an algorithm out loud. Depending on the role, I sometimes start with FizzBuzz, with a spoken answer.
They can use a notepad. They can take time to think. But they will get "why" questions and need to understand the code or constructs they suggest.
This feels safe from ChatGPT, at least for the moment.
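For reference, the classic FizzBuzz exercise the comment above mentions, in Python; talking through the divisibility checks aloud is the point of the spoken version.

```python
def fizzbuzz(n: int) -> str:
    """Return "Fizz"/"Buzz"/"FizzBuzz" for multiples of 3/5/both, else n as a string."""
    if n % 15 == 0:          # divisible by both 3 and 5
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


print(" ".join(fizzbuzz(i) for i in range(1, 16)))
# → 1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz
```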
In our recent hiring, we had a code component that went: "Here is the output from ChatGPT for writing some code; let's take a look at the code together and identify the problems with what it generated."
nkozyra|2 years ago
No, but it's reasonable to say "we want to get a sense of your baseline understanding of things so please don't cheat with LLMs or otherwise."