top | item 43193880

prismatix | 1 year ago

I just interviewed someone for a Senior position who has been using these AI copilots for 1.5 years as a contractor. In the interview I politely said I wanted to evaluate their skills without AI, so no Cursor/Copilot allowed. They did not remember how to map over an array, define a function, add click/change handlers to an input, etc.
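For context, the level of basics I mean is roughly this (a hypothetical sketch in plain JavaScript, not the actual interview exercises):

```javascript
// Define a function
const double = (n) => n * 2;

// Map over an array
const doubled = [1, 2, 3].map(double);

// Attach click/change handlers to an input.
// DOM APIs only exist in a browser, so this part is guarded
// to keep the snippet runnable anywhere.
if (typeof document !== "undefined") {
  const input = document.querySelector("input");
  input.addEventListener("click", () => console.log("clicked"));
  input.addEventListener("change", (e) => console.log(e.target.value));
}
```

Nothing exotic; the kind of thing you'd expect muscle memory for after years of front-end work.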

sachinjain | 1 year ago

Is this a good evaluation criterion anymore? Also, did you allow the candidate to use the internet?

Most of us do not remember the exact syntax for everything despite having coded in that language/framework for years.

lsaferite | 1 year ago

It's an interesting space for discussion.

What I've found after developing software for many decades and learning many languages is that the concepts and core logical thinking are what is most important in most cases.

Before the current AI boom I would still have had a problem doing some tasks in a vacuum. Not because I was incapable, but because I had so much other relevant information in my head that the minutiae of some tasks were irrelevant when I had immediate access to the needed details via auto-complete in an IDE and the language documentation. I knew what I needed to look up because of all that other knowledge in my head, though. I knew things were possible. And in cases where I didn't _know_ something was possible, I had an inkling that it might be, because I could do it in another language or it was a logical extension of some other concept.

With the current rage of AI Coding Copilots I personally feel like many people are going down a path that degrades the corpus of general knowledge that drives the ability to solve problems quickly. Instead they lean on the coding assistant to hold that knowledge and simply direct it to do tasks at a macro level. On the surface this may seem like a universal boon, but the reality is they are giving up the intrinsic domain knowledge needed to understand what software is doing and to solve the problems that will inevitably crop up.

If those two paragraphs seem contradictory in some manner, I agree. You can argue that leaning on IDE syntax autocomplete and looking up documentation is not foundationally different from leaning on a coding assistant. I can only say that they don't _feel_ the same to me. Maybe what I mean is: if the assistant is writing code and you are directly using it, then you never gain knowledge. If you are looking things up in documentation or using auto-complete for function names or arguments, you are learning about the code and how to solve a problem. So maybe the question is just: what abstraction level are we, as a profession, comfortable with?

To close out this oddly long comment: I personally use LLMs and other ML models frequently. I have found that they are excellent at helping me formulate my thoughts on a problem that needs to be solved and at surfacing information from a lot of sources into a coherent understanding of an issue. Sure, it's possible that it's wrong, but I just use it to help steer me toward the real information I need. If I ask for code, or it offers some, I treat it as a reference implementation for the actual implementation I write. My IDE auto-complete has gotten a boost as well; it's much better at understanding the context of what I'm writing and guessing what I'm about to type. It's quite good. Most of the time. But it's also wrong in very subtle ways that require careful reading to notice. And I'll sum this paragraph up with the fact that I'm turning to an LLM more and more as a first search before I hit a search engine (even though I hate Google's AI search results).

prismatix | 1 year ago

The situation opened up a very interesting discussion on our team. All of us on the team use AI tools in our job (you'd be a fool not to these days). I even use the copilot tool that the candidate used. But the difference is that I don't rely on it, and any code it produces I'm actively registering in my head. I would never let it write something that I don't understand without taking the time to understand it myself.

I do agree, though. Why do intellisense and copilots feel so different from one another? I think part of it is that with intellisense you generally need to start the action before it suggests anything, whereas with copilots you don't even need to initiate the action.