top | item 46767863


goostavos | 1 month ago

I had my first interview last week where I finally saw this in the wild. It was a student applying for an internship. It was the strangest interview. They had excellent textbook knowledge. They could tell you the space and time complexities of any data structure, but they couldn't explain anything about code they'd written or how it worked. After many painful and confusing minutes of trying to get them to explain, like, literally anything about how this thing on their resume worked, they finally shrugged and said that "GenAI did most of it."

It was a bizarre disconnect having someone be both highly educated and yet crippled by not doing.


stahorn|1 month ago

Sounds a little bit like the stories from Feynman, e.g.: https://enlightenedidiot.net/random/feynman-on-brazilian-edu...

The students had memorized everything, but understood nothing. Add in access to generative AI, and you have the situation that you had with your interview.

It's a good reminder that what we really do, as programmers or software engineers or what you wanna call it, is understanding how computers and computations work.

godelski|1 month ago

There's a quote I love from Feynman

  > The first principle is that you must not fool yourself and you are the easiest person to fool.
I have no doubt he'd be repeating it loudly now, given that we live in a time when we've built machines optimized to fool us.

It's probably also worth reading Feynman's Cargo Cult Science: https://sites.cs.ucsb.edu/~ravenben/cargocult.html

yomismoaqui|1 month ago

This is the kind of interaction that makes me think that there are only 2 possible futures:

Star Trek or Idiocracy.

steve_adams_86|1 month ago

Hmmm, I think we're more likely to face an Idiocracy outcome. We need more Geordi La Forges out there, but we've got a lot of Fritos out here vibe coding the next Carl's Jr. locating app instead.

cornhole|1 month ago

we would be lucky to have idiocracy. president camacho had a huge problem and he found the smartest person in the country and got him working on it. if only we could do that

drob518|1 month ago

Lots of theory but no practice.

sally_glance|1 month ago

More like using a calculator but not being able to explain how to do the calculation by hand. A probabilistic calculator which is sometimes wrong at that. The "lots of theory but no practice" has always been true for a majority of graduates in my experience.

vonneumannstan|1 month ago

This is exactly the end state of hiring via Leetcode.

mattarm|1 month ago

Makes me wonder if the hardware engineers look at software engineers and shrug, “they don’t really know how their software really works.”

Makes me wonder if C programmers look at JS programmers and shrug, “they don’t understand what their programs are actually doing.”

I’m not trying to be disingenuous, but I also don’t see a fundamental difference here. AI lets programmers express intent at a higher level of abstraction than ever before. So high, apparently, that it becomes debatable whether it is programming at all, or whether it takes any skill, or requires education or engineering knowledge any longer.

derrida|1 month ago

Wait, so they could, say, write out a linked list or bubble sort, but not understand what it was doing? Like no mental model of memory or registers, no intuition for execution order, nothing even conceptual like a graph walk? Like just "zero" on the conceptual front, but they could reproduce data structures, some algorithm for accessing or traversing them, and give rote O notation answers about how long execution takes?

Just checking I have that right... is that what you meant?

I think that's what you were implying, but I just want to check I have that right? if so

... that ... is .... wow ...

saghm|1 month ago

If I'm understanding correctly, I don't think what you're saying is quite right. They had a mental model of the algorithms, and then the code they "produced" was completely generated by AI, and they had no knowledge of how the code actually modeled the algorithm.

Knowing the complexity of bubble sort is one skill, being able to write code that performs bubble sort is a second, and being able to look at a function with the signature `void do_thing(int[] items)` and determine that it's bubble sort and the time complexity of it in terms of the input array is a third. It sounds like they had the first skill, used an AI to fake the second, but had no way of doing the third.
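For concreteness, here's a hypothetical sketch of what that "third skill" test looks like. The class and method names are made up to match the deliberately unhelpful signature above; the skill being described is reading the body cold and recognizing the nested adjacent-swap passes as bubble sort, O(n^2) worst case:

```java
public class Sorts {
    // Unlabeled function matching the comment's `void do_thing(int[] items)`.
    // Tells that identify it as bubble sort: repeated passes over the array,
    // comparisons and swaps of *adjacent* elements only, and an early exit
    // once a full pass makes no swap.
    static void do_thing(int[] items) {
        for (int pass = 0; pass < items.length - 1; pass++) {
            boolean swapped = false;
            // Each pass bubbles the largest remaining element to the end,
            // so the sorted tail (length `pass`) can be skipped.
            for (int i = 0; i < items.length - 1 - pass; i++) {
                if (items[i] > items[i + 1]) {
                    int tmp = items[i];
                    items[i] = items[i + 1];
                    items[i + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) break; // no swaps: already sorted (best case O(n))
        }
    }
}
```

Someone with only the first skill can recite "bubble sort is O(n^2)" but, handed this body with no name attached, can't connect the nested loops to that answer.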