item 47199750

aerhardt | 1 day ago

I am seeing similar dynamics at work, but also in my graduate studies.

I am currently doing the OMSCS at Georgia Tech and taking Machine Learning (7641) which has always had a reputation for being difficult. I don't mind a challenge, but I feel that the AI policy creates a sense of permanent and unpayable cognitive debt and learning deficits.

The class has traditionally taken a "data-first approach" to ML: instead of focusing on the details of the different algorithms, students must apply them to datasets and analyze their performance and trade-offs. There are four colossal end-to-end ML projects, each culminating in an 8-page IEEE-style paper. (I actually prefer this general direction over an algo-heavy one - I find it more valuable to my work in business applications.)

For their AI policy, they've decided that all code can be generated by AI - the only rule is that the paper contents must be original analysis. To avoid taking any risks, I do not even use spell-checking AIs on the paper.

However, it seems to me that to compensate for the AI help, they've cranked up the amount of ground that needs to be covered in the projects. In the first project we were given two datasets, six algos to test, and a bunch of params and metrics to experiment with, producing a real combinatorial explosion of stuff to work on. This is on top of 150+ pages of scientific reading some weeks.
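To give a sense of how fast that grid blows up, here's a back-of-the-envelope sketch. The counts for hyperparameter settings and metrics are my own illustrative assumptions, not the actual assignment spec:

```python
# Illustrative only: the dataset and algo counts match the post;
# the hyperparameter and metric counts are assumed for the sketch.
datasets = ["dataset_a", "dataset_b"]                      # 2 datasets
algos = ["dt", "nn", "boosting", "svm", "knn", "bayes"]    # 6 algorithm families
settings_per_algo = 4   # assume ~4 hyperparameter settings swept per algo
metrics_per_run = 3     # assume ~3 metrics reported per run

training_runs = len(datasets) * len(algos) * settings_per_algo
result_cells = training_runs * metrics_per_run

print(training_runs)  # 48 training runs to execute
print(result_cells)   # 144 result cells to analyze and write up
```

Even with conservative assumptions, that's dozens of runs feeding a single 8-page paper, which is where the "can't keep up with the details" feeling comes from.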

I am leaning very heavily on LLMs to generate massive chunks of the code, but I feel like I can't keep up at all.

It's not even that my skills coming in were poor. I am a confident programmer, recently brushed up on the math, and this is actually my second CS degree and my fourth course at Georgia Tech. I am familiar with the feeling of difficult courses or work problems pushing me to my intellectual limits, where I stare into the abyss, but this feels radically different.

I am pushed to work at a higher (less detailed) level of abstraction, as many have foretold LLMs would do. I feel like I am learning about the data science meta-process but cannot keep up with details that are not even that fine. There is some complex math in there that could probably make my head spin, but I cannot even get to it - I am cognitively stuck at higher abstractions, keeping up with so many families of algos, datasets, APIs, and thousands of lines of AI-generated code.

In some sense this may be a shape of things to come at work too, but here's where that analogy breaks down: in the class, the performance of our work doesn't matter, and we're not even graded on it. As long as we convincingly explain why things happen, we should be good, but even as I start to get the hang of the class and focus on that, I feel like I can barely keep up. If only they had used a bit of the AI productivity gain to make room to focus longer on that!

I thought I was losing it but this morning I found a Reddit thread with dozens of current students venting and found some solace in seeing that I'm not alone.

I also feel for the teaching staff, who I think are absolutely well-meaning, competent and attentive, but who just like the rest of us are trying to wing it in this brave new world.

AI is transformative for good and for ill, and it's going to take us all many years to sort it out. We haven't even begun to understand social media, and AI could be orders of magnitude more complex, while further complicating the former.
