top | item 44832378

The fundamentals still matter

47 points | zekrom | 7 months ago | jordangoodman.bearblog.dev

30 comments

edude03 | 7 months ago
> I worry that we are being over sold on a promise that LLMs will magically make up for a lack of proficiency in an area

I saw a post on Twitter about how game devs were using ChatGPT for localization, and when you translated the text back into English it said something like "as a chat assistant I'm unable to translate this concept", or gave an explanation instead of the translation.

This is exactly the sort of future I imagine with AI: not that the grunts on the ground will be sold on it, but that management will be convinced they can fire the people who know what they're doing and replace them with interns armed with a ChatGPT subscription.

ztetranz | 7 months ago
>I saw a post on twitter about how game devs were using ChatGPT for localization and when you translated the text to English it said something like “as a chat assistant I’m unable to translate this concept”

http://news.bbc.co.uk/2/hi/7702913.stm

voidhorse | 7 months ago
Yes, and I think we'll course correct, eventually.

There's a reason we still (generally) teach people how to do arithmetic with pencil and paper instead of jumping straight to calculators. Learning basic algorithms for performing the computations helps solidify the concepts and the rules of the game.

We'll need to do the same thing eventually with respect to LLMs and software engineering. People who skip the foundations or let their comprehension atrophy will eventually end up in a spot in which they need those skills. I basically never do arithmetic using pen and paper now, but I could if I had to, and, more importantly, the process ingrained some basic comprehension of how the integers relate under the group operations.

I totally agree, re: SQL specifically, by the way. SQL is basically already natural language. It's probably the last thing that I'd need to offload to some natural language prompt. I think it's a bit of a vicious circle problem. There's a lot of people who only need to engage with SQL from time to time, so working with it is a bit awkward each time for lack of practice. This incentivizes them to offload it to the LLM just to get it out of the way, which in turn further atrophies their skills with SQL.

throwawayoldie | 7 months ago
> SQL is basically already natural language

This was actually the whole point of SQL in the first place: to be a query language close enough to natural language that non-specialists could easily learn to use it.
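That closeness to English is easy to demonstrate. A minimal sketch using Python's built-in `sqlite3` (the table, column names, and data are invented for illustration): the query below is nearly a word-for-word transcription of the English question "which Engineering employees earn more than 100, highest paid first?"

```python
import sqlite3

# In-memory toy database; schema and rows are made up for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ada", "Engineering", 120), ("Grace", "Engineering", 130), ("Linus", "Support", 90)],
)

# Reads almost like the English sentence it answers.
rows = conn.execute(
    "SELECT name FROM employees "
    "WHERE department = 'Engineering' AND salary > 100 "
    "ORDER BY salary DESC"
).fetchall()
print(rows)  # → [('Grace',), ('Ada',)]
```
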

throwawayoldie | 7 months ago
> Maybe another industry of cleaning up vibe coded messes will be a thing?

I have seriously considered hanging out my shingle to do this freelance, I don't think the time is quite ripe yet but maybe in a few months.

merksoftworks | 7 months ago
My experience has been more like this:

- Write a small library as contract work.
- Client vibe codes with it. Code doesn't work.
- End up doing good-faith assurance work to fix the vibe-coded bug in the client's code; the issue was not in my small library.

People are programming out on a limb, and the blame goes to the library maintainer when the user lacks the fundamental skills to troubleshoot.

burnt-resistor | 7 months ago
Wherever there is pain impeding capital, there is opportunity. And there is always a set of current pain points. There can only be no pain in a fully-autonomous organization with autonomous investors and customers too.

It seems like the future is converging on a world where five Matrix-savant architects making $1B/yr keep things operating while everyone else lives in a shanty or a pod.

roxolotl | 7 months ago
As someone who loves fixing weird bugs I kinda hope this becomes a thing. There’s nothing as satisfying as finding logic bugs.
sloped | 7 months ago
Basically the same as cleaning up after they hired the cheapest dev they could find. Something our little shop has been doing for 4 years now. Can't wait to charge to debug a 100,000-line vibe-coded WordPress plugin.
nzach | 7 months ago
I agree with the sentiment, but I don't think the example given (creating SQL queries) is a good representation of this problem.

That's because if you know a little bit of SQL and know how to validate the answers an LLM gives you, this becomes a non-issue.
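Validation can be as simple as running the generated query against a tiny fixture with a known answer. A hedged sketch (the table, the data, and the "LLM-generated" query are all invented here):

```python
import sqlite3

# Tiny fixture whose correct answer you can compute by hand.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10), (2, 20), (3, 30)])

# Pretend this query came back from an LLM.
llm_query = "SELECT SUM(amount) FROM orders WHERE amount >= 20"

# Validate: execute it and compare against the independently computed value.
(result,) = conn.execute(llm_query).fetchone()
assert result == 50  # 20 + 30, worked out by hand
```

The point is only that checking the output requires knowing enough SQL (and enough about your data) to build the fixture and predict the answer.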

A better example would be an ambiguous prompt where the LLM could use either an array or a map to solve your problem, and it chooses an array. But down the road you need a feature that requires direct (keyed) access, and your code is already built around arrays. In this situation what tends to happen is that the LLM makes some hack on top of your arrays to create the new feature, and the code gets real bad.

By not understanding the difference between these two data structures, you aren't able to specify what needs to be done, and the LLM ends up implementing your feature in an additive way. And when you add enough features that way, things get messy.
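The array-vs-map trade-off above can be sketched in a few lines of Python (the record shape and function names are illustrative, not from the original comment):

```python
# Records stored in an array (list) are fine for iteration...
users_list = [
    {"id": 7, "name": "Ada"},
    {"id": 42, "name": "Grace"},
]

# ...but "direct access" by id becomes a linear scan, the kind of hack
# that gets bolted on top of the existing array instead of restructuring:
def find_user_scan(users, user_id):
    for u in users:  # O(n) per lookup
        if u["id"] == user_id:
            return u
    return None

# Had the data been a map (dict) keyed by id from the start,
# direct access would be a single O(1) lookup:
users_map = {u["id"]: u for u in users_list}

assert find_user_scan(users_list, 42) is users_map[42]
```

Both shapes answer the same question; the difference only bites once lookups become frequent, which is exactly the kind of requirement an ambiguous prompt fails to communicate.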

What is still not clear to me is what is the proper "abstraction layer" we need to use to learn things in this new world where LLMs are available.

sarthaksoni | 7 months ago
I’m guilty of this. I’m trying to be more mindful when using LLM-generated code. It’s mostly a personal issue: I tend to procrastinate and hope the code “just works.”

We need to stay vigilant, otherwise we will pay the cost of fixing LLM bugs later.

DontchaKnowit | 7 months ago
First time I've ever heard someone admit this; I've only ever heard people accuse their coworkers of it. This is honestly a very sad thing to hear a professional dev say.
begueradj | 7 months ago
> Yet, I see people blindly trusting LLM outputs to develop SQL queries, without knowing how to explain or debug them.

The same is true about every other single instruction produced.

vivzkestrel | 7 months ago
Wasn't there a study published recently finding that people who use LLMs more frequently tend to become less intelligent over time, because their brain no longer has to process complex tasks and workflows?
js8 | 7 months ago
So, Karl Marx predicted that capitalism will eat itself because capitalists will come to value making money itself (and money-making enterprises, such as asset bubbles) more than the actual production of goods. This was later elaborated by many people, but since I am not an expert in this, I'll just mention Hyman Minsky and Thomas Piketty.

The OP is essentially a (white-collar) labor version of this. What is evidently valued is the appearance of expertise, rather than expertise itself. Just as capitalists who want to make money skip the production of actual goods to get it, "professionals" are going to skip actual learning in order to appear knowledgeable.

For 200 years, people have hoped that the "free market" will sort out the problem that Marx saw. It didn't happen - we still get financial bubbles that cause trouble for many people. So, I suspect it's a mistake to assume the learning problem will fix itself either. I suspect people (society at large) will have to consciously value the hard work of learning for this to be fixed.

MarkusQ | 7 months ago
Yeah, that's why we haven't created any new goods since we embraced capitalism—the whole thing is just parasitic on the labor of the masses, and doesn't really add anything. </sarcasm>

Seriously, you might want to actually do a sniff check before taking Marx's word for anything.