top | item 46986300

timr | 17 days ago

LLMs are the eternal September for software, in that the sort of people who couldn’t make it through a bootcamp can now be “programming thought leaders”. There’s no longer a reliable way to filter signal from noise.

Those 3000 early adopters who are bookmarking a trivial markdown file largely overlap with the sort of people who breathlessly announce that “the last six months of model development have changed everything!”, while simultaneously exhibiting little understanding of what has actually changed.

There’s utility in these tools, but 99% of the content creators in AI are one intellectual step above banging rocks together, and their judgement of progress is not to be trusted.

adjfasn47573|17 days ago

Sometimes I just bookmark things because I think to myself “Maybe I’ll try this out, when I have time” which then likely never happens.

So I wouldn’t read anything into 3k stars at all.

latexr|17 days ago

> Sometimes I just bookmark things because I think to myself “Maybe I’ll try this out, when I have time” which then likely never happens.

For me that’s 100% of the time. I only bookmark or star things I don’t use (but could be interesting). The things I do use, I just remember. If they used to be a bookmark or star, I remove it at that point.

dgxyz|17 days ago

I agree.

I'm sure I'll piss off a lot of people with this one, but I don't care any more. I'm calling it what it is.

LLMs empower those without the domain knowledge or experience to identify if the output actually solves the problem. I have seen multiple colleagues deliver a lot of stuff that looks fancy but doesn't actually solve the prescribed problem at all. It's mostly just furniture around the problem. And the retort when I have to evaluate what they have done is "but it's so powerful". I stopped listening. It's a pure faith argument without any critical reasoning. It's the new "but it's got electrolytes!".

The second major problem is corrupting reasoning outright. I see people approaching LLMs as an exploratory process, letting the LLM guide the reasoning. That doesn't really work. If you have a defined problem, it is very difficult to keep an LLM on the rails. I believe that a lot of "success" with LLMs comes from users who have little interest in the problem they are supposed to be solving and are quite happy to deliver anything, as long as it is demonstrable to someone else. That would suggest they are doing it to be conspicuous.

So we have a unique combination of self-imposed intellectual dishonesty, mixed with irrational faith which is ultimately self-aggrandizing. Just what society needs in difficult times: more of that! :(

mentalgear|17 days ago

> stuff that looks fancy but doesn't actually solve the prescribed problem at all

Exactly - we are in the age of "AI-posers".

OJFord|17 days ago

Is Andrej Karpathy the guy who 'couldn't make it through a [coding] bootcamp' in this description?

croes|17 days ago

Andrej Karpathy named the pitfalls and didn't make the markdown file.

sph|17 days ago

> LLMs are the eternal September for software, in that the sort of people who couldn’t make it through a bootcamp can now be “programming thought leaders”

The democratization of programming (derogatory)

croes|17 days ago

Free compilers were the democratization of programming.

This is the banalization of software creation by removing knowledge as a requirement. That's not a good thing.

You wouldn't call the removal of a car's brakes the democratization of speed, would you?

pikachu0625|16 days ago

Do you ever filter signal from noise by the quality of the code? The code written by the Google founders was eventually rewritten by others, and it was likely worse than what a fresh grad produces today. Still, that initial search engine is the most influential thing they ever built, and it's something the modern Bay Area will probably never create again.

EdNutting|17 days ago

The Markdown file looks like it’s written for people who either haven’t discovered Plan mode, or who can’t be bothered to read a generated plan before running with it.

squidbeak|17 days ago

[deleted]

croes|17 days ago

>> LLMs are the eternal September for software

>Cliche.

Too early to tell, so let's wait and see before we brush that off.

>> in that the sort of people who couldn’t make it through a bootcamp can now be “programming thought leaders”

>Snobbery.

Reality, and actually a selling point of AI tools. I pretty often see ads for making apps without any knowledge of programming.

>> the sort of people who breathlessly announce

> Snobbery / Cliche.

Reality

>> There’s no longer a reliable way to filter signal from noise.

> Cliche.

Reality, or can you distinguish a well-programmed app from unaudited BS?

>> There’s utility in these tools, but 99% of the content creators in AI are one intellectual step above banging rocks together

>Cliche / Snobbery.

99% is too high, maybe 50%.

>> their judgement of progress is not to be trusted

> Tell me, timr, how much judgement is there in snotty gatekeeping and strings of cliches?

We have many security issues in software coded by people with experience in coding; how much do you trust software ordered by people who can't judge whether the program they get is secure or full of security flaws? Don't forget these LLMs are trained on pre-existing faulty code.