timr | 17 days ago
Those 3000 early adopters who are bookmarking a trivial markdown file largely overlap with the sort of people who breathlessly announce that “the last six months of model development have changed everything!”, while simultaneously exhibiting little understanding of what has actually changed.
There’s utility in these tools, but 99% of the content creators in AI are one intellectual step above banging rocks together, and their judgement of progress is not to be trusted.
adjfasn47573 | 17 days ago
So I wouldn’t give anything for 3k stars at all.
latexr | 17 days ago
For me that’s 100% of the time. I only bookmark or star things I don’t use (but could be interesting). The things I do use, I just remember. If they used to be a bookmark or star, I remove it at that point.
dgxyz | 17 days ago
I'm sure I'll piss off a lot of people with this one, but I don't care anymore. I'm calling it what it is.
LLMs empower people who lack the domain knowledge or experience to tell whether the output actually solves the problem. I have seen multiple colleagues deliver a lot of stuff that looks fancy but doesn't actually solve the prescribed problem at all; it's mostly just furniture around the problem. And the retort when I evaluate what they've done is "but it's so powerful". I stopped listening. It's a pure faith argument without any critical reasoning. It's the new "but it's got electrolytes!".
The second major problem is that they corrupt reasoning outright. I see people approach LLMs as an exploratory process and let the LLM guide their reasoning. That doesn't really work: if you have a defined problem, it is very difficult to keep an LLM on the rails. I believe a lot of "success" with LLMs comes from users who have little interest in rigor, or in the problem they are supposed to be solving, and who are quite happy to deliver anything as long as it is demonstrable to someone else. That suggests they are doing it to be conspicuous.
So we have a unique combination of self-imposed intellectual dishonesty, mixed with irrational faith which is ultimately self-aggrandizing. Just what society needs in difficult times: more of that! :(
mentalgear | 17 days ago
Exactly - we are in the age of "AI-posers".
sph | 17 days ago
The democratization of programming (derogatory)
croes | 17 days ago
This is the banalization of software creation by removing knowledge as a requirement. That's not a good thing.
You wouldn't call the removal of a car's brakes the democratization of speed, would you?
unknown | 17 days ago
[deleted]
squidbeak | 17 days ago
[deleted]
croes | 17 days ago
>Cliche.
Too early to tell, so let's wait and see before we brush that off.
>> in that the sort of people who couldn’t make it through a bootcamp can now be “programming thought leaders”
>Snobbery.
Reality, and actually a selling point of AI tools. I pretty often see ads for building apps without any programming knowledge.
>> the sort of people who breathlessly announce
> Snobbery / Cliche.
Reality
>> There’s no longer a reliable way to filter signal from noise.
> Cliche.
Reality. Or can you distinguish a well-programmed app from unaudited BS?
>> There’s utility in these tools, but 99% of the content creators in AI are one intellectual step above banging rocks together
>Cliche / Snobbery.
99% is too high, maybe 50%.
>> their judgement of progress is not to be trusted
> Tell me, timr, how much judgement is there in snotty gatekeeping and strings of cliches?
We already have plenty of security issues in software written by people with coding experience. How much do you trust software ordered by people who can't judge whether the program they get is secure or riddled with security flaws? Don't forget these LLMs are trained on pre-existing faulty code.