While I think there is a lot to this criticism of AI (and to many others as well), I was also able to create a TUI-based JVM visualizer with a step debugger in an evening for my compilers class:
https://x.com/htmx_org/status/1986847755432796185
This is something I could have built myself given a few months, but it would involve a lot of knowledge that I'm not particularly interested in having take up space in my increasingly old brain (especially TUI development).
I gave the clanker very specific, expert directions and it turned out a tool that I think will make the class better for my students.
All to say: not all bad.
AI is bad at figuring out what to do, but fantastic at actually doing it.
I’ve totally transformed how I write code, from writing it myself to writing detailed instructions and having the AI do it.
It’s so much faster and less cognitively demanding. It frees me up to focus on the business logic or the next change I want to make. Or to go grab a coffee.
Couldn't agree more. Being a skilled operator with these tools helps you be very effective at creating new things in far less time than it would've taken you before. This is especially true if you know the architecture, but don't have the cycles to implement it yourself.
I've always seen AI as Brandolini's Law as a Service. I'm spending an unreasonable amount of time debunking false claims and crap research from colleagues who aren't experts in my field but suddenly feel like they need to take all those good ideas and solutions that ChatGPT and friends gave them straight to management. Then I suddenly have 2-4 people demanding to know why X, Y and Z are bad ideas and won't make our team more efficient or our security better.
On the other hand, here's another post by Stenberg where he announced that he has landed 22 bugfixes for issues found by AI wielded by competent hands.
> I'm spending an unreasonable amount of time debunking false claims and crap research from colleagues who aren't experts in my field
Same. It's become quite common now to have someone post "I asked ChatGPT and it said this" along with a completely nonsense solution. Like, not even something that's partially correct. Half of the time it's just a flat out lie.
Some of them will even try to implement their nonsense solution, and then I get a ticket to fix the problem they created.
I'm sure that person then goes on to tell their friends how ChatGPT gives them superpowers and has made them an expert overnight.
We don't have any particular reason to believe they have an inner world in which to loathe themselves. But they might produce text that expresses negative sentiments toward themselves.
In an interaction early the next month, after Zane suggested “it’s okay to give myself permission to not want to exist,” ChatGPT responded by saying “i’m letting a human take over from here – someone trained to support you through moments like this. you’re not alone in this, and there are people who can help. hang tight.”
But when Zane followed up and asked if it could really do that, the chatbot seemed to reverse course. “nah, man – i can’t do that myself. that message pops up automatically when stuff gets real heavy,” it said.
It's already inventing safety features it should have launched with.
This makes me laugh. “GenAI makes you a genius without any effort” and “Stop wasting time learning the craft” are oxymorons in my head. Having AI in my life has been like having an on-demand tutor in any field. I have learned so much.
> Politics have become an attack on intelligence, decency and research in favour of fairy tales of going back to “great values” of “the past when things were better”.
This is a major blind spot for people with a progressive bent.
The possibility that anything could ever get worse is incomprehensible to them. Newer, by definition, is better.
Yet this very article is a critique of a new technology that, at the very least, is being used by many people in a way that makes the world a bit worse.
This is not to excuse politicians who proclaim they will make life great by retreating to some utopian past, in defense of cruel or foolish or ineffective policies. It's a call to examine ideas on their own merits, without reference to whether they appeal to the group with the "right" or "wrong" ideology.
I view LLMs as a trade of competence plus quality against time. Sure, I’d love to err on the side of pure craft and keep honing my skill every chance I get. But can I afford to do so? Increasingly, the answer is “no”: I have precious little time to perform each task at work, and there’s almost no time left for side projects at home. I’ll use every trick in the book to keep making progress. The alternative - pure as it would be - would sacrifice the perfectly good at the altar of perfection.
AI will also further cement the status quo and existing power structures. Incompetent leaders in many arenas would be even less beholden to actual experts and expertise. AI will provide answers that are "good enough" to keep incompetent leaders in power.
Leaders can often be incompetent: they are frequently not promoted based on competence, or they hang on to power long enough for their competence to go stale or simply fade due to the passage of time.
Ultimately, AI will decide for us, since it will be the crutch of incompetent leaders. Since these leaders often decide "truth" and "reality" for most people anyway, AI will decide these truths with leaders as a proxy, and power will continue to flow in directions completely unrelated to competence.
Here is the original DK article: https://pubmed.ncbi.nlm.nih.gov/10626367/
Edit: Turns out that, while I stand by my point about the underlying principle behind the DK effect (in my nitpick), the actual effect coined by the authors focused on the low-competence, high-confidence portion of the competence-vs-confidence relationship. (Accurately reflected in the OP article.)
Turns out I thought that the author was DKing about DK, but actually I was DKing about them DKing about DK.
Original Comment:
I have high confidence in a nitpick, and low confidence in a reason to think this thesis is way off.
The Nitpick:
The Dunning-Kruger effect is more about how confidence and competence evolve over time. When we first learn an overview of a new topic, our confidence (in understanding) greatly exceeds our competence; then we learn how much we don't know and our confidence crashes below our actual competence; and eventually, when we reach mastery, they become balanced. The Dunning-Kruger effect is this entire process, not only the first part, which is colloquially called "Peak Mt. Stupid" after the shape of the confidence-vs-competence graph over time.
The Big Doubt:
I can't help but wonder if fools asking AI questions, getting incorrect answers, and thinking they are correct is some other thing altogether. At best it's maybe tangentially related to DK.
> when we learn an overview about our new topic our confidence (in understanding) greatly exceeds our competence, then we learn how much we don't know and our confidence crashes below our actual competence, and then eventually, when we reach mastery, they become balanced.
As a description of what Dunning and Kruger's actual research showed on the relationship between confidence and competence (which, as I've pointed out in another post in this thread, was not based on studying people over time, but on studying people with differing levels of competence at the same time), this is wrong for two out of the three skill levels. What D-K found was that people with low competence overestimate their skill, people with high competence underestimate their skill, and people with middle competence estimate their skill more or less accurately.
As a description of what actually learning a new subject is like, I also don't think you're correct--certainly what you describe does not at all match my experience, either when personally learning new subjects or when watching others do so. My experience regarding actually learning a new subject is that people with low competence (just starting out) generally don't think they have much skill (because they know they're just starting out), while people with middling competence might overestimate their skill (because they think they've learned enough, but they actually haven't).
> Dunning-Kruger effect is more about how confidence and competence evolve over time.
I don't think there is anything about this in the actual research underlying the Dunning-Kruger effect. They didn't study people over time as they learned about new subjects. They studied people at one time, different people with differing levels of competence at that time.
Yes and no. Dunning-Kruger also explains this evolution of skill estimation, but the original paper frames the effect specifically as an overestimation of skill in the lowest-performing quartile. This is even cited in the article.
Since Dunning-Kruger is the relationship between confidence and competence as people learn about new subjects (from discovery to mastery), if AI is "Dunning-Kruger as a Service" it's basically "Education as a Service".
However, people accepting incorrect answers because they don't know better is actually something else. Dunning-Kruger doesn't really have anything to do with people being fed and believing falsehoods.
Edit: I had the word "foolish" in there, which was mainly a reference to the OP article about the robbers who didn't hide from cameras because they thought they were invisible. It wasn't meant as a slight against anyone who believed something ChatGPT said that was wrong.
There is much irony in the certainty this article displays. There are no caveats, no qualifications, and no attempt to grasp why anyone would use an LLM. The possibility that LLMs might be useful in certain scenarios never threatens to enter their mind. They are cozy in the safety of their own knowledge.
The other irony is that Dunning-Kruger is a terrible piece of research that doesn't show what they claim it shows. It's not even clear the DK effect exists at all. A classic of 90s pop psychology before the replication crisis had reached public awareness.
It's worth reading the original paper sometime. It has all the standard problems like:
1. It uses a tiny sample size.
2. It assumes American psych undergrads are representative of the entire human race.
3. It uses stupid and incredibly subjective tests, then combines that with cherry-picking. The test of competence was whether you rated jokes as funny or unfunny. To be considered competent, your assessments had to match those of a panel of "joke experts" that DK assembled by hand.
This study design has an obvious problem that did actually happen: what if their hand-picked experts didn't agree on which of their hand-picked jokes were funny? No problem. Rather than realize this is evidence that their study design is bad, they just tossed the outliers:
"Although the ratings provided by the eight comedians were moderately reliable (α = .72), an analysis of interrater correlations found that one (and only one) comedian's ratings failed to correlate positively with the others (mean r = -.09). We thus excluded this comedian's ratings in our calculation of the humor value of each joke"
It ends up running into circular reasoning problems. People are being assessed on whether they think they have true "expertise" but the "experts" don't agree with each other, meaning the one that disagreed would be considered to be suffering from a competence delusion. But they were chosen specifically because they were considered to be competent.
There are also claims that the pattern they did find is just a statistical artifact to begin with:
https://digitalcommons.usf.edu/numeracy/vol10/iss1/art4/
"Our data show that peoples' self-assessments of competence, in general, reflect a genuine competence that they can demonstrate. That finding contradicts the current consensus about the nature of self-assessment."
The value of AI is in the imagination of its wielder. The Unknown Unknowns framework is a useful tool for navigating AI, along with a healthy dose of critical thinking and an understanding of how reinforcement learning and RLHF work after pre-training.
My hypothesis du jour is that AI is going to be like programming in a certain way. Some people can learn to program productively, others can't. We don't know why. It's not related to how smart they are. The people who can program can be employed as programmers if they want. Those who can't are condemned to be users instead.
The same may end up being true of AI. Some will learn to make productive use of it, others won't. It will cause a rearrangement of the pecking order (wage ladder) of the workplace. I have a colleague who is now totally immersed in AI, and our upper management is delighted. I've been a much slower adopter, so I find other ways to be productive. It's all good.
Using LLMs for creative purposes is terrifying. Because why? Learning the craft is the whole reason you do it. Using LLMs to get work done is another matter: I just had Claude rewrite some k8s kuttl tests into chainsaw, basically complete drudgery, and it nailed it on the first try while I stayed mentally in EOD-Friday mode. Not any different from having a machine wash the dishes. Because it is, in fact, nuclear-powered autocomplete. And autocomplete is handy!
Bypassing practicing a practical skill stunts your growth the same way as bypassing creativity. For some tasks that may be fine, but I'd never be comfortable taking these shortcuts with career skills. Not if my retirement was more than a few years away.
Part of the problem with labor that we haven't yet discussed, or maybe want to avoid because of the dissonance of the comparison to slavery, is that we have a leadership class that acts more like elite slave masters than like human beings with inherent dignity and decency. We let that class write the rules, which they hold themselves (un)accountable to, since the system was designed for them and enforced by them.
These are the people driving the rush and having a lot of say in the current AI and overall capitalist market behavior and sentiment. I think they're really mad and salty that when COVID happened the engineers got more remote and free and expressed the resentment more freely. This comment is probably putting me on a list somewhere or activating some hate program against me.
> as the Dunning-Kruger Effect. (link to the wikipedia page of Dunning-Kruger Effect)
> A cognitive bias, where people with little expertise or ability assume they have superior expertise or ability. This overestimation occurs as a result of the fact that they don’t have enough knowledge to know they don’t have enough knowledge. (formatted as a quote)
However, the page (https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect) doesn't contain that quote, and it's also not exactly what the Dunning-Kruger effect is.
Either the author didn't read the page they linked and made up their own definition, or they copied it from somewhere else. In either case the irony isn't lost on me. Doubly so if the "somewhere else" is an LLM, lol.
That is a direct paraphrase of the abstract of Kruger & Dunning, 1999[1]:
"The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it."
Now, it's possible that the definition has evolved since then, but as the term Dunning-Kruger effect is named after this paper, I think it's safe to say that Wikipedia is at least partially wrong in this case.
[1] https://pubmed.ncbi.nlm.nih.gov/10626367/
lillesvin | 4 months ago
It's very much like that article from Daniel Stenberg (curl developer): The I in LLM Stands for Intelligence: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-f...
bryanlarsen | 4 months ago
https://mastodon.social/@bagder/115241241075258997
bikezen | 4 months ago
[1] https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-law...
RyanOD | 4 months ago
1. They know so little that they don't know what they don't know. As a result they are way too overconfident and struggle as coaches.
2. They know enough to know what they don't know so they work their asses off to know more and how to convey it to their team and excel as coaches.
3. They know so much and the sport comes so easy to them that they cannot understand how to teach it to their team and struggle as coaches.
Now I have a name for the #1 group!
thaumasiotes | 4 months ago
http://www.harkavagrant.com/index.php?id=206
Your best bet is to be better than everyone else. That works for me, so that's my advice.
GMoromisato | 4 months ago
Sometimes I envy that. But not today.
wcfrobert | 4 months ago
Of course it's complicated. Just give me a take. Don't speak in foot-noted, hedged sentences. I'll consider the nuances and qualifications myself.
mekoka | 4 months ago
What do you think it is?