I wouldn't call it "accumulation of cognitive debt"; just call it cognitive decline, or loss of cognitive skills.
And also: DUH. If you stop speaking a language, you forget it. The brain does not retain information that it does not need. Anybody remember the couple of studies on the use of Google Maps for navigation? One was "Habitual use of GPS negatively impacts spatial memory during self-guided navigation"; another reported a reduction in gray matter among maps users.
Moreover, anyone who has developed expertise in a science field knows that coming to understand something requires pondering it, exploring how each idea relates to other things, etc. You can't just skim a math textbook and know all the math. You have to stop and think. IMO it is the act of thinking which establishes the objects in our mind such that they can be useful to our thinking later on.
It feels, more and more, that LLMs will be another technology that society will inoculate itself against. It's already starting to happen in education: teachers conversing with students, observing them learn, observing them demonstrate their skills. In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people -- as authors of what they want to say. Authoring is two-thirds of the point of most communication.
Before this, of course, there will be a dramatic "shallowness of thinking" shock whose ill-effects will have to be felt before they are properly inoculated against. It seems part of the expert aversion to LLMs -- against the credulous lovers of "mediocrity" (cf. https://fly.io/blog/youre-all-nuts/) -- is an early experience of inoculation:
Any "macroscopic usage" of LLMs has, in any of my projects, dramatically impaired my own thinking, stolen decisions-making, and worsened my readiness for necessary adaptions later-on. LLMs are a strictly microscopic fill-in system for me, in anything that matters.
This isn't like calculators: my favourite algorithms for by-hand computation aren't being "taken away". This is a system for substituting thinking itself with non-thinking, and it radically impairs your readiness (and depth, adaptability, ownership) wherever it is used, in whatever domain you use it.
> In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people
I believe that one of the most underappreciated skills in business is the ability to string a coherent narrative together. I attend many meetings with extremely talented engineers who are incapable of presenting their arguments in a manner that others (both technical and non-technical) can follow. There is an artistry to writing and speaking that I am only now, in my late forties, beginning to truly appreciate. Language is a powerful tool; the choice of a single word can sometimes make or break an argument.
I don't see how LLMs can do anything but significantly worsen this situation overall.
It's all already there. When you converse with a junior engineer about their latest and greatest idea (over a chat platform), and they start giving you real-time responses which are a page long and structured into bullet points... it's not even that they are using ChatGPT to avoid thinking; the fact that they think either no one will notice, or that this is how grown-ups actually converse with each other, is what's terrifying.
> I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people
Now, one like me might go and ask: how much of communication is actually worthwhile? Sometimes I consider that there is a lot of communication that might not actually be. It is still done, but if no one actually reads it, why not automate its generation?
Not to say there isn't a significant amount of stuff you actually want to get right.
> In business, quickly I'd say, we will realise that the vast majority of worthwhile communication necessarily must be produced by people -- as authors of what they want to say.
But what fraction of communication is "worthwhile"?
I'm an academic, which, in theory, should be one of the jobs that requires the most thinking. And still, I find that over half of the writing I do is things like reports, grant applications, ethics/data management applications, recommendation letters, bureaucratic forms, etc. I wouldn't class these as "worthwhile" in the sense that they don't require useful thinking, and I don't care one iota whether the text sounds like me or not as long as I get the silly requirement done. For these purposes, LLMs are a godsend, and they probably actually help me think more, because I can devote more time to actual research and teaching, which I do in person.
One of the effects on software development: the fact that you submitted a PR with any LoC count doesn't mean that you did any work. You need to explain your solution and answer questions to prove that.
I see it as more of a calibration, revolving around understanding what an AI is inherently not able to do - decide what YOU want - and stopping being weird about that. If you choose to stop being involved in a process and molding it, then your relationship to that process and the outcome will necessarily change. Why would we be surprised by that?
As soon as we stop treating AI like mind readers things will level out.
> This is a system for substituting thinking itself with non-thinking
One of my favorite developments on the internet in the past few years is the rise of the “I don’t think/won’t think/didn’t think” brag posts
On its own it would be a funny internet culture phenomenon, but paired with the fact that you can’t confidently assume that anybody even wrote what you’re reading, it is hilarious.
It’s been my experience that most people’s opinions on AI are inversely proportional to the timescale over which they have been using it.
Using AI is kind of like having a Monica closet. You just push all the stuff you don’t know to the side until it’s out of view. You then think everything is clean, and can fool yourself into thinking so for a while.
But then you need to find something in that closet and just weep for days.
> This is a system for substituting thinking itself with non-thinking
I haven’t personally felt this to be the case. It feels more like going from thinking about nitty gritty details to thinking more like the manager of unreasoning savants. I still do a lot of thinking— about organization, phrasing (of the code), and architecture. Conversations with AI agents help me tease out my thinking, but they aren’t a substitute for actual thought.
I read that article when it was posted on HN, and it's full of bad faith interpretations of the various objections to using LLM-assisted coding.
Given that the article comes from a person whose expertise and viewpoints I respected, I ran it by a friend, who suggested a more cynical interpretation: the article might have been written to serve the author's selfish interests. Given the number of bugs that LLMs often introduce, it's not difficult to see why a skilled security researcher might be willing to encourage people to generate code in ways that lead to cognitive atrophy, and thereby increase his business through security audits.
The sad reality is that most people are not smart. They’re not creative, original, or profound. Think back to all the empty and pointless convos you had prior to AI or the web.
Shallow take. LLMs are like food for thought -- the right use in the right amounts is empowering, but too much (or uncritical use) and you get fat and lazy, metaphorically speaking.
You wouldn't go around crusading against food because you're obese.
Another neat analogy is to children who are too dependent on their parents. Parents are great and definitely help a child learn and grow but children who rely on their parents for everything rather than trying to explore their limits end up being weak humans.
The discussion here about "cognitive debt" is spot on, but I fear it might be too conservative. We're not just talking about forgetting a skill like a language or losing spatial memory from using GPS. We're talking about the systematic, irreversible atrophy of the neural pathways responsible for integrated reasoning.
The core danger isn't the "debt" itself, which implies it can be repaid through practice. The real danger is crossing a "cognitive tipping point". This is the threshold where so much executive function, synthesis, and argumentation has been offloaded to an external system (like an LLM) that the biological brain, in its ruthless efficiency, not only prunes the unused connections but loses the meta-ability to rebuild them.
Our biological wetware is a use-it-or-lose-it system without version control. When a complex cognitive function atrophies, the "source code" is corrupted. There's no git revert for a collapsed neural network that once supported deep, structured thought.
This HN thread is focused on essay writing. But scale this up. We are running a massive, uncontrolled experiment in outsourcing our collective cognition. The long-term outcome isn't just a society of people who are less skilled, but a society of people who are structurally incapable of the kind of thinking that built our world.
So the question isn't just "how do we avoid cognitive debt?". The real, terrifying question is: "What kind of container do we need for our minds when the biological one proves to be so ruthlessly, and perhaps irreversibly, self-optimizing for laziness?"
https://github.com/dmf-archive/dmf-archive.github.io
It's up to everyone to decide what to use LLMs for. For high-friction / low-throughput tasks (e.g., online research using inferior search tools), I find text models to be great: to ask about what you don't know, to skip the "tedious part". I don't feel that hunting for answers, especially troubleshooting arcane technical issues across pages of forums or social media, makes me smarter in any way whatsoever, especially since the information usually needs to be verified and taken with a grain of salt.
StackExchange, the way it was meant to be initially, would be way more valuable than text models. But in reality people are imperfect and carry all sorts of cognitive biases and baggage, while an LLM won't close your question as 'too broad' right after it gets upvotes and user interaction.
On the other hand, I still find LLM writing on subjects familiar to me vastly inferior. Whenever I try to write, say, an email with its help, I end up spending just as much time either editing the prompt to keep it on track or rewriting the result significantly afterwards. I'd rather write it on my own, with my own flow, than proofread/peer-review a text model.
Rather than getting ever deeper insight into a subject matter by actively working on it, you iterate fast but shallow over a corpus of AI generated content.
Example: I wanted to understand the situation in the Middle East better, so I wrote a 10-page essay on the genesis of Hamas and Hezbollah, using OpenAI as a co-writer.
I remember nothing. Worse, of the things I do remember, I don’t know which were hallucinations I fixed and which were actual facts.
Most intelligent people are aware of the fact that writing is about thinking as much as it is about producing the written text.
LLMs can be great sparring partners for this, if you don't use them as a tool that writes for you, but as a tool that finds mistakes, points out gaps and errors (which you may or may not ignore), and helps in researching general questions about the world around you (always with caution and sources).
I'm on the optimistic side with how useful LLMs are, but I have to agree. You cultivate the instinct for how to steer the models and reduce hallucinations, but you're not building articulable knowledge or engaging in challenging thinking. It's more learning muscle-memory reactions to certain forms of LLM output that lean you towards trusting the output more, trying another prompting strategy, clearing context or not, and so on.
To the extent we can call it skill, it's probably going to be made redundant in a few years as the models get better. It gives me a kind of listlessness that assembly line workers would feel.
The results are not surprising to me personally. When I have used AI to help with my own writing and translation tasks, I do not feel as mentally engaged with the writing or translation process as I would be if I were doing it all on my own.
But I have found using AI in other ways to be incredibly mentally engaging in its own way. For the past two weeks, I’ve been experimenting with Claude Code to see how well it can fully automate the brainstorming, researching, and writing of essays and research papers. I have been as deeply engaged with the process as I have ever been with writing or translating by myself. But the engagement is of a different form.
The results of my experiments, by the way, are pretty good so far. That is, the output essays and papers are often interesting for me to read even though I know an AI agent wrote them. And, no, I do not plan to publish them or share them.
One slightly unexpected side effect of using AI to do most of my coding now is that I find myself a lot less tired and can focus for longer periods. It's enabled me to get work done while faced with other distractions. Essentially, offloading some mental capacity to AI frees up capacity elsewhere.
I find the opposite to be true. I am a lot more productive, so I work on more things in parallel, which makes me extremely tired by the end of the day, as if my brain worked at 100% capacity.
On one hand, I've found that it reduces acute fatigue, but on the other I've found there's also an inflection point where it can encourage more fatigue over longer time horizons if you're not careful.
In the past, something like an unexpected error or needing to look at some docs would act as a "speed bump" and let me breathe, and typically from there I'd acknowledge how tired I am and stop for the moment.
With AI those speed bumps still exist, but there's sometimes just a bit of extra momentum that keeps me from slowing down enough to have that moment of reflection on how exhausted I am.
And the AI doesn't even have to be right for that to happen: sometimes just reading a suggestion that's specific to the current situation can trigger your own train of thought that's hard to rein back in.
You can go to the Walmart outside town on foot and carry your stuff back. But it is much faster - and less exhausting - to use the car. Which means you can spend more quality time on things you enjoy.
Back when GANs were popular, I'd train generator-discriminator models for image generation.
I thought a lot about it and realised discriminating is much easier than generating.
I can discriminate good vs bad UI for example, but I can't generate a good UI to save my life. I immediately know when a movie is good, but writing a decent short story is an arduous task.
I can determine the degree of realism in a painting, but I can't paint a simple bicycle to convince a single soul.
We can determine whether an LLM generation is good or bad in a lot of cases. As a crude strategy, then, we can discard the bad cases and keep generating until we achieve our task. LLMs are useful only because of this disparity between discrimination and generation.
These two skills are separate. Generation skills are hard to acquire and very valuable. They will atrophy if you don't keep exercising those.
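A minimal sketch of that crude strategy (best-of-n sampling), with a stand-in generator and scorer since the real ones depend on your task; the only thing it relies on is that judging is cheap while generating well is hard:

    import random

    def generate():
        # Stand-in for an expensive, unskilled generator (a GAN, an LLM, ...);
        # here it just samples a random number.
        return random.gauss(0.0, 1.0)

    def score(candidate):
        # Stand-in for the cheap discriminator: higher is better.
        # Discriminating is the easy direction.
        return -abs(candidate - 0.5)

    def best_of_n(n=16):
        # The crude strategy: keep generating, discard the bad
        # cases, return the best candidate found.
        return max((generate() for _ in range(n)), key=score)

    print(best_of_n())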
I think it's likely we learn to develop healthier relationships with these technologies. The timeframe? I'm not sure. May take generations. May happen quicker than we think.
It's clear to me that language models are a net accelerant. But if they make the average person more "loquacious" (first word that came to mind, but also lol) then the signal for raw intellect will change over time.
Nobody wants to be in a relationship with a language model. But language models may be able to help people who aren't otherwise equipped to handle major life changes and setbacks! So it's a tool - if you know how to use it.
Let's use a real-life example: relationship advice. Over time, I would imagine that "ChatGPT-guided relationships" will fall into two categories: "copy-and-pasters", who are just adding a layer of complexity to communication that was subpar to begin with ("I just copied what ChatGPT said"), and "accelerators", who use ChatGPT to analyze their own and their partner's motivations to find better solutions to common problems.
It still requires a brain and empathy to make the correct decisions about the latter. The former will always end in heartbreak. I have faith that people will figure this out.
> The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or “opinions” (probabilistic answers based on the training datasets). This highlights a concerning evolution of the 'echo chamber' effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as “top” is ultimately influenced by the priorities of the LLM's shareholders [123, 125].
I worry about the adverse effects of LLMs on already disenfranchised populations - you know, the poor, etc. - who usually would have to pull themselves up through hard work, studying and reading hard.
Now, if you don't have a mentor to tell you that in the age of LLMs you still have to do things the hard, old-school way to develop critical thinking, you might end up taking shortcuts and having the LLMs "think" for you - hence again leaving huge swaths of the population behind in critical thinking, which is already in short supply.
LLMs are bad in that they might show you the sources but also hallucinate about the sources. And most people won't bother going to check the source material and question it.
Just as the proliferation of the smartphone eroded our ability to locate and orient ourselves and remember routes to places, it's no surprise that a tool like this, used for the purpose of outsourcing a task that our own brains would otherwise do, results in a decline in the skills that would be trained if we were performing that task ourselves.
Wasn't THE SAME said when Google came out? That we were not remembering things anymore and we were relying on Google? And also with cellphones before that (even the big dummy brickphones), that we were not remembering phone numbers anymore.
"I"? You should treat yourself as an anecdotal exception.
You are reading on HN. You are probably more aware about the advantages and shortcomings of LLMs. You are not a casual user. And that's the problem with our echo chamber here.
This is exactly why there is no point in using AI for coding, except in a rare few cases.
Code without AI - sharp skills, your brain works, and you come up with better solutions.
Code with AI - skills decline after merely a week or two, you forget how to think, and because you rely on AI for simpler and simpler tasks, your total output is less and worse than if you were to DIY it.
I think we need to shift our idea of what LLMs do and stop thinking they are ‘thinking’ in any human way.
The best mental description I have come up with is they are “Concept Processors”. Which is still awesome. Computers couldn’t understand concepts before. And now they can, and they can process and transform them in really interesting and amazing ways.
You can transform the concept of ‘a website that does X’ into code that expresses a website X.
But it’s not thinking. We still gotta do the thinking. And actually that’s good.
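A toy illustration of that framing (a sketch only; call_llm is a hypothetical stand-in for whatever model API you actually use): the concept goes in as plain language, an artifact comes out, and the thinking - deciding what X should be - stays with the human.

    def concept_to_artifact(concept, call_llm):
        # "Concept processing": a plain-language concept goes in,
        # an artifact comes out. call_llm is a hypothetical
        # stand-in for a real model API.
        prompt = ("Transform this concept into a single-file HTML page "
                  "that implements it.\n\nConcept: " + concept)
        return call_llm(prompt)

    # The human still decides what X is:
    # page = concept_to_artifact("a website that converts units", call_llm)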
I guess: not only does AI reduce the number of entry-level workers, but this also shows that the entry-level workers who remain won't learn anything from their use of AI and will stay entry-level forever if they're not careful.
I like the optimism. We haven't developed herd immunity to the 2010s social media technologies yet, but I'll take it.
Not when there's money to be made.
That's not surprising but also bleak.
Given that the task was performed under time pressure, I am not sure this study helps gauge the impact of LLMs in other contexts.
When my goal is to produce the result for a specific short-term task, I maximize tool usage.
When my goal is to improve my personal skills, I use the LLM tooling differently, optimizing for longer-term learning.