item 46840865

Outsourcing thinking

270 points | todsacerdoti | 1 month ago | erikjohannes.no

220 comments

[+] 3371|1 month ago|reply
Ever since Google started experimenting with LLMs in Gmail, it has bothered me a lot. I firmly believe every word, and the way you put them together, portrays who you are. Using LLMs for direct communication is harmful to human connections.
[+] mettamage|1 month ago|reply
It can be. It can also not be. A friend of mine had a PITA boss. Thanks to ChatGPT he salvaged his relationship with him even though he hated working with him.

He went on to something else but his stress levels went way down.

All this is to say: I agree with you if the human connection is in good faith. If it isn’t then LLMs are helpful sometimes.

[+] gjadi|1 month ago|reply
IMHO, the real problem is that they create an even greater dissonance between online life and IRL.

Think about dating apps, pictures could be fake, and now words exchanged can be fake too.

You thought you were arguing with a gentle and smart colleague over chat and email; too bad, when you meet them at a conference or at a restaurant you find them very unpleasant.

[+] nullsanity|1 month ago|reply
This comment has made me glad for LLMs in Gmail. If someone is going to over-analyze my every word because he firmly believes it portrays who I am, I'd appreciate the layer of obfuscation between me and this creepazoid.
[+] sumul|1 month ago|reply
This part really caught my attention (along with the rest of the preceding paragraph):

> Our inability to see opportunities and fulfillment in life as it is, leads to the inevitable conclusion that life is never enough, and we would always rather be doing something else.

I agree with the article completely, as it effectively names an uneasy feeling of hesitation I’ve had all along with how I use LLMs. I have found them tremendously valuable as sounding boards when I’m going in circles in my own well-worn cognitive (and sometimes even emotional) ruts. I have also found them valuable as research assistants, and I feel grateful that they arrived right around the time that search engines began to feel all but useless. I haven’t yet found them valuable in writing on my behalf, whether it’s prose or code.

During my formal education, I was very much a math and science person. I enjoyed those subjects. They came easily to me, which I also enjoyed. I did two years of liberal arts in undergrad, and they kicked my butt academically in a way that I didn’t realize was possible. I did not enjoy having to learn how to think and articulate those thoughts in seminars and essays. I did not enjoy the vulnerability of sharing myself that way, or of receiving feedback. If LLMs had existed, I’m certain I would have leaned hard on them to get some relief from the constant feeling of struggle and inadequacy. But then I wouldn’t have learned how to think or how to articulate myself, and my life and career would have been significantly less meaningful, interesting, and satisfying.

[+] port11|1 month ago|reply
As the quote goes, before you judge others, make sure your affairs are in order. I'm not judging the young who are now trying to make sense of this hectic and overwhelming world.

But… I do agree with you that, had these things been there, we'd all be leaning on them. It's the manageable hardship of life that makes it worth it; we better ourselves through the pain. My 18-year-old self would complain, as would any me up to my mid-30s. I'd have to insist to him that things will get better, but that he must work on what needs improving. Can't just ask a language model for validation.

[+] b00ty4breakfast|1 month ago|reply
What I am worried about (and it's something about regular internet search that has worried me for the past ~10 years or so) is that, after they've trained a generation of folks to rely on this tech, they're going to start inserting things into the training data (or whatever the method would be) to bias it towards favoring certain agendas wrt the information it presents to the users in response to their queries.
[+] camgunz|1 month ago|reply
This list of things not to use AI for is so quaint. There's a story on the front page right now from The Atlantic: "Film students who can no longer sit through films". But why? Aren't they using social media, YouTube, Netflix, etc responsibly? Surely they know the risks, and surely people will be just as responsible with AI, even given the enormous economic and professional pressures to be irresponsible.
[+] hamasho|1 month ago|reply

  > Surely they know the risks, and surely people will be just as responsible with AI
I can't imagine even half of students understand the short- and long-term risks of using social media and AI intensively. At least I couldn't when I was a student.
[+] hippo22|1 month ago|reply
What is the lesson in the anecdote about film students? To me, it’s that people like the idea of studying film more than they like actually studying film. I fail to see the connection to social media or AI.
[+] esperent|1 month ago|reply
> Film students who can no longer sit through films

Everyone loves watching films until they get a curriculum with 100 of them along with a massive reading list, essays, and exams coming up.

[+] ahazred8ta|1 month ago|reply
> surely people will be just as responsible with AI

That's exactly what worries us.

[+] pixl97|1 month ago|reply
We lose something when we give up horses for cars.

Have too many of us outsourced our ability to raise horses for transport?

Surely you're capable of walking all day without break?

[+] squidbeak|1 month ago|reply
Perhaps the films weren't worth sitting through?
[+] awesome_dude|1 month ago|reply
Recently a side discussion came up: people in the Western world are "rediscovering" fermented and pickled foods that are still in heavy use in Asian cultures.

Fermentation was a great way to /preserve/ food, but it can be a bit hit-and-miss. Pickling can be outright dangerous if not done correctly; botulism is a constant risk.

When canning came along it was a massive game changer: many foods became shelf-stable for months or years.

Fermentation and pickling were dropped almost universally (in the West).

[+] jonmagic|1 month ago|reply
I really liked this piece, and I share the concern, but I think “outsourcing thinking” is slightly the wrong frame.

In my own work, I found the real failure mode wasn’t using AI, it was automating the wrong parts. When I let AI generate summaries or reflections for me, I lost the value of the task. Not because thinking disappeared, but because the meaning-making did.

The distinction that's helped me is:

- If a task's value comes from doing the thinking (reflection, synthesis, judgment), design AI as a collaborator: asking questions, prompting, pushing back.

- If the task is execution or recall, automate it aggressively.

So the problem isn’t that we outsource thinking, it’s that we sometimes bypass the cognitive loops that actually matter. The design choice is whether AI replaces those loops or helps surface them.

I wrote more about that here if useful: https://jonmagic.com/posts/designing-collaborations-not-just...

[+] gemmarate|1 month ago|reply
The interesting axis here isn’t how much cognition we outsource, it’s how reversible the outsourcing is. Using an LLM as a scratchpad (like a smarter calculator or search engine) is very different from letting it quietly shape your writing, decisions, and taste over years. That’s the layer where tacit knowledge and identity live, and it’s hard to get back once the habit forms.

We already saw a softer version of this with web search and GPS: people didn’t suddenly forget how to read maps, but schools and orgs stopped teaching it, and now almost nobody plans a route without a blue dot. I suspect we’ll see the same with writing and judgment: the danger isn’t that nobody thinks, it’s that fewer people remember how.

[+] Insanity|1 month ago|reply
Yet it does feel different with LLMs compared to your examples. Yes, people can’t navigate without Apple/Google maps, but that’s still very different from losing critical thinking skills.

That said, LLMs are perhaps accelerating that, but they aren't the only cause (lack of reading, more short-form content, etc.).

[+] esperent|1 month ago|reply
> it’s hard to get back once the habit forms.

Humans are highly adaptable. It's hard to go back while the thing we're used to still exists, but if it vanished from the world we'd adapt within a few weeks.

[+] preston-kwei|1 month ago|reply
The “lump of cognition” framing misses something important. It’s not about how much thinking we do, but which thinking we stop doing. A lot of judgment, ownership, and intuition comes from boring or repetitive work, and outsourcing that isn’t free. Lowering the cost of producing words clearly isn’t the same as increasing the amount of actual thought.
[+] nsainsbury|1 month ago|reply
I actually wrote up quite a few thoughts related to this a few days ago but my take is far more pessimistic: https://www.neilwithdata.com/outsourced-thinking

My fundamental argument: The way the average person is using AI today is as "Thinking as a Service" and this is going to have absolutely devastating long term consequences, training an entire generation not to think for themselves.

[+] noduerme|1 month ago|reply
I think you hit the nail on the head. Without years of learning by doing, experience in the saddle as you put it, who would be equipped to judge or edit the output of AI? And as knowledge workers with hands-on experience age out of the workforce, who will replace us?

The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true. We don't usually need to worry that a calculator might be giving us the wrong result, or an inferior result. It simply gives us an objective fact. Whereas the output of LLMs can be subjectively considered good or bad - even when it is accurate.

So imagine teaching an architecture student to draw plans for a house, with a calculator that spit out incorrect values 20% of the time, or silently developed an opinion about the height of countertops. You'd not just have a structurally unsound plan, you'd also have a student who'd failed to learn anything useful.

[+] jordanb|1 month ago|reply
There's an Isaac Asimov story where people are "educated" by programming knowledge into their brains, Matrix style.

A certain group of people have something wrong with their brain where they can't be "educated" and are forced to learn by studying and such. The protagonist of the story is one of these people and feels ashamed at his disability and how everyone around him effortlessly knows things he has to struggle to learn.

He finds out (SPOILER) that he was actually selected for a "priesthood" of creative/problem solvers, because the education process gives knowledge without the ability to apply it creatively. It allows people to rapidly and easily be trained on some process but not the ability to reason it out.

[+] roenxi|1 month ago|reply
That would have had devastating consequences in the pre-LLM era, yes. What is less obvious is whether it'll be an advantage or a disadvantage going forward. It is like observing that cars will make people fat and lazy, with devastating consequences for health outcomes. That is exactly what happened, but the net impact was probably still positive, because cars boost wealth, lifestyles, and access to healthcare so much, even if people get less exercise.

It is unclear that a human thinking about things is going to be an advantage in 10 or 20 years. Might be, might not be. In 50 years people will probably be outraged if a human makes an important decision without deferring to an LLM's opinion. I'm quite excited that we seem to be building scalable superintelligences that can patiently and empathetically explain why people are making stupid political choices and what policy prescriptions would actually get a good outcome, based on reading all the available statistical and theoretical literature. Screw people primarily thinking for themselves on that topic; the public has no idea.

[+] godelski|1 month ago|reply
I think the comparison to giving change is a good one, especially given how frequently the LLM hype crowd uses the fictitious "calculator in your pocket" story. I've been in the exact situation you've described, long before LLMs came out and cashiers have had calculators in front of them for longer than we've had smartphones.

I'll add another analogy. I tell people when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip that will be in the ballpark of 18%), and am always told "that's too complicated". It's a three-step process where the hardest thing is multiplying a number by 2 (and usually a two-digit number...). It has always struck me as odd that the response is that this is too complicated, rather than a nice tip (pun intended) for figuring out how much to tip quickly and with essentially zero thinking. If any of those three steps seems difficult to you, then your math skills are below elementary-school level.
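The commenter's three-step rule can be sketched in a few lines of Python (a toy illustration, not from the comment; the rule yields 20% of the rounded bill, which the commenter calls a ~18% ballpark):

```python
def estimate_tip(bill):
    """Estimate a tip with the three-step mental rule."""
    rounded = round(bill)       # step 1: round off to the nearest dollar
    ten_percent = rounded / 10  # step 2: move the decimal place (10%)
    return ten_percent * 2      # step 3: multiply by 2

# A $47.60 bill: $48 -> $4.80 -> $9.60 tip
print(estimate_tip(47.60))
```

The point of the rule is that each step is trivial to do in your head: rounding, shifting a decimal point, and doubling.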

I also see a problem with how we look at math and coding. I hear so often that "abstraction is bad", yet that is all coding (and math) is. It is fundamentally abstraction. The ability to abstract is what makes humans human. All creatures abstract; it is a necessary component of intelligence, but humans certainly have a unique capacity for it. Abstraction is no doubt hard, but when in life was anything worth doing easy? I think we unfortunately are willing to put significantly more effort into justifying our laziness than into not being lazy. My fear is that we will abdicate doing worthwhile things because they are hard. It's a thing people do every day. So many people love to outsource their thinking, be it to a calculator, Google, "the algorithm", their favorite political pundit, religion, or anything else. Anything to abdicate responsibility. Anything to abdicate effort.

So I think AI is going to be no different from calculators, as you suggest. They can be great tools to help people do so much. But it will be far more commonly used to outsource thinking, even by many people considered intelligent. Skills atrophy. It's as simple as that.

[+] jakubtomanik|1 month ago|reply
I believe that collectively we passed that point long before the onset of LLMs. I have a feeling that throughout human history vast numbers of people were happy to outsource their thinking, and even to pay to do so. We just used to call those arrangements religions.
[+] polyrenn|1 month ago|reply
> Can you audit/review/identify issues in a codebase if you've never written code?

Actual knowledge about systems works much better more often than not; LLMs are not sentient and still need to be driven to get decent results.

[+] rco8786|1 month ago|reply
I'll say that I'm still kinda on the fence here, but I will point out that your argument is exactly the same as the argument against calculators back in the 70s/80s, computers and the internet in the 90s, etc.
[+] benSaiyen|1 month ago|reply
Too late. Outsourcing has already accomplished this.

No one is making cool shit for themselves. Everyone is held hostage ensuring Wall Street growth.

The "cross our fingers and hope for the best" position we find ourselves in politically is entirely due to labor capture.

The US benefited from a social network topology of small businesses, with no single business being a linchpin that would implode everything.

Now the economy is a handful of too big to fails eroding links between human nodes by capturing our agency.

I argued as hard as I could against shipping electronics manufacturing overseas so the next generation would learn real engineering skills. But 20 something me had no idea how far up the political tree the decision was made back then. I helped train a bunch of people's replacements before the telecom focused network hardware manufacturer I worked for then shut down.

American tech workers are now primarily cloud configurators and that's being automated away.

This is a decades-long play on the part of aging leadership to ensure Americans feel their only choice is to capitulate.

What are we going to do, start our own manufacturing business? Muricans are fish in a barrel.

And some pretty well connected people are hinting at similar sense of what's wrong: https://www.barchart.com/story/news/36862423/weve-done-our-c...

[+] OsamaJaber|1 month ago|reply
This is something I noticed myself. I let AI handle some of my project and later realized I didn't even understand my own project well enough to make decisions about it :)
[+] pveierland|1 month ago|reply
One bothersome aspect of generative assistance for personal and public communication not mentioned is that it introduces a lazy hedge, where a person can always claim that "Oh, but that was not really what I meant" or "Oh, but I would not express myself in that way" - and use it as a tool to later modify or undo their positions - effectively reducing honesty instead of increasing it.
[+] jemiluv8|1 month ago|reply
Outsourcing thinking is exactly what I do with our developers. They are hired to do the kind of thinking I’d rather not do.
[+] jsattler|1 month ago|reply
Very interesting, thanks for sharing this. After reading Karpathy's recent tweet about "A few random notes from claude coding quite [...]", it got me thinking a lot about offloading thinking, and more specifically failure. Failure is important for learning. When I use AI and it makes mistakes, I often tend to blame the AI and offload the failure. I think this post explores similar thoughts, without talking much about failure. It will be interesting to see the long-term effects.
[+] Animats|1 month ago|reply
The author says it's too long. So let's tighten it up.

A criticism of the use of large language models (LLMs) is that it can deprive us of cognitive skills. Are some kinds of use better than others? Andy Masley's blog says "thinking often leads to more things to think about", so we shouldn't worry about letting machines do the thinking for us; we will be able to think about other things.

My aim is not to refute all his arguments, but to highlight issues with "outsourcing thinking".

Masley writes that it's "bad to outsource your cognition when it:"

- Builds tacit knowledge you'll need in future.

- Is an expression of care for someone else.

- Is a valuable experience on its own.

- Is deceptive to fake.

- Is focused on a problem that is deathly important to get right, and where you don't totally trust who you're outsourcing it to.

How we choose to use chatbots is about how we want our lives and society to be.

That's what he has to say. Plus some examples, which help make the message concrete. It's a useful article if edited properly.

[+] throwaway2037|1 month ago|reply
For those unaware, this phrase, "the lump of cognition fallacy", is a derivative of the classic economic fallacy: the Lump of Labor Fallacy (or Lump of Jobs).

Google AI describes it as:

    This is the most common form, often used in debates about technology, immigration, or retirement. 
    Definition: The belief that there is a set, finite amount of work to be done in an economy.
    The Fallacy: Assuming that if one person works more, or if a machine does a job, there is less work left for others.
    Reality: An increase in labor or technology (like AI or automation) can increase productivity, lower costs, and boost economic activity, which actually creates more demand for labor.
    Examples:
    "If immigrants come to this country, they will take all our jobs" (ignoring that immigrants also consume goods and create demand for more jobs).
    "AI will destroy all employment" (ignoring that technology typically shifts the nature of work rather than eliminating it).
[+] andsoitis|1 month ago|reply
Some of humanity’s most significant inventions are language (symbolic communication), writing, the scientific method, electricity, the computer.

Notice something subtle.

Early inventions extend coordination. Middle inventions extend memory. Later inventions extend reasoning. The latest inventions extend agency.

This suggests that human history is less about tools and more about outsourcing parts of the mind into the world.

[+] p0w3n3d|1 month ago|reply
The main difference is that the computer you use for writing doesn't require you to pay for every word. And that's the difference in the business models being pushed right now all around the world.
[+] ZenoArrow|1 month ago|reply
If an AI thinks for you, you're no longer "outsourcing" parts of your mind. What we call "AI" now is technically impressive, but it is not the end point for where AI is likely to end up. For example, imagine an AI that is smart enough to emotionally manipulate you: at what point in this interaction do you lose your agency, no longer "outsourcing" your own thinking but instead acting as a conduit for the thoughts of an artificial entity? It speaks to our collective hubris that we seek to create an intellectually superior entity and yet still think we'll maintain control over it instead of the other way around.
[+] oktcho|1 month ago|reply
We are going to be able to think plenty about other things than what we are doing, yes. That is called anxiety.
[+] beaker52|1 month ago|reply
I still read the LLMs output quite critically and I cringe whenever I do. LLMs are just plain wrong a lot of the time. They’re just not very intelligent. They’re great at pretending to be intelligent. They imitate intelligence. That is all they do. And I can see it every single time I interact with them. And it terrifies me that others aren’t quite as objective.
[+] sidrag22|1 month ago|reply
I usually feed my articles to it and ask for insight into what's working. I usually wait to initiate any sort of AI insight until my rough draft is totally done...

Working in this manner, it is so painfully clear it doesn't even really follow the flow of the article. It misses so many critical details and just sorta fills in its own blanks wrong... When you tell it that it's missing a critical detail, it treats you like some genius, every single time.

It is hard for me to imagine growing up with it, and using it to write my own words for me. The only time I copy-paste AI-generated words to a fellow human is for totally generic customer-service-style replies, for questions I don't consider worthy of any real time.

AI has kinda taken away my flow state for coding, rare as it was... I still get it when writing stuff I am passionate about, and I can't imagine I'll ever wanna outsource that.

[+] zahlman|1 month ago|reply
> And it terrifies me that others aren’t quite as objective.

I have been reminded constantly throughout this that a very large fraction of people are easily impressed by such prose. Skill at detecting AI output (in any given endeavour), I think, correlates with skill at valuing the same kind of work generally.

Put more bluntly: slop is slop, and it has been with us for far longer than AI.

[+] js8|1 month ago|reply
I think we can make an analogy with our own brains, which have evolutionarily older parts (the limbic system) and evolutionarily younger parts (the neocortex). Now AI, I think, will be our new neocortex, another layer to our brain. And you can see the limbic system didn't "outsource" thinking to the neocortex; it's still doing it, but it can take (mostly good) advice from it.

Applying this analogy to human relationships: the neocortex allowed us to be more social. Social communication with the limbic system was mostly "you smell like a member of our species and I want to have sex with you". So having a neocortex expanded our social skills to having friends, etc.

I think AI will have a similar effect. It will allow us to individually communicate with large numbers of other people (millions). But it will be a different relationship than what we today call "personal communication", face to face, driven by our neocortex. It will be as incomprehensible to our neocortex as our language is to the limbic system.

[+] wut-wut|1 month ago|reply
Interesting read...

To his point: personally, I find it shifts 'where and when' I have to deal with the 'cognitive load'. I've noticed (at times) feeling more impatient, that I tend to skim the results more often, and that it takes a bit more mental energy to maintain my attention...