As someone already said, parents used to be concerned that kids wouldn't be able to solve maths problems without a calculator, and it's the same concern here. But there's a difference between solving problems _with_ LLMs and having LLMs solve them _for you_.
Well, the scope is much broader with an LLM than with a calculator. Why should I hire you if an agent can do it? With an LLM, every job is the calculator and can be replaced. Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent. So for all the students who let the LLM do their assignments and learn basically nothing, what's their value to a company looking to hire? The company will just be using the agent as well, and already is …
An agent can't do it. It can help you like a calculator can help you, but it can't do it alone. So that means you've become the programmer. If you want to be the programmer, you always could have been. If that is what you want to be, why would you consider hiring anyone else to do it in the first place?
> Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent.
It was Shopify, but that's just a roundabout way to say that there is a hiring freeze due to low sales (no doubt because of tariff nonsense seizing up the market). An agent, like a calculator, can only increase the productivity of a programmer. As always, you still need more programmers to perform more work than a single programmer can handle. So all they are saying is "we can't afford to do more".
> The company will and is just using the agent as well …
In which case wouldn't they want to hire those who are experts in using agents? If they, like Shopify, have become too poor to hire people – well, you're screwed either way, aren't you? So that is moot.
> Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent.
Spotify CEO is channeling The Two Bobs from Office Space: "What are you actually doing here?" Just in a nastier way, with a kind of prisoner's dilemma on top. If you can get by with an agent, fine, you won't bother him. If you can't, why can't you? Should we replace you with someone who can, or thinks they can?
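Spotify CEO is not his employees' friend.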
You as the employer are liable. A human has real reasoning abilities and real fears about messing up; the likelihood of them doing something absurd, like telling a customer that a product is 70% off, and keeping their job is effectively nil. What are you going to do with the LLM, fire it?
Data scientists and people deeply familiar with LLMs, to the point that they could fine-tune a model for your use case, cost significantly more than a low-skilled employee, and depending on the liability involved, just running the LLM may be cheaper.
Take an accounting firm (one example from above): as far as I know, in most jurisdictions the accountant doing the work is personally liable. Who would be liable in the case of the LLM?
There is absolutely a market for LLM-augmented workforces, but I don't see any viable future, even with the SOTA models right now, for flat-out replacing a workforce with them.
> As someone already said, parents used to be concerned that kids wouldn't be able to solve maths problems without a calculator
Were they wrong? People who rely too much on a calculator don't develop the strong math muscles that can be used in more advanced math. Identifying patterns in numbers and seeing when certain tricks can be used to solve a problem (versus when they just make a problem worse) is a skill that ends up being beyond their ability to develop.
Yes, they were wrong. Many young kids who are bad at mental calculations are later competent at higher mathematics and able to use it. I don't understand what patterns and tricks you're referring to, but if they are important for problems outside of mental calculations, then you can also learn about them by solving these problems directly.
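Yes. People who rely too much on a calculator weren't going to be doing advanced math anyway.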
Almost none of the cheaters appear to be solving problems with LLMs. All my faculty friends are getting large portions of their class clearly turning in "just copied directly from ChatGPT" responses.
It's an issue in grad school as well. You'll have an online discussion where someone submits 4 paragraphs of not-quite-eloquent prose with that AI "stink" on it. You can't be sure but it definitely makes your spidey sense tingle a bit.
Then they're on a video call and their vocabulary is wildly different, or they're very clearly a recent immigrant who struggles with basic sentence structure, such that there is absolutely zero chance their discussion forum persona is actually who they are.
This has happened at least once in every class, and invariably the best classes in terms of discussion and learning from other students are the ones where the people using AI to generate their answers are failed or drop the course.
> there's a difference between solving problems _with_ LLMs and having LLMs solve them _for you_.
If there is a difference, then fundamentally LLMs cannot solve problems for you. They can only apply transformations using already known operators. No different than a calculator, except with exponentially more built-in functions.
But I'm not sure that there is a difference. A problem is only a problem if you recognize it, and once you recognize a problem then anything else that is involved along the way towards finding a solution is merely helping you solve it. If a "problem" is solved for you, it was never a problem. So, for either statement to have any practical meaning, they must be interpreted as equivalent.
There is a difference between thinking about the context of a problem and "critical thinking" about the problem or its possible solutions.
There is a measurable decrease in critical thinking skills when people consistently offload the thinking about a problem to an LLM. This is where the primary difference lies between solving problems with an LLM vs having them solved for you by an LLM. And that is cause for concern.
Two studies on the impact of LLMs and generative AI on critical thinking:
https://www.mdpi.com/2075-4698/15/1/6
https://slejournal.springeropen.com/articles/10.1186/s40561-...