These “AI is a gimmick that does nothing” articles mostly just tell me that most people lack imagination. I have gotten so much value out of AI (specifically ChatGPT and Midjourney) that it’s hard to believe that a few years ago none of this was even remotely possible.
The difference, it seems, is that I’ve been looking at these tools and thinking about how I can use them in creative ways to accomplish a goal - not just treating them like a magic button that solves all problems without fine-tuning.
To give you a few examples:
- There is something called the Picture Superiority Effect, which states that humans remember images better than words alone. I have been interested in applying this to language learning – imagine a unique image for each word you’re learning in German, for example. A few years ago I was about to hire an illustrator to make these images for me, but now with Midjourney or other image generators, I can effectively make unlimited unique images for $30 a month. This is a massive new development that wasn’t possible before.
- I have been working on a list of AI tools that would be useful for “thinking” or analyzing a piece of writing. Things like: analyze the assumptions in this piece; find related concepts with genealogical links; check if this idea is original or not; rephrase this argument as a series of Socratic dialogues. And so on. This kind of thing has been immensely helpful in evaluating my own personal essays and ideas, and prior to AI tools it, again, was not really possible unless I hired someone to critique my work.
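A minimal sketch of the first workflow, under stated assumptions: the vocabulary, the prompt wording, and the helper names below are my own placeholders, and since Midjourney has no official public API, this only batch-builds the prompt text you would paste into whatever image service you use:

```python
# Sketch: one unique mnemonic prompt per German vocabulary word
# (Picture Superiority Effect). The word list and prompt style are
# illustrative placeholders, not a real Midjourney integration.

VOCAB = {
    "der Schmetterling": "butterfly",
    "die Gluehbirne": "light bulb",
    "das Fernweh": "longing for far-off places",
}

def build_prompt(german: str, english: str) -> str:
    # A distinctive, memorable scene for each word, labeled so the
    # image and the German word are recalled together.
    return (
        f"A vivid, surreal illustration of '{english}' "
        f"labeled with the German word '{german}', storybook style"
    )

def make_flashcard_prompts(vocab: dict[str, str]) -> list[str]:
    # One prompt per vocabulary entry, in insertion order.
    return [build_prompt(de, en) for de, en in vocab.items()]

if __name__ == "__main__":
    for prompt in make_flashcard_prompts(VOCAB):
        print(prompt)
```

From there it's a loop: feed each prompt to the generator, save the result next to the word in your flashcard deck.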
The key for both of these example use cases is that I have absolutely no expectation of perfection. I don’t expect the AI images or text to be free of errors. The point is to use them as messy, creative tools that open up possibilities and unconsidered angles, not to do all the work for you.
> These “AI is a gimmick that does nothing” articles mostly just communicate to me that most people lack imagination.
Either that or different people have different views on life, tech, &c.
If you're not going through life as some sort of min-max RPG, not using an LLM to "optimise" every single aspect of your life is perfectly fine. I don't need an LLM to summarise an article; I want to read it during my 15 min coffee time in the morning. I don't need an LLM to tell me how my text should be rewritten to look like the statistical average of a good text...
The article is mostly about the use of genAI in education.
It was written after the author attended a workshop where the presenter tried and seemingly failed to show how AI was able to write essays when prompted with the word "innovative" or produce a podcast on a book.
The author also mentions an article by a university lecturer who claims that "Human interaction is not as important to today’s students" and that AI will basically replace it.
The subtitle of the article is "AI cannot save us from the effort of learning to live and die."
In other words, the article is about a specific trend in higher education to present AI as some sort of revolutionary tool that will completely change the way students learn.
The author disagrees and contends that pretending to replace most human interactions with genAI is a gimmick, and pretending that AI can make learning effortless is lying to students.
The way you use AI for language learning is certainly imaginative, but you are not claiming that it replaces the quality of interacting with native speakers, or immersion in the culture. Your tool may be useful and clever, but claiming it makes learning a language effortless (as some AI apologists in education might) would make it a gimmick.
> These “AI is a gimmick that does nothing” articles
I don't think that's an accurate summary of this article. Are you basing that just on the title, or do you fundamentally disagree with the author here?
> We call something a gimmick, the literary scholar Sianne Ngai points out, when it seems to be simultaneously working too hard and not hard enough. It appears both to save labor and to inflate it, like a fanciful Rube Goldberg device that allows you to sharpen a pencil merely by raising the sash on a window, which only initiates a chain of causation involving strings, pulleys, weights, levers, fire, flora, and fauna, including an opossum. The apparatus of a large language model really is remarkable. It takes in billions of pages of writing and figures out the configuration of words that will delight me just enough to feed it another prompt. There’s nothing else like it.
In a more perfect world where we were just discussing the merits of the tech, I would be more inclined to agree. But I have to stress that the entire point of the tech is to do everything.
AI receives so much funding and support from the wealthy because they believe that they can use it to replace humans and reduce labor costs. I strongly suspect that AI being available to us at all is merely a plot to get us to train and troubleshoot the tech for them so it can more perfectly imitate us. Then, eventually, when the tech is "good enough" it will rapidly become too expensive for normal people to use and thus become inaccessible.
Companies are already mass-firing their staff in favor of AI agents even though those agents don't even do a good job. Imagine how it will be when they do.
> value out of AI (specifically ChatGPT and Midjourney)
The one area where I would agree that AI and ML tools have been surprisingly good is art generation.
But then I see the flood of AI-generated pictures and feel, overall, that it has made an already troublesome world even more troublesome. I am starting to see the "the picture is AI-made, or AI-modified" excuses going mainstream.
A picture, now, has lost all meaning.
> be useful for “thinking” or analyzing a piece of writing
This, I am highly skeptical of. If you train an LLM on text saying "trains can fly", it spits that out. LLMs may be good as summarizing or search tools, but to claim they are "thinking" and "analyzing", nah.
Most people are not motivated to continuously learn, dive deep, or explore the boundary of knowledge. As a result, it's very likely AI will amplify or deepen the current skill gaps and create new ones. People who are good at learning and can deeply understand foundational concepts will be able to leverage AI to learn more, learn deeper, and solve harder problems on demand. People who resist learning will use AI as a cheating tool, eventually bringing themselves into a pickle.
Case in point: a PIAAC report back in 2013 said that only 9% of US adults were considered proficient in math and complex reasoning. And the questions used by PIAAC were arguably just high-school level. Anecdotally, how many people have heard professors or high-school teachers complain that most students can't really grasp linear equations or the distributive property? (Students can memorize rules and pass high-school tests, but many of them would be hopeless if they had to take national entrance exams in other countries.)
Yeah, it’s misplaced frustration at the surrounding overhype. I always imagine the authors would be vocal advocates if they “discovered” the service & no one else knew about it.
I think it is more like giving someone a swiss army knife and they complain it won't even cut a piece of wood because their role in society is being compensated for cutting wood. All they know how to do is cut wood. They aren't even interested in anything but cutting wood.
So basically, you are happy because AI could replace a human for cheap?
I mean sure, from a purely profit-oriented point of view it's good, but you need to realize the human being replaced isn't feeling too good about it. Especially when the AI works only because it was trained on input from the works of people like them.
The people possessing the capital for AI are pretty happy about the results for sure but they need to think about sharing the wealth created because otherwise this is just an unfair transfer of value to an already rich and powerful small group of people.
I personally feel like some of the AI hype is driven by its ability to create flashy demos which become dead-end projects.
It's so easy to spin up an example "write me a sample chat app" or whatever and be amazed how quickly and fully it realizes this idea, but it does kinda beg the question, now what?
I think in the same way that image generation is akin to clipart (wildly useful, but lacking in depth and meaning) the AI code generation projects are akin to webpage templates. They can help get you started, and take you further than you could on your own, but ultimately you have to decide "now what" after you take that first (AI) step.
> It's so easy to spin up an example "write me a sample chat app" or whatever and be amazed how quickly and fully it realizes this idea, but it does kinda beg the question, now what?
Which we already had, it's just a 'git clone https://github.com/whatevs/huh' away, or doing one of millions of tutorials on whatever topic. Pretty much everyone who can build something out of Elixir/Phoenix has a chat app, an e-commerce store and a scraping platform just laying around.
The demos I see all make compromises in order to work, compromises that hobble you when you try to harden them, or that lock you into very specific conceptualizations you simply wouldn't have if you were building from the smallest low-level building blocks, or even starting from a super-high-level state-machine placeholder. In my experience, no matter how hard I try, the output is guided by the weights of the total generated output toward something that doesn't understand the value of compartmentalization, and it will add tokens that make its probabilities work internally above all else.
AI is a gimmick, smartphones are a gimmick, computers are a gimmick, automation is a gimmick, books are a gimmick; only %MY_ENLIGHTENMENT% is not.
Seriously, I understand saying something like this about crypto or whatever meme of the day, but even current LLMs are literal magic. Instead of my reading 10 pages of empty filler and wasting my time, ChatGPT can summarize it as:
> Malesic argues that AI hype—especially in education—is a shallow gimmick: it overpromises revolutionary change but delivers banal, low-value outputs. True teaching thrives on slow, sacrificial human labor and deep discussion, which no AI shortcut can replicate.

Hardly any revolutionary thought.
> Instead of reading 10 pages of empty water and wasting my time, ChatGPT can summarize this as
Definitely worth investing billions and wasting an insane amount of energy... I don't know how people hold both "this is a revolution!" and "it kinda summed up a 10-page PDF that I couldn't be bothered to read in the first place" without noticing the mental gymnastics you have to go through to reconcile the two ideas.
Not to mention the millions of new LLM-generated pages that are now polluting the web.
For me, the usefulness of LLMs is proportional to how shitty Google has become. When searching for something you get a bunch of blog spam or other SEO-optimised shit: results pointing to pages that open dozens of popups asking you to subscribe or make an account. ChatGPT gives you the answer immediately, and I must say I find it helpful 90% of the time.
For simple coding questions it is also very good because it takes your current context into account. It is basically a smarter "copy paste from stack overflow".
At least for now LLMs do not replace any meaningful work for me, but they replace google more and more.
> “Human interaction is not as important to today’s students,” Latham claims
Goodness that's depressing. Is this going to crank individualism up to 11?
I remember hating having to do group projects in school. Most often, 3/5 of the group would contribute jack shit, while the remaining people had to pick up the slack. But even with lazy gits, the interactions were what made it valuable.
Maybe human-AI cooperation is an important skill for people to learn, but it shouldn't come at the cost of losing even more human-human cooperation and interaction.
> Most often, 3/5 of the group would contribute jack shit, while the remaining people had to pick up the slack.
Never fear, nowadays 3/5 do squat with the 4th sending you largely-incoherent GPT sludge, before dropping off the face of the earth until 11:30PM on the night the assignment's due.
I've seen it said that college is supposed to teach you the skills to navigate working with others more so than your specific field of study. Glad to see they've still got it.
One of the realisations I've had recently is that the AI hype feels like another level from what's come before because AI itself is creating the "hype" content fed to me (and my bosses and colleagues) all over social media.
The FOMO tech people are having with AI is out of control - everyone assumes that everyone else is having way more success with it than they are.
I used AI to summarize this whole article and give me the takeaways - it saved me about half an hour of reading something I would have disagreed with in the end, since the article is, IMHO, too harsh on AI.
I have found AI extremely useful, and an easy sell at $20/month even when not used professionally for coding - and I'm the kind of person who avoids any type of subscription like the plague.
Even in the educational setting this article mostly focuses on, it can be super useful. Not everyone has access to mentors and scholars. I saved a lot of time helping family with typical tech questions and troubleshooting by teaching them how to use it and to try to solve their tech problems themselves.
I think it's in human nature to force any topic to be all "good" or "bad". I agree with most criticisms this author has about the performance of AI -- it _is_ very bad at writing essays, and dare I say most things (including code), based on a single prompt. But to say it is a gimmick and compare it with technologies that died or are dying seems to me like a visceral response, perhaps after experiencing the overflow of AI-generated homework (a use of AI that ultimately just wastes everyone's time).
I think most people in here know at least a few ways they can use AI that are genuinely useful to them. I suppose if you're _very_ positive about AI, then it's good to have a polarized negative article to remind us of all the ways AI is being overpromised. I'm definitely very excited about finding new ways to apply AI, and that explorative phase can come off as trying to sell snake oil. We have to be realistic and acknowledge this is a technology that can produce content faster than we can consume it - content that takes effort to distinguish as useful or not.
All that said I disagree with the idea that the only way "to help students break out of their prisons, at least for an hour, so they can see and enhance the beauty of their own minds" is via teaching and not via technologies such as AI. The education system certainly failed me and I found a lot of joy in technology instead. For me it was the start of the internet, but I can only imagine for many today it will be the start of AI.
> I think most people in here know at least a few ways they can use AI that is genuinely useful to them
The only thing that really comes to mind is making something in a domain where I have almost no prior expertise.
But then ChatGPT is so frequently wrong, and so frequently repeatedly wrong when it tries to "correct" problems when pointed out, that even then I always have to go and read relevant documentation and re-write the thing regardless. Maybe there's some slight usefulness here in giving me a starting point, but it's marginal.
I always found myself to be very good at Googling/searching. Or at asking: emailing an expert or colleague, say. I'm good at condensing what I'm trying to ask, and good at anticipating what they could misunderstand, or what follow-up questions they might have, to save some back-and-forth. The corresponding skill on Google is predicting what I might see, and adding negative search terms for it.
BUT, and this is I think why some of us feel ChatGPT is poor: asking in this way, the way that guides a human or a search engine, makes ChatGPT produce worse answers(!).
If you say "What could be wrong with X? I'm pretty sure it's not Y or Z, which I ruled out; could it be Q, or perhaps W?", then ChatGPT and other language models quickly reinforce your beliefs instead of challenging them. They would rather give you an incorrect reason why you are right than point out an additional problem or challenge your assumptions. If LLMs could get over the bullshit problem, they would be so much better. Having confidence and being able to express it is invaluable. But somehow I doubt it's possible - if it were, they would be doing it already, as it's a killer feature. So I fear it's somehow not achievable with LLMs? In which case the title is correct.
> The claim of inevitability is crucial to technology hype cycles, from the railroad to television to AI.
Well. You know. We still have plenty of railroad, and television has had a pretty good run too. So if those are the models to compare AI to, then I have bad news about how much of a 'hype cycle' AI is going to be.
The context is AI in education. It argues that AI in education is a gimmick and true learning requires time, care, and the presence of humans who are willing to do difficult work together.
I don't dispute what learning requires, but I also don't exclude AI in that picture. What we have is almost a 'Young Lady's Primer' from "The Diamond Age". All we have to do is ask the right questions. If anything education should be teaching how to use the new tools well.
Funny how it also refutes the claim that AI use is inevitable. The only debate I've heard is over "when", not "if".
This seems unusually shallow for the hedgehog review. I thought we'd largely moved on from this sort of sentimental, "I can't get good outputs therefore nobody can" style essay -- not to mention the water use argument! They've published far better writing on LLMs too: see "Language Machinery" from fall 23 [1]
Some companies may save money by employing LLMs to do shallow things. Others may not. Also, LLMs are not all of AI. AI is a broad field with many models and applications that were already omnipresent in our lives but less marketable to the general public as revolutionary, such as spam filters. AI is NOT a gimmick per se. Some users are.
P.S.: consider that when there are huge investments in something, people will do anything to see a return, including paying other people to create hype.
If anyone is interested in AI in relation to learning, I think the best take on that I've seen so far was from Derek (Veritasium) in this recent talk: https://www.youtube.com/watch?v=0xS68sl2D70
It's a lot more balanced compared to the doomy attitude in the primary post.
> But look at what people actually use this wonder for: brain-dead books and videos, scam-filled ads, polished but boring homework essays. Another presenter at the workshop I attended said he used AI to help him decide what to give his kids for breakfast that morning.
The last example is actually the most interesting! The essays are whatever; dumb or lazy kids are gonna cheat on their homework, and schools have long needed better ways of teaching kids than regurgitative essays, but in the meantime just use an in-class essay or exam. But people aren't really making the brain-dead books and videos as anything other than a curiosity, despite the fears of various humanities professors.
The interesting part of AI, and I suspect the primary actual use case, is everything else.
In my camping car, somewhere in the desert, I sometimes have limited resources. Like a can of beans, some fresh potatoes, an apple, Italian spices, and so on.
I like to ask ChatGPT: Listen, I have this stuff, I want to create some food with strong umami taste, do you have an idea?
It is very good at that, the results were often amazing.
This is its core feature: "feeling" loose connections between concepts. Italian pasta with maple syrup? Yes, but only if you add some Arabic spices...
"AI" is, due to the nature of artificial neural networks, not intelligent. It does not learn intelligence; it learns feelings. Not emotions, but feelings in the sense of unconscious learning ("I get a feeling for how to ride the bicycle off-road").
It is refreshing to see I am not the only person who cannot get LLMs to say anything valuable. I have tried several times, but the cycle "You're right to question this. I actually didn't do anything you asked for. Here is some more garbage!" gets really old really fast.
It makes me wonder whether everyone else is kidding themselves, or if I'm just holding it wrong.
>AI cannot save us from the effort of learning to live and die
You could substitute pretty much anything for the word AI and the sentence would be true. Cars, houses or love also cannot save us from that but it doesn't show they are gimmicks.
rxtexit | 9 months ago:

This swiss army knife is totally useless!
satisfice | 9 months ago:

You didn't need AI for the things you list, and using AI has lowered the credibility and quality of your work.

I don't use any AI in my work. Which makes my work worth scanning by AI - but not yours.
AndrewDucker | 9 months ago:

https://futurism.com/ai-chatbots-summarizing-research
bushbaba | 9 months ago:

ChatGPT has allowed me to write 50%+ faster with 50%+ better quality. It’s been one of the largest productivity boosts in the last 10+ years.
[+] [-] blixt|9 months ago|reply
I think most people here know at least a few ways they can use AI that are genuinely useful to them. I suppose if you're _very_ positive about AI, then it's good to have a polarized negative article to remind us of all the ways AI is being overpromised. I'm definitely very excited about finding new ways to apply AI, and that explorative phase can come off as selling snake oil. We have to be realistic and acknowledge that this is a technology that can produce content faster than we can consume it, content that takes effort to distinguish as useful or not.
All that said I disagree with the idea that the only way "to help students break out of their prisons, at least for an hour, so they can see and enhance the beauty of their own minds" is via teaching and not via technologies such as AI. The education system certainly failed me and I found a lot of joy in technology instead. For me it was the start of the internet, but I can only imagine for many today it will be the start of AI.
[+] [-] mort96|9 months ago|reply
The only thing that really comes to mind is making something in a domain where I have almost no prior expertise.
But then ChatGPT is so frequently wrong, and so frequently repeatedly wrong when it tries to "correct" problems when pointed out, that even then I always have to go and read relevant documentation and re-write the thing regardless. Maybe there's some slight usefulness here in giving me a starting point, but it's marginal.
[+] [-] alkonaut|9 months ago|reply
BUT, and I think this is why some of us feel ChatGPT is poor: asking in this way, the way that guides a human or a search engine, makes ChatGPT produce worse answers(!).
If you say "What could be wrong with X? I'm pretty sure it's not Y or Z, which I ruled out; could it be Q, or perhaps W?", then ChatGPT and other language models quickly reinforce your beliefs instead of challenging them. They would rather give you an incorrect reason why you are right than point out an additional problem or challenge your assumptions. If LLMs could get over the bullshit problem, they would be so much better. Having confidence and being able to express it is invaluable. But somehow I doubt it's possible; if it were, they would be doing it already, as it's a killer feature. So I fear it's somehow not achievable with LLMs, in which case the title is correct.
[+] [-] isaacfrond|9 months ago|reply
Well. You know. We still have plenty of railroads, and television has had a pretty good run too. So if those are the models to compare AI to, then I have bad news about how "hype cycle" AI is going to be.
[+] [-] karmakaze|9 months ago|reply
I don't dispute what learning requires, but I also don't exclude AI from that picture. What we have is almost a "Young Lady's Illustrated Primer" from "The Diamond Age". All we have to do is ask the right questions. If anything, education should be teaching how to use the new tools well.
Funny how it also refutes the claim that AI use is inevitable. The only debate I've heard is about when, not if.
[+] [-] ohxh|9 months ago|reply
[1] https://hedgehogreview.com/issues/markets-and-the-good/artic...
[+] [-] fedeb95|9 months ago|reply
P.S.: consider that when there are huge investments in something, people will do anything to see a return, including paying other people to create hype.
[+] [-] panstromek|9 months ago|reply
It's a lot more balanced compared to the doomy attitude in the primary post.
[+] [-] wilg|9 months ago|reply
The last example is actually the most interesting! The essays are whatever; dumb or lazy kids are gonna cheat on their homework, and schools have long needed better ways of teaching kids than regurgitative essays, but in the meantime you can just use an in-class essay or exam. And people aren't really making the brain-dead books and videos as anything other than a curiosity, despite the fears of various humanities professors.
The interesting part of AI, and I suspect the primary actual use case, is everything else.
[+] [-] snickerer|9 months ago|reply
In my camping car, somewhere in the desert, I sometimes have limited resources. Like a can of beans, some fresh potatoes, an apple, Italian spices, and so on.
I like to ask ChatGPT: Listen, I have this stuff, I want to create some food with strong umami taste, do you have an idea?
It is very good at that, the results were often amazing.
This is its core feature: 'feel' loose connections between concepts. Italian pasta with maple syrup? Yes, but only if you add some Arabic spices...
"AI" is, due to the nature of artificial neuronal networks, not intelligent. It does not learn intelligence, it does learn feelings. Not emotions, but feelings in the sense of unconscious learning ('I get a feeling how to ride the bicycle off-road ').
[+] [-] danlitt|9 months ago|reply
It makes me wonder whether everyone else is kidding themselves, or if I'm just holding it wrong.
[+] [-] tim333|9 months ago|reply
You could substitute pretty much anything for the word AI and the sentence would still be true. Cars, houses, or love also cannot save us from that, but that doesn't make them gimmicks.