In other words, it disrupts classroom assignments where the student is being asked to produce bullshit but where historically the easiest way to produce that bullshit was via a process that was supposedly valuable to the student's education. The extent to which a teacher cannot distinguish a human-generated satisfactory essay from an essay generated by a bullshit generator is by definition precisely the extent to which the assignment is asking the student to generate bullshit. This will certainly require a lot of reworking of these traditional curriculums that consist heavily of asking the student to generate bullshit, but maybe that wasn't the best way to educate students this whole time.
The point of school work isn't to generate something of value externally, it's to generate understanding in the student. Much like lifting weights or running are "bullshit" activities. You haven't seen bullshit until you see whatever is produced by a society with a voracious appetite that runs on "gimme what I want" buttons and a dismal foundation of understanding.
The students bullshitting their way through course work are like the lifters using bad form to make the mechanical goal of moving the weight easier in the short term. They completely miss the point.
I agree with some of this (particularly the conclusion that better education methods are required) but lets be a bit generous for a second.
The ability to write well is (or was) an important skill. Being able to use correct grammar, to structure even a simple argument, to incorporate sources to justify one's statements, etc. Even if we're just talking about the level of what GPT 3.5 is capable of, that still corresponds to, let's say, a college freshman level of writing.
Now, perhaps with the advent of LLMs, that's no longer true. Perhaps in the near future, the ability to generate coherent prose "by hand" will be thought of in the same way we think of someone who can do long division in their head: a neat party trick, but not applicable to any real-world use.
It isn't at all clear to me though that we're yet a the point where this tech is good enough that we're ready (as a society) to deprecate writing as a skill. And "writing bullshit" may in fact be a necessary element of practice for writing well. So it isn't self-evident that computers being able to write bullshit means that we shouldn't also expect humans of being able to write bullshit (at a minimum, hopefully they can go well beyond that.)
If it can generate working code, does that mean that asking students to produce working code was bullshit? Or does it just mean that AI can now do a lot of things we used to asked students to do (for probably solid educational reasons) at an above average level?
You make it sound bad, but there's nothing inherently wrong with learner outcomes being worthless, or even worse than worthless.
When people learn a new language at a very basic level, they generate dull and simple sentences, and butcher the pronunciation. When they learn martial arts, they kick the air or a punching ball. When they learn to play the violin, they play hellish sounds that can make your ears hurt. But all that is still needed for learning.
> > where the student is being asked to produce bullshit
It depends on which subject? I have only good things to say about my college days. They are life changing. The rigorous training in computer science and maths has been paying dividend all these years. The professor in the writing class did not only teach me how to write, but also how to understand and appreciate literature. Even classes like Canadian Culture and History taught me so much about arts, literature, politics and history of Canada. I have a hard time fathoming in what classes will we write bullshit (yeah, I have some ideas, but I guess I was lucky enough that my school taught me how to love and appreciate the wonder of nature and civilization, instead of teaching me hatred to towards you know what).
> This will certainly require a lot of reworking of these traditional curriculums that consist heavily of asking the student to generate bullshit, but maybe that wasn't the best way to educate students this whole time.
Just curious which discipline you have a grudge against here. Because presumably disciplines are actually disciplines where someone working in the field for their entire career can spot BS.
Why would you learn math when you have calculators ?
Why would you learn to write when you have grammarly ?
Why would you read books when you have summaries ?
Why would you watch movies when you have reviews ?
Sometimes, a lot of times actually, it's not about the result but the process. Otherwise the most optimal path at life is suicide or matrix style coma pods.
Everything we do before a PhD is by definition bs, it's just regurgitating already known facts.
> This will certainly require a lot of reworking of these traditional curriculums that consist heavily of asking the student to generate bullshit, but maybe that wasn't the best way to educate students this whole time.
That process was invaluable for making normal bullshitters into bullshit artists -- how will we train our elite human bullshitters in the modern age of language models?
The education system is moribund. During Covid, I thought we'd see the flipped classroom shine, and a revolution in education. Instead we got children chained to Zoom or Teams and attempts to replicate traditional classrooms remotely.
I wouldn't be surprised if my grandchildren are still doing the same bullshit in 20 years.
Yeah but it would be nice if that was the floor instead of a ceiling - personally I've never seen a ChatGPT-generated text that couldn't have been generated by asking a high schooler to generate the same thing.
The authors hypothesize that ChatGPT is useful for
> Tasks where it’s easy for the user to check if the bot’s answer is correct, such as debugging help.
I would qualify this fairly heavily. If your bug is "my program is throwing an error" or "FizzBuzz isn't producing the expected output", sure, a bot suggestion can be tested easily. But that's only the easiest kind of debugging, and without any deep appreciation of logic, I suspect it would tend to give you overly specific remedies that mask the problem or make new problems.
"You want it to say FizzBuzz when n=15? How about if n=15 print FizzBuzz"
In other words, what can be easily tested for correctness depends very heavily on how strong a grasp the user has on correctness in the domain. A novice without a good bullshit detector could be left worse off than if they had asked a person. It's not clear that the set of problems whose solution can truly be checked easily is all that big or useful. (I think our CS professor friends here are overgeneralizing from intuitions about P vs NP, a totally different context.)
My two cents: it will be mainly useful as a different kind of search engine that can help you remember syntax, what does what, what parameters this thing needs, etc.
b-but... I thought being technically correct was the best kind of correct! /s
Seriously though thanks for articulating the tree-like nature of truth. Being an expert requires the ability to understand a topic at all levels in a consistent manner. ChatGPT just models the language, not the actual concepts.
It's the age of bullshit. And whats surprising is the focus is on generating/building bullshit, pointing it out or justifying it.
There doesn't seem to be a single person anywhere whose isn't caught up in this superficial gilded cage trap. Individuals need to rethink and reset their own compasses or where groups are heading is total meaninglessness.
Having half of the title is too triggering. That's not fair to the CS prof. Misleading.
But to comment on the article, although I've encountered several falsities in chatgpt output, I'm still very surprised that it could devise algorithms when I would give him the criteria.
And it would even output some code samples.
It shouldn't be underestimated as a programming aid.
It might help quite a bit to watch this overview of how ChatGPT works, and especially how it was trained.[1]
Human attention was too expensive for the training, so they made another AI to simulate the trainers.
Unfortunately, the trainer simulator training didn't have an expressive enough set of options for the actual humans, which kept a lot of quite valuable feedback out of the whole system by reducing things to single ranking. If the person didn't know the correct answer, they'd likely just go based on gut. This in turn lead the training simulator to do the same. This lead ChatGPT to just try to bullshit an answer rather than stating it didn't know, and getting downvoted.
> Babble! takes samples of text from various sources, analyzes them for style and content, mixes them together in varying proportions, and then...well...babbles. On and on. Endlessly. If you take its samples away, it still babbles. That's what Babble! does. It babbles!
> By mixing up words and ideas, and by finding connections which are not obvious to the naked mind, Babble! is useful as a creative tool and as a cure for writer's block. It can scramble ideas in brainstorming sessions like nobody's business. It will compose advertising copy, overdue marketing plans, and official government reports. It'll generate text in the style of whomever you please for use in school papers, public speeches, and contests in New York Magazine. It can be used to produce brochures, press releases, newsletters, letters to the editor, and letters to John Dvorak. It's great for answering all that pesky Email as well as any other electronic communications, and it's also been used to document source code and write program manuals (like this one, for example).
> We tried it on legal boilerplate, but the stuff that came out sounded just like the stuff that went in!
Can confirm that. I asked ChatGPT for information about a book my son was reading at school. While the answer sounded really good at first sight, the content was absolutely nonsense. Not a single statement from ChatGPT was correct. Therefore, I can confirm - it’s bullshit.
If you want analytical information then I could see a use-case for GPT-3. I was trying to sign up for health insurance and asked questions -- it didn't give good answers. Sometimes you need a human to help you with it -- so these tools will save money in the long run but not have the "street smarts".
Can I use an AI tool to tell me how to fix my bike? If I dunno much about bikes an expert has to look at it and tell me what's wrong..
Perfectly conceivable that Arvind is punking us all by using an Llm to write that essay. Very basic stuff for someone who usually has subtle points to make.
[+] [-] tshaddox|3 years ago|reply
[+] [-] gmd63|3 years ago|reply
The students bullshitting their way through course work are like the lifters using bad form to make the mechanical goal of moving the weight easier in the short term. They completely miss the point.
[+] [-] lukev|3 years ago|reply
The ability to write well is (or was) an important skill. Being able to use correct grammar, to structure even a simple argument, to incorporate sources to justify one's statements, etc. Even if we're just talking about the level of what GPT 3.5 is capable of, that still corresponds to, let's say, a college freshman level of writing.
Now, perhaps with the advent of LLMs, that's no longer true. Perhaps in the near future, the ability to generate coherent prose "by hand" will be thought of in the same way we think of someone who can do long division in their head: a neat party trick, but not applicable to any real-world use.
It isn't at all clear to me though that we're yet a the point where this tech is good enough that we're ready (as a society) to deprecate writing as a skill. And "writing bullshit" may in fact be a necessary element of practice for writing well. So it isn't self-evident that computers being able to write bullshit means that we shouldn't also expect humans of being able to write bullshit (at a minimum, hopefully they can go well beyond that.)
[+] [-] Erik816|3 years ago|reply
[+] [-] Al-Khwarizmi|3 years ago|reply
When people learn a new language at a very basic level, they generate dull and simple sentences, and butcher the pronunciation. When they learn martial arts, they kick the air or a punching ball. When they learn to play the violin, they play hellish sounds that can make your ears hurt. But all that is still needed for learning.
[+] [-] g9yuayon|3 years ago|reply
It depends on which subject? I have only good things to say about my college days. They are life changing. The rigorous training in computer science and maths has been paying dividend all these years. The professor in the writing class did not only teach me how to write, but also how to understand and appreciate literature. Even classes like Canadian Culture and History taught me so much about arts, literature, politics and history of Canada. I have a hard time fathoming in what classes will we write bullshit (yeah, I have some ideas, but I guess I was lucky enough that my school taught me how to love and appreciate the wonder of nature and civilization, instead of teaching me hatred to towards you know what).
[+] [-] titzer|3 years ago|reply
Just curious which discipline you have a grudge against here. Because presumably disciplines are actually disciplines where someone working in the field for their entire career can spot BS.
[+] [-] lm28469|3 years ago|reply
Why would you learn to write when you have grammarly ?
Why would you read books when you have summaries ?
Why would you watch movies when you have reviews ?
Sometimes, a lot of times actually, it's not about the result but the process. Otherwise the most optimal path at life is suicide or matrix style coma pods.
Everything we do before a PhD is by definition bs, it's just regurgitating already known facts.
[+] [-] fardo|3 years ago|reply
That process was invaluable for making normal bullshitters into bullshit artists -- how will we train our elite human bullshitters in the modern age of language models?
[+] [-] unknown|3 years ago|reply
[deleted]
[+] [-] anvuong|3 years ago|reply
[+] [-] ern|3 years ago|reply
I wouldn't be surprised if my grandchildren are still doing the same bullshit in 20 years.
[+] [-] pazimzadeh|3 years ago|reply
[+] [-] Spooky23|3 years ago|reply
[+] [-] siwatanejo|3 years ago|reply
[+] [-] unknown|3 years ago|reply
[deleted]
[+] [-] civilized|3 years ago|reply
> Tasks where it’s easy for the user to check if the bot’s answer is correct, such as debugging help.
I would qualify this fairly heavily. If your bug is "my program is throwing an error" or "FizzBuzz isn't producing the expected output", sure, a bot suggestion can be tested easily. But that's only the easiest kind of debugging, and without any deep appreciation of logic, I suspect it would tend to give you overly specific remedies that mask the problem or make new problems.
"You want it to say FizzBuzz when n=15? How about if n=15 print FizzBuzz"
In other words, what can be easily tested for correctness depends very heavily on how strong a grasp the user has on correctness in the domain. A novice without a good bullshit detector could be left worse off than if they had asked a person. It's not clear that the set of problems whose solution can truly be checked easily is all that big or useful. (I think our CS professor friends here are overgeneralizing from intuitions about P vs NP, a totally different context.)
My two cents: it will be mainly useful as a different kind of search engine that can help you remember syntax, what does what, what parameters this thing needs, etc.
[+] [-] sublinear|3 years ago|reply
Seriously though thanks for articulating the tree-like nature of truth. Being an expert requires the ability to understand a topic at all levels in a consistent manner. ChatGPT just models the language, not the actual concepts.
[+] [-] unknown|3 years ago|reply
[deleted]
[+] [-] hnthrowaway0315|3 years ago|reply
[+] [-] Iwan-Zotow|3 years ago|reply
[+] [-] zoba|3 years ago|reply
https://podcasts.apple.com/us/podcast/the-ezra-klein-show/id...
[+] [-] hoppyhoppy2|3 years ago|reply
[+] [-] resource0x|3 years ago|reply
[+] [-] gsatic|3 years ago|reply
There doesn't seem to be a single person anywhere whose isn't caught up in this superficial gilded cage trap. Individuals need to rethink and reset their own compasses or where groups are heading is total meaninglessness.
[+] [-] aatd86|3 years ago|reply
But to comment on the article, although I've encountered several falsities in chatgpt output, I'm still very surprised that it could devise algorithms when I would give him the criteria.
And it would even output some code samples.
It shouldn't be underestimated as a programming aid.
[+] [-] mikewarot|3 years ago|reply
Human attention was too expensive for the training, so they made another AI to simulate the trainers.
Unfortunately, the trainer simulator training didn't have an expressive enough set of options for the actual humans, which kept a lot of quite valuable feedback out of the whole system by reducing things to single ranking. If the person didn't know the correct answer, they'd likely just go based on gut. This in turn lead the training simulator to do the same. This lead ChatGPT to just try to bullshit an answer rather than stating it didn't know, and getting downvoted.
[1] https://www.youtube.com/watch?v=viJt_DXTfwA
[+] [-] unknown|3 years ago|reply
[deleted]
[+] [-] pessimizer|3 years ago|reply
> Babble! is a toy for people who love words.
> Babble! takes samples of text from various sources, analyzes them for style and content, mixes them together in varying proportions, and then...well...babbles. On and on. Endlessly. If you take its samples away, it still babbles. That's what Babble! does. It babbles!
> By mixing up words and ideas, and by finding connections which are not obvious to the naked mind, Babble! is useful as a creative tool and as a cure for writer's block. It can scramble ideas in brainstorming sessions like nobody's business. It will compose advertising copy, overdue marketing plans, and official government reports. It'll generate text in the style of whomever you please for use in school papers, public speeches, and contests in New York Magazine. It can be used to produce brochures, press releases, newsletters, letters to the editor, and letters to John Dvorak. It's great for answering all that pesky Email as well as any other electronic communications, and it's also been used to document source code and write program manuals (like this one, for example).
> We tried it on legal boilerplate, but the stuff that came out sounded just like the stuff that went in!
[+] [-] amelius|3 years ago|reply
[+] [-] amai|3 years ago|reply
https://en.wikipedia.org/wiki/On_Bullshit
https://www.callingbullshit.org/
https://en.wikipedia.org/wiki/Bullshit_Jobs
[+] [-] denkmoon|3 years ago|reply
[+] [-] alexruf|3 years ago|reply
[+] [-] more_corn|3 years ago|reply
[+] [-] lexarflash8g|3 years ago|reply
Can I use an AI tool to tell me how to fix my bike? If I dunno much about bikes an expert has to look at it and tell me what's wrong..
[+] [-] mirekrusin|3 years ago|reply
[+] [-] kilgnad|3 years ago|reply
It generates bullshit because a lot of bullshit was approved during training.
[+] [-] IncRnd|3 years ago|reply
[+] [-] abhv|3 years ago|reply
[+] [-] unknown|3 years ago|reply
[deleted]
[+] [-] nashashmi|3 years ago|reply
[+] [-] joshspankit|3 years ago|reply