Just to be clear: it's not for every student (at least not yet!). We are in a research phase, sharing it with a limited subset of users. More details about our approach to the responsible development of AI: https://blog.khanacademy.org/aiguidelines/
It better not be for every student. I am very familiar with Khan Academy, as I am currently guiding a student through several AP courses on it. In my opinion, Khan Academy's time would be better spent fixing the UI for teachers and improving the organization of the physics curriculum.
I would prefer that khan academy not be dragged into some PR fluff piece for some AI shill.
Do you know if/how someone can get involved with this at the research phase? I have a daughter who will be entering 1st grade next year and I'd be interested in having her try this out if there was a way of signing up.
I was a high school English teacher for about ten years before transitioning into tech last July. Has KA explored assistive assessment tools for in-person instruction?
Are the tolerances in education really that tight?
How many terrible teachers are allowed to continue to teach after decades of disastrous results?
How many ideologues with no interest in teaching, but every interest in indoctrinating young minds, are tolerated because the alternative is "no teacher" and the class wouldn't run?
How many teachers who would fail state exams, teach, despite relying on answer sheets to be "competent"?
I agree with you that on the top end of education, this is no replacement and at best a supplementary tool. For the poor kid in a bad neighbourhood whose teacher is more interested in "de-colonizing" mathematics than teaching mathematics, this is a Godsend.
Firstly, the unreliability may not be permanent; AI accuracy may well improve in the future.
Secondly, we need to figure out what works and what doesn't, and we're in the middle of that phase.
Finally, there is no need to be alarmed. Even with hallucinations, the amount of advanced (and reliable) knowledge the AI provides vastly counterbalances the small mistakes. The elitist mindset needs to die; let's give everybody access to advanced knowledge and stop putting it behind unreachable walls.
Without providing context or examples, GPT-4 is already better at answering questions than the average teacher, with the unlimited patience that only a computer can provide.
With context, which Khan Academy has in abundance thanks to its lesson plans and transcripts, accuracy will be higher than even the best teachers and tutors.
Once you give context and known-true facts to best-in-class LLMs like GPT-4, the output is shockingly good.
If you don't know how they are using GPT-4, it's not fair to say it will hallucinate.
As far as I understand, the preferred way to use LLMs nowadays for domain-specific information retrieval is through embeddings that insert the related context into the prompt. GPT-4 is especially good for this since they increased the prompt size by almost an order of magnitude.
This means the model can be given a very specific task: extract information from the context, or decline to provide an answer at all.
The answer doesn't rely on the model's neural memory, since it doesn't need to store the information, just understand the task, and these models are really good at that.
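To make the retrieval idea above concrete, here is a toy sketch of the pattern (embed the lesson snippets, pull the one most similar to the question into the prompt, and instruct the model to answer only from it). This is purely illustrative and is not how Khan Academy actually wires it up: the bag-of-words "embedding" is a stand-in for a real embedding model, and the final prompt would go to an LLM API rather than being printed.

```python
# Toy sketch of embedding-based retrieval: embed snippets, retrieve the one
# closest to the question, and constrain the model to that context.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Stand-in embedding: a bag-of-words count vector. A real system would
    # call an embedding model here instead.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def build_prompt(question: str, snippets: list[str]) -> str:
    # Retrieve the snippet most similar to the question...
    best = max(snippets, key=lambda s: cosine(embed(question), embed(s)))
    # ...and give the model the narrow task described above: answer from
    # the supplied context or decline.
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n"
        f"Context: {best}\n"
        f"Question: {question}"
    )


snippets = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The Pythagorean theorem relates the sides of a right triangle.",
]
print(build_prompt("What does the Pythagorean theorem relate?", snippets))
```

The key design point is the last one the comment makes: the model's weights don't have to store the facts, because the facts arrive in the prompt; the model only has to follow the extraction instruction.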
There are two indicators that this is PR bullshit produced by whoever is trying to capitalize on their investment: 1) the use of the marketing term 'AI' or 'AI-powered', and 2) the use of the 'think of the children' trope.
I mean, that is but one thing to worry about. We've gotten about nowhere with the alignment problem, and we're screaming ahead at full speed with making these things more powerful.
I wonder if this kind of intelligent tutoring could be the answer to Bloom's Two Sigma Problem. The limiting factor with that problem was that not everyone can afford a personal tutor. Having an AI tutor that can breeze through the SAT seems like it should give every student a major boost.
GPT-4 definitely seems to be doing better on a lot of benchmarks and that's impressive. But it still hallucinates facts and I don't think anyone really has a good understanding of when and how that happens. Given that, is it really a good idea to be positioning this model as some kind of factual authority figure?
I've tutored people at times. A good tutor needs to understand the subject very well, so they can not only understand the right answer but also figure out why the student is coming to the wrong answer.
I personally think that assigning GPT as a "tutor" is devaluing the real skill involved in tutoring and I doubt it will work out.
Highly recommend the Neal Stephenson book The Diamond Age: Or, A Young Lady's Illustrated Primer, for an interesting exploration of a custom AI tutor for each student.
In that book, students have a "magic book" that teaches them lessons in story form while encouraging certain life paths. It's pretty fascinating to consider the implications and whether that's something we'll want, because it may soon be possible.
Khan Academy's integration of GPT-4 already steers students towards particular modes of thought. According to the article, they want "to get students thinking deeply about the content that they’re learning" and in the examples, Khanmigo prompts students to recall information from the lesson rather than directly explaining how to solve the problem.
I think what Khan Academy has done here is desirable, but just as Stephenson's Primer enforces its creator's values through AI tutoring, Khanmigo is enforcing Khan Academy's values. It's easy to imagine how someone with more authoritarian values could use this same technology (e.g. to teach students to follow instructions without doubt) to indoctrinate students with scale.
For younger readers, Monica Hughes' classic "Devil on My Back" is more explicitly about a society where those who can interface with the web via computers installed into them become a higher caste and those who cannot become a lower caste, resulting in a revolution. Definitely aimed at a younger audience than Stephenson.
The eventuality I see is everybody talking to each other through LLMs, and the world ending up in some kind of Tower of Babel situation. You wear an AR/MR headset, and all conversation is dual-translated, with some kind of bytecode intermediary. Each person's language evolves to be incompatible with anyone else's, and only your LLM can understand you. The power goes out, and nobody can talk to anybody.
I’m sitting on the couch with my new grandson. He’s six months old. What is school going to look like for him over the course of his education? Should be interesting.
The cynic in me wonders if "AI" has realized that the most efficient way to take over the world is to teach the next generation to be dependent on it for learning.
This was a plot point in an episode of Person of Interest, where the AGI "Samaritan" funds a charity (free tablets for students, Samaritan access pre-installed) to take over education and recruit more mercenaries.
There will probably be a janky but usable open-source version of the model and tool eventually. The "Linux" option. That way the 1% that cares about this stuff can go the self-hosted route and retain control over what the model tells them (hopefully).
They literally announced that they're testing this on a small sample of kids whose parents opt-in. I don't see any better approach, seeing how the sentiments of "untested" and "using kids as guinea pigs" are contradictory.
A lot of what GPT-4 returns is ideas based on assumptions and beliefs. Who chooses which of these to train the system on? Those subtle ideas will then guide students who are mostly children.
I wonder if the students can direct the AI to give them questions similar to what would show up in the tests. Do the teachers use the same AI as the students?
I don't think tests will work at all like that much longer; they'll be more like oral exams, where you have a back-and-forth conversation with an AI that probes the breadth and depth of your knowledge with sufficient variability as to detect overfitting (i.e., memorizing the answers to a few previous exams).
> "It's important that you learn how to do this yourself!"
I find this response by the AI tutor to be unbearably ironic - if there's an AI that could do it better than me, and well enough to teach me, then it seems quite unimportant that I learn to do it myself. While of course I see a benefit to some people learning mathematics from scratch, for advanced research and continuity purposes, at this point I think that saying that it's important for "everyone" to learn math (at least beyond the very basics) is almost equivalent to saying that it's important that everyone learn how to grow grain, weave fabric and mix cement.
[+] [-] yterdy|3 years ago|reply
>Donation required after chosen from waitlist
So, it's for the rich ones?
[+] [-] worrycue|3 years ago|reply
Personally, I'd rather have a more limited but reliable tool than a more powerful but unreliable one.
[+] [-] turtleyacht|3 years ago|reply
Once AI can generate videos, we just need pervasive surveillance capability to insert context-aware content into the stories.
The Primer was able to identify the protagonist's family member and pet toys in its personalized myth.
I wonder which TTRPG sourcebook will have the first credit to ChatGPT in DriveThruRPG :)
Procedurally-generated worlds plus conversational NPCs!
[+] [-] ortusdux|3 years ago|reply
https://enderverse.fandom.com/wiki/Mind_Game
[+] [-] blurbleblurble|3 years ago|reply
Maybe you'd be interested in Paulo Freire's book "Pedagogy of the Oppressed"?
[+] [-] ourmandave|3 years ago|reply
https://techcabal.com/2023/01/31/stack-overflow-chat-gpt/
But on the plus side, this news means there will be fewer do-my-homework-for-me questions.
[+] [-] christophclarke|3 years ago|reply
https://youtu.be/7vsCAM17O-M