top | item 40981022

obastani | 1 year ago

This is exactly the problem we found in our research on generative AI for education [1]. We ran a pilot at a large high school in collaboration with math teachers, and found that students mostly copied answers from ChatGPT, performing worse than students who weren't given ChatGPT. If students don't want to learn, ChatGPT isn't going to fix anything.

[1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4895486

verdverm | 1 year ago

You are just giving them ChatGPT with a bit of prompt engineering, and evaluating them on math problems, which we know LLMs make errors on because they are not calculators. You aren't putting in the effort needed to build a real tutor and learning assistant, so I would not extrapolate from these results.

There are also a lot of things that can come before building a full-on tutor. One example is tailoring word problems (transforming the nouns) to subjects that interest the particular student. LLMs could also be used to help understand where students are struggling. We are still in the early phases of useful AI, and optimism is more appreciated, especially as contemporary times have become so pessimistic.
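The noun-tailoring idea can be sketched without any LLM at all: keep the arithmetic fixed and swap the surface nouns to match a student's stated interest. Everything below (the template, the interest table, the function name) is illustrative, not from the study or any real tutoring product; in practice an LLM would do the rewording more flexibly.

```python
# Minimal sketch: same math problem, nouns tailored to a student's interest.
# The template and interest-to-noun mapping are made-up examples.

PROBLEM_TEMPLATE = (
    "{name} has {a} {item}s. A friend gives {name} {b} more {item}s. "
    "How many {item}s does {name} have now?"
)

INTEREST_NOUNS = {
    "space": {"name": "Mira", "item": "meteorite"},
    "soccer": {"name": "Leo", "item": "trading card"},
}

def tailor_problem(interest: str, a: int, b: int) -> str:
    """Render the same arithmetic problem with interest-specific nouns."""
    nouns = INTEREST_NOUNS.get(interest, {"name": "Alex", "item": "marble"})
    return PROBLEM_TEMPLATE.format(a=a, b=b, **nouns)

print(tailor_problem("space", 3, 4))
```

An LLM version would replace the lookup table with a prompt like "rewrite this word problem about {interest} without changing the numbers or the operation", which is where the flexibility (and the error risk the parent comment worries about) comes in.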

Sal Khan provides a more optimistic take and demo: https://www.youtube.com/watch?v=hJP5GqnTrNo