bpesquet|1 year ago
They struggled for a while, and the first student who gave the right answer explained how he did it. All morning, he interacted with ChatGPT while following my course, asking it questions each time my own explanations weren't sufficient for him to understand. He managed to give the LLM enough context and information for it to spit out not only the right answer, but also the whole underlying process used to obtain it. In French, what's more ;)
This was an eye-opening, but also somewhat unsettling, experience for me. I don't use ChatGPT & co much for now, so this might seem pretty mundane to some of you. Anyway, I realized that during any lecture or lab, teachers will soon face (or are already facing) augmented students able to check and consolidate their understanding in real time. This is great news for education as a whole, but it certainly interrogates our current teaching model.
throwup737373|1 year ago
My deeper issue with this tech is not its “knowledge”, it’s the illusion of understanding that I am afraid it fosters.
Lots of people will nod and agree when a competent teacher/mentor figure shows them something and walks them through it. They even think they understand. However, when given an actual new problem that they have to solve themselves without help they completely break down.
I am all for shallow learning as a hobby. I love it myself, but I think it is dangerous if we misunderstand the nature of the problem here. Understanding is only partly based on consumption. A significant part of any craft is in the doing.
Take something like calculus. There are mountains of beautifully crafted, extraordinary videos on just about every nuance calculus has to offer, and you can watch them all. That will give you a lot of concepts, and this alone might be worth something, but your time is better spent watching one or two videos and then practicing problems for hours.
My personal impulse in my younger years was to grab videos or books the moment I was stuck. I now recognize how flawed this strategy was. Sure, it was “productive”. I got stuff “done”, but my knowledge was superficial and shallow. I had to make up for it later. By doing, you guessed it, a shit ton of exercises.
One thing I do appreciate is the availability of good quality content nowadays. Something like 3blue1brown is amazing and my university actually recommends watching his videos to supplement and ground your understanding.
No matter how many videos (or LLM podcasts) you consume, though, there is no way around “doing the work”… as some painful questioning by any professional will quickly show you.
tkellogg|1 year ago
But that's why it's critical to engage kids in this. There's a skill in using AI: resisting the urge to take it at its word, yet still using it for what it's good at. You can't build a skill without practice.
andrepd|1 year ago
>interrogates our current teaching model
Jesus, many many things put our current teaching model in question; ChatGPT is NOT one of them. Tbh this excitement is an example of focusing on the "cool new tech" instead of the "unsexy" things that actually matter.
NitpickLawyer|1 year ago
This is a valid point, but it refers to the state of things as of ~1.5 years ago. The field has evolved a lot, and now you can readily augment LLM answers with context in the form of validated, sourced, and "approved" knowledge.
Is it possible that you are having a visceral reaction to the "cool new tech" without yourself having been exposed to the latest state of that tech? To me your answer seems like a knee-jerk reaction to the "AI hype" but if you look at how things evolved over the past year, there's a clear indication that these issues will get ironed out, and the next iterations will be better in every way. I wonder, at that point, where the goalposts will be moved...
maaaaattttt|1 year ago
Not all subjects taught have to evolve in the same way. For example, using ChatGPT to have a technical discussion is very different from simply asking it to generate a text for you. Meaning this tech does not have the same impact in a literature class as it does here in a CS one. It can be misused in both, though.
I always come back to the calculator analogy with LLMs and their current usage. Here in the context of education: before calculators were affordable, simply giving the right answer could have meant that you knew how to calculate it (not entirely true, but the signal was stronger). After calculators, math teachers were clearly saying "I want to see how you came up with the answer or you won't get any points". They didn't solve the problem entirely, but they had to adapt to that "cool new tech", which was clearly not helping their students learn, as it could only give them answers.
juliendorra|1 year ago
I’m firmly convinced that LLMs will have an impact on teaching because they are already being used alongside / superimposed on current classes.
The physical class, the group, has not been dislodged even after hundreds of thousands of remote classes during lockdowns. Students were eager to come back, for many reasons.
LLMs have the potential to enhance and augment the live physical class. At a design school where I teach, we have even proposed a test program for a grant, where my History of Tech Design course will be the in-vivo testing ground for new pedagogical strategies using LLMs.
Tools that graft onto the current way of teaching have had more impact than tools that promise to “replace university/schools”.
bpesquet|1 year ago
I also think that asking a model the right questions while following a lecture, assessing its answers, and integrating them into one's own reasoning is difficult. There is certainly a minimum age/experience level below which this process will generally fail, possibly hindering the learning outcome.
Nevertheless, I saw with my own eyes a mid-level student significantly improving his understanding of a difficult topic because he had access to an LLM in real time. I believe this is a breakthrough. Time will tell.
blitzar|1 year ago
Now do this with words spoken on a date, or in messages, etc. Terrifying.