top | item 41727883


bpesquet | 1 year ago

I'm a CS/AI teacher in an engineering school. A few days ago, towards the end of my course on convolutional neural networks, I asked my students to explain why tha first linear layer of the example PyTorch network had a specific number of neurons. This is a non-trivial question whose answer isn't directly available online (it depends on the input dimensions and the nature of all previous layers).
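For illustration, here is a minimal sketch of that computation with a made-up architecture (not the one from the course): the number of inputs to the first linear layer is the flattened output size of the last convolutional stage, which you can trace layer by layer with PyTorch's output-size formula.

```python
def out_size(size, kernel, stride=1, padding=0):
    # Spatial output size of a Conv2d / MaxPool2d layer
    # (PyTorch's formula, with floor division):
    return (size + 2 * padding - kernel) // stride + 1

# Hypothetical network: 1x28x28 input, two conv(3x3) + maxpool(2x2)
# stages, ending with 32 channels.
s = 28
s = out_size(s, 3)            # conv 3x3, no padding -> 26
s = out_size(s, 2, stride=2)  # maxpool 2x2          -> 13
s = out_size(s, 3)            # conv 3x3              -> 11
s = out_size(s, 2, stride=2)  # maxpool 2x2           -> 5

n_inputs = 32 * s * s  # channels * height * width
print(n_inputs)        # 800, so the first linear layer is nn.Linear(800, ...)
```

This is exactly the chain of reasoning the question tests: the answer (800 here) follows mechanically from the input size and every preceding layer, but cannot be looked up directly.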

They struggled for a while, and the first student who gave the right answer explained how he did it. All morning, he had been interacting with ChatGPT while following my course, asking it questions each time my own explanations weren't sufficient for him to understand. He managed to give the LLM enough context and information for it to spit out not only the right answer, but also the whole underlying process for obtaining it. In French, no less ;)

This was an eye-opening, but also somewhat unsettling, experience for me. I don't use ChatGPT & co much for now, so this might seem pretty mundane to some of you. Anyway, I realized that during any lecture or lab, teachers will soon face (or are already facing) augmented students able to check and consolidate their understanding in real time. This is great news for education as a whole, but it certainly calls our current teaching model into question.


jeroenhd|1 year ago

The fun side of ChatGPT is that if you probe it for information like this, it'll also generate complete fantasy. Without an expert to consult, the generated explanation may as well conclude the earth is flat and the sky is green.

terminalcommand|1 year ago

No, this is not accurate in my trials. I use Claude.ai daily. If you ask questions on niche topics or dive down too deep, it says that resources on the topic are limited and you should consult a book.

jasim|1 year ago

I'm curious to hear more about this. I've seen very little hallucination with mainstream LLMs when the conversation revolves around concepts that are well represented in the training data. Most educational topics have thus been quite solid. Even asking for novel analogies between distant, unrelated topics seems to work well.

throwup737373|1 year ago

I might be misunderstanding you, but the question you posed is all over the internet. First try, first page. It does not surprise me an LLM can “help” here.

My deeper issue with this tech is not its “knowledge”, it’s the illusion of understanding that I am afraid it fosters.

Lots of people will nod and agree when a competent teacher/mentor figure shows them something and walks them through it. They even think they understand. However, when given an actual new problem that they have to solve themselves without help they completely break down.

I am all for shallow learning as a hobby; I love it myself. But I think it is dangerous if we misunderstand the nature of the problem here. Understanding is only partly based on consumption. A significant part of any craft is in the doing.

Take something like calculus. There are mountains of beautifully crafted, extraordinary videos on just about every nuance calculus has to offer, and you can watch them all. They will give you a lot of concepts, and that alone might be worth something, but your time is better spent watching one or two videos and then practicing problems for hours.

In my younger years, my personal impulse was to reach for videos or books the moment I was stuck. I now recognize how flawed this strategy was. Sure, it was "productive". I got stuff "done", but my knowledge was superficial. I had to make up for it later. By doing, you guessed it, a shit ton of exercises.

One thing I do appreciate is the availability of good quality content nowadays. Something like 3blue1brown is amazing and my university actually recommends watching his videos to supplement and ground your understanding.

No matter how many videos (or LLM podcasts) you consume, though, there is no way around "doing the work", as some painful questioning by any professional will quickly show you.

tkellogg|1 year ago

OP here: I definitely agree that shallow learning is an issue, and that its effect is intoxicating. I've done it a few times: spent a few minutes learning a new topic, only to realize when I put it into practice that I'd been lied to.

But that's why it's critical to engage kids in this. There's a skill to using AI: resisting the urge to take it at its word while still using it for what it's good at. You can't build a skill without practice.

andrepd|1 year ago

"Check and consolidate their understanding" by reading generated text that is not checked and has the same confident tone whether it's completely made-up or actually correct? I don't get it.

>interrogates our current teaching model

Jesus, many many things put our current teaching model in question, chatgpt is NOT one of them. Tbh this excitement is an example of focusing on the "cool new tech" instead of the "unsexy" things that actually matter.

NitpickLawyer|1 year ago

> by reading generated text that is not checked and has the same confident tone whether it's completely made-up or actually correct? I don't get it.

This is a valid point, but it refers to the state of things as of ~1.5 years ago. The field has evolved a lot, and now you can readily augment LLM answers with context in the form of validated, sourced, and "approved" knowledge.

Is it possible that you are having a visceral reaction to the "cool new tech" without yourself having been exposed to the latest state of that tech? To me your answer seems like a knee-jerk reaction to the "AI hype" but if you look at how things evolved over the past year, there's a clear indication that these issues will get ironed out, and the next iterations will be better in every way. I wonder, at that point, where the goalposts will be moved...

KoolKat23|1 year ago

The student isn't an idiot: they'd use what the teacher says as their ground truth, and ChatGPT would be used to supplement their understanding. If it's wrong, they didn't understand it anyway, and reasoning/logic would allow them to suss out any incorrect information along the way. The teaching model can account for this by providing the checks to ensure their explanation/understanding is correct. (This is what tests are for: to check your understanding.)

maaaaattttt|1 year ago

The fact here is that a student, using ChatGPT, managed to give the right answer. And I agree with GP that the teaching model must evolve. The cat is out of the bag now, and students of (unfortunately) almost all ages are clearly using it. Whether it's "cool new tech" or anything else doesn't matter; as teachers, we must not dismiss or ignore it.

Not all subjects taught have to evolve in the same way. For example, it is very different to use ChatGPT to have a technical discussion than to simply ask it to generate a text for you. Meaning this tech is not having the same impact in a literature class and here in a CS one. It can be misused in both though.

I always come back to the calculator analogy with LLMs and their current usage. In the context of education, before calculators were affordable, simply giving the right answer could have meant that you knew how to calculate it (not entirely true, but the signal was stronger). After calculators, math teachers were clearly saying "I want to see how you came up with the answer or you won't get any points". They didn't solve the problem entirely, but they had to adapt to that "cool new tech", which was clearly not helping their students learn, as it could only give them answers.

juliendorra|1 year ago

I don’t know if you have been teaching, but I have (for nearly 19 years now), to a lot of different people of various ages. I’m also a daily user of LLMs.

I’m firmly convinced that LLMs will have an impact on teaching because they are already being used in addition to, superimposed on, current classes.

The physical class, the group, was not dislodged even after hundreds of thousands of remote classes during lockdown. Students were eager to come back, for many reasons.

LLMs have the potential to enhance and augment the live physical class. At a design school where I teach, we have even proposed a test program for a grant, in which my History of Tech Design course will be the in-vivo testing ground for new pedagogical strategies using LLMs.

Tools that graft onto the current way of teaching have had more impact than tools that promise to "replace university/schools".

bpesquet|1 year ago

I'm no LLM fanboy and I do know about their issues and shortcomings.

I also think that asking the right questions to a model while following a lecture, assessing its answers and integrating them into one's own reasoning is difficult. There is certainly a minimum age/experience level under which this process will generally fail, possibly hindering the learning outcome.

Nevertheless, I saw with my own eyes a mid-level student significantly improve his understanding of a difficult topic because he had access to an LLM in real time. I believe this is a breakthrough. Time will tell.

blitzar|1 year ago

> "Check and consolidate their understanding"

Now do this with words spoken on a date or messages etc. Terrifying