top | item 40813239

ExtremisAndy | 1 year ago

Wow, I’ve never thought about that, but you’re right! It really has trained me to be skeptical of what I’m being taught and confirm the veracity of it with multiple sources. A bit time-consuming, of course, but generally a good way to go about educating yourself!

tombert | 1 year ago

I genuinely think that arguing with it has been almost a secret weapon for me in my grad school work. I'll ask it a question about temporal logic or something, and it'll say something that sounds accurate but, after I look through traditional documentation, turns out to be wrong or misleading. I can fight with it and see if it refines its answer into something correct, which I can then check again, etc. I keep doing this for a bunch of iterations and end up with a pretty good understanding of the topic.

I guess at some level this is almost what "prompt engineering" is (though I really hate that term), but I use it as a learning tool and I do think it's been really good at helping me cement concepts in my brain.

ramenbytes | 1 year ago

> I'll ask it a question about temporal logic or something, and it'll say something that sounds accurate but, after I look through traditional documentation, turns out to be wrong or misleading. I can fight with it and see if it refines its answer into something correct, which I can then check again, etc. I keep doing this for a bunch of iterations and end up with a pretty good understanding of the topic.

Interesting, that's basically the process I follow myself when learning without ChatGPT: comparing my mental representation of the thing I'm learning to the existing literature/results, finding the disconnects between the two, reworking my understanding; wash, rinse, repeat.

Viliam1234 | 1 year ago

We were already supposed to use Wikipedia like this, but most people didn't bother and trusted the Wikipedia text uncritically.

Finally, LLMs are teaching us good habits.