item 35783158

GPT makes learning fun again

225 points | vipshek | 2 years ago | vipshek.com

237 comments

[+] devjab|2 years ago|reply
I'm in two minds about it. On one hand the internet (in general) has become so hard to find information on, and I agree with the author that GPT is a breath of fresh air. On the other hand I've seen GPT fail so miserably at topics that I'm knowledgeable about that I have a very hard time trusting anything it tells me. I'm not sure what the answer really is, but I'm not sure it's GPT. I wish we could go back to having search engines that actually led to useful information and not just advertisements, and I wish we had a GPT that would not tell lies.

I doubt either of those wishes are going to come true though. Search engines are likely always going to be SEO'ed into uselessness and GPT isn't intentionally telling lies.

[+] LASR|2 years ago|reply
I want to use this comment to state one thing, not directly addressed to you.

Stop using GPT as a database! GPT is far more useful as a reasoning engine that can take in fuzzy data and then provide various views or transformations of that data.

Asking GPT to parse a Wikipedia page and then asking it to teach you from it is a much more successful usage than what the author of the original article is doing.

It is not useful as an accurate source of information. It's inaccurate sometimes, and it's hard to tell when. OTOH, as a transformer of information, it has some actual world-changing potential.

[+] simonw|2 years ago|reply
"I'm not sure what the answer really is, but I'm not sure it's GPT."

My suggestion is to stick with it and get a feel for what it's good at.

I've found that after a few months of using ChatGPT every day I've developed a pretty solid intuition for which questions are likely to get good answers and which are likely to trigger hallucinations.

It's difficult to describe what those intuitions are though!

One rule of thumb I've developed: if something is likely to be "common knowledge" - if it's something that is likely to have been discussed accurately on the internet by many different people - then ChatGPT is very likely to answer questions about it accurately.

[+] gumballindie|2 years ago|reply
> I wish we could go back to having search engines that actually led to useful information and not just advertisements, and I wish we had a GPT that would not tell lies.

This ^. Prompting Google is much more intuitive than prompting a chat bot. Also, results are instantly available and you get more options to choose from. You can also filter out information much more easily, instead of having it summarised by a closed box that decides what's best for you.

[+] somethoughts|2 years ago|reply
As a follow-on to those thoughts, I feel like ChatGPT is in a phase similar to that fleeting moment when search engines were maximally usable - before SEO was a thing and before Google "needed" to turn on the profitability spigot.

It's unclear to me how long we'll have before LLM Engine Optimization is a thing and OpenAI/MSFT "need" to turn on their LLM profitability spigot, and what ChatGPT will look like then.

That said, I'm curious whether LLMs are inherently more challenging to game than search engines.

[+] Gigachad|2 years ago|reply
I've been using ChatGPT as a brainstorming tool rather than just relying on it for specific examples. Stuff like asking it for some ideas for things to learn on a topic.
[+] forrestthewoods|2 years ago|reply
> On the other hand I've seen GPT fail so miserably at topics that I'm knowledge about that I have a very hard time trusting anything it tells me.

ChatGPT’s current super power is helping people get from 0-to-1 on a new topic. In particular if that topic is adjacent to, or in a different niche from, your expertise.

It’s not currently amazing at taking someone from intermediate to advanced knowledge.

At least in my experience. If I’m using a new library/framework/API for the first time it’s amazing at answering the endless newbie questions I have.

[+] lcuff|2 years ago|reply
I'm curious about what topics you have knowledge of where it's failed. Does it seem like there's a pattern to the failures? I've been using GPT for coding help, and it is very helpful in Ruby and bash, though it often delivers buggy software: badly handled non-happy-path conditions, mostly, which it may fix when I tell it to handle the case. It's a huge help for finding gems and showing me standard Ruby library syntax. On the other hand, it's been useless when I try to get it to write AppleScript for me. I believe that says more about AppleScript than about GPT. Sigh.
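
To make the "non-happy-path" failure mode concrete, here's a minimal Python sketch of the pattern (the functions are illustrative, not actual GPT output): a typical first answer that only covers the happy path, and the hardened version you get after pointing out the edge case.

```python
# Illustrative example of the "badly handled non-happy-path" pattern.
# A typical first answer covers only the happy path:
def average_happy(values):
    return sum(values) / len(values)  # raises ZeroDivisionError on an empty list

# After being told to handle the edge case, a hardened version:
def average(values):
    if not values:
        return None  # empty input: no average to report
    return sum(values) / len(values)
```

The bug is silent until the unusual input actually shows up, which is exactly why it's easy to miss when skimming generated code.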
[+] thsbrown|2 years ago|reply
Just out of curiosity, are you using GPT-4 or 3.5?
[+] userbinator|2 years ago|reply
It's become hard, and will become harder, precisely because of things like GPT (unintentionally) spreading misinformation. I'm against censorship in general, and this is no exception, but I do hope it leads to people becoming more critical of what they consume. However, I suspect that instead we will unfortunately see the continued cycle of creating large masses of people "educated" on such widely-propagated half-truths, while only a tiny subset of the population will have the actual truth.
[+] underdeserver|2 years ago|reply
Gell-Mann amnesia, in AI form. Wonderful! It's like learning a complex topic from reading the newspaper.
[+] chazeon|2 years ago|reply
I really found asking GPT to put a math concept under a physics context very helpful for me. As a computational physics student, that is really how I understand math and the world. Only very few textbooks can help me with this.

From [Make Something Wonderful: Steve Jobs in His Own Words][1], Steve once said in an interview in 1983:

> The problem was, you can't ask Aristotle a question. And I think, as we look towards the next fifty to one hundred years, if we really can come up with these machines that can capture an underlying spirit, or an underlying set of principles, or an underlying way of looking at the world, then, when the next Aristotle comes around, maybe if he carries around one of these machines with him his whole life–his or her whole life–and types in all this stuff, then maybe someday, after this person's dead and gone, we can ask this machine, “Hey, what would Aristotle have said? What about this?” And maybe we won't get the right answer, but maybe we will. And that's really exciting to me. And that's one of the reasons I'm doing what I'm doing.

And this future, expected in "the next fifty to one hundred years", is somewhat here already.

[1]: https://book.stevejobsarchive.com/

[+] waterhouse|2 years ago|reply
You might like the quote from Vladimir Arnold: "Mathematics is a part of physics. Physics is an experimental science, a part of natural science. Mathematics is the part of physics where experiments are cheap."
[+] seb1204|2 years ago|reply
How do you know what ChatGPT tells you is correct?
[+] krychu|2 years ago|reply
I've been using GPT to have (insightful) educational conversations about Quake 1 source code: https://twitter.com/krychusamp/status/1649048047996014595

I always finish up by asking GPT to test my knowledge with a single-choice questionnaire. What I've observed is that the retention of the material is higher compared to "traditional" techniques. Perhaps the conversation style is more immersive, or perhaps focusing on specific knowledge gaps makes for accelerated / personalised learning.

There is of course the problem of accuracy, but I feel like it's often over-stated. Even if GPT is not correct at times, it often uncovers concepts and relations that paint a better overall picture for me, and lead me to better questions and follow up actions.
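
The closing self-test can even be scripted around GPT's output. A minimal Python sketch of the idea (the question bank here is a hypothetical stand-in for whatever GPT actually generates in the conversation):

```python
# Hypothetical stand-in for a GPT-generated single-choice questionnaire.
QUIZ = [
    {"q": "Which file drives Quake 1's per-frame logic?",
     "choices": ["host.c", "draw.c"], "answer": 0},
    {"q": "What does WinQuake use for rendering?",
     "choices": ["OpenGL", "software rendering"], "answer": 1},
]

def grade(quiz, responses):
    """Return the fraction of single-choice answers that were correct."""
    correct = sum(1 for item, r in zip(quiz, responses) if r == item["answer"])
    return correct / len(quiz)
```

Tracking which questions you miss over a few sessions is one way to make the "focus on specific knowledge gaps" effect measurable.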

[+] thethimble|2 years ago|reply
Agreed - it seems that calling out LLM accuracy is a meme here - hyperbolically: “because LLMs can be inaccurate they are useless”.

There seems to be less next-level analysis: which topics are more prone to inaccuracy, does the critique loop actually help LLMs overcome those inaccuracies, and do the benefits of LLMs outweigh the consequences of these inaccuracies?

[+] homieg33|2 years ago|reply
It’s nice to be able to ask ChatGPT a half baked, poorly researched, poorly worded question with bad grammar yet get a totally good faith response back that’s a springboard for follow up questions. Whereas if you did the same thing on any stack exchange site you get downvotes and comments like “please read the guidelines and edit your question.”
[+] pavlov|2 years ago|reply
Kindness and patience have always been in short supply on the public internet, but AI can simulate them in infinite amounts.

That’s a positive thing about this generative AI revolution that I haven’t really thought about in those terms until now.

[+] thsbrown|2 years ago|reply
Completely agree. ChatGPT can be an incredible tool for getting a lay of the land on a subject or topic you don't know much about.

On that note, search in that regard always reminded me of those times where you ask a teacher how to spell a word and they say to look it up in the dictionary.

[+] brokencode|2 years ago|reply
I noticed a while back that the internet has made me terribly prone to skimming. It got to a point where I could hardly stand to read anything longer than a short news article.

To learn anything useful on the internet, you pretty much have to skim. So much of the internet is so loaded with filler and BS that it is hardly worth reading at all.

With ChatGPT, it’s incredibly refreshing to be able to ask a question and get nothing other than a concise answer. No skimming required. I feel so much more focused and better able to learn this way.

[+] kccqzy|2 years ago|reply
You can downvote me but I think OP hasn't learned how to learn yet. If OP reads Wikipedia by understanding every sentence and clicking on every link, that's deliberately sabotaging his own learning. Attitudes like "I don't really feel like spelunking through a ton more articles" simply shows OP isn't interested in learning per se, just quick answers.

OP learns in a way that's very child-like. When you are a five-year-old it's okay to learn by asking everything. That stops being acceptable by the age of fifteen. OP hasn't learned any research skills yet, and when OP's needs inevitably exhaust the ability of LLMs, OP will be utterly unable to read an encyclopedia or a research paper or perhaps a textbook.

[+] vipshek|2 years ago|reply
OP here. I think learning exists on a broad spectrum. On one end, you're just indulging curiosity ("I wonder how...?"). On the other, you're trying to build deep understanding and expertise.

I completely agree that for the latter goal, the approaches in the blog post are insufficient, even undesirable. And I do worry that the way I engage with content on the web is weakening my ability to go deep on a subject I'm interested in.

But I do think there is value in just being able to indulge curiosity quickly and consistently. Not only is it rewarding in its own right, but it also provides the spark that leads you to eventually go deeper.

Lately, I've found myself sitting at a laptop with friends, asking GPT a question, reading and discussing the response, and then coming up with and asking followup questions as a group. I don't think we would've done that in the past, because the interface of search engines and webpages and browser tabs was too unwieldy to engage with collectively. It just feels like a completely new way to learn things, and that's what I'm most excited about.

[+] tux3|2 years ago|reply
I notice that this contains only criticism and comparison to children, without offering a better way to learn.

If OP reads your comment, they will be no better at learning than they were before. In that way, it's a pretty unhelpful comment.

[+] lIl-IIIl|2 years ago|reply
The problem is Wikipedia. OP's approach is perfectly natural. Textbooks are designed for OP's approach because that's how people learn.

If I want to learn about topic C which requires knowledge of topics A and B, but C can also be generalized to concepts X and Y, it will be very hard to learn from Wikipedia.

If I don't know how to add numbers and look up "sum" on Wikipedia, in the second sentence I learn that summing is used for functions, vectors, matrices, and other things I don't know about. This is a cool feature and I love it for exploring but hate it for learning things that require a few layers of concepts to get.

Textbooks do the opposite and are awesome. An electronics textbook will take you step by step through all the concepts to get to LEDs, without "forward references" to the concepts you haven't learned yet.

The "problem" with textbooks is that they take a while to get to the destination. LEDs might be in chapter 15 and you may not want to spend a few months going through chapters 1-14. You don't know what you will need to understand chapter 15.

But you can perhaps work backward - you are guaranteed that any unfamiliar concept introduced in chapter 15 will be covered in chapters 1-14, and that there is no rabbit hole.

ChatGPT or a personal tutor can shortcut this by giving you just the "narrow path" of knowledge to understand the concept that you want to learn.

[+] WA|2 years ago|reply
I agree, especially if you consider these were the questions on OP's mind:

> just out of curiosity, I wanted to learn more. I get that LEDs consume less energy and release less heat, and that they're made using semiconductors. But what kinds of semiconductors? How do semiconductors work in general, anyway?

And they proceed to type "LED" into Google. Why not "led what kind of semiconductor" and "how do semiconductors work in leds"?

I assume OP didn't write "LED" in the ChatGPT text box without any context either.

[+] thsbrown|2 years ago|reply
Just out of curiosity, what would you say is the optimal way to learn from an encyclopedia, a research paper, or Wikipedia?
[+] seb1204|2 years ago|reply
Not that harsh, but I do think that googling, as well as ChatGPT, will only yield a superficial understanding of topics. For a deeper, profound understanding (connections, complexities, etc.), a different, more holistic kind of learning is required. Not sure how else to explain this.
[+] wrycoder|2 years ago|reply
Asking simple questions is a great way to start, it doesn’t matter how old you are. I think Feynman and Schrödinger would agree. Certainly de Broglie would. You can go down as deeply as you like.

Right now, if the level involves advanced math, it’s better to switch to other sources at some point, but that will change.

You can ask GPT-4 to tutor you, also.

[+] papandada|2 years ago|reply
I've learned almost everything I know through reading and listening, with very little discourse. I rarely asked questions in class, never had tutors, never went to office hours. I hesitate to post questions online. If I don't understand something, I just read ... more, or bang my head against it as trial and error.

I think this is partly why I'm still looking to be wowed by this technology, personally, in terms of what it can accomplish for me. And while it could be rightly said I've made things unnecessarily hard for myself approaching life like this, I feel it has been beneficial, and enriching, to force myself to really ask, what is this person saying here? In particular, I wouldn't want GPT to lead to a general lessening of empathy.

[+] renewiltord|2 years ago|reply
Shortly before GPT and friends burst on the scene, I was looking for a website which would meet me where I am as an engineer - I've written reams of code in various languages.

If I want to try Rust, I don't want to be taught uint8 v uint16 or that you can shadow variables. I want to know the interesting parts.

ChatGPT is pretty good at this and the other thing I want: pandas training. You can ask it to generate exercises at any difficulty and also provide test data!

This tool is the biggest mind expander for me since search engines.

[+] emrah|2 years ago|reply
I agree, Google has deteriorated so much. It either just points to doc pages and leaves all the work up to you, or you get (often outdated, from 2011) pages from Stack Overflow. I need to do all the filtering and stitching myself.

I tend to get better answers from their "automated questions" which are paraphrased versions of my query. So it clearly understands what I'm after.

In order to promote diversity, I would recommend perplexity.ai, which offers a similar experience to ChatGPT (I'm not affiliated and I have no clue what their tech stack is like). It also offers links back to pages, follow-up questions, etc. Highly recommended if you need to learn something new and don't want to bang your head on the keyboard googling or ddg'ing.

I'll give an example. I recently needed to learn about k8s, minikube, kubectl, et al. for a project. I had some vague idea about the tech but nowhere near enough for what I needed to do. Google was useless because it kept taking me to doc pages, which is like being lectured when I needed specific information. Perplexity was amazing at helping me with the right bit of information, example code, AND links if I do want to read further.

[+] kumarvvr|2 years ago|reply
This is so ridiculous.

GPT is like that "know-it-all" friend we have who just has something to say about anything, with knowledge skimmed from the internet.

GPT is a language model. It outputs what you want to hear, not what is correct.

[+] rapsacnz|2 years ago|reply
Don't forget that GPT was trained on data from all the sources that failed you. So if we all collectively neglect them, and they fall over, we will lose many valuable resources.

I think we need to think about how to keep these valuable sites going, because they are ultimately providing most of the value of the various available language models.

[+] wrycoder|2 years ago|reply
The sources didn’t fail. The point is, especially at the beginning, it’s easy to miss seeing the forest for all the trees.
[+] enoch2090|2 years ago|reply
Now whenever I need to use a fancy new package that I've never used before, I use LangChain to collect all the documents from the package's documentation site, load them into a vector DB, and start asking GPT questions. This method works about 80% of the time. One pitfall is that this way I only get what I want; I don't get the deep understanding of the package I used to get from carefully reading the documents. Still finding a balancing point in between.
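
The retrieve-then-ask pattern described above can be sketched in a few lines of plain Python. This is a toy sketch only: the real pipeline uses LangChain, an embedding model, and a vector DB, while here crude word-overlap scoring stands in for vector similarity, and the doc snippets are hypothetical.

```python
# Toy sketch of retrieval-augmented questioning. A real setup would use
# LangChain loaders, embeddings, and a vector DB; word overlap stands in
# for embedding similarity here.
def score(query, chunk):
    """Crude relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query, chunks, k=2):
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

# Hypothetical documentation snippets scraped from a package's doc site.
docs = [
    "install the package with pip install example-pkg",
    "the Client class opens a connection to the server",
    "retries are configured via the max_retries argument",
]

context = retrieve("how do I configure retries", docs, k=1)
# `context` would then be pasted into the GPT prompt as grounding text.
```

The quality of the answers hinges almost entirely on whether the right chunks come back from `retrieve`, which is why real setups invest in proper embeddings and chunking.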
[+] grrdotcloud|2 years ago|reply
Imagine being able to ask questions and get answers back based upon understanding, and not upon a curriculum or agenda. I have found this method to allow me to consume information much faster while skipping over the often tedious topics.
[+] kerkeslager|2 years ago|reply
Do you think that "tediousness" is a good indicator of whether a topic is important?
[+] alex-moon|2 years ago|reply
So many comments on this shovelling sh*t on OP...? I agree with the author entirely. In practically every field, the reason it's difficult to learn anything starting from zero is that you don't know what questions to ask. You need to find an in. ChatGPT is really good at this: the answers it gives provide an idea of what you don't know you don't know. Obviously, you then need to go verify the answers, but the difference is: now you know what the language you need to use to do so looks like.

PS you would also need to do this if you started with Wikipedia as well.

[+] thealig|2 years ago|reply
Finding out what you don't know, in order to start learning, is also a part of learning, and it is a bit difficult to trust LLMs like ChatGPT when they have a tendency to spout correct-sounding answers that are incorrect, inaccurate, etc.

Search engines are much better at uncovering guides on that, especially from experts and verified sources. It's a bit of work to verify them, but then that's part of learning itself; I'm not sure trying to punt on that to fast-track your learning is going to make a meaningful difference in terms of time.

[+] feintruled|2 years ago|reply
I find this too. With GPT there is no such thing as a stupid question. That is really liberating! You can really double check the simplest and most basic of your assumptions with no fear of judgement. And as mentioned in other comments, the assurance of receiving a good faith answer is not to be underestimated.
[+] DotaFan|2 years ago|reply
It is very helpful to learn new stuff indeed, I am personally using https://www.phind.com.
[+] an_aparallel|2 years ago|reply
I'm using Phind too - phinding it pretty incredible. As someone learning development, it has helped me with so many of the annoyances of learning dev environments, like:

- setting up a venv
- environment variable issues in Windows
- diagnosing a UTF-8 issue in Windows

I get that professionals' problems would be harder to answer. However, getting responses without wading through Stack Exchange entry after entry has really kept me focused, and prevented the oftentimes frustrating recursive spiral that is getting an issue with your issue's issue...

[+] pulvinar|2 years ago|reply
I like how it checks its facts (mostly, anyway).

It didn't give a wrong answer when asked "Is there a digital to analog converter with an 8V analog range and serial input?", which another poster (mhb) had shown to trip up plain GPT4.

[+] noduerme|2 years ago|reply
Why learn? The best part about LLMs is you and your children will never need to know anything or think anything ever again.
[+] JohnFen|2 years ago|reply
Here's the aspect of this article that I found the most fascinating, and might explain why I don't get the same level of benefit from ChatGPT as many report.

If I were learning how LEDs work, I would not have wasted any time whatsoever on the search results that the author spent a lot of time on. They were obviously (to me) the wrong articles on the face of it, because they were covering aspects that weren't really what I was looking for (the wrong sort of detail and emphasis).

So I think I would have been off and running pretty much immediately with the web search results rather than spending time on the clear dead ends.

ChatGPT gets me there too, after enough back-and-forth, but it takes longer for me to zero in on what I'm looking for.

I say this not to say that ChatGPT is in any way bad for this. I'm just noticing a difference in how the two of us engage in learning new topics. Perhaps the reality is that for some people, ChatGPT is a godsend, and for others, it's fine... but hardly an improvement for this use case.

It would explain a lot.

(Also, when did learning stop being fun??)

[+] andai|2 years ago|reply
Most people seem to prefer learning by talking and asking questions. At least, that's what I've gathered from Discord servers where 99.9% of people ask questions answered on the first page of the Readme ;)
[+] kweingar|2 years ago|reply
ChatGPT can be a great supplement for independent research. But when the article mentions a “curious seventh-grader”, I think we should focus on getting them quality human instruction whenever possible instead of just pointing them to ChatGPT.

ChatGPT addresses a scalability problem: not everyone has access to a tutor or can just call up a teacher or mentor to learn and ask questions. But some in the tech industry claim that ChatGPT is as good as or even better than human instruction, which to me seems totally off base.

The biggest problem I see in using LLMs as a teacher-substitute is that LLMs answer the questions you ask, whereas a good teacher tells you what you need to hear. Maybe this is solvable with specialized model tuning, but we need to actually solve it before telling kids that the best way to learn is to talk to the computer.