krychu | 2 years ago

I've been using GPT to have (insightful) educational conversations about Quake 1 source code: https://twitter.com/krychusamp/status/1649048047996014595

I always finish up by asking GPT to test my knowledge with a single-choice questionnaire. What I've observed is that the retention of the material is higher compared to "traditional" techniques. Perhaps the conversation style is more immersive, or perhaps focusing on specific knowledge gaps makes for accelerated / personalised learning.

There is of course the problem of accuracy, but I feel it's often overstated. Even when GPT is wrong at times, it often uncovers concepts and relations that paint a better overall picture for me, and it leads me to better questions and follow-up actions.

thethimble | 2 years ago

Agreed - calling out LLM accuracy has become a meme here. The hyperbolic version: "because LLMs can be inaccurate, they are useless".

There's far less next-level analysis: which topics are most prone to inaccuracy? Does a critique loop actually help LLMs overcome those inaccuracies? And do the benefits of LLMs outweigh the consequences of the inaccuracies that remain?