krychu | 2 years ago
I always finish up by asking GPT to test my knowledge with a single-choice questionnaire. What I've observed is that the retention of the material is higher compared to "traditional" techniques. Perhaps the conversation style is more immersive, or perhaps focusing on specific knowledge gaps makes for accelerated / personalised learning.
There is of course the problem of accuracy, but I feel it's often overstated. Even when GPT is wrong at times, it often surfaces concepts and relations that paint a better overall picture for me, and lead me to better questions and follow-up actions.
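The end-of-session quiz workflow described above is easy to script. A minimal sketch in Python — the prompt template, function names, and scoring helper are illustrative assumptions, not something from the comment; the actual chat-model call is left out so the structure is clear without an API key:

```python
# Sketch of an end-of-session quiz loop. The prompt wording below is an
# assumption; swap in whatever phrasing works with your model/client.

QUIZ_PROMPT = (
    "We just discussed {topic}. Test my knowledge with a "
    "{n}-question single-choice questionnaire. Ask one question at a "
    "time, wait for my answer, and tell me whether it was correct "
    "before moving on."
)

def build_quiz_prompt(topic: str, n: int = 5) -> str:
    """Build the instruction sent to the model at the end of a session."""
    return QUIZ_PROMPT.format(topic=topic, n=n)

def score(answers: list[str], key: list[str]) -> float:
    """Fraction of single-choice answers that match the answer key."""
    correct = sum(a == k for a, k in zip(answers, key))
    return correct / len(key)
```

The scoring step is what makes the retention claim testable for yourself: track `score` per topic over a few sessions and you can see which gaps close.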
thethimble | 2 years ago
What's missing is the next-level analysis: which topics are most prone to inaccuracy, does the critique loop actually help LLMs overcome those inaccuracies, and do the benefits of LLMs outweigh the consequences of the inaccuracies that remain?