toisanji | 7 months ago
I made a deep research assistant for families: children can ask it to explain difficult concepts, and parents can ask how to handle any parenting situation. For example, a 4-year-old may ask “why does the plate break when it falls?”
example output: https://www.studyturtle.com/ask/PJ24GoWQ-pizza-sibling-fight...
schmorptron | 7 months ago
Then again, human 1:1 tutoring is the most effective way to learn, isn't it? In the end it'll probably be a balance: reading through texts yourself and researching broadly so you get a sense of the context around whatever you're trying to do, while having a tutor available to walk you through it when you don't get it.
devmor | 7 months ago
I ask because every serious study on using modern generative-AI tools tends to find fairly immediate and measurable deleterious effects on cognitive ability.
diggan | 7 months ago
There are a lot of studies, and I can't say I've read all of them, but in the ones I have read there hasn't been much focus on how the participants used the LLM to learn. My guess is that this has a large effect on the end results. Someone who just asks for the answer and thinks "let's remember this" will get very different results than someone who uses the Socratic method to learn together with an LLM, to give just one example.
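The distinction between the two usage modes above can be made concrete. Here is a minimal sketch assuming an OpenAI-style chat message format; the prompt wording and the `build_messages` helper are hypothetical illustrations, not any particular product's implementation:

```python
# Hypothetical sketch: the "just give me the answer" mode versus a
# Socratic tutoring mode differ only in the system prompt sent to the
# model. Message dicts follow the common chat-API shape (role/content).

DIRECT_SYSTEM = "You are a helpful assistant. Answer the question concisely."

SOCRATIC_SYSTEM = (
    "You are a Socratic tutor. Never state the final answer outright. "
    "Ask one short guiding question at a time that leads the learner "
    "to work out the answer themselves."
)

def build_messages(question: str, socratic: bool = False) -> list[dict]:
    """Return a chat-API message list for the chosen study mode."""
    system = SOCRATIC_SYSTEM if socratic else DIRECT_SYSTEM
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# The user's question is identical; only the system prompt changes
# whether the learner receives an answer or a guided dialogue.
msgs = build_messages("Why does a plate break when it falls?", socratic=True)
print(msgs[0]["content"].startswith("You are a Socratic tutor"))  # True
```

Which mode leads to better retention is exactly the variable the comment above suggests the studies rarely control for.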
toisanji | 7 months ago