VSerge | 5 months ago

On the topic of "24. A Sony Walkman-style device that you can give to children so they can ask questions to an LLM...", I would strongly caution against this:

- short of AGI, what a child will hear are explanations delivered with authority, which would probably be correct a very high percentage of the time (maybe even close to or above 99%), BUT the few incorrect answers and subtle misconceptions finding their way in there will be catastrophic for the learning journey, because the child will believe them blindly.

- even if you had a perfect answering LLM that never makes a mistake, what's the end result? No need to talk to others to find out about something, i.e. reduced opportunities to learn about cooperating with others.

- as a parent, one sometimes wishes for a moment of rest, but imagine that your kid finds out there's another entity to ask questions of, one that has ready answers all the time, instead of you sometimes saying that you don't know and looking for an answer together. How many bonding moments would be lost? How cut off would your kid become from you? What value system would permeate through the answers?

A key assumption here for any parent equipping their child with such a system is that it would be aligned with their own worldview and value system. For parents on HN, this probably means a fairly science-mediated understanding of the world. But you can bet that in other places, this assistant would very convincingly deliver whatever cultural, political, or religious propaganda their environment requires. This would make for frighteningly powerful brainwashing tools.

ponector|5 months ago

>> child will hear are explanations given with authority, which would probably be correct a very high percentage of the time (maybe even close to or above 99%), BUT the few incorrect answers and subtle misconceptions finding their way in there will be catastrophic for the learning journey because they will be believed blindly by the child.

Much better results than asking a real teacher at school, though.

korse|5 months ago

Disagree with this. Kids are sponges who pick up on many secondary factors when an actual human gives them an answer. These factors add significant weight to their view of the response. In many cases this reaches an extreme where what is said ends up being tertiary to how it was said and who said it. I am sure you've experienced this even as an adult.

An AI walkman removes this aspect of the interaction. As a parent, this is not something I would want my children to use regularly.

VSerge|5 months ago

Wouldn't you know whether a teacher is reliable or not? If reliable, they probably have that reputation partly because they can say when they don't know something. And if you found out a given teacher isn't reliable, you'd be careful about what they say next, or you would just ask someone else.

The problem here is a child thinking this system is reliable when it is not. For now, the lack of reliability is obvious, as ChatGPT hallucinates on a very regular basis. However, this will become much harder to notice if/when ChatGPT becomes almost reliable while still saying wrong things with complete confidence. Should such models become able to reliably say when they don't know something, that would be a big step for this specific objection, but it still wouldn't solve the other problems I mentioned.

93po|5 months ago

the amount of misinformation i picked up as a kid due to a lack of internet is nothing compared to the rare hallucination a kid might get from chatgpt

swallowing gum is bad for you (same for watermelon seeds), cracking knuckles causes arthritis, sitting too close to the tv ruins your eyes, diamonds come from coal, newton's apple story, a million other things

sixtram|5 months ago

Just two days ago, I asked ChatGPT to provide an explanation of the place-value system that my six-year-old could understand. The only problem was that it mixed up digit value and place value, which caused it to become confused. I spotted the mistake, and ChatGPT apologised, as it usually does. But if my six-year-old had asked it first, she wouldn't have noticed.

I'm not sure how much misinformation my child would learn as truth from this device.