
farleykr | 1 year ago

I might be getting overly philosophical here but I'd say it's because they truly don't know anything at all (as opposed to knowing some things but not others). To be able to say "I don't know" you have to first "know" on a deeper level that there is a fundamental true or correct answer to a question and that you are disconnected from it.


pjc50|1 year ago

Well, yes. "AI" skips over all the difficulties and contradictions of philosophy, all the challenges of working out what it means to know something, things like "justified true belief" and so on. It 'just' (!) uses a probabilistic model to emit strings of text. It's basically a super-pundit. It can predict conventional wisdom really well.

But fundamentally it's trapped on the wrong side of a glass jar. It can't kick stones like Samuel Johnson. https://en.wikipedia.org/wiki/Appeal_to_the_stone

farleykr|1 year ago

True, no argument there. What fascinates me more is why people continue to think we can teach a chatbot how to recognize what's true and give us answers that we can't find for ourselves. At best a chatbot is going to be a tool that enables us to gain insights we didn't have before the same way a dictionary can "teach" you words you didn't know before.

I think the idea of using technology to solve life's ultimate conundrums has long since jumped the shark and veered into the area of religious belief. People are literally putting their faith in AI even if they wouldn't use religious vocabulary to label and define it as such.

falcrist|1 year ago

I don't think it's overly philosophical to point out that these are large language models, not truth engines or AGI or knowledge directories. They're not using logic to reason their way to an answer. They're just predicting the next word that would sound like part of a human answer.

farleykr|1 year ago

Fair enough. I think a lot of people are going to end up blindly trusting AI because it's right often enough. But for those who are interested in what it really means to know something, I wonder if this will push people back towards embracing the idea that there is fundamental, objective, knowable truth at the core of the universe even if we can't ever know that truth perfectly.

kennysoona|1 year ago

> They're not using logic to reason their way to an answer. They're just predicting the next word that would sound like part of a human answer.

OpenAI claims recent models are actually reasoning to some extent.

jbreckmckye|1 year ago

They are machines designed to produce a facsimile of knowledge, or at least an approximation. If they refused to answer, that's a failure by the terms of what the product aims to do.

drdrek|1 year ago

You are actually getting overly philosophical. The reason is that one step of chatbot training is fine-tuning the base model to respond with non-answers less frequently. I have not read the article, but the answer is: because they are specifically built not to. It's like asking why so few salesmen end a call with "yeah, it seems our product is not the right solution for you".
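The preference signal that trains refusals out can be caricatured in a few lines. This is an illustrative sketch only: the refusal phrases, scoring weights, and `toy_reward` function are all invented here, standing in for the human ratings or learned reward model used in real preference fine-tuning, where confident attempts tend to rank above "I don't know".

```python
# Caricature of a reward model used during preference fine-tuning.
# All phrases and numbers below are made up for the sketch.
REFUSAL_PHRASES = ("i don't know", "i cannot answer", "as an ai")

def toy_reward(response: str) -> float:
    """Score a candidate response: penalize non-answers heavily,
    mildly reward apparent substance (word count, capped)."""
    text = response.lower()
    score = min(len(text.split()), 50) / 50  # crude substance proxy
    if any(p in text for p in REFUSAL_PHRASES):
        score -= 1.0  # refusals rank below almost any attempt
    return score

candidates = [
    "I don't know.",
    "The capital of Australia is Canberra.",
]
# Training pushes the model toward whatever this signal prefers,
# so the honest non-answer loses.
best = max(candidates, key=toy_reward)
print(best)
```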

sdwr|1 year ago

The only person here who understands that this is a style choice and not a deep limitation of the robot condition.

If a politician has non-answers for difficult questions, does that mean they aren't conscious? If a student writes crap for a test question, aiming for partial marks, were they raised wrong?

sharemywin|1 year ago

So our "knowledge" is based on perception. I know something happened because I saw it with my own two eyes. Everything else is less "knowable".

bdhcuidbebe|1 year ago

That seems to be the conspiracists' groupthink, yeah.

IRL we invented the field of science to avoid such make-believe nonsense.