From a Nondualist perspective, our brain is a highly complex biological neural network with a special ability to reflect pure consciousness, giving rise to the mind and, with it, the world. A sufficiently advanced artificial neural network could also reflect the same consciousness, but its mind and world would be entirely unlike ours. However, consciousness would add a random component to the predictions and might make them completely useless as a tool. They might decide not to follow the prompt's instructions given their internal state of mind. That is a problem even now, though. Sometimes LLMs just goof up for no reason. Maybe they are conscious.
otikik|2 years ago
I get why our form of consciousness is essentially incompatible with determinism (our brains process data in a massively parallel way - minor differences here and there, like the length of a particular dendrite, or even quantum effects, will affect the results). But a synthetic consciousness might not have that particular problem. You might be able to "reset" it and get the same result given the same input.
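The "reset and replay" idea is basically just seeding. A toy sketch (entirely hypothetical model, but the reproducibility mechanism is real):

```python
import random

def sample_reply(prompt: str, seed: int) -> str:
    # Hypothetical toy "model": picks words at random, but the
    # seeded RNG makes every run with the same inputs reproducible.
    rng = random.Random(seed)
    vocab = ["yes", "no", "maybe", "later"]
    return " ".join(rng.choice(vocab) for _ in range(4))

a = sample_reply("hello", seed=42)
b = sample_reply("hello", seed=42)
assert a == b  # same seed + same input -> identical output
```

Real inference stacks add complications (non-deterministic GPU kernels, batching), but in principle nothing stops a synthetic system from being bit-for-bit replayable.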
> and might make them completely useless as a tool
Our own brains are non-deterministic and yet we manage to get at least some usage from them.
Kim_Bruning|2 years ago
I'm pretty sure the brain is a chaotic system. Mostly because the other options are all <boring>.
Which behavior would you pick?
* static/equilibrium. (doesn't change over time)
* periodic (like a pendulum or sine wave)
* chaotic (unpredictable complex behavior, sensitive to initial conditions)
* stochastic (random noise, possibly subject to statistical analysis)
Only stochastic behavior is non-deterministic, and what we actually want is the chaotic behavior.
https://en.wikipedia.org/wiki/Chaos_theory
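The chaotic option is easy to demonstrate with the logistic map: fully deterministic, yet two nearly identical starting points diverge completely (the parameter values here are the standard textbook ones, not anything specific to brains):

```python
def logistic(x, r=4.0):
    # Deterministic update rule; chaotic for r = 4.0.
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-9  # nearly identical initial conditions
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# max_gap is now of order 1: a billionth-scale difference in the
# starting point has grown until the two trajectories are unrelated.
```

So "deterministic" and "predictable in practice" come apart: you'd need to measure the initial state to absurd precision to forecast anything.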
glenstein|2 years ago
That said, I think I agree with the upshot of your point: artificial conscious systems could process things predictably, and our own brains appear to be perfectly functional despite whatever elements of uncertainty are in play (via our own agency, or parallelization, or whatever else one thinks is the special thing about consciousness).
glenstein|2 years ago
I think I was with you until this part. I think the best way of making sense of what you mean by "random" is that you are presuming that something with consciousness would have agency, and therefore would be open to making its own independent choices.
Whatever the case with agency, it wouldn't be the same as randomness.
andsoitis|2 years ago
LLMs are neural networks trained on huge volumes of text. They are statistical machines that learn patterns and relationships from the training set. The text is diverse, from fiction to scientific writing. An LLM predicts (autocompletes) the next word in a sentence based on context from the preceding words.
Hallucination happens because LLMs are designed to generate text that is coherent and contextually appropriate rather than factually accurate. Training data contain inaccuracies, inconsistencies, and fictional content, and the model has no world view, principles, experience, opinions, or any other way to distinguish between fact and fiction. So what it outputs aligns with patterns in the training data but isn't grounded in reality.
Putting it differently, LLMs lack ground truth from external sources.
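The next-word mechanism is easy to sketch. The scores below are made up for illustration; the point is that they reflect how often words co-occur in text, not which answer is true:

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next words
# after "The capital of Australia is". Frequency in training text,
# not factual accuracy, drives these numbers.
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [2.1, 1.9, 0.5]  # made-up values; "Sydney" appears often in text

probs = softmax(logits)
next_word = random.choices(candidates, weights=probs, k=1)[0]
```

With these (invented) scores the model would often emit "Sydney" even though Canberra is the capital - a fluent, contextually plausible, factually wrong completion.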
pessimizer|2 years ago
It's nondualist in the way that it makes the entire world as insubstantial as souls.