andreyandrade | 1 month ago
I see current AIs as tools: a sophisticated lathe, not a thinking partner. The question isn't whether it "knows" anything.
The interesting question is: why does AI with correct information in its weights still give wrong answers? That's an engineering problem, not a metaphysics problem.
But here's what bothers me about the "AI doesn't truly know" argument: do we? When a senior dev answers "use Kubernetes" without asking about team size or user count, are they "comprehending" or pattern-matching on what sounds authoritative? The AI failure I described is identical to what I see in human experts daily.
Maybe the flaw isn't unique to AI. Maybe it's a mirror.