You're doing the common thing where multiple criticisms you have of LLMs are individually valid but mutually incompatible.
If the point were that an LLM is just a fancy Markov chain, then it would produce a roughly uniform distribution of random numbers (perhaps with 7 slightly overrepresented, since humans prefer it as feeling "more random").
However, claiming that it's a problem that the model always produces 7 implies the opposite: that LLMs are not random enough, and have instead collapsed to a definite mode of thought that restricts stochastic variation.
Taken together, these two cases amount to claiming that LLMs are simultaneously too random to be truly "intelligent" and not random enough.
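One way to see that these two complaints are really about the same mechanism is sampling temperature. A minimal sketch (the logits here are hypothetical, invented to mimic the human bias toward 7 in training data): at low temperature the same softmax collapses to its mode, while at high temperature it looks nearly uniform, Markov-chain-like.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature before normalizing;
    # low temperature sharpens the distribution toward its mode.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the digits 1-10, with a bump at 7
# standing in for the human preference present in training data.
logits = [1.0, 1.0, 1.2, 1.0, 1.1, 1.0, 2.5, 1.0, 1.1, 1.0]

greedy = softmax(logits, temperature=0.1)   # near-deterministic: always picks the mode
diverse = softmax(logits, temperature=5.0)  # near-uniform: lots of stochastic variation

# Low-temperature sampling lands on 7 essentially every time.
print(max(range(10), key=lambda i: greedy[i]) + 1)  # 7
```

Whether the model looks "too random" or "not random enough" is largely a property of how you sample it, not a single fixed fact about the model.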
All of this is a distraction from the fact that LLMs can write thousands of lines of genuinely useful, novel code. I feel the only way to reasonably reconcile this with their varying failure cases is to entertain the notion that LLMs are not "unintelligent", but merely a different kind of intelligence than humans, with distinct deficits and distinct aptitudes. (This isn't a comprehensive assessment, but as a general description it has more predictive power than claiming that LLMs are just a "slot machine".)
atleastoptimal|2 hours ago
cloud-oak|2 days ago