No, it's completely useless, and puts the entire rest of the analysis in a bad light.
numeri|9 months ago
LLMs have next to no understanding of their own internal processes. There's a significant amount of research demonstrating this. All explanations of an internal thought process from an LLM are reverse engineered to fit the final answer (interestingly, humans are also prone to this, as seen especially in split-brain experiments).
In addition, the degree to which the author must have prompted the LLM to get it to anthropomorphize this hard makes the rest of the project suspect. How many of the results come from repeated human prompting until the author liked them, and how many from actual LLM intelligence/analysis skill?
By saying it's a gold mine, I think OP meant that it's funny, not that it brings valuable insight.
i.e.: THEY KNOW -> that made me laugh
sebnado|9 months ago
And as the article said, "an LLM who just spent thousands of words explaining why they're not allowed to use thousands of words" is just funny to read.
jerpint|9 months ago
The fact that they produce this as the "default" response is an interesting insight regardless of its internal mechanisms. I don't understand my neurons but can still articulate how I feel.
ramoz|9 months ago
You're stuck on the anthropomorphization semantics, but that wasn't the purpose of the exercise.
mholm|9 months ago
It's sure phrased like one, but I'd be careful about attributing an LLM's thought process to what it says it's thinking. LLMs are experts at working backwards to justify why they came to an answer, even when the justification is entirely fabricated.
doctoboggan|9 months ago
I would go further and say it's _always_ fabricated. LLMs are no better able to explain their inner workings than you are able to explain which neurons are firing for a particular thought in your head.
Note, this isn't a statement on the usefulness of LLMs, just their capability. An LLM may eventually be given a tool that enables it to introspect, but IMO it's not natively possible with today's LLM architectures.
Right… because these things are trained on sci-fi, so when asked to describe an internal monologue they produce text that reads like an internal monologue from a sci-fi character.
demarq|9 months ago
It sounds a lot like the Murderbot character in the Apple TV show!
roxolotl|9 months ago
Maybe there's genuine sentience there, maybe not. Maybe that text explains what's happening, maybe not.