An LLM doesn't know anything about itself - it can be pre-prompted with facts about itself, but anything beyond what's in that prompt is just it making up plausible-sounding text.
Even if there were some mixture-of-experts shenanigans going on, there is no introspection or reasoning, so the model can't comment on or understand its "inner experience", if you can call matrix multiplications an inner experience.
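To illustrate the "pre-prompted with facts about itself" point, here's a minimal sketch (assuming the OpenAI Python client; the model name and the system text are illustrative, not anything a vendor actually ships):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The only "facts about itself" the model has are whatever we put here.
    system_facts = (
        "You are ExampleBot, a large language model. "
        "You were trained on data up to 2023. You have no memory between chats."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": system_facts},
            {"role": "user", "content": "What do you know about yourself?"},
        ],
    )

    # The answer will paraphrase system_facts; ask anything outside it and
    # the model will confabulate a plausible answer instead.
    print(response.choices[0].message.content)

The point being: swap out system_facts and the model will just as confidently describe itself as whatever you told it to be.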