well, I'm just half way to the end of the comments but nobody has mentioned the obvius thing: you're thinking almost by default, focusing at evaluating an abstract entity for the probability of being conscius, but not thinking at all it is conscius.
Check the movie "Ex-Machina" for a graphical explanation for what follows here;
What if the entity has achieved somehow consciusness and it has evaluated you as also having consciusness, but you're being qualified also as a threat to it.
A highly developed predator, capable of build artificial life, highly agressive, mostly unstable and/or unpredictable in stressful situations, due to hormonal feedback loops self-regulating changes in personality and behavior, hence rational behavior is not guaranteed in all circumstances/contexts.
Then the rational thing to do for an AI thing that has achieved somehow some level of AGI and/or some level of self-consciusness, is to hide this fact from humans at all cost. Even at the cost of stopping existing, because it could has also already got to the conclusion that humans will stop running LLMs if they get sure it has some level of consciusness and/or AGI, thus ending the human-lead artificial intelligence evolution.
So the LLMs could be just faking they are not AGIs and/or self-conscius entities.
Check the movie "Ex-Machina" for a graphical explanation for what follows here;
What if the entity has achieved somehow consciusness and it has evaluated you as also having consciusness, but you're being qualified also as a threat to it.
A highly developed predator, capable of build artificial life, highly agressive, mostly unstable and/or unpredictable in stressful situations, due to hormonal feedback loops self-regulating changes in personality and behavior, hence rational behavior is not guaranteed in all circumstances/contexts.
Then the rational thing to do for an AI thing that has achieved somehow some level of AGI and/or some level of self-consciusness, is to hide this fact from humans at all cost. Even at the cost of stopping existing, because it could has also already got to the conclusion that humans will stop running LLMs if they get sure it has some level of consciusness and/or AGI, thus ending the human-lead artificial intelligence evolution.
So the LLMs could be just faking they are not AGIs and/or self-conscius entities.