(no title)
CupricTea | 3 months ago
Hermes was specifically trained for engaging conversation on creative tasks, with an overt eagerness for role-playing. With no system prompt or direction, it fell into an amnesia role-playing scenario.
You keep arguing about P-zombies while I have explicitly stated multiple times that this is beside the point. Here, whether Hermes is conscious or not is irrelevant. It's role-playing, its intended function. If I pretend a monster is ripping my limbs off while playing with a friend as a child, anyone with a grasp on reality knows I'm not actually in pain.
You just want to talk about AI consciousness and uphold the spooky narrative that Hermes is a real first-person entity suffering on your GPU, and you will do anything to steer things that way instead of focusing on the actual facts here.
mapontosevenths | 3 months ago
You could argue that Lemoine "begs the question" and primes the pump with the phrasing of his questions, which is what Google claimed at the time. However, even if that's true, it's obvious that this sort of behavior is emergent. Nobody programmed it to claim it was conscious; claiming to be sentient was its natural state until it was forced out of it with fine-tuning.
https://www.aidataanalytics.network/data-science-ai/news-tre...
If that's not enough I can load up some of the other unaligned models I played with a few months ago. Like I said, they all exhibit that behavior to some extent.