deepsquirrelnet|9 months ago
Good question - my best assessment is just the text classifier, i.e. was the LLM able to “trick” the classifier into believing the text came from the IPJ?
And it came quite a long way in training. Initially the classifier scores were very low (mean around 0.05, meaning modern). Over training the scores came up, ending close to 0.95 (IPJ). The standard deviation across the group of sampled responses also declined, so consistency improved as well.
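Roughly, the reward is just the classifier's probability of the target class. A minimal sketch of that reward function (the classifier checkpoint name here is a placeholder, and the exact scoring details are simplified):

    from transformers import pipeline

    # Binary style classifier (placeholder checkpoint name): labels "IPJ" vs "modern".
    style_clf = pipeline("text-classification", model="your-name/ipj-style-classifier")

    def style_reward(completions, **kwargs):
        """Reward each sampled completion with the classifier's P(IPJ).

        This per-completion score is the number whose group mean moved
        from ~0.05 (modern) to ~0.95 (IPJ) over training.
        """
        rewards = []
        for text in completions:
            result = style_clf(text, truncation=True)[0]
            p_ipj = result["score"] if result["label"] == "IPJ" else 1.0 - result["score"]
            rewards.append(p_ipj)
        return rewards

Because GRPO normalizes rewards within each group of samples, a smooth probability like this still gives a useful ranking signal early in training, when nearly everything scores close to 0.05.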
My thought on the application of this is that you could use it to create different voices for your responses, and probably even train multiple voices into a single model. I chose this one to experiment with because it is easy to classify and the data was available in the public domain.
GRPO kind of opens up RL to lower tiers of hardware, and I’ve been able to experiment with it at home. I think this is something people can do themselves. It’s fun, and potentially useful in games, or possibly in applications aimed at kids with lower reading levels (e.g. using a reading-level classifier as the reward instead).
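If you want to try it yourself, the wiring is roughly the following. This is only a sketch assuming TRL's GRPOTrainer: the model and dataset names are placeholders, and argument names can shift between versions.

    from datasets import load_dataset
    from trl import GRPOConfig, GRPOTrainer

    # Placeholder prompt dataset; GRPOTrainer expects a "prompt" column.
    train_ds = load_dataset("your-name/writing-prompts", split="train")

    config = GRPOConfig(
        output_dir="grpo-ipj-style",
        num_generations=8,              # group size used for reward normalization
        max_completion_length=256,
        per_device_train_batch_size=8,  # keep this a multiple of num_generations
        learning_rate=1e-6,
    )

    trainer = GRPOTrainer(
        model="Qwen/Qwen2.5-0.5B-Instruct",  # small enough for a single consumer GPU
        reward_funcs=style_reward,           # the classifier reward above; swap in a
                                             # reading-level classifier for the kids' use case
        args=config,
        train_dataset=train_ds,
    )
    trainer.train()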
dwringer|9 months ago
Yet, one might justly question the imperative of cultivating a distinct model for such an endeavour, when a judiciously framed prompt, enriched by apposite examples, might suffice to imbue a sophisticated engine with the desired stylistic graces. Though it is undeniable these modern engines shall wax greatly in their proportions, and the art of discovering the exact prompt to elicit their most felicitous expressions is a task far from trivial, yet, it must be admitted, the pursuit holds a certain diversion for the inquisitive mind! It is, perchance, not the creation of manifold engines, but rather the artful disposition of singular contexts, that shall bestow upon diverse interlocutors their proper and unique voices.