Good point. But can the models even behave that way? They depend on probability. If they put greater weight on novel/unexpected outputs, don't they just become undependable hallucination machines? Despite what some people think, these models can't reason about a concept to determine its validity. They depend on recurring data in training to determine what might be true. That said, it would be interesting to see a model tuned that way. It could be marketed as a 'creativity model', where the user understands there will be a lot of junk hallucination and that it's up to them to reason about whether a concept has validity or not.
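For what it's worth, "putting greater weight on unexpected outputs" is roughly what raising the sampling temperature already does: it flattens the token distribution so low-probability tokens get picked more often. A toy sketch of that knob (the logit values here are made up for illustration):

```python
import math

def token_probs(logits, temperature=1.0):
    """Softmax with temperature.

    Higher temperature flattens the distribution, shifting
    probability mass toward unlikely ("novel") tokens -- the
    dial a hypothetical 'creativity model' would turn up.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits: one token the model strongly prefers, two it doesn't.
logits = [5.0, 1.0, 0.0]
low_t = token_probs(logits, temperature=0.7)
high_t = token_probs(logits, temperature=2.0)
# At high temperature the favored token loses probability mass
# and the "novel" tokens gain it.
```

The trade-off the comment describes falls out directly: the same mechanism that surfaces surprising tokens also surfaces junk, since the model has no way to tell the two apart.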
ceroxylon|6 months ago
https://towardsdatascience.com/a-comprehensive-guide-to-llm-...
bluecalm|6 months ago