apike | 1 year ago
LLMs are tools. As a tool author, you have certain desired outcomes for certain use cases. If the data you’re training on isn’t giving you those outcomes, it is absolutely reasonable to "fudge" the data. This might mean reducing bias, adding bias, or any number of other nudges. Training an LLM is not a scientific study; it’s a product development effort.
surfingdino | 1 year ago