I followed whatever the guidance was for each specific model. Some of the LLM finetuning providers did indeed recommend setting the temperature to 0 and I followed that, but others suggested 1. I could probably iterate a bit to see what works best for each model, and I may well do that for the one I end up doubling down on in subsequent iterations / finetunes. Thanks for the suggestion!
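That iteration could be a simple temperature sweep over a small eval set. This is only a hypothetical sketch: `generate` and `score` below are placeholder stand-ins for a real provider's completion call and whatever eval metric the author actually uses, not anything from the thread.

```python
# Hypothetical sketch: sweep sampling temperatures for a finetuned model
# and keep the one that scores best on a small eval set.

def generate(prompt: str, temperature: float) -> str:
    # Placeholder for a real provider call (e.g. a completions API with
    # a `temperature` parameter); here it just echoes its inputs.
    return f"{prompt} @T={temperature}"

def score(output: str, reference: str) -> float:
    # Placeholder metric: does the output contain the reference string?
    return 1.0 if reference in output else 0.0

def best_temperature(eval_set, temperatures=(0.0, 0.3, 0.7, 1.0)):
    # eval_set is a list of (prompt, reference) pairs.
    results = {}
    for t in temperatures:
        total = sum(score(generate(p, t), ref) for p, ref in eval_set)
        results[t] = total / len(eval_set)
    # Highest mean score wins; ties resolve to the lowest temperature.
    return min(results, key=lambda t: (-results[t], t))
```

Swapping in the real API and metric, this would give a per-model answer to the 0-vs-1 question instead of relying on each provider's default advice.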
Tiberium|1 year ago
mewpmewp2|1 year ago
I would say this actually invalidates the whole thing.
bongodongobob|1 year ago