top | item 24016422

d0m | 5 years ago

Question from a non-ML expert: How can I be sure that code that works with one version of the model will still work after they update or retrain it?

More specifically, for DOTA they could track progress and make sure there weren't important regressions. But this seems so general: how can they make sure it improves everyone's use cases?

sillysaurusx | 5 years ago

You can’t! :)

It’s a fact of life. A different model will generate different outputs for the same prompts. And some of those outputs will be worse than they were.

But, if you use the same prompt with the same model, the output will always be exactly the same (content filters notwithstanding).

ignoranceprior | 5 years ago

> But, if you use the same prompt with the same model, the output will always be exactly the same (content filters notwithstanding).

Isn't this only true if you set the temperature parameter in a way that renders the model deterministic?

ganeshkrishnan | 5 years ago

Even the same model generates different outputs for the same prompt. GPT has a temperature parameter that lets you restrict the variety of the generated text, but even then the output can differ for the same model and prompt.
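The temperature mechanics being debated above can be sketched in a few lines. This is a generic illustration of temperature-scaled sampling (not OpenAI's actual decoding code): dividing the logits by the temperature before the softmax sharpens or flattens the distribution, and the temperature=0 limit collapses to a deterministic argmax, which is the condition ignoranceprior is pointing at.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index from raw logits after temperature scaling.

    temperature < 1 sharpens the distribution; as it approaches 0,
    sampling collapses to always picking the highest-logit token,
    which is why temperature=0 makes decoding deterministic.
    """
    if temperature == 0:
        # Greedy decoding: deterministic argmax, no randomness at all.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting categorical distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(logits) - 1
```

With temperature 0 the same logits always yield the same token; with temperature > 0 the sampler can return any token, just with probabilities skewed toward the high-logit ones.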

minimaxir | 5 years ago

The same way ML models in production at large companies behave: model versioning.

If you ping the OpenAI API without an explicit model specification, it'll return davinci:2020-05-03, which includes the version date.
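The versioning practice minimaxir describes amounts to pinning the dated model name in your own code instead of relying on the API default. A minimal sketch, assuming a hypothetical `build_request` helper (not part of any OpenAI client library) that assembles a completion payload:

```python
# Pin the exact dated model version so upstream retraining can't silently
# change the model your requests hit. "davinci:2020-05-03" is the versioned
# name mentioned in the thread.
PINNED_MODEL = "davinci:2020-05-03"

def build_request(prompt, model=PINNED_MODEL, temperature=0.0):
    """Assemble a completion request payload with the model pinned.

    Illustrative only: field names follow the general shape of a
    completion API call, not a guaranteed wire format.
    """
    return {
        "model": model,
        "prompt": prompt,
        "temperature": temperature,  # 0 = greedy decoding, for reproducibility
    }
```

Upgrading to a newer model then becomes a deliberate one-line change you can regression-test, rather than something that happens underneath you.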