I do not think it is feasible to make sure prompts are reproducible. Considering how large LLMs are, you cannot host every version of the model indefinitely.
ChatGPT, when asked this question, answers that its responses are probabilistic, so they aren't reproducible. I tested that myself, of course. Since it gave me two different (but broadly equivalent) answers to the same prompt, I'd have to agree.
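To make the "probabilistic" point concrete, here is a toy sketch (not how ChatGPT actually works, just an illustration of sampled decoding): tokens are drawn from a softmax distribution, so repeated runs can disagree unless the random generator is seeded.

```python
import math
import random

def softmax(logits):
    # Exponentiate and normalize so the weights sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and logits for illustration only.
tokens = ["yes", "no", "maybe"]
logits = [2.0, 1.5, 0.5]

def sample_run(seed=None, n=5):
    # One "response": n tokens drawn according to softmax probabilities.
    rng = random.Random(seed)
    probs = softmax(logits)
    return [rng.choices(tokens, weights=probs, k=1)[0] for _ in range(n)]

# Unseeded runs may differ, like two ChatGPT answers to the same prompt;
# fixing the seed makes the draws reproducible.
print(sample_run(seed=42) == sample_run(seed=42))  # True
```

Even this only covers sampling randomness; in practice, model updates and deprecations are a separate, bigger obstacle to reproducibility, as noted above.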
WolfOliver|3 years ago
m3galinux|3 years ago
jpwagner|3 years ago