top | item 44222918

lo0dot0 | 8 months ago

A part of my question that you didn't go into was: can new knowledge be added in a new version without making answers that rely on knowledge learned in previous versions non-deterministic?

dijksterhuis | 8 months ago

that’s not really how training works.

changing the input (data) means you get a different output (model).

the source data has nothing to do with whether a given model is deterministic.
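a minimal sketch of that distinction, using a toy gradient-descent regression (the data and hyperparameters here are made up for illustration): with the same data and the same seed, training reproduces the exact same weights; change the data ("add knowledge") and you simply get a different model.

```python
import numpy as np

def train(X, y, seed=0, steps=500, lr=0.1):
    # deterministic training: fixed seed + fixed data -> identical weights
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])

w_a = train(X, y)
w_b = train(X, y)        # same data, same seed
w_c = train(X, y + 0.5)  # "new knowledge": the training data changed

assert np.allclose(w_a, w_b)      # training itself is reproducible
assert not np.allclose(w_a, w_c)  # different data -> a different model
```

the training procedure is perfectly deterministic in both runs; only the artifact it produces differs, which is the point above.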

as an end-user of AI products, your perspective might be that the models are non-deterministic, but really it’s just different models returning different results … because they are different models.

“end-user non-determinism” is only really solved by repeatedly using the same version of a trained model (like a normal software dependency), potentially needing a bunch of work to upgrade the (model) dependency version later on.
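one way to sketch "pin the model like a normal software dependency": record a checksum of the exact weights you validated against, and refuse to serve anything else. the file name, pin structure, and checksum here are all hypothetical, not any real product's artifacts.

```python
import hashlib
from pathlib import Path

# hypothetical pin record: treat the model like any other locked dependency.
MODEL_PIN = {
    "name": "acme-chat",      # illustrative name
    "version": "2024.06.1",   # illustrative version string
    "sha256": None,           # filled in once, at pin time
}

def pin(weights_path: Path) -> str:
    """Record the checksum of the exact weights you validated against."""
    return hashlib.sha256(weights_path.read_bytes()).hexdigest()

def verify(weights_path: Path, pinned_sha256: str) -> bool:
    """Refuse to serve if the weights drifted from the pinned version."""
    return pin(weights_path) == pinned_sha256

# usage sketch with fake bytes standing in for a weights file:
weights = Path("model.bin")
weights.write_bytes(b"fake weights for illustration")
MODEL_PIN["sha256"] = pin(weights)
ok_before = verify(weights, MODEL_PIN["sha256"])   # True: same model, same behaviour

weights.write_bytes(b"silently retrained weights")
ok_after = verify(weights, MODEL_PIN["sha256"])    # False: the model changed under you
```

upgrading the pin then becomes an explicit, testable change in your repo rather than answers shifting silently underneath the product.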