Rekksu|3 years ago
I have never seen an ML researcher claim that understanding the effect of specific training inputs on outputs is straightforward given the size of these LLMs. Most view it as a very difficult, if not impossible, problem.
falcolas|3 years ago
It's a non-starter, if for no other reason than that potential copyright infringement means the government becomes involved, and it will stomp on the AI mouse with the force of an elephant - the opinions of amateurs and the anti-copyright movement notwithstanding.
As such, AI observability is a problem that's both under active research and the basis for B2B companies:
https://censius.ai/wiki/ai-observability
https://towardsdatascience.com/what-is-ml-observability-29e8...
https://whylabs.ai/observability
https://arize.com/
janalsncm|3 years ago
Given a black box you can do two things: watch the black box for a while to see what it does, or take it apart to see how it works.
Observability is the former. Useful in many cases, just not here.
If you want to know what LLMs are actually doing, you'll need the latter: inspecting weight activations, for example, although with billions of parameters that's infeasible at scale.
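The "take it apart" approach can be sketched in miniature. This is a hypothetical toy, not a real LLM: a hand-rolled two-layer network (made-up weights `W1`, `W2`) whose hidden activations are recorded during the forward pass, the way interpretability work inspects a model's intermediate layers rather than only its inputs and outputs.

```python
# Toy illustration of inspecting internals instead of treating the
# model as a black box. All weights here are invented for the example.

def relu(xs):
    # Elementwise ReLU nonlinearity.
    return [max(0.0, v) for v in xs]

def matvec(W, x):
    # Matrix-vector product over plain Python lists.
    return [sum(w * v for w, v in zip(row, x)) for row in W]

# Hypothetical weights: 2 inputs -> 3 hidden units -> 1 output.
W1 = [[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0]]
W2 = [[1.0, 1.0, 1.0]]

def forward(x, trace):
    h = relu(matvec(W1, x))  # hidden activations
    trace.append(h)          # "taking it apart": record the internals
    return matvec(W2, h)

trace = []
y = forward([2.0, 1.0], trace)
# trace[0] holds the hidden activations for this input: [1.0, 1.5, 0.0]
# y is the network output: [2.5]
```

With three hidden units the recorded activations are directly readable; the comment's point is that the same move on a model with billions of parameters produces a trace no human can eyeball, which is why mechanistic inspection is so hard.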
Rekksu|3 years ago