zzbzq | 8 months ago
LLMs do have "latent knowledge"; that much is indisputable. What we do know about the "black box" is that inside it sits not just a database of facts but of understanding, and the model "understands" nearly every topic better than almost any human. The doubt-worthy part is the generative step: it is tasked with producing a new "understanding" that didn't already exist, so the mathematical domain of the generative function exceeds the domain of reality. And second, the reasoning faculties are far less proven than the understanding faculties, yet many queries require reasoning over existing understandings to derive a good new one.