twosdai | 2 months ago
LLMs not being able to go from output back to input deterministically, and our inability to understand why, is very important; most of our issues with LLMs stem from this. It's why mechanistic interpretability research is so hot right now.
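A toy sketch of why you can't go from output back to input (not a real LLM, just an illustration): greedy decoding takes an argmax over logits, which collapses many distinct internal states onto the same output token, so the mapping is many-to-one and not invertible.

```python
def greedy_token(logits):
    # Pick the index of the largest logit, as greedy decoding does.
    return max(range(len(logits)), key=lambda i: logits[i])

# Two different "internal states" (logit vectors)...
a = [0.1, 2.5, 0.3]
b = [9.0, 9.5, 0.0]

# ...produce the identical output token; from the token alone
# you cannot recover which state produced it.
print(greedy_token(a), greedy_token(b))  # both print 1
```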
The car analogy is not a good one because models are digital components and a car is a physical, real-world thing. They are not comparable.
dkdcio | 2 months ago