flwi | 5 months ago
On a tangent, we cannot prove that LLMs actually know language, yet they can be incredibly useful. Of course, a true world model would be much nicer to have, I agree with that!
godelski | 5 months ago
Read my example. People will care if you have a more complicated geocentric model. The geocentric model was quite useful, but also quite wrong and distracting, and it made many bad predictions alongside the good ones.
The point is that it is wrong and this always bounds your model to being wrong. The big difference is if you don't extract the rules your model derived then you won't know when or how your model is wrong.
So yes, the user cares. Because the user cares about the results. This is all about the results...
We or you? Those are very different things. Is it a black box because you can't look inside, or because you didn't look inside? Because I think you'll find some works that do exactly what we're talking about here. And if you're going to make big talk about PINNs then you need to know their actual purpose. Like come on man, you're claiming a physics model. How can you claim a physics model without the physics?
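For readers unfamiliar with the PINN point being made: the "physics" in a physics-informed neural network is not learned from data, it is written into the loss as a residual of a known governing equation. Below is a minimal, hypothetical sketch (not from any specific PINN library) using a toy ODE `du/dt = -k*u` and finite differences in place of autodiff; all names and the ODE choice are assumptions for illustration.

```python
# Minimal sketch of the PINN idea: the loss combines a data-misfit term
# with a physics-residual term that enforces a known equation.
# Toy physics assumed here: du/dt + k*u = 0 (exponential decay).
import numpy as np

def pinn_loss(u, t_data, u_data, t_colloc, k=1.0, h=1e-5):
    """Data misfit + physics residual for du/dt + k*u = 0.

    u        : candidate solution, a callable t -> u(t)
    t_data   : points where u was observed
    u_data   : the observations
    t_colloc : collocation points where the ODE is enforced
    """
    # ordinary supervised data term
    data_loss = np.mean((u(t_data) - u_data) ** 2)
    # physics term: central-difference du/dt, penalize the ODE residual
    dudt = (u(t_colloc + h) - u(t_colloc - h)) / (2 * h)
    residual = dudt + k * u(t_colloc)
    physics_loss = np.mean(residual ** 2)
    return data_loss + physics_loss

k = 1.0
t = np.linspace(0.0, 2.0, 20)
true_u = lambda s: np.exp(-k * s)   # exact solution of the ODE
wrong_u = lambda s: 1.0 - k * s     # fits data near t=0 but violates the ODE

obs_t, obs_u = t[:5], true_u(t[:5])
good = pinn_loss(true_u, obs_t, obs_u, t, k=k)
bad = pinn_loss(wrong_u, obs_t, obs_u, t, k=k)
```

The point of the sketch: `wrong_u` can look fine against the observed data alone, but the physics term exposes it, which is exactly why a model that claims to be "physics-informed" must actually carry the governing equations.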