patrickkidger | 1 year ago
See for example https://www.nature.com/articles/s42256-024-00897-5
Classical solvers are very, very good at solving PDEs. In contrast, PINNs solve PDEs by... training a neural network. Not once, to produce a solver that can be reused later. But every single time you solve a new PDE!
You can vary this idea to try to fix it, but it's still really hard to make it better than any classical method.
As such the main use cases for PINNs -- they do have them! -- are to solve awkward stuff like high-dimensional PDEs or nonlocal operators. Here it's not that the PINNs get any better, it's just that all the classical solvers fall off a cliff.
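To make the "training a neural network every single time" point concrete, here is a miniature PINN in pure NumPy. This is purely an illustrative sketch, not anyone's actual code: a real PINN would use autodiff (JAX/PyTorch) and a deeper network, whereas here the network is a single tanh layer so its input derivative has a closed form, and parameter gradients are approximated by finite differences to avoid dependencies. It "solves" the toy ODE u'(x) = -u(x), u(0) = 1 by minimising the residual at collocation points -- i.e. a fresh optimisation run for this one equation.

```python
import numpy as np

# Hypothetical minimal PINN: solve u'(x) = -u(x), u(0) = 1 on [0, 1]
# (exact solution: exp(-x)) by training a tiny network from scratch.

rng = np.random.default_rng(0)
H = 8                                            # hidden width
params = rng.normal(scale=0.5, size=3 * H + 1)   # [w1, b, w2, c]

def unpack(p):
    return p[:H], p[H:2 * H], p[2 * H:3 * H], p[-1]

def u(p, x):
    w1, b, w2, c = unpack(p)
    return np.tanh(np.outer(x, w1) + b) @ w2 + c

def du_dx(p, x):
    # Closed-form d/dx of the one-layer network (stand-in for autodiff).
    w1, b, w2, c = unpack(p)
    t = np.tanh(np.outer(x, w1) + b)
    return ((1 - t**2) * w1) @ w2

xs = np.linspace(0.0, 1.0, 32)                   # collocation points

def loss(p):
    residual = du_dx(p, xs) + u(p, xs)           # enforce u' = -u
    bc = u(p, np.array([0.0]))[0] - 1.0          # enforce u(0) = 1
    return np.mean(residual**2) + bc**2

def grad(p, eps=1e-6):
    # Finite-difference parameter gradient (stand-in for backprop).
    g = np.zeros_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps
        g[i] = (loss(p + dp) - loss(p - dp)) / (2 * eps)
    return g

loss0 = loss(params)
for _ in range(1000):                            # plain gradient descent
    params -= 0.02 * grad(params)

print(float(loss(params)))                       # final training loss
```

Note that this entire optimisation loop buys the solution of exactly one ODE at one boundary condition; change the equation and you retrain from scratch, which is the contrast with a classical solver being drawn above.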
---
Importantly -- none of the above applies to stuff like neural differential equations or neural closure models. These are genuinely really cool and have wide-ranging applications! The difference is that PINNs are numerical solvers, whilst NDEs/NCMs are techniques for modelling data.
/rant ;)
eigenman|1 year ago
The best part about PINNs is that since there are so many parameters to tune, you can get several papers out of the same problem. Then these researchers get more publications, hence better job prospects, and go on to promote PINNs even more. Eventually they’ll move on, but not before having sucked the air out of more promising research directions.
—a jaded academic
anon389r58r58|1 year ago
PINNs have serious problems with the way the "PDE component" of the loss function needs to be posed, and outside of throwing tons of (often Chinese) PhD students and postdocs at it, they usually don't work for actual problems. This is mostly owed to the instabilities of higher-order automatic derivatives, at which point PINN people begin to go through a cascade of alternative approaches to obtain these higher-order derivatives. But these are all just hacks.
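For context on where those higher-order automatic derivatives come from: the PDE residual in the loss typically needs second (or higher) derivatives of the network with respect to its inputs, which are usually obtained by nesting automatic differentiation. A toy forward-mode sketch in pure Python (illustrative only, not PINN-specific, and not how any production autodiff library is implemented):

```python
# Nested forward-mode AD via dual numbers: each derivative() call adds
# one more level of Dual wrapping, which is the mechanism behind the
# higher-order derivatives a PINN loss requires.
class Dual:
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.eps + o.eps)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule carries the derivative through multiplication.
        return Dual(self.val * o.val, self.val * o.eps + self.eps * o.val)
    __rmul__ = __mul__

def derivative(f):
    def df(x):
        y = f(Dual(x, 1.0))
        return y.eps if isinstance(y, Dual) else 0.0
    return df

f = lambda x: x * x * x            # f(x) = x^3
d2f = derivative(derivative(f))    # nest AD twice: f''(x) = 6x
print(d2f(2.0))                    # prints 12.0
```

Each extra nesting level multiplies the bookkeeping (and, in floating point, the opportunities for cancellation and blow-up), which is one way to see why the higher-order terms in a PINN loss are a recurring pain point.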
mnky9800n|1 year ago
Edit: this is the Yao Lai paper I’m talking about:
https://www.sciencedirect.com/science/article/pii/S002199912...