moconnor|1 year ago
As a steelman, wouldn't the abundance of infinitely generatable situations make it _easier_ for us to develop strong theories and models? The bottleneck has always been data. You have to do expensive work in the real world and accurately measure it before you can start fitting lines to it. If we were to birth an, e.g., atomically accurate ML model of quantum physics, I bet it wouldn't take long until we have mathematical theories that explain why it works. Our current problem is that this stuff is super hard to manipulate and measure.
whymauri|1 year ago
So in a way, what you say is already possible. Just as GMs in chess specialize in certain openings or play styles, master chemists have pre-existing biases that can affect their designs; algorithms can have different biases which push exploration to interesting places. Once you have a good latent representation of the relevant chemical space, you can optimize for this sort of creativity (a practical but boring example is to push generation outside of patent space).
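A toy sketch of that last idea, with everything hypothetical: treat latent vectors as stand-ins for encoded molecules, score candidates by their distance from a set of "known" (e.g. patented) points, and keep the most novel one. A real system would decode latents back into structures; here the latent points themselves stand in.

```python
# Toy sketch (all names hypothetical): bias sampling in a learned latent
# space toward regions far from known/patented examples.
import numpy as np

rng = np.random.default_rng(42)

# Pretend these latent vectors encode already-patented compounds.
patented = rng.normal(0, 1, size=(200, 8))

def novelty(z):
    # Score = distance to the nearest known compound in latent space.
    return np.min(np.linalg.norm(patented - z, axis=1))

# Naive optimization: sample candidates, keep the most novel one.
candidates = rng.normal(0, 1, size=(5000, 8))
scores = np.array([novelty(z) for z in candidates])
best = candidates[np.argmax(scores)]
print(f"most novel candidate score: {scores.max():.2f}")
```

In practice you'd replace the random search with gradient ascent or Bayesian optimization over the latent space, and combine the novelty score with a property objective so you don't just wander off into chemically meaningless regions.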
alfalfasprout|1 year ago
For a lot of problems, you currently don't have an analytical solution, and the alternative is a brute-force-ish numerical approach. As a result, the computational cost of running enough simulations to detect behavior that could inform theories/models (potentially yielding a good analytical result) is prohibitive.
In this regard, ML models are promising.
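A minimal sketch of the surrogate idea, with a trivial stand-in for the solver (the function, sample sizes, and polynomial surrogate are all illustrative assumptions): run the expensive simulation a few times, fit a cheap model to those runs, then query the model at a scale the real solver couldn't afford.

```python
# Hypothetical sketch: replace an expensive numerical simulation with a
# cheap ML surrogate, then sweep the surrogate at scale to hunt for
# patterns a theory might later explain.
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(x):
    # Stand-in for a costly brute-force numerical solver.
    return np.sin(3 * x) + 0.1 * x**2

# A handful of expensive runs...
x_train = rng.uniform(-2, 2, 40)
y_train = expensive_simulation(x_train)

# ...then fit a cheap surrogate. A polynomial keeps the sketch simple;
# in practice this would be a neural net, Gaussian process, etc.
coeffs = np.polynomial.polynomial.polyfit(x_train, y_train, deg=9)

def surrogate(x):
    return np.polynomial.polynomial.polyval(x, coeffs)

# Now dense sweeps that would be infeasible with the real solver are cheap.
x_dense = np.linspace(-2, 2, 10_000)
err = np.max(np.abs(surrogate(x_dense) - expensive_simulation(x_dense)))
print(f"max surrogate error on [-2, 2]: {err:.3f}")
```

The caveat, as in the thread above: the surrogate is only as trustworthy as its training data, so any "discovery" it surfaces still has to be checked against the real solver or experiment.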