I agree that it is still unsolved from a practical perspective. Both sides (MD vs. ensemble sampling) have incorporated techniques from the other to improve sampling efficiency and accuracy. At the same time, I suspect that the sampling methodology only works with some form of structured guidance, whether that be MD or learned embeddings (a la AlphaFold, which has a few former DESRES people working on it). The raw Monte Carlo methods that people use for 'embarrassing' parallelism often have terrible scaling: the spectral gaps of the Markov chains are abysmal, and you only realize that after thousands of core-hours. On the other hand, DESRES has been focusing on acceleration methods built around hardware-optimized, HMC-esque methodology with reasonably good parallelism. AFAICT the only public description of this work is in the appendices of this paper [0].
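Nothing DESRES-specific, but the spectral-gap point is easy to see in a toy sketch (everything here is illustrative, not any group's actual method): a plain random-walk Metropolis chain on a bimodal target mixes quickly within each mode, yet the tiny spectral gap from the barrier means it essentially never crosses between modes, so its ergodic average silently converges to the wrong answer.

```python
import numpy as np

def metropolis(n_steps, step=0.5, seed=0):
    """Random-walk Metropolis on a bimodal 1-D target with modes at +/-5.

    The large barrier between the modes gives the chain a tiny spectral
    gap: it mixes quickly *within* a mode but essentially never crosses
    between them on any feasible timescale.
    """
    rng = np.random.default_rng(seed)

    def log_p(x):
        # Unnormalized log-density: equal-weight Gaussians at +/-5.
        return np.logaddexp(-0.5 * (x - 5.0) ** 2, -0.5 * (x + 5.0) ** 2)

    x = 5.0  # start in the right-hand mode
    xs = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()
        if np.log(rng.random()) < log_p(prop) - log_p(x):
            x = prop  # accept
        xs[i] = x
    return xs

# The true mean of the target is 0 by symmetry, but a 20k-step chain
# started in one mode reports a mean near +5: the within-mode statistics
# converge while the between-mode average does not.
samples = metropolis(20_000)
print(f"sample mean = {samples.mean():.2f}  (true mean = 0)")
```

The nasty part is that every within-mode diagnostic looks healthy, which is exactly the "you only find out after thousands of core-hours" failure mode.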
At the end of the day, you probably need both techniques because the pure sampling approaches lose fine structure (e.g. binding pockets opening up with anomalous frequency due to water clusters) whereas the standalone MD model has too long of a decorrelation time to get averages to converge.
[0] https://www.pnas.org/content/116/10/4244