astroH
|
4 years ago
Echoing previous responses, it depends on what you want to learn. Gravity on large scales is fairly well understood (or at least many astrophysicists believe this to be the case). If there are large-scale-structure observables that depend only on cosmology (i.e. the cosmological parameters), simulations like this will be ideal. However, the major issue is that a huge amount of the output of these simulations depends on physics that happens below the "grid scale" of the simulation. So I would agree that training ML models on these simulations is the epitome of bias, because the models learn a "sub-grid" model rather than fundamental physics. Reading through the papers on this work, the vast majority of models that have been trained are unable to generalize between the different classes of sub-grid models used, which very much limits their predictive power. This is an inherent limitation of most cosmological simulations, not just these ones.
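The generalization failure can be illustrated with a toy sketch (all names and functional forms here are hypothetical, not from any actual simulation suite): treat two sub-grid prescriptions as two different effective mappings from the same inputs to an observable, train a model on one, and measure its error on the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for two sub-grid prescriptions: each maps the
# same inputs x to an observable y via a different effective relation
# (the "sub-grid physics" an ML model inevitably absorbs during training).
def subgrid_A(x):
    return 2.0 * x + 0.5

def subgrid_B(x):
    # Same large-scale trend, different small-scale behavior.
    return 2.0 * x + 0.5 + 0.8 * np.sin(3.0 * x)

# Train a simple least-squares model on outputs of prescription A only.
x_train = rng.uniform(0.0, 1.0, 500)
y_train = subgrid_A(x_train) + 0.01 * rng.normal(size=500)
coeffs = np.polyfit(x_train, y_train, 1)

def predict(x):
    return np.polyval(coeffs, x)

# Evaluate in-distribution (A) vs. on the other sub-grid model (B).
x_test = rng.uniform(0.0, 1.0, 500)
mse_A = np.mean((predict(x_test) - subgrid_A(x_test)) ** 2)
mse_B = np.mean((predict(x_test) - subgrid_B(x_test)) ** 2)

print(f"in-distribution MSE (sub-grid A):      {mse_A:.4f}")
print(f"out-of-distribution MSE (sub-grid B):  {mse_B:.4f}")
```

The error on prescription B is orders of magnitude larger, even though both share the same underlying trend: the model has fit the sub-grid choice, not the fundamental physics.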