I was wondering the same thing, but near the end, the article discusses using statistical techniques to determine the standard error. In other words, you can easily get an idea of the accuracy of the result, which is harder with typical numerical integration techniques.
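To make the point concrete, here is a minimal sketch (function name and parameters are my own, not from the article) of how Monte Carlo integration yields a standard-error estimate essentially for free: the sample standard deviation of the integrand values, divided by √n, directly bounds the uncertainty of the result.

```python
import math
import random

def mc_integrate(f, a, b, n, seed=0):
    """Monte Carlo estimate of the integral of f over [a, b].

    Returns (estimate, standard_error). The standard error comes
    straight from the sample variance of the integrand values, so
    accuracy can be assessed with no extra work.
    """
    rng = random.Random(seed)
    samples = [f(rng.uniform(a, b)) for _ in range(n)]
    mean = sum(samples) / n
    # Unbiased sample variance of the integrand values
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    width = b - a
    estimate = width * mean
    stderr = width * math.sqrt(var / n)
    return estimate, stderr

# Example: integrate e^x over [0, 1]; the true value is e - 1.
est, se = mc_integrate(math.exp, 0.0, 1.0, 100_000)
```

With 100,000 samples the estimate should lie within a few standard errors of e − 1 ≈ 1.71828.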
With many quadrature rules (e.g. trapezoidal rule, Simpson's rule) you have a very cheap error estimator obtained by comparing the results over n and 2n subdivision points.
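As a sketch of that cheap estimator (my own code, assuming the composite trapezoidal rule): since the trapezoidal error shrinks by roughly 4x when the step halves, the difference between the n- and 2n-point results is about three times the error of the finer one.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

def trapezoid_with_error(f, a, b, n):
    """Integral estimate plus a cheap error estimate from comparing
    n and 2n subintervals: the trapezoidal error is O(h^2), so
    halving h cuts it ~4x, and |fine - coarse| is ~3x the error of
    the finer result."""
    coarse = trapezoid(f, a, b, n)
    fine = trapezoid(f, a, b, 2 * n)
    err_est = abs(fine - coarse) / 3.0
    return fine, err_est

# Example: integral of sin over [0, pi] is exactly 2.
val, err = trapezoid_with_error(math.sin, 0.0, math.pi, 64)
```

The same doubling trick underlies adaptive quadrature and Richardson extrapolation (the latter turns the trapezoidal pair into Simpson's rule).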
Numerical integration methods suffer from the “curse of dimensionality”: they require exponentially more points as the dimension grows. Monte Carlo integration has an error that shrinks like O(1/√N) regardless of dimension, so it scales much better.
Typical numerical methods are faster and way cheaper for the same level of accuracy in 1D, but it's trivial to integrate over a surface, volume, hypervolume, etc. with Monte Carlo methods.
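A small illustration of that point (my own sketch, not from the thread): estimating the volume of the unit ball in 10 dimensions by sampling the enclosing cube. A tensor-product grid with even 10 points per axis would need 10^10 nodes, while Monte Carlo just draws samples.

```python
import math
import random

def mc_ball_volume(dim, n, seed=0):
    """Estimate the volume of the unit ball in `dim` dimensions by
    sampling points uniformly in the cube [-1, 1]^dim and counting
    the fraction that fall inside the ball. Cost is n * dim work,
    not a grid that is exponential in dim."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        point = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        if sum(x * x for x in point) <= 1.0:
            hits += 1
    cube_volume = 2.0 ** dim
    return cube_volume * hits / n

est = mc_ball_volume(10, 200_000)
exact = math.pi ** 5 / math.gamma(6)  # closed form: pi^(d/2)/Gamma(d/2+1) at d=10
```

Note the hit rate in 10D is only about 0.25%, so accuracy per sample degrades; variance-reduction techniques (importance sampling, stratification) are the usual fix.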
As I understand it: numerical methods are analytically inspired and computationally efficient, smoothing out noise from sampling, floating-point error, etc., whereas Monte Carlo is computationally expensive brute-force random sampling whose accuracy you can improve by throwing more compute at the problem.
kens|2 months ago
ogogmad|2 months ago
fph|2 months ago
edschofield|2 months ago
See, for example, https://ww3.math.ucla.edu/camreport/cam98-19.pdf
MengerSponge|2 months ago
jgalt212|2 months ago
adrianN|2 months ago
a-dub|2 months ago