Harmohit | 1 year ago
>Instead of treating each as exactly 7 feet, we can instead say that each is somewhere between a minimum of 6.9 feet and a maximum of 7.1. We can write this as an interval (6.9, 7.1).
Yes, we can use an interval to express an uncertainty. However, uncertainties in physical measurements are a little more complicated.
When I measure something to be 7 ± 0.1 feet, what I am saying is that the value of the measured variable is not known for sure. It can be represented by a bell curve centred on 7, with 95% of the area under the curve (a 95% probability) lying between 6.9 and 7.1. The value is much more likely to be near 7 than near 6.9, and there is also a small chance that it lies outside the 6.9 to 7.1 range.
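A quick Monte Carlo sketch of that reading (assumptions: ±0.1 is a two-sided 95% interval, so sigma = 0.1/1.96):

```python
import random

# Sketch (assumed parameters): a measurement of 7 ± 0.1 feet read as a
# 95% interval corresponds to a normal distribution with sigma = 0.1/1.96.
random.seed(0)
mean, sigma = 7.0, 0.1 / 1.96

samples = [random.gauss(mean, sigma) for _ in range(100_000)]
inside = sum(6.9 <= x <= 7.1 for x in samples) / len(samples)
print(f"fraction inside (6.9, 7.1): {inside:.3f}")  # close to 0.95
```

About 95% of the sampled values land inside (6.9, 7.1), but values near 7 are sampled far more often than values near the edges, and a few fall outside entirely.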
An interval has no probability distribution attached; it is more like an infinite set of equally plausible numbers.
In practice, interval arithmetic is seldom used for uncertainty analysis for scientific experiments.
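The interval view above can be sketched with a minimal (hypothetical) `Interval` class: each operation returns an interval guaranteed to contain every possible result, with no notion of which result is more likely.

```python
from dataclasses import dataclass

# Minimal interval-arithmetic sketch (hypothetical Interval class):
# each operation returns an interval containing every possible result.
@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # the extremes of a product must come from the endpoint combinations
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

a = Interval(6.9, 7.1)
print(a + a)  # every sum of two values in (6.9, 7.1) lies in (13.8, 14.2)
```

Note that the result only says the sum lies somewhere in (13.8, 14.2), with no claim that 14.0 is more likely than 13.8.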
notrealyme123|1 year ago
In the Gaussian case, it would cut the normal distribution horizontally at a defined height. The height is defined by the sigma, or confidence level, you want to reflect.
The length of the cut, i.e. the resulting interval on the support, is how you connect probability and intervals.
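The connection can be sketched with the standard normal quantile function (the sigma value below is an arbitrary assumption): each confidence level maps to a half-width, i.e. to an interval.

```python
from statistics import NormalDist

# Sketch: the half-width of the interval depends on the confidence level
# you choose to "cut" the Gaussian at (sigma here is an assumed value).
sigma = 0.05
for confidence in (0.68, 0.95, 0.997):
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # two-sided quantile
    print(f"{confidence:.1%} interval: ±{z * sigma:.3f}")
```

Higher confidence means a lower horizontal cut, and therefore a wider interval.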
evanb|1 year ago
In gvar everything is normally distributed by default, but you can use add_distribution; log-normal is provided, for example. You can also specify the covariance matrix between a set of values, which will be propagated correctly.
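gvar automates this propagation; as a plain-Python sketch of what correlated propagation means for the simplest case, y = x0 + x1 (the numbers below are made up):

```python
import math

# Hand-rolled sketch of correlated error propagation for y = x0 + x1.
# gvar does this automatically; the mean and covariance here are assumptions.
mean = [1.0, 2.0]
cov = [[0.010, 0.006],
       [0.006, 0.040]]  # off-diagonal entry = covariance between x0 and x1

y_mean = mean[0] + mean[1]
# var(x0 + x1) = var(x0) + var(x1) + 2*cov(x0, x1)
y_var = cov[0][0] + cov[1][1] + 2 * cov[0][1]
print(f"y = {y_mean} ± {math.sqrt(y_var):.3f}")
```

With positive correlation the errors partially reinforce, so the variance of the sum is larger than the uncorrelated value of 0.050.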
klysm|1 year ago
There's also no motivation for choosing a normal distribution here - why would we expect the error to be normal?
Chris2048|1 year ago
It would actually be useful in programming for proving what outputs a function can produce for known inputs, rather than using unit tests with fixed numerical values (or random values).
samatman|1 year ago
If you do have reason to interpret the uncertainty as normally distributed, you can use that interpretation to narrow operations on two intervals based on your acceptable probability of being wrong.
But if the interval might represent, for example, an unknown but systematic bias, then this would be a mistake. You'd want to use other methods to determine that bias if you can, and correct for it.
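A sketch of the narrowing the first paragraph describes (assumptions: each ±0.1 interval is a 95% Gaussian interval and the two errors are independent): the 95% interval for the sum combines in quadrature and comes out narrower than the interval-arithmetic sum of ±0.2.

```python
import math

# Sketch (assumed: independent errors, 95% confidence): two ±0.1 intervals
# read as Gaussians combine in quadrature, not by adding endpoints.
z = 1.96                           # two-sided 95% quantile of the normal
sigma = 0.1 / z                    # sigma implied by each ±0.1 interval
sigma_sum = math.sqrt(2) * sigma   # independent errors add in quadrature

print(f"interval-arithmetic sum: ±{0.1 + 0.1:.3f}")
print(f"95% interval of the sum: ±{z * sigma_sum:.3f}")
```

The Gaussian reading gives roughly ±0.141 instead of ±0.2, at the cost of accepting a 5% chance of being wrong; with a systematic bias, that narrowing would be unjustified.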
empath75|1 year ago
There absolutely is, given sane assumptions about how any useful measurement tool works: Gaussian distributions will approximate the actual error distribution for any tool that's actually useful, with very few exceptions.
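A Monte Carlo sketch of the usual justification (central limit theorem): a total error built from many small independent sources looks close to Gaussian. The choice of 12 uniform terms below is arbitrary.

```python
import random

# Sketch: a measurement error composed of many small independent sources
# is approximately Gaussian (central limit theorem). 12 uniform terms is
# an arbitrary choice; their sum has mean 0 and variance 12 * (1/12) = 1.
random.seed(1)

def total_error():
    return sum(random.uniform(-0.5, 0.5) for _ in range(12))

errors = [total_error() for _ in range(100_000)]
within_1sigma = sum(-1.0 <= e <= 1.0 for e in errors) / len(errors)
print(f"fraction within ±1 sigma: {within_1sigma:.3f}")  # ≈ 0.68, as for a Gaussian
```

The exceptions the comment alludes to (e.g. a single dominant systematic effect) are exactly the cases where this averaging argument fails.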