
chubs | 1 year ago

As a developer I always found these maths-first approaches to Kalman filters impenetrable (I guess that betrays my lack of knowledge; I dare cast no aspersions on the quality of these explanations!). However, if, like me, you find the learning curve easier after implementing it first, here's a 1-dimensional version simplified from my blog:

  function transpose(a) { return a } // a 1x1 matrix, i.e. a single value
  function invert(a) { return 1/a }

  const qExternalNoiseVariance = 0.1
  const rMeasurementNoiseVariance = 0.1
  const fStateTransition = 1

  let pStateError = 1
  let xCurrentState = rawDataArray[0] // rawDataArray holds your noisy measurements
  for (const zMeasurement of rawDataArray) { // "of", not "in": iterate values, not indices
    const xPredicted = fStateTransition * xCurrentState
    const pPredicted = fStateTransition * pStateError * transpose(fStateTransition) + qExternalNoiseVariance
    const kKalmanGain = pPredicted * invert(pPredicted + rMeasurementNoiseVariance)

    pStateError = pPredicted - kKalmanGain * pPredicted
    xCurrentState = xPredicted + kKalmanGain * (zMeasurement - xPredicted) // Output!
  }
https://www.splinter.com.au/2023/12/14/the-kalman-filter-for...
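To see the loop in action, here's a self-contained sketch that wraps the same update equations in a function (the scalar case lets us drop the 1x1 `transpose`/`invert` helpers) and runs it on a short made-up series; the sample data and noise variances are illustrative, not from the original post:

```javascript
// 1-D Kalman filter; variable roles mirror the snippet above,
// with fStateTransition fixed at 1 (identity transition).
function kalman1d(measurements, q = 0.1, r = 0.1) {
  let p = 1                  // state-error variance estimate
  let x = measurements[0]    // initialise state from the first sample
  const out = []
  for (const z of measurements) {
    const xPredicted = x     // F = 1, so prediction is the current state
    const pPredicted = p + q // error grows by the process noise q
    const k = pPredicted / (pPredicted + r) // Kalman gain
    p = pPredicted - k * pPredicted         // shrink error after the update
    x = xPredicted + k * (z - xPredicted)   // blend prediction and measurement
    out.push(x)
  }
  return out
}

// Noisy readings of a roughly constant signal near 5 (invented data):
const smoothed = kalman1d([5.2, 4.8, 5.1, 4.9, 5.3, 4.7])
console.log(smoothed.map(v => v.toFixed(3)))
```

With a constant true signal, the gain settles to a steady value and the estimates hover near the signal with less spread than the raw measurements.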

nextos | 1 year ago

It's not your fault; these can get messy very quickly. Infer.NET was started because Tom Minka and other Bayes experts were tired of writing message passing and variational inference by hand, which is both cumbersome and error-prone on non-toy problems.

It helps to take a more abstract view where you split the generative process and the inference algorithm. Some frameworks (Infer.NET, ForneyLab.jl) can generate an efficient inference algorithm from the generative model without any user input. See e.g. https://github.com/biaslab/ForneyLab.jl/blob/master/demo/kal...