Sensor fusion with a Kalman filter


Solution 1

Usually, the sensor fusion problem is derived from Bayes' theorem. Your estimate (in this case the horizon level) is a weighted sum of your sensor readings, where each weight is characterized by the corresponding sensor model. For dual sensors you have two common choices: model a two-sensor system and derive the Kalman gain for each sensor (using the system model as the predictor), or run two correction stages using different observation models. You should take a look at Bayesian predictors (a little more general than the Kalman filter), which are derived precisely by minimizing the variance of an estimate given two different information sources. If you take a weighted sum of two sensors and minimize the variance of the sum, you get the Kalman gain.
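To make the weighted-sum idea concrete, here is a minimal scalar sketch (the readings and variances below are made-up numbers): fusing two independent measurements by inverse-variance weighting, where the gain `k` plays exactly the role of the Kalman gain for a scalar system.

```python
# Sketch: minimum-variance fusion of two independent scalar readings.
# Assumes zero-mean Gaussian noise with known variances var_a and var_b.

def fuse(z_a, var_a, z_b, var_b):
    """Inverse-variance weighted sum: the weighting that minimises the
    variance of the combined estimate. k is the scalar Kalman gain."""
    k = var_a / (var_a + var_b)           # weight shifts toward the less noisy sensor
    fused = z_a + k * (z_b - z_a)         # equivalently (1 - k)*z_a + k*z_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# illustrative numbers: sensor A reads 1.0 (var 0.04), sensor B reads 1.2 (var 0.01)
estimate, variance = fuse(1.0, 0.04, 1.2, 0.01)
# the fused variance is smaller than either input variance
```

Note that the fused variance is always below both input variances, which is the whole point of fusing rather than picking the better sensor.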

The properties of the sensors can be "seen" in two parts of the filter. First, you have the measurement noise matrix, which represents the noise in the sensors' observations. The noise is assumed to be zero-mean Gaussian, which is not too strong an assumption, given that during calibration you can achieve zero-mean noise.

The other important matrix is the observation covariance matrix. This matrix gives you insight into how good each sensor is at providing information, where "information" means something new that is not dependent on the other sensor's readings.

Regarding "harvesting the good characteristics": what you should do is a good calibration and noise characterization of the sensors. The best way to get a Kalman filter to converge is to have a good noise model for your sensors, and that is 100% experimental. Try to determine the variance of your system yourself (don't always trust datasheets).
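As a sketch of what such an experimental noise characterization looks like (the "logged" samples here are simulated stand-ins for real readings from a stationary sensor; the bias and noise level are invented):

```python
# Sketch: characterising sensor noise experimentally. Hold the sensor
# still, log N raw readings, then compute the mean (bias) and variance.
import random
import statistics

random.seed(0)
# stand-in for samples logged from a stationary sensor
# (true bias 0.02, true noise std 0.15 -- illustrative values)
samples = [random.gauss(0.02, 0.15) for _ in range(2000)]

bias = statistics.fmean(samples)          # subtract this during calibration
variance = statistics.variance(samples)   # use this as R in the filter

# after subtracting the bias, the residual noise is zero mean
calibrated = [s - bias for s in samples]
```

The measured `variance` is what goes into the filter's measurement noise term; the measured `bias` is what calibration removes so the zero-mean assumption holds.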

Hope that helps a bit.

Solution 2

The gyro measures the rate of angle change (e.g. in radians per second), while from the accelerometer reading you can calculate the angle itself. Here is a simple way of combining these measurements:

At every gyro reading received:

angle_radians += gyro_reading_radians_per_sec * seconds_since_last_gyro_reading

At every accelerometer reading received:

angle_radians += 0.02 * (angle_radians_from_accelerometer - angle_radians)

The 0.02 constant is for tuning: it selects the trade-off between noise rejection and responsiveness (you can't have both at the same time). It also depends on the accuracy of both sensors and the time intervals at which new readings are received.

These two lines of code implement a simple 1-dimensional (scalar) Kalman filter. It assumes that

  • the gyro has very low noise compared to the accelerometer (true for most consumer-grade sensors). Therefore we do not model gyro noise at all, but instead use the gyro in the state transition model (usually denoted by F).
  • accelerometer readings are received at generally regular time intervals, and the accelerometer noise level (usually R) is constant
  • angle_radians has been initialised with an initial estimate (e.g. by averaging angle_radians_from_accelerometer over some time)
  • therefore the estimate covariance (P) and the optimal Kalman gain (K) are also constant, which means we do not need to keep the estimate covariance in a variable at all.

As you see, this approach is simplified. If the above assumptions are not met, you should learn some Kalman filter theory, and modify the code accordingly.
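Under the assumptions above, the two update rules can be put together into a small runnable sketch. The sensor data here is simulated, and the noise levels, sample rate, and the 0.02 gain are all illustrative:

```python
# Runnable sketch of the two-line filter above, on simulated data:
# gyro integration as the prediction step, accelerometer as the correction.
import random

random.seed(1)

dt = 0.01          # gyro sample period in seconds (illustrative)
true_angle = 0.5   # constant true angle, radians
angle = 0.0        # ideally initialised from an averaged accelerometer reading

for _ in range(5000):
    # gyro reading: rate of angle change; the true rate is 0 here, plus noise
    gyro_rate = random.gauss(0.0, 0.01)
    angle += gyro_rate * dt                       # prediction step

    # accelerometer reading: noisy absolute angle, one per gyro sample here
    accel_angle = true_angle + random.gauss(0.0, 0.05)
    angle += 0.02 * (accel_angle - angle)         # correction step
```

After enough samples, `angle` settles near the true angle: the accelerometer correction removes the drift that pure gyro integration would accumulate, while the small 0.02 gain keeps the accelerometer noise heavily filtered.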

Author: Theodor

Updated on June 03, 2022

Comments

  • Theodor
    Theodor almost 2 years

    I'm interested: how is the dual input modeled in a sensor fusion setup with a Kalman filter?

    Say for instance that you have an accelerometer and a gyro and want to present the "horizon level", like in an airplane, a good demo of something like this here.

    How do you actually harvest the two sensors positive properties and minimize the negative?

    Is this modeled in the Observation Model matrix (usually symbolized by capital H)?


    Remark: This question was also asked without any answers at math.stackexchange.com

  • Theodor
    Theodor over 13 years
    Thank you, I didn't think anyone was gonna answer this thread. This clears things up a bit.
  • Vlad
    Vlad about 10 years
    angle_radians+=0.02 * (angle_radians_from_accelerometer - angle_radians) will lead to constant change of orientation when the cell phone is at rest (gyro=0, 0,0, acc=gravity vector)
  • Ahti
    Ahti about 10 years
    Why do you think so? If the cell phone is at rest, then angle_radians_from_accelerometer should be constant (plus fluctuating noise), which means angle_radians should be trending towards the same constant, and stop there.
  • Vlad
    Vlad about 10 years
    The gyro, though accurate in the short run, drifts like crazy in the long run. Try rotating your phone back and forth while integrating: you will never get a zero angle after you've undone all the rotations. In other words, the gyro has low variance but high bias. Thus it should be used for increasing responsiveness but otherwise ignored.
  • Vlad
    Vlad about 10 years
    Good. And thanks to Google we don't need to worry about this any more, since they now have a virtual orientation sensor in which all the pose sensors are nicely fused.
  • villoren
    villoren over 9 years
    Besides the process noise and measurement noise matrices... do you know how to model the state transition (F) and measurement (H) matrices?
  • AndroC
    AndroC almost 8 years
    I am looking at the two choices you mentioned for dual sensors: (1) model a two-sensor system and derive the Kalman gain for each sensor, and (2) run two correction stages using different observation models. So the difference between these two choices is that in (1) you handle readings from both sensors 'in parallel', while in (2) you handle the two sensors sequentially and treat the two-sensor system like two steps of a one-sensor system. Or am I oversimplifying this?