I'm successfully using an Extended Kalman Filter (EKF) for object tracking. My state vector ($x, y, v_x, v_y$) needs to be in Cartesian coordinates, but the measurement data is transmitted in polar coordinates.
The state transition equations (and thus the state transition matrix) are linear: \begin{align} x_{k+1} &= x_{k} + v_{x,k} \cdot \Delta t \\ y_{k+1} &= y_{k} + v_{y,k} \cdot \Delta t \\ v_{x,k+1} &= v_{x,k} \\ v_{y,k+1} &= v_{y,k}. \end{align}
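These equations translate directly into a constant-velocity state transition matrix. A minimal sketch (the time step and state values below are illustrative assumptions):

```python
import numpy as np

dt = 0.1  # example time step (assumed value)

# Constant-velocity state transition matrix for state [x, y, vx, vy]
F = np.array([
    [1.0, 0.0, dt,  0.0],
    [0.0, 1.0, 0.0, dt ],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

x = np.array([1.0, 2.0, 0.5, -0.5])  # example state [x, y, vx, vy]
x_pred = F @ x                       # one linear prediction step
```

Since `F` is constant for a fixed `dt`, the prediction step of the filter stays fully linear.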
So the non-linear relation between the measurement vector in polar coordinates and my state vector in Cartesian coordinates is the only reason I'm using an EKF.
A simpler approach would be a "classic" transformation of the measurement data from polar to Cartesian coordinates before feeding it into a standard (linear) Kalman Filter.
Assuming that I transform my measurement noise covariance matrix accordingly: would this have an impact on filter performance (e.g. because the measurement noise can no longer be assumed to be Gaussian)?
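The transformation in question can be sketched as follows: convert the polar measurement to Cartesian and push the noise covariance through the Jacobian of that conversion (a first-order approximation). The noise magnitudes below are illustrative assumptions:

```python
import numpy as np

def polar_to_cartesian(r, theta, R_polar):
    """Convert a polar measurement (range, bearing) to Cartesian, and
    transform its noise covariance via the Jacobian of the conversion
    (first-order approximation only)."""
    z = np.array([r * np.cos(theta), r * np.sin(theta)])
    J = np.array([
        [np.cos(theta), -r * np.sin(theta)],
        [np.sin(theta),  r * np.cos(theta)],
    ])
    R_cart = J @ R_polar @ J.T
    return z, R_cart

# Example: range std 0.1 m, bearing std 0.01 rad (assumed values)
R_polar = np.diag([0.1**2, 0.01**2])
z, R_cart = polar_to_cartesian(10.0, np.pi / 4, R_polar)
```

Note that `R_cart` now depends on the measurement itself (range and bearing), so it must be recomputed for every measurement, and the transformed noise is only approximately Gaussian.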
Answer
When you start comparing suboptimal approaches, it usually comes down to trial and error.
A KF essentially operates on means and variances. An EKF does the same, but the nonlinear measurement and/or state models necessitate some additional considerations.
The first thing to determine is whether the transformation yields a unimodal probability distribution with a well-defined mean and variance. If not, a particle filter will be a better solution. If the unimodal assumption is realistic, the next problem is the bias introduced by how you map or update a mean. In some cases a linearization is good enough; in others the full transformation must be used, which can involve things like Runge-Kutta ODE solvers (something most textbooks fail to mention). In your case, the measurement nonlinearity can propagate a mean either through the exact transformation or through a linearization based on a Taylor expansion.
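For the linearization route, the EKF evaluates the Jacobian of the Cartesian-to-polar measurement function at the current state estimate. A minimal sketch for the state $(x, y, v_x, v_y)$:

```python
import numpy as np

def h(state):
    """Nonlinear measurement function: Cartesian state -> polar (range, bearing)."""
    x, y, _, _ = state
    return np.array([np.hypot(x, y), np.arctan2(y, x)])

def H_jacobian(state):
    """Jacobian of h at the current state -- the first-order Taylor
    expansion used by the EKF (undefined at the origin, r = 0)."""
    x, y, _, _ = state
    r2 = x * x + y * y
    r = np.sqrt(r2)
    return np.array([
        [ x / r,   y / r,  0.0, 0.0],
        [-y / r2,  x / r2, 0.0, 0.0],
    ])
```

The velocity columns are zero because a range/bearing sensor observes position only.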
The final problem is how to propagate the variance. Most EKFs use a Taylor expansion, typically just the first derivative, but there are second- and higher-order filters as well; these bring complexity considerations of their own. The text edited by Gelb has a good number of covariance-update heuristics in the later chapters, including some for discontinuous transformations.
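The first-order variance propagation is what appears in the standard EKF measurement update: the state covariance is pushed through the measurement Jacobian. A self-contained sketch (function and variable names are illustrative, and the toy check uses a linear measurement for clarity):

```python
import numpy as np

def ekf_update(x, P, z, R, h, H_jac):
    """One EKF measurement update. The covariance is propagated through the
    (possibly nonlinear) measurement via its Jacobian -- a first-order
    Taylor approximation."""
    H = H_jac(x)
    S = H @ P @ H.T + R             # innovation covariance (first-order)
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    y = z - h(x)                    # innovation
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy check with a linear position-only measurement (illustrative numbers)
h_lin = lambda x: x[:2]
H_lin = lambda x: np.hstack([np.eye(2), np.zeros((2, 2))])
x1, P1 = ekf_update(np.zeros(4), np.eye(4),
                    np.array([1.0, 1.0]), np.eye(2), h_lin, H_lin)
```

A second-order filter would add Hessian terms to both the innovation and the predicted measurement; the structure of the update stays the same.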
Gelb, Arthur, ed. Applied Optimal Estimation. MIT Press, 1974.
The UKF is an attractive approach to the covariance update.
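Instead of a Jacobian, the UKF propagates a small set of sigma points through the nonlinearity and recombines them. A sketch of the underlying unscented transform, using the common scaled-sigma-point formulation (the parameter defaults below are conventional choices, not prescribed by the answer):

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a mean and covariance through a nonlinear function f
    using 2n+1 sigma points (scaled unscented transform)."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)       # matrix square root
    sigmas = np.vstack([mean, mean + L.T, mean - L.T])
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigmas])          # push points through f
    y_mean = Wm @ Y
    dY = Y - y_mean
    y_cov = dY.T @ (Wc[:, None] * dY)
    return y_mean, y_cov

# Sanity check: for a linear f the transform is exact (illustrative values)
f_lin = lambda s: np.array([2.0 * s[0], 3.0 * s[1]])
ym, yc = unscented_transform(np.array([1.0, 2.0]), np.eye(2), f_lin)
```

This avoids computing derivatives entirely and captures the mean and covariance of the transformed distribution to second order.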
Step size is also important: your measurements may not arrive at regular time steps.
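With irregular measurements, the transition matrix should be rebuilt from the actual elapsed time rather than a fixed `dt`. A minimal sketch (the timestamps and state are illustrative):

```python
import numpy as np

def transition(dt):
    """Constant-velocity transition matrix for [x, y, vx, vy],
    built from the actual elapsed time between measurements."""
    F = np.eye(4)
    F[0, 2] = dt
    F[1, 3] = dt
    return F

# Measurements arriving at irregular times (illustrative values)
timestamps = [0.0, 0.09, 0.25, 0.31]
x = np.array([0.0, 0.0, 1.0, 2.0])
for t_prev, t in zip(timestamps, timestamps[1:]):
    x = transition(t - t_prev) @ x  # predict over the actual interval
```

The process noise covariance should be rescaled with `dt` in the same way.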
In summary, suboptimal is less than optimal, but which suboptimal approach is better depends on the application. Essentially you need to try some alternatives and judge which works best; there are many, and the winner isn't easily predictable.