Using Accelerometers to Estimate Position and Velocity

1. Introduction

We are commonly asked whether it is possible to use the accelerometer measurements from CH Robotics orientation sensors to estimate velocity and position.  The short answer is “yes and no.”  It depends entirely on how much accuracy is needed.  In general, accelerometer-based position and velocity estimates from low-cost sensors (hundreds of US dollars instead of tens of thousands) are very poor and are simply unusable.  This isn’t because the accelerometers themselves are poor, but because the orientation of the sensor must be known with a high degree of accuracy so that gravity measurements can be distinguished from the physical acceleration of the sensor.  Even small errors in the orientation estimate will produce extremely high errors in the measured acceleration, which translate into even larger errors in the velocity and position estimates.  Without higher-grade rate gyros (FOG or ring-laser gyros, for example) or the addition of an external reference like GPS, accurate dead-reckoning is usually not possible.

Nevertheless, not all applications require a great deal of accuracy, and sometimes absolute accuracy is not as important as the ability to measure short-term deviations in velocity and position.  For an overview of the kinds of accuracy you can expect to obtain using CH Robotics sensors for dead-reckoning, skip to Section 4 – Accuracy of Velocity and Position Estimates.

In this application note, we discuss what is required to get velocity and position estimates using data from CH Robotics sensors that do not already provide accelerometer-based velocity and position as standard outputs (at the time of writing, this includes the UM6 and the UM6-LT).  We also discuss the expected accuracy of the acceleration measurements, and how that accuracy will affect the reliability of the resulting velocity and position estimates.

This application note assumes familiarity with the coordinate frames used for attitude estimation on CH Robotics sensors.  For simplicity, it also assumes that Euler Angle outputs are being used instead of the quaternion outputs of the sensor, although the discussion applies equally well to quaternion outputs.  For details about coordinate frames and Euler Angles, see the library chapter on Understanding Euler Angles.

2. Extracting Inertial-Frame Acceleration

To get inertial frame velocities and positions, it is first necessary to obtain the physical acceleration of the sensor in the inertial frame.  To convert the accelerometer measurement into actual physical acceleration of the sensor, it is important to first understand exactly what the accelerometer is measuring.  Application note AN1008 describes accelerometer behavior in detail, so the complete discussion won’t be included here.  To summarize, the accelerometer measures both the physical acceleration of the sensor, AND the contribution of normal forces that prevent the accelerometer from accelerating toward the center of the Earth (i.e. the force exerted on the accelerometer by the table it is sitting on is actually measured by the sensor).  To measure only the component of acceleration that is caused by physical acceleration, normal forces must be removed.  As described in application note AN1008, the 3-axis accelerometer measurement vector \inline \mathbf{a}_m can be modeled as

\mathbf{a}_m = \mathbf{a}_B - R_I^B\begin{pmatrix} 0\\ 0\\ g \end{pmatrix},

where \inline \mathbf{a}_B is the actual body-frame acceleration, g is the acceleration of gravity, and \inline R_I^B is the rotation matrix from the inertial frame to the body frame of the sensor.  This model assumes that there are no cross-axis alignment, scale factor, or bias errors in the measurement.

In order to estimate the inertial frame velocity and position of the sensor, we need to remove the normal force component from the acceleration measurement.  Solving for the body-frame acceleration \inline \mathbf{a}_B, we get

\mathbf{a}_B = \mathbf{a}_m + R_I^B\begin{pmatrix} 0\\ 0\\ g \end{pmatrix}.

The body-frame acceleration \inline \mathbf{a}_B must then be rotated into the inertial frame so that it can be integrated to obtain velocity and position.  This yields

\mathbf{a}_I = R_B^I \mathbf{a}_m + \begin{pmatrix} 0\\ 0\\ g \end{pmatrix}.

This equation can be used directly to measure the inertial frame acceleration of the sensor.  Note that CH Robotics sensors typically output acceleration in units of gravities, not m/s/s or ft/s/s.  Thus the term g should either be set to 1 gravity, or \inline \mathbf{a}_m must be converted to the desired units.  We recommend using g = 9.8 m/s/s and multiplying \inline \mathbf{a}_m by the same factor, so that the inertial acceleration is obtained in m/s/s.  Integrating the acceleration measurement will then yield a velocity in m/s and a position in meters.
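As a concrete illustration, the sketch below applies this equation to a single accelerometer sample using the sensor’s Euler Angle outputs.  The rotation matrix construction assumes the common aerospace Z-Y-X (yaw, pitch, roll) convention, and the function names are illustrative only; consult the sensor documentation for the exact convention used by your device.

```python
import numpy as np

G = 9.8  # m/s/s, as recommended above

def body_to_inertial(roll, pitch, yaw):
    """Rotation matrix R_B^I from the body frame to the inertial frame,
    assuming a Z-Y-X (yaw, pitch, roll) Euler angle sequence."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # R_I^B = R_x(roll) R_y(pitch) R_z(yaw); R_B^I is its transpose.
    r_ib = (np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
            @ np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
            @ np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]]))
    return r_ib.T

def inertial_acceleration(a_meas_gravities, roll, pitch, yaw):
    """Physical acceleration in the inertial frame (m/s/s), computed as
    a_I = R_B^I a_m + (0, 0, g)."""
    a_m = np.asarray(a_meas_gravities, dtype=float) * G  # gravities -> m/s/s
    return body_to_inertial(roll, pitch, yaw) @ a_m + np.array([0.0, 0.0, G])

# A level, stationary sensor reads roughly (0, 0, -1) gravities, which
# should map to (0, 0, 0) m/s/s of physical acceleration.
print(inertial_acceleration([0.0, 0.0, -1.0], 0.0, 0.0, 0.0))
```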

3. Estimating Velocity and Position

Once the measured inertial-frame acceleration is obtained, it can be integrated to obtain the inertial frame velocity \inline \mathbf{v}_I and position \inline \mathbf{r}_I:

\mathbf{v}_I = \int \mathbf{a}_I \, dt,

\mathbf{r}_I = \iint \mathbf{a}_I \, dt \, dt.

In practice, data is obtained at discrete time intervals, so the velocity and position are estimated using

\mathbf{v}_I[k+1] = \mathbf{v}_I[k] + T\mathbf{a}_I[k],

\mathbf{r}_I[k+1] = \mathbf{r}_I[k] + T\mathbf{v}_I[k],

where T is the sample period.  Note that depending on the hardware used for communicating with the sensor, the sampling period may not be constant, and it should therefore be measured when making estimates.  This is easier if the sensor data is being read by a microcontroller or by a computer running an RTOS.  A standard Windows PC will introduce unpredictable delays in the actual arrival time of serial data, which will cause timing accuracy issues.
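The following sketch shows one way to implement the discrete integration above, assuming each sample arrives with a timestamp so that the sample period T can be measured rather than assumed constant.  The sample format shown here is an illustrative assumption, not a sensor output format.

```python
import numpy as np

def integrate_dead_reckoning(samples):
    """Integrate inertial-frame acceleration into velocity and position.

    `samples` is an iterable of (timestamp_seconds, a_inertial) pairs, where
    a_inertial is the 3-vector from the previous section in m/s/s.  Returns
    the final velocity (m/s) and position (m) estimates.
    """
    v = np.zeros(3)  # v_I[k]
    r = np.zeros(3)  # r_I[k]
    t_prev = None
    for t, a in samples:
        if t_prev is not None:
            T = t - t_prev                 # measured sample period
            r = r + T * v                  # r_I[k+1] = r_I[k] + T v_I[k]
            v = v + T * np.asarray(a)      # v_I[k+1] = v_I[k] + T a_I[k]
        t_prev = t
    return v, r

# Example with synthetic data: 1 m/s/s along x at 100 Hz for one second
# should give roughly 1 m/s of velocity and 0.5 m of displacement.
fake_samples = [(k * 0.01, [1.0, 0.0, 0.0]) for k in range(101)]
print(integrate_dead_reckoning(fake_samples))
```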

4. Accuracy of Velocity and Position Estimates

In Section 2, we discussed the operations needed to convert measured acceleration to physical (non-gravity) acceleration in the inertial frame.  Recall that making the conversion requires that we use the current orientation estimate to rotate the measurement into the inertial frame.

Note that there is a matrix rotation to transform the body-frame measurement to the inertial frame, followed by the addition of the expected gravity vector in the inertial frame.  This addition removes the measurement of normal forces that don’t cause physical acceleration of the sensor.

If the orientation estimate were perfect, this step would introduce no additional error into the acceleration measurement and the only error contribution would come from the inaccuracy of the accelerometer itself.

Since in practice we never know the sensor’s orientation perfectly, the addition of the gravity term will fail to completely remove measured normal forces, and those forces will be mistaken for physical acceleration.  It turns out that on low-cost sensors (hundreds of US dollars instead of tens of thousands), orientation error causes acceleration measurement errors that dwarf errors from accelerometer misalignment and miscalibration.  The magnitude of these errors will be large enough that accelerometer-based position and velocity estimates are unusable for most applications.

Table 1 summarizes the acceleration, velocity, and position errors that can be expected given different errors in the orientation estimate of the sensor.

Table 1 – Expected Accuracy of Velocity and Position Estimates

As shown, an angle error of even one degree will cause the estimated velocity to be off by 1.7 m/s after 10 seconds, and the position to be off by 17.1 meters in the same amount of time.  After a single minute, one degree of angle error will cause the position estimate to be off by almost a full kilometer.  After ten minutes, the position will be off by 62 kilometers.

From the factory, the accuracy of the UM6 and the UM6-LT Orientation Sensors can be expected to be within about 2 degrees of actual during normal operating conditions.  Using this as an upper bound, the UM6 or the UM6-LT could be used for dead-reckoning velocity and position estimation as long as 34.2 meters of positioning error at 10 seconds and 3.4 m/s of velocity error at 10 seconds wouldn’t be a problem.
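The figures quoted above can be reproduced with a short calculation, assuming (as the numbers suggest) that a constant orientation error produces an acceleration error of g·sin(θ), a velocity error of g·sin(θ)·t, and a position error equal to the velocity error multiplied by the elapsed time.  This is an illustrative error-growth model rather than a rigorous analysis.

```python
import math

G = 9.8  # m/s/s

def expected_errors(angle_error_deg, t_seconds):
    """Acceleration, velocity, and position errors produced by a constant
    orientation error, under the simple growth model described above."""
    a_err = G * math.sin(math.radians(angle_error_deg))  # leaked gravity
    v_err = a_err * t_seconds
    r_err = v_err * t_seconds
    return a_err, v_err, r_err

for deg in (1.0, 2.0):
    for t in (10.0, 600.0):
        a_err, v_err, r_err = expected_errors(deg, t)
        print(f"{deg:.0f} deg error, {t:5.0f} s: "
              f"{v_err:6.1f} m/s, {r_err:10.1f} m")
```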

It is also worth noting that this analysis considers only the effect of orientation estimate errors.  Accelerometer miscalibration will also cause measurement errors, particularly when uncalibrated sensors are used over wide temperature ranges.  The change in scale factors and biases over the full operating temperature range can be significant, so calibration becomes critically important.

In summary, it is possible to use the accelerometers to estimate velocity and position, but the accuracy will be so poor that in most cases the estimates are not useful.