
Online Multi-IMU Calibration Using Visual-Inertial Odometry

#Sensor-Fusion

Introduction

Modern robotic systems utilize a vast array of sensors for localization, which has greatly improved their robustness and accuracy in challenging environments. Many of these sensors produce feature-rich measurements at moderate rates, which can lead to excellent localization results. At the base of many VIO systems is an IMU that provides high-rate dead-reckoning estimates of body rates and accelerations. These inertial measurements are key to the stability of many control algorithms and improve localization over short distances and times. Generally, the performance of a VIO system is most impacted by the visual-processing front end and back end. However, improvements to the proprioceptive measurement processing through multi-IMU configurations can reduce drift, improve stability, and produce smoother estimation output compared to single-IMU systems.

To ensure high-quality inertial measurement data, many ground and airborne systems use redundant IMU to either produce a consensus on measurements or simply select the IMU that is producing the best measurements. These methods of inertial measurement, while robust against faults and large baseline distances between sensors, do not leverage the multiple measurement streams to reduce errors, but rather discard any of the less-than-ideal measurements. As this reduces valuable information about the system state, extensive work has been done to formulate stable and effective methods of filtering multiple IMU measurements that optimally take advantage of all measurement streams.

To compensate for the various errors present in all IMU, the sensors must be calibrated. If these errors are not properly accounted for, an IMU can greatly degrade the performance of a navigation system by introducing biases that cause the estimate to drift. Due to the importance of accurate IMU data, it is desirable to not only produce inertial measurements but also compensate those measurements for any intrinsic and extrinsic calibration errors.

Related Work

The use of multiple IMU in a navigation system has been shown to reduce inertial measurement errors and improve localization accuracy without significantly increasing computational load [1-4]. Beyond reducing kinematic prediction errors, the use of multiple IMU can produce a more robust and fault-tolerant estimation of inertial measurements [5-6].

Two popular methods for combining IMU measurement streams are best-sensor selection and complementary filters. Best-sensor-selection systems evaluate the quality of the measurements produced, for example through so-called “voting” schemes in which three or more sensors run synchronized filters at each time step and “vote” so that a single measurement is selected and reported. Complementary filters utilize information from all sensors by using an average of all measurements, weighted by their certainty. Both these methods are computationally lightweight, robust to sensor failure, guarantee consistent measurements, and are used in the prediction step of a typical Kalman filter. However, these methods rely on synchronized measurement streams and presuppose calibration of all sensors. With traditional calibration techniques, each additional IMU can add significant cost and effort to the development of an inertial-odometry system. Moreover, these systems are not resilient against changes to calibration parameters during use because they cannot estimate these calibration parameters online.

The VIMU observation fusion method probabilistically combines measurements from all the IMU into the functional equivalent of a single sensor, such as in [7]. The VIMU fusion method is relatively lightweight computationally and can easily be incorporated into existing VIO systems built around a single IMU. The measurement streams are generally combined using least-squares estimation or an average weighted by the inverse of the measurement noise [3]. While computationally efficient, these systems typically require consistent and synchronized IMU, and therefore cannot benefit from ad hoc sensor additions.
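To make the weighted-average idea concrete, the following is a minimal sketch of inverse-variance fusion of synchronized readings into a single virtual measurement. The function name, sample values, and the assumption of a shared body frame are illustrative only, not the exact formulation used in [3] or [7].

```python
import numpy as np

def fuse_virtual_imu(measurements, variances):
    """Fuse per-axis readings with weights proportional to 1/variance.

    measurements: (N, 3) accel or gyro samples, one row per IMU, assumed to be
                  already expressed in a common body frame.
    variances:    (N,) per-sensor noise variances.
    Returns the fused (3,) measurement and the variance of the weighted mean.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = (w[:, None] * np.asarray(measurements, dtype=float)).sum(axis=0) / w.sum()
    return fused, 1.0 / w.sum()

# Example: two gyros of differing quality observing the same body rate.
gyro_readings = [[0.101, -0.020, 0.299],   # rad/s, noisier sensor
                 [0.099, -0.021, 0.301]]   # rad/s, quieter sensor
print(fuse_virtual_imu(gyro_readings, variances=[4e-4, 1e-4]))
```

Note how the quieter sensor dominates the fused estimate, which is exactly the behavior a naive "use the latest measurement" scheme would lose.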

Another commonly used framework is the federated filter, which utilizes the multiple IMU measurement streams as prediction steps in separate, decentralized filters [8-9]. These stacked filters are then combined using a single federating filter, which feeds state parameters and covariance information back to the stacked filters. Compared to the VIMU, the federated filter architecture can combine filters utilizing different error models and states, which is useful when combining sensors of varying quality or calibration. This framework can also incorporate geometric constraints on the different IMU and is therefore more appropriate for a system with large baseline distances between multiple IMU. However, like the VIMU design, the federated filter architecture requires synchronization of the stacked filters for information sharing. Additionally, both systems presume prior calibration of the sensors’ intrinsic and extrinsic parameters, which is necessary for these systems’ accuracy [10]. These calibration parameters are often estimated using state-of-the-art offline methods, such as [11]. These methods typically require a defined calibration environment, such as motion in view of a calibration fiducial target. As it is not always possible to perform these offline calibrations before a system is used, and IMU calibration parameters can change with time, it is advantageous to utilize a framework that produces online estimates of calibration parameters while still combining multiple IMU measurement streams.

A major limitation of the previously discussed methods is the need for synchronization across the IMU. While this is not unreasonable in a made-to-specification system, ad hoc systems require additional flexibility in when inertial measurements are taken due to varying and asynchronous sensor rates. The naive approach would be to simply perform the Kalman prediction step with the latest IMU measurement, compensated for calibration parameters. Such an approach, however, neglects the relative quality of different IMU, where an estimator should more heavily weight measurements from a less noisy sensor. Additionally, this method would be affected by the exact ordering of sensor measurements, as nearly consecutive pairs of measurements would render the first of the pair essentially meaningless. To remedy the problems of synchronization and varying quality, it is proposed that the IMU be treated like all other sensors such that its measurements are used in the Kalman update routine.

Regarding IMU calibration, a solution that provides online calibrations of multiple IMU as well as motion estimates is the maximum likelihood estimator developed in [12]. This method uses an iterative block coordinate descent optimization strategy that consistently and accurately estimates the errors typically seen on surface-mounted inertial sensors. A limitation of this method is its assumption of static biases and relative positions for each sensor. Due to bias instability and body shifts or deflections during operation, this assumption is not always feasible.

Finally, there are centralized filter frameworks that simultaneously estimate intrinsic and extrinsic sensor parameters for multiple IMU. In [13], rigid body constraints are applied as a low-noise measurement within a single centralized filter to combine multiple IMU measurement streams. The noise within this constraint update is tuned to allow the filter to adapt to changes in calibration due to a flexible body or initial uncertainty in sensor calibration. Extending this, [6] varied the injected noise of the rigid body constraint, starting with large values to accommodate large uncertainties in the initial calibration and decreasing it over time as certainty improved. Effectively, this produces an online estimate of the IMU calibration parameters and is more robust due to its use alongside camera measurements in an MSCKF framework. However, without further system engineering development, the varying-injected-noise method is not robust to changes in extrinsic parameters, as it is not truly an online estimate of the sensor parameters.

Regarding IMU calibration, numerous batch optimization methods have been proposed. Some can extrinsically calibrate IMU along arbitrary trajectories using only the IMU measurement streams [14-15]. Others can perform intrinsic and extrinsic calibration of IMU through the use of cameras. Examples include iCalib for single-IMU-camera systems [16] and Kalibr for multi-IMU-camera systems [11]. While accurate, these systems are limited to offline computation and therefore cannot detect changes in calibration parameters while an inertial-odometry system is in use.

Solving this issue are filter-based methods that simultaneously estimate poses and intrinsic sensor parameters [6,17]. These methods are capable of being used for online calibration and can resolve changes in extrinsic parameters given sufficient time and measurement data. However, these state-of-the-art filters are not capable of performing online estimation of extrinsic IMU parameters. In the case of [17], the filter is not designed to utilize multiple IMU at all, and [6,18] assume the extrinsic IMU parameters are provided and static. As such, current methods of online estimation of IMU calibration parameters are not robust to changes in extrinsic parameters.

Therefore, this work proposes a centralized filter framework that is flexible in the number of sensors used and produces online estimates of each IMU’s intrinsic and extrinsic calibration parameters utilized alongside camera measurements in an MSCKF framework. This filter provides higher accuracy estimates of body accelerations and angular rates through the combination of multiple unsynchronized sensors.

IMU Error Model

The following outlines a first-order IMU measurement model where $\tilde{\boldsymbol{a}}$ is the acceleration measurement, $\tilde{\boldsymbol{\omega}}$ is the angular rate measurement, $\boldsymbol{a}$ is the true body acceleration, $\boldsymbol{\omega}$ is the true body angular rate, $\boldsymbol{b}$ are the biases, and $\boldsymbol{n}$ are Gaussian white noise processes.

\[\begin{split} \tilde{\boldsymbol{a}} &= \boldsymbol{a} + \boldsymbol{b}_{a} + \boldsymbol{n}_{a} \\ \tilde{\boldsymbol{\omega}} &= \boldsymbol{\omega} + \boldsymbol{b}_{g} + \boldsymbol{n}_{\omega} \end{split}\]
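As a quick illustration of this additive model, the snippet below corrupts a true specific-force vector with a bias and white noise. The bias and noise magnitudes are made-up values for illustration, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(true_value, bias, noise_std):
    """Apply the additive bias-plus-white-noise model to a true 3-vector."""
    return np.asarray(true_value) + np.asarray(bias) + rng.normal(0.0, noise_std, size=3)

true_accel = np.array([0.0, 0.0, 9.81])       # specific force of a body at rest [m/s^2]
accel_bias = np.array([0.02, -0.01, 0.03])    # assumed accelerometer bias [m/s^2]
print(measure(true_accel, accel_bias, noise_std=0.05))
```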

However, this model is only valid if the body frame and IMU frame are equivalent. Modeling an IMU offset in the body frame is generally only necessary when multiple IMU are utilized: given the small size of IMU sensors, the body frame origin is typically defined as the origin of the IMU when only one is used. Otherwise, it is necessary to consider the effects of the body motion on the acceleration measured in the IMU frame using the following change-of-frame formula

\[{}^{I}\boldsymbol{a} = {}^{I}_{L}\boldsymbol{R} \left( {}^{L}\boldsymbol{a} + {}^{L}\boldsymbol{\alpha} \times {}^{B}\boldsymbol{p}_{I} + {}^{L}{\boldsymbol{\omega}} \times {}^{L}{\boldsymbol{\omega}} \times {}^{B}\boldsymbol{p}_{I} + 2 {}^{L}{\boldsymbol{\omega}} \times {}^{L}\boldsymbol{v} \right)\]

Therefore, the general IMU measurement error model is

\[\begin{split} {}^{I}{\tilde{\boldsymbol{a}}} &= {}^{I}_{L}\boldsymbol{R} \left( {}^{L}\boldsymbol{a} + {}^{L}\boldsymbol{\alpha} \times {}^{B}\boldsymbol{p}_{I} + {}^{L}\boldsymbol{\omega} \times {}^{L}\boldsymbol{\omega} \times {}^{B}\boldsymbol{p}_{I} + 2 {}^{L}\boldsymbol{\omega} \times {}^{L}\boldsymbol{v} \right) + {}^{I}\boldsymbol{b}_{a} + {}^{I}\boldsymbol{n}_{a} \\ {}^{I}\tilde{\boldsymbol{\omega}} &= {}^{I}_{L}\boldsymbol{R} {}^{L}\boldsymbol{\omega} + {}^{I}\boldsymbol{b}_{g} + {}^{I}\boldsymbol{n}_{\omega} \end{split}\]
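To get a feel for the size of the lever-arm terms in this model, the snippet below evaluates the Euler and centripetal contributions for an assumed 0.1 m offset and a modest spin rate; the numbers are purely illustrative, not from the paper.

```python
import numpy as np

p_imu = np.array([0.1, 0.0, 0.0])    # lever arm from body origin [m] (assumed)
omega = np.array([0.0, 0.0, 2.0])    # body angular rate [rad/s] (assumed)
alpha = np.array([0.0, 0.0, 1.0])    # body angular acceleration [rad/s^2] (assumed)

euler_term = np.cross(alpha, p_imu)                         # tangential acceleration
centripetal_term = np.cross(omega, np.cross(omega, p_imu))  # directed toward the spin axis
print(euler_term, centripetal_term)  # ~[0, 0.1, 0] and ~[-0.4, 0, 0] m/s^2
```

Even at these modest rates, the lever-arm terms are well above typical accelerometer noise floors, which is why they cannot be ignored for an offset IMU.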

The typical body-fixed motion filter utilizes 9 states to describe the body’s position \({}^{L}{\hat{\boldsymbol{p}}}_{B}\), velocity \({}^{B}\hat{\boldsymbol{v}}\), and orientation \({}^{B}_{L}\hat{q}\) in the local flat frame \(\boldsymbol{L}\). However, when considering an IMU that is offset in the body-fixed frame, it becomes necessary to include higher-order derivatives for accurate predictions of the IMU measurements. This requires including the body acceleration, angular rate, and angular acceleration in the filter.
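For orientation, here is one possible layout of the resulting error-state vector when per-IMU calibration states (lever arm, orientation, and biases) are appended. The block names and ordering are assumptions for illustration, not necessarily the layout used in EKF-CAL.

```python
# One possible error-state layout for the extended filter: six 3-wide body blocks
# followed by four 3-wide calibration blocks per IMU. Names and ordering are
# assumptions for illustration only.
BODY_BLOCKS = ["p", "v", "a", "q", "omega", "alpha"]    # 18 body error states
IMU_BLOCKS = ["p_imu", "q_imu", "b_accel", "b_gyro"]    # 12 error states per IMU

def error_state_size(n_imu: int) -> int:
    """Total error-state dimension for the body plus n_imu calibrated IMU."""
    return 3 * len(BODY_BLOCKS) + 3 * len(IMU_BLOCKS) * n_imu

print(error_state_size(1), error_state_size(3))         # 30 54
```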

IMU Measurement Update

As previously discussed, it is not possible to utilize multiple unsynchronized IMU of varying quality in the Kalman prediction step. Therefore, the following outlines a method by which IMU measurements are used in the Kalman update procedure. The IMU measurement $\boldsymbol{z}_{I_i}$ comprises the measured acceleration ${}^{I_i}\tilde{\boldsymbol{a}}$ and angular rate ${}^{I_i}\tilde{\boldsymbol{\omega}}$, both expressed in the sensor frame of IMU $i$

\[\boldsymbol{z}_{I_i} = \begin{bmatrix} {}^{I_i}{\tilde{\boldsymbol{a}}} \\ {}^{I_i}{\tilde{\boldsymbol{\omega}}} \end{bmatrix}\]

Using the IMU error model, the following non-linear measurement function is derived

\[\boldsymbol{h}_{I_i}(\hat{\boldsymbol{x}}) = \begin{bmatrix} {}^{B}_{I_i}\boldsymbol{R}^T {}^{B}_{L}\boldsymbol{R}^T \left( {}^{L}\hat{\boldsymbol{a}} + {}^{L}\hat{\boldsymbol{\alpha}} \times {}^{B}\boldsymbol{p}_{I_i} + {}^{L}\hat{\boldsymbol{\omega}} \times {}^{L}\hat{\boldsymbol{\omega}} \times {}^{B}\boldsymbol{p}_{I_i} + 2 {}^{L}\hat{\boldsymbol{\omega}} \times {}^{L}\hat{\boldsymbol{v}} \right) + \hat{\boldsymbol{b}}_{a_{I_i}} \\ {}^{B}_{I_i}\boldsymbol{R}^T {}^{B}_{L}\boldsymbol{R}^T {}^{L}\hat{\boldsymbol{\omega}} + \hat{\boldsymbol{b}}_{g_{I_i}} \end{bmatrix}\]

This is used to calculate the measurement residual

\[\boldsymbol{y}_{I_i} = \boldsymbol{z}_{I_i} - \boldsymbol{h}_{I_i}(\hat{\boldsymbol{x}})\]
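The sketch below strings these pieces together: a predicted measurement $\boldsymbol{h}_{I_i}(\hat{\boldsymbol{x}})$ built from the current state estimate using the lever-arm model above, and the residual against the stacked reading. The rotation conventions (local frame to body frame to sensor frame) and the function signatures are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def h_imu(R_imu_body, R_body_local, a_hat, alpha_hat, omega_hat, v_hat,
          p_imu, b_accel_hat, b_gyro_hat):
    """Predicted [accel; gyro] reading of IMU i given the current state estimate."""
    R = R_imu_body @ R_body_local           # local frame -> sensor frame (assumed convention)
    a_pred = R @ (a_hat
                  + np.cross(alpha_hat, p_imu)                       # Euler term
                  + np.cross(omega_hat, np.cross(omega_hat, p_imu))  # centripetal term
                  + 2.0 * np.cross(omega_hat, v_hat)) + b_accel_hat  # Coriolis term + bias
    w_pred = R @ omega_hat + b_gyro_hat
    return np.concatenate([a_pred, w_pred])

def imu_residual(z_accel, z_gyro, h_pred):
    """Residual y = z - h(x_hat) for the stacked IMU measurement."""
    return np.concatenate([z_accel, z_gyro]) - h_pred
```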

Furthermore, the linearized observation matrix $\boldsymbol{H}$ used for the IMU update is

\[\boldsymbol{H} = \left[ \begin{array}{cc:c} \boldsymbol{H}_B & \boldsymbol{H}_{I} & \boldsymbol{0} \end{array} \right]\]

where the body state observation matrix $\boldsymbol{H}_B$ is

\[\boldsymbol{H}_B = \begin{bmatrix} \boldsymbol{0}_{3 \times 3} & \boldsymbol{0}_{3 \times 3} & \frac{\partial {}^{I_i}\boldsymbol{a}}{\partial {}^{B}\hat{\boldsymbol{a}}} & \boldsymbol{0}_{3 \times 3} & \frac{\partial {}^{I_i}\boldsymbol{a}}{\partial {}^{B}\hat{\boldsymbol{\omega}}} & \frac{\partial {}^{I_i}\boldsymbol{a}}{\partial {}^{B}\hat{\boldsymbol{\alpha}}} \\ \boldsymbol{0}_{3 \times 3} & \boldsymbol{0}_{3 \times 3} & \boldsymbol{0}_{3 \times 3} & \boldsymbol{0}_{3 \times 3} & \frac{\partial {}^{I_i}\boldsymbol{\omega}}{\partial {}^{B}\hat{\boldsymbol{\omega}}} & \boldsymbol{0}_{3 \times 3} \end{bmatrix}\]

Using the derivatives of a rotation inverse and coordinate map from [19], the measurement body Jacobians are as follows

\[\begin{align} \frac{\partial {}^{I_i}\boldsymbol{a}}{\partial {}^{B}\hat{\boldsymbol{a}}} & = {}^{B}_{I_i}\boldsymbol{R}^T {}^{B}_{L}\boldsymbol{R}^T \\ \frac{\partial {}^{I_i}\boldsymbol{a}}{\partial {}^{B}\hat{\boldsymbol{\omega}}} & = {}^{B}_{I_i}\boldsymbol{R}^T \left( \left\lfloor{}^{B}\hat{\boldsymbol{\omega}}\right\rfloor_{\times} \left\lfloor{}^{B}{\hat{\boldsymbol{p}}}_{I_i}\right\rfloor_{\times}^T + \left\lfloor{}^{B}\hat{\boldsymbol{\omega}} \times {}^{B}{\hat{\boldsymbol{p}}}_{I_i}\right\rfloor_{\times}^T \right) \\ \frac{\partial {}^{I_i}\boldsymbol{a}}{\partial {}^{B}\hat{\boldsymbol{\alpha}}} & = {}^{B}_{I_i}\boldsymbol{R}^T \left\lfloor{}^{B}{\hat{\boldsymbol{p}}}_{I_i}\right\rfloor_{\times}^T \\ \frac{\partial {}^{I_i}\boldsymbol{\omega}}{\partial {}^{B}\hat{\boldsymbol{\omega}}} & = \left\lfloor {}^{B}_{I_i}\boldsymbol{R}^T {}^{B}_{L}\boldsymbol{R}^T {}^{B}\hat{\boldsymbol{\omega}} \right\rfloor_{\times} \end{align}\]

where $ \left\lfloor ~ \right\rfloor_{\times} $ is the cross-product (skew-symmetric) matrix and the Jacobian with respect to the states of IMU $i$ is

\[\boldsymbol{H}_{I_i} = \begin{bmatrix} \frac{\partial {}^{I_i}\boldsymbol{a}}{\partial {}^{B}{\hat{\boldsymbol{p}}}_{I_i}} & \frac{\partial {}^{I_i}\boldsymbol{a}}{\partial {}^{I_i}_{B}\hat{q}} & \boldsymbol{I}_{3} & \boldsymbol{0}_{3 \times 3} \\ \boldsymbol{0}_{3 \times 3} & \frac{\partial{}^{I_i}\boldsymbol{\omega} }{\partial {}^{I_i}_{B}\hat{q}} & \boldsymbol{0}_{3 \times 3} & \boldsymbol{I}_{3} \end{bmatrix}\]

where the measurement IMU Jacobians are

\[\begin{align} \frac{\partial {}^{I_i}\boldsymbol{a}}{\partial {}^{B}{\hat{\boldsymbol{p}}}_{I_i}} & = {}^{B}_{I_i}\boldsymbol{R}^T \left( \left\lfloor {}^{B}\hat{\boldsymbol{\alpha}} \right\rfloor_{\times} + \left\lfloor {}^{B}\hat{\boldsymbol{\omega}} \right\rfloor_{\times} \left\lfloor {}^{B}\hat{\boldsymbol{\omega}} \right\rfloor_{\times} \right) \\ \frac{\partial {}^{I_i}\boldsymbol{a}}{\partial {}^{I_i}_{B}\hat{q}} & = \left\lfloor {}^{B}_{I_i}\boldsymbol{R}^T \left( {}^{B}\hat{\boldsymbol{\alpha}} \times {}^{B}{\hat{\boldsymbol{p}}}_{I_i} + {}^{B}\hat{\boldsymbol{\omega}} \times {}^{B}\hat{\boldsymbol{\omega}} \times {}^{B}{\hat{\boldsymbol{p}}}_{I_i} + {}^{B}_{L}\boldsymbol{R}^T {}^{B}\hat{\boldsymbol{a}} \right) \right\rfloor_{\times} ~\mathcal{J}_R\left({}^{B}_{I_i}\hat{q}\right) \\ \frac{\partial{}^{I_i}\boldsymbol{\omega} }{\partial {}^{I_i}_{B}\hat{q}} & = \left\lfloor {}^{B}_{I_i}\boldsymbol{R}^T {}^{B}_{L}\boldsymbol{R}^T {}^{B}\hat{\boldsymbol{\omega}} \right\rfloor_{\times} ~\mathcal{J}_R\left({}^{B}_{I_i}\hat{q}\right) \end{align}\]

where \(\mathcal{J}_R\) is the right Jacobian of a quaternion. With the measurement residual \(\boldsymbol{y}_{I_i}\) and linearized observation matrix $\boldsymbol{H}$, the typical SREKF update is performed.
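For reference, the first sketch below assembles the body-state block $\boldsymbol{H}_B$ with a cross-product-matrix helper, using the error-state ordering assumed earlier (p, v, a, q, ω, α). The orientation columns and the quaternion right-Jacobian factor are omitted, and the gyro row uses the direct linearization of the rotated rate, so this is a simplified illustration rather than the paper's exact Jacobians.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix: skew(a) @ b == np.cross(a, b)."""
    x, y, z = v
    return np.array([[0.0,  -z,   y],
                     [  z, 0.0,  -x],
                     [ -y,   x, 0.0]])

def body_observation_block(R_imu_body, R_body_local, omega_hat, p_imu):
    """Simplified 6x18 Jacobian of one IMU's [accel; gyro] measurement with respect
    to the body error states, ordered [p, v, a, q, omega, alpha] (3 columns each)."""
    H_B = np.zeros((6, 18))
    R_imu_local = R_imu_body @ R_body_local
    # Accelerometer rows: body acceleration, angular rate, and angular acceleration.
    H_B[0:3, 6:9]   = R_imu_local                                          # d a / d a
    H_B[0:3, 12:15] = R_imu_body @ (skew(omega_hat) @ skew(p_imu).T
                                    + skew(np.cross(omega_hat, p_imu)).T)  # d a / d omega
    H_B[0:3, 15:18] = R_imu_body @ skew(p_imu).T                           # d a / d alpha
    # Gyroscope rows: body angular rate only (direct linearization of R @ omega).
    H_B[3:6, 12:15] = R_imu_local                                          # d omega / d omega
    return H_B
```

Given the residual and observation matrix, the correction itself is the standard Kalman update; a generic Joseph-form version is sketched below in place of EKF-CAL's SREKF implementation.

```python
import numpy as np

def ekf_update(x, P, y, H, R):
    """One Kalman measurement update: returns the corrected state and covariance."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ y                        # corrected state estimate
    I_KH = np.eye(P.shape[0]) - K @ H
    P_new = I_KH @ P @ I_KH.T + K @ R @ K.T  # Joseph-form covariance update
    return x_new, P_new
```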

Code

The simulation and experimental code are part of the open-source EKF-CAL repository.

Citation

A preprint of the published paper can be found on arXiv, the slides are available here, and the conference paper can be cited as follows

@inproceedings{2023_Multi_IMU_Cal,
  author    = {Hartzer, Jacob and Saripalli, Srikanth},
  booktitle = {2023 IEEE International Conference on Multisensor Fusion and Integration (MFI)},
  title     = {Online Multi-IMU Calibration Using Visual-Inertial Odometry},
  year      = {2023},
  pages     = {},
  doi       = {},
}

References

  1. M. Faizullin and G. Ferrer, “Best axes composition: Multiple gyroscopes IMU sensor fusion to reduce systematic error,” in 2021 European Conference on Mobile Robots (ECMR), pp. 1-7, 2021.
  2. U. N. Patel and I. A. Faruque, “Multi-IMU based alternate navigation frameworks: Performance and comparison for UAS,” IEEE Access, vol. 10, pp. 17565-17577, 2022.
  3. A. Larey, E. Aknin, and I. Klein, “Multiple inertial measurement units-an empirical study,” IEEE Access, vol. 8, pp. 75656-75665, 2020.
  4. M. Zhang, X. Xu, Y. Chen, and M. Li, “A lightweight and accurate localization algorithm using multiple inertial measurement units,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1508-1515, 2020.
  5. E. D’Amato, M. Mattei, A. Mele, I. Notaro, and V. Scordamaglia, “Fault tolerant low cost IMUs for UAVs,” in 2017 IEEE International Workshop on Measurement and Networking (M&N), pp. 1-6, 2017.
  6. K. Eckenhoff, P. Geneva, and G. Huang, “MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera Visual-Inertial Navigation System,” IEEE Transactions on Robotics, vol. 37, pp. 1360-1380, Oct. 2021.
  7. H. Huang, H. Zhang, and L. Jiang, “An optimal fusion method of multiple inertial measurement units based on measurement noise variance estimation,” IEEE Sensors Journal, vol. 23, no. 3, pp. 2693-2706, 2023.
  8. N. Carlson, “Federated filter for fault-tolerant integrated navigation systems,” in IEEE PLANS ’88, Position Location and Navigation Symposium Record: ‘Navigation into the 21st Century’, pp. 110-119, 1988.
  9. Q. Luo, X. Yan, Z. Zhou, C. Wang, and C. Hu, “An integrated navigation and localization system,” in 2021 IEEE International Conference on Smart Internet of Things (SmartIoT), pp. 28-32, 2021.
  10. L. Wang, H. Tang, T. Zhang, Q. Chen, J. Shi, and X. Niu, “Improving the navigation performance of the MEMS IMU array by precise calibration,” IEEE Sensors Journal, vol. 21, no. 22, pp. 26050-26058, 2021.
  11. J. Rehder, J. Nikolic, T. Schneider, T. Hinzmann, and R. Siegwart, “Extending Kalibr: Calibrating the extrinsics of multiple IMUs and of individual axes,” in 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 4304-4311, 2016.
  12. H. Carlsson, I. Skog, and J. Jaldén, “On-the-fly geometric calibration of inertial sensor arrays,” in 2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN), pp. 1-6, 2017.
  13. J. B. Bancroft, “Multiple imu integration for vehicular navigation,” in Proceedings of the 22nd International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS 2009), pp. 1828-1840, 2009.
  14. J. Lee, D. Hanley, and T. Bretl, “Extrinsic calibration of multiple inertial sensors from arbitrary trajectories,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 2055-2062, 2022.
  15. D. Kim, S. Shin, and I. S. Kweon, “On-line initialization and extrinsic calibration of an inertial navigation system with a relative preintegration method on manifold,” IEEE Transactions on Automation Science and Engineering, vol. 15, no. 3, pp. 1272-1285, 2018.
  16. Y. Yang, W. Lee, P. Osteen, P. Geneva, X. Zuo, and G. Huang, “iCalib: Inertial aided multisensor calibration,” in VINS Workshop 2021 at ICRA, Xi’an, China, 2021.
  17. M. Li, H. Yu, X. Zheng, and A. I. Mourikis, “High-fidelity sensor modeling and self calibration in vision-aided inertial navigation,” in 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 409-416, 2014.
  18. W. Lee, P. Geneva, C. Chen, and G. Huang, “MINS: Efficient and robust multisensor-aided inertial navigation system,” 2023.
  19. M. Bloesch, H. Sommer, T. Laidlow, M. Burri, G. Nützi, P. Fankhauser, D. Bellicoso, C. Gehring, S. Leutenegger, M. Hutter, and R. Y. Siegwart, “A primer on the differential calculus of 3d orientations,” ArXiv, vol. abs/1606.05285, 2016.