Multi-Sensor Extrinsic Calibration using Factor Graph

In robotics, autonomous vehicles, and augmented reality systems, multiple sensors such as cameras, Inertial Measurement Units (IMUs), and LiDARs are mounted at different physical locations and orientations on the robot or vehicle.

Pose estimation and mapping workflows in a multi-sensor setup must use measurements from these different sensors. Each sensor measurement can be treated as a probabilistic constraint, and all such constraints together form an optimization problem within the pose estimation and mapping workflow.

However, because sensors are mounted in various locations and orientations, the pose representation differs for each sensor's frame of reference. To ensure a unified and consistent representation of the robot or vehicle's pose across all sensors, the pose must be expressed in a single designated sensor's frame, known as the base sensor frame. To use measurements from other (non-base) sensors as constraints, the pose represented in the base sensor frame must be transformed into the frame of the corresponding non-base sensor before applying those constraints. This transformation between the base sensor's frame and another sensor's frame is called a sensor transform. The process of estimating this sensor transform is known as extrinsic calibration.

Navigation Toolbox™ provides the factorGraph object to address pose estimation and mapping problems. In a factor graph, the variables to be estimated (such as the pose of the robot or vehicle, and landmark positions) are represented as nodes in a graph, and sensor measurement constraints are represented as edges in the graph, called factors. For more information about factor graphs and how to use them, see Factor Graph for SLAM.

Each factor encodes a constraint using data from a specific sensor, and the pose associated with that factor must be expressed in that sensor’s frame. If the pose is represented in the base sensor frame, a sensor transform is used to convert the pose into the appropriate sensor frame during optimization.

To support this, the factor graph provides sensor transform nodes, which store the transform between the base sensor frame and another sensor frame. A factor associated with a non-base sensor can be connected to a sensor transform node. The value stored in this node is then used to transform poses from the base sensor frame to that sensor's frame during optimization.
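As a minimal sketch of the factor graph workflow, the following code creates a graph with two SE(3) pose nodes connected by a relative-pose factor and then optimizes it. The node IDs and measurement values are illustrative placeholders.

```matlab
% Create an empty factor graph.
fg = factorGraph;

% Connect two SE(3) pose nodes (IDs 0 and 1) with a relative-pose factor.
% The measurement format is [x y z qw qx qy qz]: here, 1 m forward motion
% with no rotation (placeholder value).
f = factorTwoPoseSE3([0 1],Measurement=[1 0 0 1 0 0 0]);
addFactor(fg,f);

% Optimize the graph to estimate the pose node states.
optimize(fg);
```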

This table shows which factor object to use, based on the base sensor, the target sensor, and the measurement available from the target sensor data. Each factor object stores the target sensor measurements and supports estimating and refining the transform from the base sensor to the target sensor.

| Base Sensor | Target Sensor | Available Measurement from Target Sensor Data | Object to Use |
| --- | --- | --- | --- |
| IMU, GPS, or Camera | Lidar | Relative pose between pairs of sensor poses in SE(3) state space | factorTwoPoseSE3 |
| IMU, GPS, or Camera | Lidar | Relative position between a sensor pose in SE(3) state space and a 3-D landmark point | factorPoseSE3AndPointXYZ |
| IMU, GPS, or Lidar | Camera | Relative pose between pairs of sensor poses in SE(3) state space | factorTwoPoseSE3 |
| IMU, GPS, or Lidar | Camera | Visual projection of a 3-D landmark point onto the 2-D image plane as an image point position, given the camera pose in SE(3) state space | factorCameraSE3AndPointXYZ |
| GPS, Lidar, or Camera | IMU | Gyroscope and accelerometer readings | factorIMU |
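For example, when the target sensor is a lidar, the table points to factorTwoPoseSE3 for scan-to-scan relative poses and factorPoseSE3AndPointXYZ for lidar-observed landmark points. The following sketch creates both factor objects; all node IDs and measurement values are illustrative placeholders.

```matlab
% Relative pose between two lidar poses, [x y z qw qx qy qz],
% expressed in the lidar frame (placeholder value).
relPose = [0.5 0 0 1 0 0 0];
lidarOdomFactor = factorTwoPoseSE3([0 1],Measurement=relPose);

% Relative position of a 3-D landmark point observed from a lidar pose.
% Node ID 0 is the pose node, node ID 2 is the landmark point node.
landmarkFactor = factorPoseSE3AndPointXYZ([0 2],Measurement=[2 1 0.5]);
```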

Workflow for Sensor Transform Estimation and Refinement

The following steps outline the common workflow for estimating and refining sensor transforms using the factorGraph object in Navigation Toolbox™.

Estimate Initial Sensor Transform

  • Choose a base sensor, in whose frame of reference all trajectory poses are consistently represented.

  • Use measurements from the base sensor to create factor objects. These factors do not require a sensor transform node, as their poses are already in the base frame. However, if a sensor transform node is connected, fix it using the fixNode function to prevent it from being optimized.

  • Create factors using measurements from other sensors (non-base sensors). These factors expect poses to be expressed in their respective sensor frames. Since all poses of the robot or vehicle are stored in the base sensor frame, connect each of these factors to a sensor transform node. This node contains the transform from the base sensor frame to the frame of the specific non-base sensor used to create the factor. During optimization, the solver uses this transform to convert the base-frame pose into the respective non-base sensor frame to apply the constraints derived from the corresponding sensor measurements.

  • Perform extrinsic calibration by running factor graph optimization over a portion of the trajectory data. This involves fixing other variable nodes in the graph (like poses and landmark positions, if known from another source for this calibration step) and unfixing (freeing) the sensor transform nodes. The optimization process then estimates the sensor transform between the base sensor and each relevant non-base sensor.

  • Once an initial estimate is obtained, assign these estimated transforms to the SensorTransform property of each corresponding factor that connects to a sensor transform node.
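The calibration steps above can be sketched as follows. This is an illustrative outline, not a complete script: it assumes a factor graph fg that already contains pose nodes (IDs in poseIDs, known from another source) and a sensor transform node (ID tformID) connected to the non-base sensor factors.

```matlab
poseIDs = 0:10;              % pose node IDs (placeholder values)
tformID = 100;               % sensor transform node ID (placeholder value)

fixNode(fg,poseIDs);         % hold the known poses fixed for calibration
fixNode(fg,tformID,false);   % free the sensor transform node

optimize(fg);                % estimates the sensor transform

% Read the estimated transform state, [x y z qw qx qy qz].
tformEstimate = nodeState(fg,tformID);
```

You can then assign the estimated transform to the SensorTransform property of each factor that connects to the sensor transform node; the expected property type depends on the factor object, so check the reference page for the factor you use.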

Refine Sensor Transform Estimate

  • The factor graph optimization process can further refine these initial sensor transform estimates concurrently with other variables like poses and landmark positions.

  • To enable refinement of a specific sensor transform, use the fixNode function with the flag argument set to false for the corresponding sensor transform node. This allows the solver to adjust that node's value during optimization.
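The refinement step can be sketched as follows, again assuming a graph fg with placeholder node IDs poseIDs and tformID as in the calibration step.

```matlab
% Free both the sensor transform node and the pose nodes so the solver
% refines the transform jointly with the other variables.
fixNode(fg,tformID,false);
fixNode(fg,poseIDs,false);

optimize(fg);
```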