estimateWorldCameraPose


(Not recommended) Estimate camera pose from 3-D to 2-D point correspondences

estimateWorldCameraPose is not recommended. Use the estworldpose function instead. For more information, see Compatibility Considerations.



[worldOrientation,worldLocation] = estimateWorldCameraPose(imagePoints,worldPoints,cameraParams) returns the orientation and location of a calibrated camera in a world coordinate system. The input worldPoints must be defined in the world coordinate system.

This function solves the perspective-n-point (PnP) problem using the perspective-three-point (P3P) algorithm [1]. The function eliminates spurious outlier correspondences using the M-estimator sample consensus (MSAC) algorithm. The inliers are the correspondences between image points and world points that are used to compute the camera pose.

[___,inlierIdx] = estimateWorldCameraPose(imagePoints,worldPoints,cameraParams) returns the indices of the inliers used to compute the camera pose, in addition to the arguments from the previous syntax.

[___,status] = estimateWorldCameraPose(imagePoints,worldPoints,cameraParams) additionally returns a status code to indicate whether there were enough points.

[___] = estimateWorldCameraPose(___,Name,Value) uses additional options specified by one or more Name,Value pair arguments, using any of the preceding syntaxes.
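As a sketch of these syntaxes combined, assuming imagePoints, worldPoints, and cameraParams are already in the workspace (the name-value settings below are illustrative, not recommended defaults):

```matlab
% Request all outputs and tighten the MSAC settings.
[worldOrientation,worldLocation,inlierIdx,status] = ...
    estimateWorldCameraPose(imagePoints,worldPoints,cameraParams, ...
    'MaxNumTrials',2000, ...       % more trials: more robust, slower
    'Confidence',99, ...           % confidence of finding the maximum inliers
    'MaxReprojectionError',2);     % inlier threshold, in pixels
```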


Examples

Load previously calculated world-to-image correspondences.

data = load('worldToImageCorrespondences.mat');

Estimate the world camera pose.

[worldOrientation,worldLocation] = estimateWorldCameraPose( ...
    data.imagePoints,data.worldPoints,data.cameraParams);

Plot the world points.

pcshow(data.worldPoints,'VerticalAxis','Y','VerticalAxisDir','down', ...
    'MarkerSize',30);
hold on
plotCamera('Size',10,'Orientation',worldOrientation, ...
    'Location',worldLocation);
hold off

Input Arguments


imagePoints — Coordinates of undistorted image points, specified as an M-by-2 array of [x,y] coordinates. The number of image points, M, must be at least four.

The function does not account for lens distortion. You can either undistort the images using the undistortImage function before detecting the image points, or you can undistort the image points themselves using the undistortPoints function.
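Either approach might look like the following sketch, where I and detectedPoints are placeholder variables for your image and detected point coordinates:

```matlab
% Option 1: undistort the whole image, then detect points in it.
J = undistortImage(I,cameraParams);

% Option 2: detect points in the distorted image and undistort
% only their coordinates.
imagePoints = undistortPoints(detectedPoints,cameraParams);
```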

Data Types: single | double

worldPoints — Coordinates of world points, specified as an M-by-3 array of [x,y,z] coordinates.

Data Types: single | double

cameraParams — Camera parameters, specified as a cameraParameters or cameraIntrinsics object. You can obtain a cameraParameters object by using the estimateCameraParameters function. The cameraParameters object contains the intrinsic, extrinsic, and lens distortion parameters of a camera.
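If you have the intrinsic parameters but no calibration session, you can construct a cameraIntrinsics object directly; the focal length, principal point, and image size below are hypothetical values:

```matlab
focalLength    = [800 800];    % [fx fy], in pixels
principalPoint = [320 240];    % [cx cy], in pixels
imageSize      = [480 640];    % [mrows ncols]
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);
```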

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: 'MaxNumTrials',1000

MaxNumTrials — Maximum number of random trials, specified as a positive integer scalar. The actual number of trials depends on the number of image and world points, and the values of the MaxReprojectionError and Confidence arguments. Increasing the number of trials improves the robustness of the output at the expense of additional computation.

Confidence — Confidence for finding the maximum number of inliers, specified as a scalar in the range (0,100). Increasing this value improves the robustness of the output at the expense of additional computation.

MaxReprojectionError — Reprojection error threshold for finding outliers, specified as a positive numeric scalar, in pixels. Increasing this value makes the algorithm converge faster, but can reduce the accuracy of the result. Correspondences with a reprojection error larger than MaxReprojectionError are considered outliers and are not used to compute the camera pose.

Output Arguments


worldOrientation — Orientation of the camera in world coordinates, returned as a 3-by-3 matrix.

Data Types: double

worldLocation — Location of the camera, returned as a 1-by-3 vector.

Data Types: double

inlierIdx — Indices of inlier points, returned as an M-by-1 logical vector. A logical true value indicates that the corresponding points in imagePoints and worldPoints are inliers.

status — Status code, returned as 0, 1, or 2.

Status code    Status
0              No error
1              imagePoints and worldPoints do not contain enough points. A minimum of four points is required.
2              Not enough inliers found. A minimum of four inliers is required.
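When you request the status output, the function returns the code instead of raising an error, so you can branch on it yourself. A minimal sketch, assuming imagePoints, worldPoints, and cameraParams are already defined:

```matlab
[orientation,location,inlierIdx,status] = estimateWorldCameraPose( ...
    imagePoints,worldPoints,cameraParams);
switch status
    case 0
        fprintf('Pose estimated from %d inlier correspondences.\n', ...
            nnz(inlierIdx));
    case 1
        warning('Not enough points: at least four are required.');
    case 2
        warning('Not enough inliers found.');
end
```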


[1] Gao, X.-S., X.-R. Hou, J. Tang, and H.-F. Cheng. "Complete Solution Classification for the Perspective-Three-Point Problem." IEEE Transactions on Pattern Analysis and Machine Intelligence. Volume 25, Issue 8, pp. 930–943, August 2003.

[2] Torr, P.H.S., and A. Zisserman. "MLESAC: A New Robust Estimator with Application to Estimating Image Geometry." Computer Vision and Image Understanding. 78, no. 1 (April 2000): 138–56.

Version History

Introduced in R2016b


R2022b: Not recommended

Starting in R2022b, most Computer Vision Toolbox™ functions create and perform geometric transformations using the premultiply convention. However, the estimateWorldCameraPose function uses the postmultiply convention. Although there are no plans to remove estimateWorldCameraPose at this time, you can streamline your geometric transformation workflows by switching to the estworldpose function, which supports the premultiply convention. For more information, see Migrate Geometric Transformations to Premultiply Convention.

To update your code:

  • Change instances of the function name estimateWorldCameraPose to estworldpose.

  • Specify the cameraParams argument as a cameraIntrinsics object. If you have a cameraParameters object, then you can get a cameraIntrinsics object by querying the Intrinsics property. If the Intrinsics property is empty according to the isempty function, then set the ImageSize property of the cameraParameters object to an arbitrary vector before querying the Intrinsics property. For example:

    load worldToImageCorrespondences.mat
    cameraParams.ImageSize = [128 128];
    intrinsics = cameraParams.Intrinsics;
  • Replace the two output arguments worldOrientation and worldLocation with a single output argument, worldPose. If you need to obtain the orientation matrix and location vector, then you can query the R and the Translation properties of the rigidtform3d object returned by the worldPose argument. Note that the value of R is the transpose of worldOrientation.

The table shows examples of how to update your code.

Discouraged Usage

This example estimates a camera pose using the estimateWorldCameraPose function with the cameraParams argument specified as a cameraParameters object.

[worldOrientation,worldLocation] = estimateWorldCameraPose( ...
    imagePoints,worldPoints,cameraParams);

Recommended Replacement

This example gets the camera intrinsics using the Intrinsics property of the cameraParameters object, then estimates a camera pose using the estworldpose function.

intrinsics = cameraParams.Intrinsics;
worldPose = estworldpose( ...
    imagePoints,worldPoints,intrinsics);

If you need to obtain the camera orientation and location, then you can query properties of worldPose.

worldOrientation = worldPose.R;
worldLocation = worldPose.Translation;

If you want the orientation in the postmultiply convention, take the transpose of worldPose.R.

worldOrientation = worldPose.R';