Capturing and Stitching Panoramic Images Using ArduCam Multi Camera Adapter Module

This example shows how to capture images using an ArduCam Multi Camera Adapter Module attached to a Raspberry Pi® board and stitch the images into a panoramic image using SURF feature detection and matching.

This example also shows how to deploy the MATLAB code for stitching the images on Raspberry Pi hardware as a standalone executable using MATLAB® Support Package for Raspberry Pi Hardware. The executable displays the result on a monitor connected to the same Raspberry Pi hardware.

To access all the files for this example, click the Open Live Script button and download the files attached with this example.

Overview

Feature detection and matching are powerful techniques used in many computer vision applications such as image registration, tracking, and object detection. This example uses feature-based techniques to automatically stitch a set of images. The procedure for image stitching is an extension of feature-based image registration, where instead of registering a single pair of images, you register multiple successive image pairs relative to each other to form a panorama.

Hardware Setup

  • Raspberry Pi 3 Model B+ with OpenCV libraries

  • ArduCam Multi Camera Adapter Module

  • Four 5MP OV5647 cameras

  • Monitor

Initialize Image Parameters

Define, in pixels, the dimensions of the images that you want to capture using the cameras attached to the ArduCam module. For this example, define a height of 480 pixels and a width of 640 pixels.

Capture a minimum of two images and a maximum of four images.

% Define image width and height.
imgHeight = 480;
imgWidth = 640;

% Define image resolution.
imgResolution = '640x480';

% Define number of images.
numImages = 4;

Create an image array to store the captured images.

imgArray = uint8(zeros(imgHeight,imgWidth,3,numImages));

Create a cell array of 2-D projective geometric transformation objects, one for each image, initialized to the identity transformation.

% Initialize one identity projective transformation per image.
tforms = repmat({projtform2d(eye(3,'single'))},1,numImages);

Create an AlphaBlender System object™ to overlay the images.

% Creates an alpha blender System object
blender = vision.AlphaBlender('Operation', 'Binary mask', ...
    'MaskSource', 'Input port');

Create a raspi and an arducam object.

% Create a raspi object.
raspiObj = raspi('bgl-arducam-raspi','pi','raspberry');

% Create arducam object
camObj = arducam(raspiObj,'MultiCamAdapter','Resolution',imgResolution);
availableCameras = {'CAMERA A','CAMERA B','CAMERA C','CAMERA D'};

Initialize the Panorama and Register Image Pairs

Create an empty panorama variable to which you will map all the captured images later in the example. For this example, based on the hardware setup, set the canvas of the panorama to twice the dimensions of the captured images.

panoramaImgHeight = 2*imgHeight;
panoramaImgWidth = 2*imgWidth;
panorama = uint8(zeros(panoramaImgHeight, panoramaImgWidth, 3));

To create the panorama, start by registering successive image pairs using the following procedure:

  1. Detect and match features between I(n) and I(n-1).

  2. Estimate the geometric transformation T(n) that maps I(n) to I(n-1).

  3. Compute the transformation that maps I(n) into the panorama image as T(1)*T(2)*...*T(n-1)*T(n).

% Capture the first image and convert it to grayscale.
selectCamera(camObj,availableCameras{1});
imgArray(:,:,:,1) = snapshot(camObj);
grayImage = im2gray(imgArray(:,:,:,1));

% Detect SURF features for the first image and extract features.
points = detectSURFFeatures(grayImage);
[features, points] = extractFeatures(grayImage,points, 'Method','SURF');

% Iterate over remaining images
for n = 2:numImages
    % Store points and features for image (n-1).
    pointsPrevious = points;
    featuresPrevious = features;

    % Capture an image.
    selectCamera(camObj,availableCameras{n});
    imgArray(:,:,:,n) = snapshot(camObj);

    % Convert image to grayscale.
    grayImage = im2gray(imgArray(:,:,:,n));

    % Detect and extract SURF features for the captured images.
    points = detectSURFFeatures(grayImage);
    [features, points] = extractFeatures(grayImage, points);

    % Find correspondences between current image and previous image.
    indexPairs = matchFeatures(features, featuresPrevious, 'Unique', true);

    % Extract the matched point locations.
    matchedPoints  = points.Location(indexPairs(:,1),:);
    matchedPointsPrev = pointsPrevious.Location(indexPairs(:,2),:);

    % Estimate the transformation between current image and previous image.
    tforms{n} = estgeotform2d(matchedPoints, matchedPointsPrev,...
        'projective', 'Confidence', 99.9, 'MaxNumTrials', 5000);

    % Compute T(1) * T(2) * ... * T(n-1) * T(n).
    tforms{n}.A = tforms{n-1}.A * tforms{n}.A;
end

At this point, all the transformations in tforms are relative to the first image. This was a convenient way to code the image registration procedure because it allowed sequential processing of all the images. However, using the first image as the start of the panorama does not produce the most aesthetically pleasing panorama because it tends to distort most of the images that form the panorama. You can create a more aesthetically pleasing panorama by modifying the transformations such that the center of the scene is the least distorted. You can accomplish this by inverting the transformation for the center image and applying that transformation to all the other images.

Start by using the projtform2d (Image Processing Toolbox) outputLimits method to find the output limits for each transformation. Then use the output limits to automatically find the image that is roughly at the center of the scene.

% Compute the output limits for each transformation.
xlim = zeros(numel(tforms),2);
ylim = zeros(numel(tforms),2);
for i = 1:numel(tforms)
    [xlim(i,:), ylim(i,:)] = outputLimits(tforms{i}, [1 imgWidth], [1 imgHeight]);
end

Next, compute the average X limits for each transformation and find the image that is in the center. This example uses only the X limits because the scene for which you are generating the panorama is horizontal. If you are using a different set of images, you might need the X and Y limits to find the center image.

% Compute the average X limits for each transformation and find the image
% that is in the center. Only the X limits are used here because the
% cameras are horizontally placed.
avgXLim = mean(xlim, 2);
[~,idx] = sort(avgXLim);
centerIdx = floor((numel(tforms)+1)/2);
centerImageIdx = idx(centerIdx);

Apply the inverse transformation of the center image to all the others.

% Apply the center image's inverse transformation to all the others.
Tinv = invert(tforms{centerImageIdx});
for i = 1:numel(tforms)
    tforms{i}.A = Tinv.A * tforms{i}.A;
end

Compute the Size of the Panorama

Use the outputLimits method to compute the minimum and maximum output limits over all transformations. Then use these values to automatically compute the size of the panorama.

% Use the outputLimits method to compute the minimum and maximum output
% limits over all transformations.
for i = 1:numel(tforms)
    [xlim(i,:), ylim(i,:)] = outputLimits(tforms{i}, [1 imgWidth], [1 imgHeight]);
end

% Find the minimum and maximum output limits.
xMin = min([1; xlim(:)]);
xMax = max([imgWidth; xlim(:)]);

yMin = min([1; ylim(:)]);
yMax = max([imgHeight; ylim(:)]);

% Create a 2-D spatial reference object defining the size of the panorama.
xLimits = [xMin xMax];
yLimits = [yMin yMax];
panoramaView = imref2d([panoramaImgHeight panoramaImgWidth], xLimits, yLimits);

Create the Panorama

Use imwarp (Image Processing Toolbox) to map images into the panorama and use vision.AlphaBlender (Computer Vision Toolbox) to overlay the images. The field of view of each of the cameras is less than 60 degrees. Therefore, to have a common region in each of the captured images, place the cameras close to each other.

% Create the panorama.
for i = 1:numImages

    % Transform the image into the panorama.
    warpedImage = imwarp(imgArray(:,:,:,i), tforms{i}, 'OutputView', panoramaView);

    % Generate a binary mask.
    mask = imwarp(true(imgHeight,imgWidth), tforms{i}, 'OutputView', panoramaView);

    % Overlay the warpedImage onto the panorama.
    panorama = blender(panorama, warpedImage, mask);
end

% Display individual images
montage(imgArray)

% Display the panoramic image
displayImage(raspiObj,panorama);

To continuously capture images and update the panorama, you can add the code in the Initialize the Panorama and Register Image Pairs and Create the Panorama sections in a while loop.
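For reference, a minimal sketch of such a loop is shown below. It assumes the raspi, arducam, and AlphaBlender objects and the tforms and panoramaView variables created earlier in this example, and it reuses the estimated transformations between iterations because the camera geometry is fixed.

```matlab
% Sketch: continuously capture images and rebuild the panorama.
while true
    % Reset the panorama canvas.
    panorama = uint8(zeros(panoramaImgHeight, panoramaImgWidth, 3));
    for n = 1:numImages
        % Capture a fresh image from each camera.
        selectCamera(camObj, availableCameras{n});
        imgArray(:,:,:,n) = snapshot(camObj);

        % Warp the image and its mask into the panorama and blend.
        warpedImage = imwarp(imgArray(:,:,:,n), tforms{n}, 'OutputView', panoramaView);
        mask = imwarp(true(imgHeight,imgWidth), tforms{n}, 'OutputView', panoramaView);
        panorama = blender(panorama, warpedImage, mask);
    end
    % Update the display on the attached monitor.
    displayImage(raspiObj, panorama);
end
```

If the cameras can move relative to each other, move the feature detection and transformation estimation code into the loop as well so that tforms is recomputed for every capture.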

clear raspiObj camObj

Create a Hardware Configuration Object

Create a hardware configuration object by using the targetHardware function in the MATLAB Command Window.

board = targetHardware('Raspberry Pi');

Verify the DeviceAddress, Username, and Password properties listed in the output. If required, change the value of the properties by using the dot notation syntax.

For example, to change the device address to 192.168.0.1, enter this code.

board.DeviceAddress = '192.168.0.1'

Deploy MATLAB Function on Hardware

Download the function raspi_arducam_panorama.m attached with this example and deploy it as a standalone executable on the hardware by using the deploy function. To access the file, click the Open Live Script button.

deploy(board,'raspi_arducam_panorama.m');

You can incorporate additional techniques into the example to improve the blending and alignment of the panorama images [1].
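One such technique is feathered (weighted) blending, which fades each image out toward its border instead of switching abruptly between images at the seams. This sketch replaces the binary-mask overlay with a distance-weighted average; it assumes the imgArray, tforms, and panoramaView variables from earlier in this example, and bwdist from Image Processing Toolbox.

```matlab
% Sketch: feathered blending of the warped images.
accum = zeros(panoramaImgHeight, panoramaImgWidth, 3);
wsum  = zeros(panoramaImgHeight, panoramaImgWidth);
for i = 1:numImages
    warpedImage = imwarp(imgArray(:,:,:,i), tforms{i}, 'OutputView', panoramaView);
    mask = imwarp(true(imgHeight,imgWidth), tforms{i}, 'OutputView', panoramaView);

    % Weight each pixel by its distance from the image border so that
    % overlapping regions blend smoothly.
    w = double(bwdist(~mask));
    accum = accum + double(warpedImage) .* w;
    wsum  = wsum + w;
end

% Normalize by the total weight at each pixel.
featheredPanorama = uint8(accum ./ max(wsum, 1));
```

Multi-band blending and gain compensation, as described in [1], can further reduce visible exposure differences between the cameras.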

References

[1] Matthew Brown and David G. Lowe. 2007. Automatic Panoramic Image Stitching using Invariant Features. Int. J. Comput. Vision 74, 1 (August 2007), 59-73.