dbowLoopDetector

Detect loop closure using visual features

Since R2024b

Description

Use a dbowLoopDetector object to create a loop detector database.

Creation

Description

loopDetector = dbowLoopDetector() creates a loop detector database using a default internal vocabulary. It detects loop closures in visual simultaneous localization and mapping (vSLAM) using visual features.

loopDetector = dbowLoopDetector(bag) creates a loop detector database specified by feature descriptors in bag.
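
As a minimal sketch of both creation syntaxes (the vocabulary file name here is a placeholder):

% Create a detector that uses the default internal vocabulary.
loopDetector = dbowLoopDetector();

% Create a detector from a custom vocabulary loaded from disk.
bag = bagOfFeaturesDBoW("myVocabulary.bin.gz");
loopDetector = dbowLoopDetector(bag);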

Object Functions

addVisualFeatures - Add image features to database
detectLoop - Detect loop closure using visual features
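
For quick reference, these are the calling forms used in the examples that follow (a sketch; variable names are illustrative):

addVisualFeatures(loopDetector,viewId,features)
loopViewIds = detectLoop(loopDetector,features)
loopViewIds = detectLoop(loopDetector,features,covisViewIds,relativeThreshold)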

Examples

Load an existing binary visual vocabulary for loop detection.

bag = bagOfFeaturesDBoW("bagOfFeatures.bin.gz");

Initialize the loop detector using the loaded vocabulary.

loopDetector = dbowLoopDetector(bag);

Set the view ID for the 100th image in a sequence.

viewId = 100;

Read the image file, then detect ORB features in the image.

I = imread("cameraman.tif");
points = detectORBFeatures(I);

Extract ORB features from the detected points in the image. The extractFeatures function returns the features and their corresponding locations. This example uses only the features for loop closure detection, so it discards the locations.

[features,~] = extractFeatures(I,points);

Perform loop closure detection with the extracted features.

loopViewIds = detectLoop(loopDetector,features);

Update the loop detector's database with features from the new image.

addVisualFeatures(loopDetector,viewId,features);
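
In a complete vSLAM pipeline, you would typically check whether any candidates were returned before acting on them. A minimal sketch:

if ~isempty(loopViewIds)
    disp(loopViewIds) % view IDs of the loop closure candidates
end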

In this second example, detect loop closures by comparing against connected views with a relative score threshold. First, load a pre-existing binary vocabulary for feature description.

bag = bagOfFeaturesDBoW("bagOfFeatures.bin.gz");

Initialize the loop detector with the loaded vocabulary.

loopDetector = dbowLoopDetector(bag);

Use a single image to simulate adding different views.

I = im2gray(imread("cameraman.tif"));
points = detectORBFeatures(I);
[features,points] = extractFeatures(I,points);

Initialize an image view set to manage and store views.

vSet = imageviewset;

Add the first view with placeholder all-zero features for initialization.

zeroFeatures = binaryFeatures(zeros(size(points,1),32,"like",uint8(0)));
vSet = addView(vSet,1,"Features",zeroFeatures,"Points",points);
addVisualFeatures(loopDetector,1,zeroFeatures);

Sequentially add three views with actual features to simulate potential loop candidates.

for viewId = 2:4
    vSet = addView(vSet,viewId,"Features",features,"Points",points);
    addVisualFeatures(loopDetector,viewId,features);
end

Add two new connected views to the image sequence. First, add a previous view by cropping a section of the original image to simulate a camera view. The cropped image represents the previous frame in the sequence.

prevViewId = 5;
prevView = I(100:200,100:200);

Detect ORB features in the cropped frame with the number of pyramid levels set to 3. Add the features to the image view set as a new view.

prevPoints = detectORBFeatures(prevView,NumLevels=3);
[prevFeatures,prevPoints] = extractFeatures(prevView,prevPoints);
vSet = addView(vSet,prevViewId,"Features",prevFeatures,"Points",prevPoints);
addVisualFeatures(loopDetector,prevViewId,prevFeatures);

Add a current view created from another cropped section, and connect it to the previous view.

currViewId = 6;
currView = I(50:200,50:200);
currPoints = detectORBFeatures(currView,NumLevels=3);
[currFeatures,currPoints] = extractFeatures(currView,currPoints);
vSet = addView(vSet,currViewId,"Features",currFeatures,"Points",currPoints);
vSet = addConnection(vSet,prevViewId,currViewId,"Matches",[1:10; 1:10]');

Identify views connected to the current key frame.

covisViews = connectedViews(vSet,currViewId);
covisViewsIds = covisViews.ViewId;

Perform loop closure detection by comparing the current features against those from the connected views. Use 75% of the maximum similarity score among the connected views as the detection threshold.

relativeThreshold = 0.75;
loopViewIds = detectLoop(loopDetector,currFeatures,covisViewsIds,relativeThreshold);
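
As a follow-up sketch, you can look up any returned candidate IDs in the image view set, for example as a starting point for geometric verification of the loop closure:

if ~isempty(loopViewIds)
    candidateViews = findView(vSet,loopViewIds);
    disp(candidateViews.ViewId)
end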

References

[1] Gálvez-López, D., and J. D. Tardós. “Bags of Binary Words for Fast Place Recognition in Image Sequences.” IEEE Transactions on Robotics 28, no. 5 (October 2012): 1188–1197. https://doi.org/10.1109/TRO.2012.2197158.

Version History

Introduced in R2024b