How would I track the speed of the centroid of the moving objects?
Benjamin Dempsey
on 24 Feb 2016
Answered: olivia safitri
on 11 May 2018
function multiObjectTracking()
% Detect moving objects in a video and track them across frames.

obj = setupSystemObjects();
tracks = initializeTracks(); % Create an empty array of tracks.
nextId = 1; % ID of the next track

% Main loop: detect objects in each frame and update the tracks.
while ~isDone(obj.reader)
    frame = readFrame();
    [centroids, bboxes, mask] = detectObjects(frame);
    predictNewLocationsOfTracks();
    [assignments, unassignedTracks, unassignedDetections] = ...
        detectionToTrackAssignment();
    updateAssignedTracks();
    updateUnassignedTracks();
    deleteLostTracks();
    createNewTracks();
    displayTrackingResults();
end

function obj = setupSystemObjects()
    % Create objects for reading the video, displaying the results,
    % detecting the foreground, and analyzing connected components.
    obj.reader = vision.VideoFileReader('YourAVI.avi');
    obj.videoPlayer = vision.VideoPlayer('Position', [20, 400, 700, 400]);
    obj.maskPlayer = vision.VideoPlayer('Position', [740, 400, 700, 400]);
    obj.detector = vision.ForegroundDetector('NumGaussians', 3, ...
        'NumTrainingFrames', 40, 'MinimumBackgroundRatio', 0.7);
    obj.blobAnalyser = vision.BlobAnalysis('BoundingBoxOutputPort', true, ...
        'AreaOutputPort', true, 'CentroidOutputPort', true, ...
        'MinimumBlobArea', 400);
end

function tracks = initializeTracks()
    % Create an empty array of tracks.
    tracks = struct(...
        'id', {}, ...
        'bbox', {}, ...
        'kalmanFilter', {}, ...
        'age', {}, ...
        'totalVisibleCount', {}, ...
        'consecutiveInvisibleCount', {});
end

function frame = readFrame()
    frame = obj.reader.step();
end

function [centroids, bboxes, mask] = detectObjects(frame)
    % Segment the foreground, clean up the mask, and run blob analysis.
    mask = obj.detector.step(frame);
    mask = imopen(mask, strel('rectangle', [3,3]));
    mask = imclose(mask, strel('rectangle', [15, 15]));
    mask = imfill(mask, 'holes');
    [~, centroids, bboxes] = obj.blobAnalyser.step(mask);
end

function predictNewLocationsOfTracks()
    for i = 1:length(tracks)
        bbox = tracks(i).bbox;
        % Predict the current location of the track.
        predictedCentroid = predict(tracks(i).kalmanFilter);
        % Shift the bounding box so its center is at the predicted centroid.
        predictedCentroid = int32(predictedCentroid) - bbox(3:4) / 2;
        tracks(i).bbox = [predictedCentroid, bbox(3:4)];
    end
end

function [assignments, unassignedTracks, unassignedDetections] = ...
        detectionToTrackAssignment()
    nTracks = length(tracks);
    nDetections = size(centroids, 1);

    % Compute the cost of assigning each detection to each track.
    cost = zeros(nTracks, nDetections);
    for i = 1:nTracks
        cost(i, :) = distance(tracks(i).kalmanFilter, centroids);
    end

    % Solve the assignment problem.
    costOfNonAssignment = 20;
    [assignments, unassignedTracks, unassignedDetections] = ...
        assignDetectionsToTracks(cost, costOfNonAssignment);
end

function updateAssignedTracks()
    numAssignedTracks = size(assignments, 1);
    for i = 1:numAssignedTracks
        trackIdx = assignments(i, 1);
        detectionIdx = assignments(i, 2);
        centroid = centroids(detectionIdx, :);
        bbox = bboxes(detectionIdx, :);

        % Correct the estimate of the object's location using the new detection.
        correct(tracks(trackIdx).kalmanFilter, centroid);

        tracks(trackIdx).bbox = bbox;
        tracks(trackIdx).age = tracks(trackIdx).age + 1;
        tracks(trackIdx).totalVisibleCount = ...
            tracks(trackIdx).totalVisibleCount + 1;
        tracks(trackIdx).consecutiveInvisibleCount = 0;
    end
end

function updateUnassignedTracks()
    for i = 1:length(unassignedTracks)
        ind = unassignedTracks(i);
        tracks(ind).age = tracks(ind).age + 1;
        tracks(ind).consecutiveInvisibleCount = ...
            tracks(ind).consecutiveInvisibleCount + 1;
    end
end

function deleteLostTracks()
    if isempty(tracks)
        return;
    end

    invisibleForTooLong = 20;
    ageThreshold = 8;

    % Compute the fraction of each track's age for which it was visible.
    ages = [tracks(:).age];
    totalVisibleCounts = [tracks(:).totalVisibleCount];
    visibility = totalVisibleCounts ./ ages;

    % Find 'lost' tracks: young tracks with low visibility, or tracks
    % invisible for too long, and delete them.
    lostInds = (ages < ageThreshold & visibility < 0.6) | ...
        [tracks(:).consecutiveInvisibleCount] >= invisibleForTooLong;
    tracks = tracks(~lostInds);
end

function createNewTracks()
    centroids = centroids(unassignedDetections, :);
    bboxes = bboxes(unassignedDetections, :);

    for i = 1:size(centroids, 1)
        centroid = centroids(i,:);
        bbox = bboxes(i, :);

        % Create a Kalman filter object for the new track.
        kalmanFilter = configureKalmanFilter('ConstantVelocity', ...
            centroid, [200, 50], [100, 25], 100);

        newTrack = struct(...
            'id', nextId, ...
            'bbox', bbox, ...
            'kalmanFilter', kalmanFilter, ...
            'age', 1, ...
            'totalVisibleCount', 1, ...
            'consecutiveInvisibleCount', 0);

        tracks(end + 1) = newTrack;
        nextId = nextId + 1;
    end
end

function displayTrackingResults()
    frame = im2uint8(frame);
    % Convert the binary mask into an RGB image for display.
    mask = uint8(repmat(mask, [1, 1, 3])) .* 255;

    minVisibleCount = 8;
    if ~isempty(tracks)
        % Display only tracks that have been visible long enough.
        reliableTrackInds = ...
            [tracks(:).totalVisibleCount] > minVisibleCount;
        reliableTracks = tracks(reliableTrackInds);

        if ~isempty(reliableTracks)
            bboxes = cat(1, reliableTracks.bbox);
            ids = int32([reliableTracks(:).id]);

            % Mark tracks whose current bounding box is predicted
            % rather than detected.
            labels = cellstr(int2str(ids'));
            predictedTrackInds = ...
                [reliableTracks(:).consecutiveInvisibleCount] > 0;
            isPredicted = cell(size(labels));
            isPredicted(predictedTrackInds) = {' predicted'};
            labels = strcat(labels, isPredicted);

            frame = insertObjectAnnotation(frame, 'rectangle', ...
                bboxes, labels);
            mask = insertObjectAnnotation(mask, 'rectangle', ...
                bboxes, labels);
        end
    end

    obj.maskPlayer.step(mask);
    obj.videoPlayer.step(frame);
end

end
4 Comments
Matthew Eicholtz
on 6 Apr 2016
Just to be clear, this is not your code, correct? It is the MATLAB Example for Motion-Based Multiple Object Tracking?
Accepted Answer
Matthew Eicholtz
on 6 Apr 2016
If it is acceptable to compute velocities post hoc, I suggest the following edits to this code:
1. Add tracks as an output so you can process them from another script afterwards:
function tracks = multiObjectTracking()
...
2. In the initializeTracks() function, add a field in the tracks structure to store position over time (I suggest using an animatedline object). Also add a field to keep track of which tracks are active or not:
tracks = struct(...
...
'active',{}, ...
'position',{});
3. In the createNewTracks() function, instantiate an animated line when you create a new track and set the track to active:
newTrack = struct(...
...
'active',true, ...
'position',animatedline()); %you can add optional parameters to the animatedline here if you want
4. In the deleteLostTracks() function, instead of deleting tracks, simply set them to inactive:
Replace
lostInds = ...;
tracks = tracks(~lostInds);
with
lostInds = find(...);
for ii = 1:length(lostInds)
    tracks(lostInds(ii)).active = false;
end
5. In the displayTrackingResults() function, add a condition to the reliableTrackInds to check active state:
reliableTrackInds = ... & [tracks(:).active];
Then, add the centroid location to the animatedline object for each active track (I suggest after "mask = insertObjectAnnotation..."):
x = double(bboxes(:,1) + bboxes(:,3)/2);
y = double(bboxes(:,2) + bboxes(:,4)/2);
for ii = 1:length(reliableTracks)
    addpoints(tracks(reliableTracks(ii).id).position, x(ii), y(ii));
end
6. The last step is to make sure the functions predictNewLocationsOfTracks(), detectionToTrackAssignment(), updateAssignedTracks(), and updateUnassignedTracks() ignore the inactive tracks. There are many potential ways to do this; I'll leave this part to you (one possible sketch is appended at the end of this answer).
7. After the code has been edited, you should be able to run:
tracks = multiObjectTracking();
[x,y] = getpoints(tracks(4).position);
to get the centroid path for the 4th track, for example. The velocity can be computed easily enough with this information.
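As a minimal sketch of that last step (not part of the original answer): assuming one point was added per frame and the video frame rate is known (30 fps below is just a placeholder; use the actual rate of YourAVI.avi), the speed in pixels per second can be obtained by differencing the recorded points:
% Sketch: estimate speed from the recorded centroid path. Assumes one
% point per frame and a known frame rate (both are assumptions here).
[x, y] = getpoints(tracks(4).position);
fps = 30;                 % assumed frame rate of the video
dt = 1/fps;               % time between consecutive points
vx = diff(x) / dt;        % horizontal velocity, pixels/second
vy = diff(y) / dt;        % vertical velocity, pixels/second
speed = hypot(vx, vy);    % speed magnitude, pixels/second
plot(speed);
xlabel('Frame'); ylabel('Speed (pixels/s)');
Note that these are image-plane speeds in pixels; converting them to real-world units requires information about the camera and scene geometry (see the comments below).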
Hope this helps.
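As an aside on step 6 above, here is one possible way (a sketch, not part of the original answer): have each of those functions operate only on tracks whose active flag is true, for example in predictNewLocationsOfTracks():
% Possible sketch for step 6: loop only over active tracks.
function predictNewLocationsOfTracks()
    for i = find([tracks(:).active])
        bbox = tracks(i).bbox;
        predictedCentroid = predict(tracks(i).kalmanFilter);
        predictedCentroid = int32(predictedCentroid) - bbox(3:4) / 2;
        tracks(i).bbox = [predictedCentroid, bbox(3:4)];
    end
end
detectionToTrackAssignment() needs a little more care, because the rows of the cost matrix must still map back to the correct track indices; one option is to build the cost matrix over the active subset and translate the resulting assignment indices afterwards.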
2 Comments
George Kouskoulis
on 17 May 2016
Matt, I can't speak for Benjamin, but this helps me a lot!
If I understand this correctly, [x, y] is not the position of the centroid on the ground but its position in the image of each frame (e.g., [300, 100] is a specific pixel in the image, not a point on the ground).
How could I transform the pixel position that comes out of your code into the centroid's position on the ground, in order to compute the velocity?
Thank you in advance
George Kouskoulis
on 2 Jun 2016
Could the conversion from pixel coordinates to ground coordinates be done through camera calibration? Is there any other way?
Apart from the checkerboard method, how can I calibrate the camera? My video covers a large area, and I cannot find a huge checkerboard to place in the filmed area for the calibration.
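If the tracked objects move on a (roughly) flat ground plane, one common approach, sketched here as an illustration rather than taken from this thread, is to measure the ground coordinates of at least four reference points that are visible in the image, fit a projective transformation with fitgeotrans (Image Processing Toolbox), and apply it to the tracked centroids. The reference coordinates below are made up; you would measure them for your own scene:
% Sketch: map image (pixel) coordinates to ground coordinates with a
% projective transform fitted from known reference points.
% imagePts and groundPts are placeholder values for illustration only.
imagePts  = [102 540; 980 560; 930 120; 150 100];   % [col row] in pixels
groundPts = [0 0; 25 0; 25 40; 0 40];               % metres on the ground plane
tform = fitgeotrans(imagePts, groundPts, 'projective');

% Convert a tracked centroid path (from getpoints) to ground coordinates.
[x, y] = getpoints(tracks(4).position);
groundXY = transformPointsForward(tform, [x(:), y(:)]);

% Ground speed, assuming a known frame rate and one point per frame.
fps = 30;                            % assumed frame rate
v = diff(groundXY) * fps;            % metres per second, per axis
speed = hypot(v(:,1), v(:,2));       % ground speed in m/s
This only approximates positions for objects lying on that plane; for a non-flat scene, or for points well above the plane, a full camera calibration would be needed instead.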