Background segmentation for low-contrast images

8 views (last 30 days)
Amin Ghasemi
Amin Ghasemi on 23 Oct 2016
Edited: DGM on 17 Oct 2024
Hi guys, I'm new to image processing, so I'll ask my question simply. We have this image
and want to reach this one
by using Hough lines.
Any help will be appreciated.

Answers (2)

Omega
Omega on 17 Oct 2024
Hi Amin,
You can follow the general steps outlined below to perform background segmentation:
  1. Image Preprocessing: Convert the image to grayscale, as the Hough Transform works on single-channel images. Additionally, you can use histogram equalization ("histeq") to enhance the image contrast.
  2. Edge Detection: While there are various edge detection methods available, using the "Canny" edge detector is recommended for its effectiveness in identifying edges.
  3. Hough Transform and Line Detection: Apply the Hough Transform to detect lines in the edge-detected image. The "houghpeaks" function identifies the most prominent lines by finding peaks in the Hough accumulator array. You can then use "houghlines" to extract line segments based on these detected peaks. For better visualization, you can plot these lines over the original image.
  4. Background Segmentation: The final step involves segmenting the background using the detected lines, which is application-specific. You might use these lines to create masks or separate regions of interest, depending on your specific requirements.
In addition to the steps mentioned above, you may need to fine-tune the parameters to achieve the desired results.
You can refer to the MATLAB documentation for the functions mentioned above ("histeq", "edge", "hough", "houghpeaks", and "houghlines") for the implementation details.
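A rough sketch of steps 1-3 might look like the following; the file name and parameter values here are placeholders that you will need to adapt to your image.
% minimal sketch of steps 1-3 (file name and parameters are assumptions)
img = imread('image.png');
img = im2gray(img); % step 1: convert to a single-channel image
img = histeq(img); % optional contrast enhancement
edges = edge(img,'canny'); % step 2: Canny edge detection
% step 3: Hough transform, peak detection, and line extraction
[H,T,R] = hough(edges);
P = houghpeaks(H,5,'Threshold',ceil(0.3*max(H(:))));
lines = houghlines(edges,T,R,P,'FillGap',20,'MinLength',40);
% overlay the detected segments on the image
imshow(img); hold on
for k = 1:numel(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
end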
I hope this helps!

DGM
DGM on 17 Oct 2024
Edited: DGM on 17 Oct 2024
Let's start by doing what was asked.
% read and prepare the image
inpict = imread('image.png');
inpict = im2gray(inpict);
% create a rough mask
mask0 = inpict >= 25 & inpict <= 110;
mask0 = bwareafilt(mask0,1);
mask0 = imfill(mask0,'holes');
mask0 = imopen(mask0,strel('disk',11));
pmask = bwperim(mask0);
imshow(pmask)
% use hough transform to reduce the rough edges to approximate lines
[H,T,R] = hough(pmask);
P = houghpeaks(H,4,'threshold',ceil(0.3*max(H(:))));
lines = houghlines(mask0,T,R,P,'FillGap',1000,'MinLength',10);
imshow(inpict); hold on
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
    plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
    plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
end
% find intersections of extended lines
% this could be done mathematically using the line info
% but doing it this way avoids needing to figure out
% which lines need to be intersected with each other
sz = size(pmask);
p1 = vertcat(lines.point1);
th = vertcat(lines.theta);
E = false(sz);
% extend the lines in a logical mask
for k = 1:numel(lines)
    p0 = 1000*[cosd(th(k)+90) sind(th(k)+90)] + p1(k,:);
    p2 = 1000*[cosd(th(k)-90) sind(th(k)-90)] + p1(k,:);
    % brline() is a MIMT tool, but this version is portable
    E = E | brline_forum(sz,[p0; p2]);
end
% find the vertex locations
E = bwmorph(E,'branchpoints');
V = regionprops(E,'centroid');
V = vertcat(V.Centroid);
% sort by angle
C = mean(V,1);
th = atan2d(V(:,2)-C(2),V(:,1)-C(1)); % angle of each vertex about the centroid
[~,idx] = sort(th);
V = V(idx,:);
% create a mask based on the vertex list
mask = poly2mask(V(:,1),V(:,2),sz(1),sz(2));
% apply the mask to the image
%outpict = replacepixels(inpict,0,mask); % the MIMT way is easier and safer
outpict = inpict;
outpict(~mask) = 0;
figure
imshow(outpict)
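As an aside, brline_forum() is not a stock MATLAB function; it appears to be a portable stand-in for MIMT's brline() that accompanied the original post. A minimal sketch of such a helper, assuming it takes an image size and a 2x2 matrix of [x y] endpoints and returns a logical mask with a straight line drawn between them (clipped to the image bounds):
% hypothetical portable replacement for MIMT brline()
% sz is the output mask size; xy is a 2x2 matrix of [x y] endpoints
function mask = brline_forum(sz,xy)
    % sample the segment densely enough to leave no gaps
    n = ceil(max(abs(diff(xy,1,1)))) + 1;
    x = round(linspace(xy(1,1),xy(2,1),n));
    y = round(linspace(xy(1,2),xy(2,2),n));
    % keep only the samples that land inside the image
    valid = x >= 1 & x <= sz(2) & y >= 1 & y <= sz(1);
    mask = false(sz);
    mask(sub2ind(sz,y(valid),x(valid))) = true;
end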
The question is whether this is even necessary or warranted. If all we need to do is sample the values within the ROI, then we don't need to use houghlines. In fact, we probably don't need to create the composite image at all.
% read and prepare the image
inpict = imread('image.png');
inpict = im2gray(inpict);
% create a rough mask
mask0 = inpict >= 25 & inpict <= 110;
mask0 = bwareafilt(mask0,1);
mask0 = imfill(mask0,'holes');
mask0 = imopen(mask0,strel('disk',11));
% the values in the ROI are extracted using the mask alone
roivalues = inpict(mask0); % that's it. we're done.
% matting out the background serves no technical purpose
% this is purely a visualization
outpict = inpict;
outpict(~mask0) = 0;
imshow(outpict)
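Once the values are in hand, any summary statistics follow directly from that vector. For example:
% example only: basic statistics of the ROI values
roimean = mean(roivalues(:));
roistd = std(double(roivalues(:))); % std() requires a float class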
If the objects were fixtured so that they could be placed consistently with respect to the camera, you could just manually create the vertex list once and reuse it programmatically. That saves a lot of hassle and we can make the mask as clean as we want it to be.
% read the image
inpict = imread('image.png');
inpict = im2gray(inpict);
% create a mask based on the known vertex list
sz = size(inpict);
V = [114 16; 130 258; 366 259; 379 11];
mask = poly2mask(V(:,1),V(:,2),sz(1),sz(2));
% matting out the background serves no technical purpose
% this is purely a visualization
outpict = inpict;
outpict(~mask) = 0;
imshow(outpict)
There are two other points to make. First, note the intensity distribution of the image. It's clear that the image has been roughly quantized to 16 gray levels. It's difficult to say exactly what caused this; it could be the result of incorrectly decoding an image stored in an atypical binary format. Note also that while the image is a plain 8-bit PNG, it has very clearly been transcoded a few times, at least once as a very low-quality JPG. I would regard this image as garbage. Unless there's a good justification for all the damage that's evident, I wouldn't see the point in polluting any analysis with it. It's probably a good idea to back up and figure out what's going wrong with the image integrity before we make more junk images.
% let's look at the intensity distribution
subplot(2,1,1)
imhist(inpict(mask));
title('inside ROI')
subplot(2,1,2)
imhist(inpict(~mask));
title('outside ROI')
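As a quick check on the quantization claim, one can simply count the distinct gray levels present in the image:
% a heavily quantized 8-bit image will contain far fewer
% than 256 distinct gray levels
nlevels = numel(unique(inpict(:)))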
Lastly, is there a good reason to go to the trouble of finding the vertex list of a reduced quadrilateral mask instead of just using the raster mask (mask0)? Consider that we might not even care about making a mask if we can do this with the vertex list instead.
% read the image
inpict = imread('image.png');
inpict = im2gray(inpict);
% these are the coordinates of the box corners
boxm = [114 16; 130 258; 366 259; 379 11]; % [x y]
% define where the corners should land in the output
szo = fliplr(range(boxm,1)); % output image size as [rows cols]
boxf = [1 1; 1 szo(1); szo(2) szo(1); szo(2) 1]; % [x y]
% transform the ROI
TF = fitgeotrans(boxm,boxf,'projective');
outview = imref2d(szo);
outpict = imwarp(inpict,TF,'fillvalues',0,'outputview',outview);
imshow(outpict)
Depending on what we're trying to do, perspective correction might be the sensible thing to do -- or it might just be a waste of time.
