MATLAB Answers

Image Processing: Find Edge with Highest Contrast

Philip on 16 Sep 2011
Answered: srinivasan on 9 Jan 2014
Is it possible to scan an image to find only the edge with the highest dynamic range between its left and right sides? At first I want to find a vertical edge that has a dark collection of pixels to its left, and a bright collection of pixels to its right (i.e. building | edge | sky). I will then extend this to "sky | edge | building", and the same for the horizontal direction.
The edges themselves should be strong edges, where an object meets a background, for example. It should not choose edges that are part of the texture of an object. I have been experimenting with segmentation to remove some of the detail first, but I'm not convinced that this is the most efficient approach...
I have tried using edge operators such as Sobel in the 'vertical' mode, but these operators only work on grayscale matrices... Since I lose a lot of pixel information when converting to grayscale, I would prefer to come up with a way of processing a colour image directly...
Your suggestions would be greatly appreciated!


Answers (2)

David Young on 16 Sep 2011
I think there are two main issues here. The first is how to distinguish between "texture" and "the boundary of an object". There's no absolutely reliable way to do this (when is an object an object and not a surface marking?) but the general way to tackle this with fairly straightforward methods is to use the idea of scale, implemented via image smoothing. This gets extended to the powerful concept of scale-space, which gets applied in lots of ways from resolution pyramids to SIFT features, so it's well worth getting to grips with.
Here's some code to look at, as a simple implementation of what you asked about. It identifies the position of maximum left-right contrast at a scale determined by the value of sigma - try changing this to see what happens. The code can be modified to find more large-contrast locations if necessary, and the change to horizontal rather than vertical edges is trivial.
im = imread('pout.tif'); % data
% smooth the image
sigma = 8; % how much to smooth
hmasksize = ceil(2.6 * sigma); % reasonable half mask size relative to sigma
masksize = 2*hmasksize + 1; % mask size odd number so it has a centre
mask = fspecial('gaussian', masksize, sigma);
imsmooth = conv2(double(im), mask, 'valid');
% find horizontal differences, to pick out vertical edges
hordiffs = imsmooth(:, 1:end-1) - imsmooth(:, 2:end);
% find the biggest absolute difference
[colmxs, rs] = max(abs(hordiffs),[],1);
[mx, c] = max(colmxs);
r = rs(c);
% correct for the trimming during the convolution
c = c + hmasksize;
r = r + hmasksize;
% show the peak location
imshow(im);
hold on;
plot(c, r, 'r^');
hold off;
The second issue is how to handle colour images. The two main possibilities are (a) to form a single intensity image, and then proceed as above, or (b) to independently find edges for the different colour planes, and then somehow combine them.
To do (a), assuming you're starting from an RGB image (converted to double data type) you could combine the colours something like this:
im = k1*rgb(:,:,1) + k2*rgb(:,:,2) + k3*rgb(:,:,3);
where k1, k2 and k3 are constants to be chosen by experiment or using machine learning. These will depend on the kind of colour contrasts you want to find - for example if the blue component is particularly important, k3 would be larger than the others.
For approach (b), you could apply edge detection separately to the r, g and b components (or to the h, s and v components, or whatever you want to use) and then see which result has the strongest edges.
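As a minimal sketch of approach (b), something like the following could work. (The test image, the choice of gradient magnitude as the edge-strength measure, and the variable names are my own illustrations, not from the original post.)

```matlab
% Sketch of approach (b): edge strength per colour plane, then combined.
rgb = im2double(imread('peppers.png'));        % example colour image
edgeStrength = zeros(size(rgb,1), size(rgb,2), 3);
for k = 1:3
    [gx, gy] = gradient(rgb(:,:,k));           % simple gradient of one plane
    edgeStrength(:,:,k) = hypot(gx, gy);       % gradient magnitude
end
% At each pixel, keep the strongest response across the three planes
combined = max(edgeStrength, [], 3);
imshow(combined, []);
```

The max across planes is only one way to combine the results; summing or averaging the per-plane magnitudes would weight the channels equally instead.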
There are more complex variants on all of this - to say more would need a little research project on the particular data you're working with.

  11 Comments

Philip on 19 Sep 2011
I think we may be mixing words here... I do not use regionprops to do this - rather, 'bwlabel' as mentioned. As I see it, the 'num' output can be considered a quantity because it is the number of connected objects found. I then refer to each connected edge object separately, but note that some of these objects will vary in size. The object corresponding to the edge running down the side of a building is likely to be longer than that of any edge associated with trees, as trees seem to be largely made up of multiple objects, rather than a singleton.
Yes, I believe the idea of blurring in 2D is to suppress noise and to reduce spurious edges...
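A rough sketch of the bwlabel-based filtering described above might look like this (the test image, the Sobel/vertical edge settings, and the variable names are purely illustrative):

```matlab
% Label connected edge pixels and keep only the largest object, on the
% assumption that a building edge forms one long connected component
% while tree texture breaks up into many small ones.
bw = edge(rgb2gray(imread('peppers.png')), 'sobel', [], 'vertical');
[labels, num] = bwlabel(bw);             % num = number of connected objects
counts = histc(labels(:), 1:num);        % pixel count per object
[~, biggest] = max(counts);              % index of the largest object
longestEdge = (labels == biggest);       % mask of the largest edge object
imshow(longestEdge);
```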
David Young on 22 Sep 2011
@Image Analyst: You're right - I needn't have done 2D blurring - 1D would have been sufficient for finding vertical edges. However, if edges at other orientations were needed, I guess the 2D blurred image might be useful. (The reason for blurring at all is to try to distinguish between "texture" and "object boundary" in the hope that the former is characterised by a smaller spatial scale.)
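For the record, the 1D version might look like this (a sketch only, reusing the smoothing parameters from my earlier example):

```matlab
% 1-D alternative: for vertical edges, smoothing along rows only is
% enough before taking horizontal differences.
im = double(imread('pout.tif'));
sigma = 8;
hmasksize = ceil(2.6 * sigma);
g = fspecial('gaussian', [1, 2*hmasksize + 1], sigma); % 1-D row mask
imsmooth = conv2(im, g, 'valid');        % smooths horizontally only
hordiffs = imsmooth(:, 1:end-1) - imsmooth(:, 2:end);
```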
David Young on 22 Sep 2011
@Philip: I support the suggestion of having a look at the Hough transform. It's a very useful idea to know about, at least.
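A minimal sketch of that suggestion, using the Image Processing Toolbox functions hough, houghpeaks and houghlines (the input image and the number of peaks here are illustrative choices):

```matlab
% Find the dominant straight lines in an edge map via the Hough transform.
bw = edge(rgb2gray(imread('peppers.png')), 'canny');
[H, theta, rho] = hough(bw);
peaks = houghpeaks(H, 5);                 % 5 strongest candidate lines
lines = houghlines(bw, theta, rho, peaks);
imshow(bw); hold on;
for k = 1:numel(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1), xy(:,2), 'LineWidth', 2, 'Color', 'green');
end
hold off;
```

Long, strong building edges should show up as prominent Hough peaks, while tree texture mostly will not, which fits the texture-versus-boundary distinction discussed above.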



srinivasan on 9 Jan 2014
How do I compare the extracted features with the image in order to perform object detection?

