Distance measurement using edge detection

Hi all, I would like to have an expert opinion on this problem. We use a simple rectangular phantom to perform quality assessment of an MRI system and for this we take structural images to measure the dimensions of the phantom (I have attached a part of a sample image below, upper part). I wrote a code to perform this automatically; the procedure can be summarized as follows:
  1. edge detection using the Canny method
  2. removal of unwanted edges around the image using a rectangular mask and reconstruction
  3. morphological cleaning of smaller spurious edges
  4. creation of a row and a column vector based on the binary edge image (the red line in the lower image shows which part of the image is copied into the row vector)
  5. calculation of each dimension by subtracting the position of the first white pixel from that of the last one, then multiplying by the pixel size (because the phantom can sometimes be slightly rotated, I don't use the bounding-box method)
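For concreteness, here is a minimal Python/NumPy sketch of steps 4 and 5 (the original MATLAB code isn't posted, so the function name and the use of a projection instead of a single copied row/column are my assumptions; for a clean rectangle outline they are equivalent):

```python
import numpy as np

def measure_dimensions(edge_img, pixel_size):
    """Measure phantom width/height from a binary edge image.

    edge_img: 2-D boolean array (e.g. a cleaned Canny edge map).
    pixel_size: physical size of one pixel (e.g. mm).
    Returns (width, height) as first-to-last edge-pixel distances.
    """
    # Step 4: collapse the edge map into a row and a column profile.
    row_profile = edge_img.any(axis=0)   # True where a column contains an edge
    col_profile = edge_img.any(axis=1)   # True where a row contains an edge

    # Step 5: last white pixel minus first white pixel, times pixel size.
    cols = np.flatnonzero(row_profile)
    rows = np.flatnonzero(col_profile)
    width = (cols[-1] - cols[0]) * pixel_size
    height = (rows[-1] - rows[0]) * pixel_size
    return width, height

# Synthetic example: a rectangle outline spanning columns 10..49, rows 5..34.
img = np.zeros((50, 60), dtype=bool)
img[5, 10:50] = img[34, 10:50] = True
img[5:35, 10] = img[5:35, 49] = True
print(measure_dimensions(img, pixel_size=1.0))  # (39.0, 29.0)
```

Note that the outline covers 40 columns but the first-to-last difference is 39 pixels, which is relevant to the discrepancy discussed below.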
Now, the code works perfectly. However, I have run it over a large number of images and compared the values with those obtained manually using a simple graphical tool such as ImageJ, and I have found that the manual measurements are on average roughly half a pixel larger than those obtained with the code. Inter-rater variation is very small, and the code produces appropriate edges for all images. My question is: why is that? Which is more trustworthy in this case, the MATLAB measurements or the human measurements? Human measurements are usually treated as the ground truth in similar applications, but I feel that is not the case here. Should I use a different method? Would it make any sense to add half a pixel to each measurement? I have tried different edge detection methods and get the same result. It's becoming really annoying!
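One plausible source of a systematic sub-pixel offset (an assumption on my part, not something established in this thread) is a convention mismatch: subtracting the first edge-pixel index from the last measures center-to-center, while a human placing cursors on the outer boundaries of the phantom measures edge-to-edge, which is one full pixel longer. If raters effectively split the difference on each side, the result lands about half a pixel above the code's value. A tiny 1-D illustration:

```python
import numpy as np

pixel_size = 1.0  # arbitrary units

# A 1-D "phantom": pixels 10..49 inclusive are bright (40 pixels wide).
profile = np.zeros(60)
profile[10:50] = 1.0

bright = np.flatnonzero(profile > 0.5)
center_to_center = (bright[-1] - bright[0]) * pixel_size        # 39.0
outer_edge_to_edge = (bright[-1] - bright[0] + 1) * pixel_size  # 40.0
print(center_to_center, outer_edge_to_edge)
```

The two conventions differ by exactly one pixel for the same underlying object; neither is "wrong", but they must not be mixed when comparing code against raters.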
Thank you in advance.

Accepted Answer

David Young
David Young on 1 Feb 2015
Have you tried sub-pixel edge position estimation? If not, it would be interesting to see if it improves agreement. There's a subpixel version of Canny here.
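To illustrate the idea (this is not the subpixel Canny linked above, just the common parabolic-peak refinement, sketched for a 1-D profile with a hypothetical function name):

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile to sub-pixel accuracy
    by fitting a parabola to the gradient magnitude around its peak."""
    g = np.abs(np.gradient(profile.astype(float)))
    i = int(np.argmax(g))
    if 0 < i < len(g) - 1:
        denom = g[i - 1] - 2 * g[i] + g[i + 1]
        if denom != 0:
            # Vertex of the parabola through the three samples around the peak.
            return i + 0.5 * (g[i - 1] - g[i + 1]) / denom
    return float(i)

# A ramp edge spread over two pixels: the true transition lies between samples.
profile = np.array([0, 0, 0, 0.3, 0.8, 1, 1, 1])
print(subpixel_edge(profile))  # between 3 and 4, not a whole pixel
```

The same refinement applied at both ends of the phantom yields a length estimate that is no longer quantized to whole pixels.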
It also occurs to me that gamma correction applied at image display may affect where your observers choose to put the edge. Say three adjacent pixels have values 0, 180 and 255. The jump from 0 to 180 might be perceived as smaller than the jump from 180 to 255, depending on your monitor.
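A quick numerical check of that effect, assuming a typical display gamma of 2.2 (the exact value depends on the monitor):

```python
# Display gamma makes equal code-value steps unequal in displayed luminance.
gamma = 2.2  # typical monitor gamma (assumption)
pixels = [0, 180, 255]
lum = [(v / 255) ** gamma for v in pixels]

print([round(v, 3) for v in lum])
# Code-value jumps are 180 vs 75, but the luminance jumps are nearly equal,
# so the observer may place the edge differently than the raw values suggest.
print(round(lum[1] - lum[0], 3), round(lum[2] - lum[1], 3))
```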
Xen on 2 Feb 2015
Edited: Xen on 2 Feb 2015
I also tried some sub-pixel approaches: (a) resizing the images (using the default bicubic interpolation) and applying my method to those, and (b) the method of Agustin Trujillo-Pino (Accurate subpixel edge location). I get similar results with both methods, still far from the human measurements. I'll try your method and give you an update.
There is an important point, though. While sub-pixel estimation gives fewer 'bad' measurements (i.e. far from the corresponding human measurement) than the pixel-level estimation, the mean sub-pixel measurement is similar to that of the normal approach (which was somewhat expected). So there might be a statistics issue here; maybe comparing the overall means is not the appropriate way to validate this code.
I agree that the problem arises from the possibly false human perception of the true edge.


More Answers (1)

Image Analyst
Image Analyst on 1 Feb 2015
It's not the algorithm I would have designed, but anyway... on whether to trust the human or the computer, I would trust the computer. Usually the reason for using a computer is that humans are not repeatable or precise. If that is not the case, then you need to improve your algorithm until it at least matches the mean of several graders. I don't think you need to add some arbitrary fudge factor/offset to your results if your algorithm is designed correctly. You can always show your results, such as the edge overlaid and zoomed in, and ask the expert human judges whether they agree with it or not. You might find that they agree with all your choices, even though on their own they would have pointed to other places. Are they defining the edge locations while zoomed in or not? Realize that with improfile() or ginput() you only have as much resolution as is displayed on the screen. For example, if your image is really three times as wide as the screen, only a third of its pixels are displayed; a user can only click on one of those, so they can never be as accurate as your algorithm, which sees all the pixels, not just those displayed on screen.
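The display-resolution point can be quantified with a bit of arithmetic (the image and screen widths below are hypothetical):

```python
# Manual clicking resolution when a large image is downsampled for display.
image_width_px = 3000    # actual image width (hypothetical)
display_width_px = 1000  # on-screen width it is rendered at (hypothetical)

# Each screen pixel covers several image pixels, so a click is quantized
# to the nearest displayed pixel, with up to half that span in error.
image_px_per_screen_px = image_width_px / display_width_px  # 3.0
worst_case_click_error = image_px_per_screen_px / 2         # 1.5 image pixels
print(image_px_per_screen_px, worst_case_click_error)
```

Zooming in before clicking shrinks this quantization error, which is one reason to ask how the raters made their measurements.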
Xen on 2 Feb 2015
Edited: Xen on 2 Feb 2015
I did check a few images; the problem seems to be what I described before: sometimes they choose the correct edge (according to the computer) and other times the next pixel, which could still be grey but is not the actual edge. The mean difference is actually hardly noticeable (0.1-0.5% of the true dimension), and by visual comparison without zooming the measurements seem identical; then again, I am also a human and can't really make an objective judgement of which is more correct! Unfortunately we rely on statistical tests for comparison, and the ones we use are pairwise, so they are very strict and such small variations make a difference.
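A paired comparison will indeed flag even a sub-pixel systematic offset once enough images are included, because the per-image differences are highly consistent. A sketch with entirely synthetic numbers (the dimension, pixel size, noise levels and sample size below are invented for illustration, not taken from this study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                   # number of images (hypothetical)
true_dim = 190.0          # mm, hypothetical phantom dimension

code = true_dim + rng.normal(0, 0.2, n)             # algorithm measurements
manual = code + 0.5 + rng.normal(0, 0.3, n)         # + half a 1 mm pixel bias

diff = manual - code
bias = diff.mean()                  # systematic offset between methods
t = bias / (diff.std(ddof=1) / np.sqrt(n))  # paired t statistic
print(round(bias, 2), round(t, 1))
```

Even though the bias is a fraction of a percent of the dimension, the paired statistic is large; comparing the two overall means, by contrast, mixes this offset with the between-image spread and is far less sensitive.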
I haven't asked them yet, although a proper assessment of automated code is supposed to involve 'blinded' human measurements, i.e. made without knowing the code's measurements or the other raters' measurements.

