Distance measurement using edge detection
Hi all, I would like to have an expert opinion on this problem. We use a simple rectangular phantom to perform quality assessment of an MRI system, and for this we take structural images to measure the dimensions of the phantom (part of a sample image is attached below, upper part). I wrote a code to perform this automatically; the procedure can be summarized as follows:
- edge detection using the Canny method
- removal of unwanted edges around the image using a rectangular mask and reconstruction
- morphological cleaning of smaller spurious edges
- creation of a row and a column vector based on the binary image of edges (the red line in the lower image visualizes which part of the image is copied into the row vector)
- calculation of each dimension by subtracting the position of the first white pixel from that of the last, then multiplying by the pixel size (because the phantom can sometimes be slightly rotated, I don't use the bounding-box method)
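The steps above can be sketched as follows. This is a minimal numpy/Python sketch of the measurement logic only, not the actual MATLAB code; the synthetic edge map, the pixel size, and the row position are all assumptions for illustration:

```python
import numpy as np

# Synthetic binary edge map of a rectangular phantom (assumed values):
# the phantom outline spans columns 3..16 and rows 2..9.
edges = np.zeros((12, 20), dtype=bool)
edges[2:10, 3] = True     # left edge
edges[2:10, 16] = True    # right edge
edges[2, 3:17] = True     # top edge
edges[9, 3:17] = True     # bottom edge

pixel_size_mm = 1.5       # assumed in-plane pixel spacing

# One row through the middle of the phantom (the "red line" in the figure).
row = edges[5, :]
cols = np.flatnonzero(row)

# Dimension = (position of last white pixel - position of first) * pixel size.
width_mm = (cols[-1] - cols[0]) * pixel_size_mm
print(width_mm)  # (16 - 3) * 1.5 = 19.5
```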
Now, the code works perfectly. However, I have run it over a large number of images and compared the values with those obtained manually using a simple graphical tool such as ImageJ, and I have found that the manual measurements are, on average, roughly half a pixel larger than those obtained with the code. Inter-rater variation is very small, and the code produces appropriate edges for all images. My question is: why is that? Which is more trustworthy in this case, the MATLAB measurements or the human measurements? Human measurements are usually treated as the ground truth in similar applications, but I feel that here this is not the case. Should I use a different method, or would adding half a pixel to each measurement make any sense? I have tried different edge detection methods and get the same result. It's becoming really annoying!
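One arithmetic point worth checking (an illustration of a possible cause, not something established in this thread): subtracting the first white-pixel index from the last measures the distance between pixel centres, whereas a human tracing the outer boundary of the phantom measures edge-to-edge, which is one full pixel (half a pixel per side) longer. With assumed edge positions and pixel size:

```python
# Illustrative numbers (assumed): edge pixels at columns 3 and 16,
# pixel size 1.5 mm.
first, last, px = 3, 16, 1.5

center_to_center = (last - first) * px        # what index subtraction gives
outer_to_outer = (last - first + 1) * px      # outer-boundary span
print(center_to_center, outer_to_outer)       # 19.5 21.0
```

Depending on where a rater clicks (pixel centre on one side, pixel border on the other), a manual reading can land anywhere between these two values, which would be consistent with an average offset of about half a pixel.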
Thank you in advance.
David Young on 1 Feb 2015
Have you tried sub-pixel edge position estimation? If not, it would be interesting to see if it improves agreement. There's a subpixel version of Canny here.
It also occurs to me that gamma correction applied on image display may affect where your observers choose to put the edge. Say three adjacent pixels have values 0, 180, 255. The jump from 0 to 180 might be perceived by a person as smaller than the jump from 180 to 255, depending on your monitor.
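A minimal sketch of one common sub-pixel technique, parabolic interpolation of the gradient peak along a 1-D intensity profile (this is only an illustration, not the linked subpixel Canny implementation; the profile values are made up):

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile with sub-pixel
    precision by fitting a parabola through the gradient maximum
    and its two neighbours."""
    g = np.abs(np.gradient(profile.astype(float)))
    i = int(np.argmax(g))
    pos = float(i)
    if 0 < i < len(g) - 1:
        denom = g[i - 1] - 2 * g[i] + g[i + 1]
        if denom != 0:
            # Vertex of the parabola through (i-1, i, i+1).
            pos = i + 0.5 * (g[i - 1] - g[i + 1]) / denom
    return pos

# Hypothetical intensity profile crossing a phantom edge.
profile = np.array([0, 0, 10, 60, 120, 128, 128], dtype=float)
pos = subpixel_edge(profile)  # slightly above pixel index 3
```

Applying this to a row of the raw image on each side of the phantom, instead of thresholded edge pixels, would give edge positions that are not quantized to the pixel grid.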
More Answers (1)
Image Analyst on 1 Feb 2015
It's not the algorithm I would have designed, but anyway... As to whether to trust the human or the computer, I would trust the computer. Usually the reason for using a computer is that humans are not repeatable or precise. If that is not the case, then you need to improve your algorithm to at least match the mean of several graders. I don't think you need to arbitrarily add some fudge factor/offset to your results if your algorithm is designed correctly. You can always show your results, such as the detected edge overlaid and zoomed in, and ask the expert human judges whether they agree with it or not. You might find that they agree with all your choices even though, if they did it on their own, they would have pointed to other places. Are they defining the edge locations while zoomed in or not? Realize that with improfile() or ginput() you only have as much resolution as is displayed on the screen. For example, if your image is really 3 times as wide as the screen, only a third of its pixels are displayed, so a user can only click on those; they can never be as accurate as your algorithm, which sees all pixels, not just the ones on screen.
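The on-screen quantization point can be made concrete (both widths below are assumed for illustration):

```python
image_width_px = 1536    # hypothetical full-resolution image width
screen_width_px = 512    # hypothetical width at which it is displayed
scale = image_width_px / screen_width_px

# Each on-screen pixel covers `scale` image pixels, so a manual click is
# quantized to steps of `scale` image pixels and can miss the true edge
# by up to half that step.
max_click_error = scale / 2
print(scale, max_click_error)  # 3.0 1.5
```

Zooming in until one image pixel maps to at least one screen pixel removes this particular source of error.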