# Distance measurement using edge detection

Xen on 1 Feb 2015
Edited: Xen on 2 Feb 2015
Hi all, I would like an expert opinion on this problem. We use a simple rectangular phantom for quality assessment of an MRI system, and for this we take structural images to measure the phantom's dimensions (I have attached part of a sample image below, upper part). I wrote code to perform this automatically; the procedure can be summarized as follows:
1. edge detection using the Canny method
2. removal of unwanted edges around the image using a rectangular mask and reconstruction
3. morphological cleaning of smaller spurious edges
4. creation of a row and a column vector based on the binary image of edges (the red line at the lower image is to visualize which part of the image is copied into the row vector)
5. calculation of each dimension by subtracting the position of the first white pixel from the position of the last one, then multiplying by the pixel size (because the phantom can sometimes be slightly rotated, I don't use the bounding-box method)
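The profile-and-span measurement in steps 4–5 can be sketched as follows. This is a minimal illustration in Python/NumPy rather than the original MATLAB code; the function name and the toy phantom are hypothetical:

```python
import numpy as np

def measure_dimensions(edges, pixel_size_mm):
    """Steps 4-5: profile a binary edge image and convert the
    first-to-last white-pixel span into a physical length."""
    # Step 4: collapse the edge map into a row and a column profile
    row_profile = edges.any(axis=0)   # True where a column contains any edge pixel
    col_profile = edges.any(axis=1)   # True where a row contains any edge pixel

    def span(profile):
        idx = np.flatnonzero(profile)
        # Step 5: last white pixel minus first white pixel, times pixel size
        return (idx[-1] - idx[0]) * pixel_size_mm

    return span(row_profile), span(col_profile)

# Toy phantom: the hollow edge of a rectangle, at 1 mm/pixel
edges = np.zeros((10, 12), dtype=bool)
edges[2:7, 3:10] = True    # filled rectangle over rows 2-6, columns 3-9
edges[3:6, 4:9] = False    # hollow it out, leaving a 1-pixel border
width_mm, height_mm = measure_dimensions(edges, 1.0)
```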
Now, the code works as intended. However, I have run it over a large number of images and compared the values with those obtained manually using a simple graphical tool such as ImageJ, and I found that the manual measurements are on average roughly half a pixel larger than those obtained with the code. Inter-rater variation is very small, and the code produces appropriate edges for all images. My question is: why is that, and which is more trustworthy in this case, the MATLAB code measurements or the human measurements? Human measurements are usually treated as the ground truth in similar applications, but I feel that here this may not be the case. Should I use a different method, or would adding half a pixel size to each measurement make any sense? I've tried different edge detection methods and get the same thing. It's becoming really annoying!

David Young on 1 Feb 2015
Have you tried sub-pixel edge position estimation? If not, it would be interesting to see if it improves agreement. There's a subpixel version of Canny here.
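For readers unfamiliar with the idea, sub-pixel edge localization is commonly done by interpolating around the integer-pixel gradient peak. A minimal 1-D sketch in Python/NumPy for illustration (this is not the File Exchange code referred to above; the parabolic-fit variant shown here is just one standard approach):

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile to sub-pixel accuracy
    by fitting a parabola to the gradient peak and its two neighbours."""
    g = np.abs(np.gradient(profile.astype(float)))
    k = int(np.argmax(g))                 # integer-pixel gradient peak
    y0, y1, y2 = g[k - 1], g[k], g[k + 1]
    # Vertex of the parabola through the three gradient samples
    denom = y0 - 2 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return k + offset

# A ramp edge whose true midpoint lies between pixels 4 and 5
profile = np.array([0, 0, 0, 0, 0.3, 0.7, 1, 1, 1, 1])
edge_pos = subpixel_edge(profile)
```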
It also occurs to me that gamma correction applied at image display may affect where your observers choose to put the edge. Say three adjacent pixels have values 0, 180, 255. The jump from 0 to 180 might be perceived by a person as smaller than the jump from 180 to 255, depending on your monitor.
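This point can be checked with simple arithmetic. Assuming a display gamma of 2.2 (an assumption for illustration; actual monitors vary), the larger jump in code values turns out to be the smaller jump in displayed luminance:

```python
# Assumed display gamma of 2.2: displayed luminance = (code/255) ** 2.2
def luminance(code, gamma=2.2):
    return (code / 255) ** gamma

jump_low = luminance(180) - luminance(0)     # code-value step of 180
jump_high = luminance(255) - luminance(180)  # code-value step of only 75
```

Here `jump_low` is about 0.46 of full luminance while `jump_high` is about 0.54, so an observer may indeed place the edge differently than the gradient of the stored pixel values suggests.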
Xen on 2 Feb 2015
Edited: Xen on 2 Feb 2015
I also tried some sub-pixel approaches: (a) resizing the images (using the default bicubic interpolation) and applying my method to them, and (b) the method of Agustin Trujillo-Pino (Accurate subpixel edge location). Both give similar results, still far from the human measurements. I'll try your method and give you an update.
There is an important point to make, though. While sub-pixel estimation produces fewer 'bad' measurements (i.e. ones far from the corresponding human measurement) than the whole-pixel approach, the mean sub-pixel measurement is similar to that of the normal approach (which was somewhat expected). So there might be a statistical issue here; maybe comparing the overall means is not the appropriate way to validate this code.
I agree that the problem arises from the possibly false human perception of the true edge.
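One common alternative to comparing overall means is a Bland-Altman-style analysis of the paired differences, which separates the systematic bias from the scatter. A sketch with entirely made-up numbers (Python, illustrative only):

```python
import numpy as np

# Hypothetical paired measurements (mm) of the same images
manual = np.array([190.4, 190.1, 190.6, 190.3, 190.5])
code   = np.array([189.9, 189.8, 190.1, 189.7, 190.0])

diff = manual - code
bias = diff.mean()             # systematic offset between the two methods
loa = 1.96 * diff.std(ddof=1)  # half-width of the 95% limits of agreement
```

A small `loa` with a non-zero `bias` would indicate a consistent offset (as reported in this thread) rather than random disagreement.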

Image Analyst on 1 Feb 2015
Xen on 2 Feb 2015
Edited: Xen on 2 Feb 2015
I did check a few images; the problem seems to be what I described before: sometimes the raters choose the correct (according to the computer) edge, and other times the next pixel, which may still be grey but is not the actual edge. The mean difference is actually hardly noticeable (0.1-0.5% of the true dimension), and by visual comparison without zooming the measurements look identical; then again, I am also a human and can't really make an objective judgement of which is more correct! Unfortunately we rely on statistical tests for comparison, and those we use are pairwise, so they are very strict and such small variations make a difference.
I haven't asked them yet, although a proper assessment of automated code is supposed to involve 'blinded' human measurements, i.e. made without knowing the code's measurements or the other raters' measurements.
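On the strictness of pairwise tests: because the offset is consistent across images, even a sub-pixel bias yields a large paired t-statistic. A quick illustration with hypothetical per-image differences (Python; the numbers are made up):

```python
import numpy as np

# Hypothetical per-image differences (manual minus code), in mm
d = np.array([0.5, 0.4, 0.6, 0.5, 0.3, 0.5, 0.6, 0.4])

# Paired t-statistic: mean difference over its standard error
t = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
```

Even though each difference is well under a pixel, `t` comes out around 13 here, so a paired test will flag the two methods as significantly different however small the bias is in practical terms.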
