how to apply active contours image segmentation to a video?
Ben Timm
on 20 Jul 2020
Commented: Ben Timm on 21 Jul 2020
Hi all,
I have a number of videos with edge detection appropriately applied to a cluster of cells. I am trying to segment them so that afterwards I can measure their change in area over time, but I'm afraid I don't know how to apply the segmentation to a video.
I read in another post that you could apply the segmentation to the first frame and then use that frame as a reference for the others; would that work?
See the single frame attached to get an idea of what I am working with; unfortunately, the videos are too large to attach.
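To make the idea concrete, this is roughly the loop I was imagining (a rough, untested sketch: the file name, the initial rectangle, and the iteration counts are placeholders, and it assumes VideoReader and activecontour from the Image Processing Toolbox):
% Sketch: segment the first frame with activecontour, then reuse each
% frame's mask as the seed for the next frame.
v = VideoReader('cells.avi');            % placeholder file name
frame = rgb2gray(readFrame(v));          % assumes RGB frames
initMask = false(size(frame));
initMask(50:200, 50:200) = true;         % rough initial region -- adjust to your data
mask = activecontour(frame, initMask, 300, 'Chan-Vese');
areas = nnz(mask);                       % region area in pixels, per frame
while hasFrame(v)
    frame = rgb2gray(readFrame(v));
    % Seed with the previous frame's mask so the contour only has to move a little.
    mask = activecontour(frame, mask, 100, 'Chan-Vese');
    areas(end+1) = nnz(mask);            %#ok<AGROW>
end
plot(areas); xlabel('Frame'); ylabel('Area (pixels)');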
Thanks,
Ben
0 Comments
Accepted Answer
Image Analyst
on 20 Jul 2020
I have no idea what you want to do. With edge detection you will have lots of curves, most of which will not be closed contours. So what do you want to do with all those curves? Compare their curve lengths over time? If your cells are wiggling, then the curves will be drastically changing. Not only changing, but the label (ID number) of a given curve will change, making it extremely hard to track one particular curve. Imagine you just had 50 random curves in one frame. Now imagine in the next frame you had 62 curves with completely different coordinates. How do you match up the 50 in one frame with the 62 in the next? Tracking is not easy; even for me it would take several months of work to get something that still wouldn't be 100% robust. You might look into professional, commercial tracking software, or look into optical flow as a metric that hopefully means something relevant.
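If you do try optical flow, a minimal sketch might look like the following (this assumes the Computer Vision Toolbox; the file name is a placeholder and the mean flow magnitude is just one simple per-frame motion number, not a definitive recipe):
% Sketch: use optical flow magnitude as a per-frame motion metric.
v = VideoReader('cells.avi');            % placeholder file name
opticFlow = opticalFlowFarneback;        % or opticalFlowLK, opticalFlowHS, ...
meanMotion = [];
while hasFrame(v)
    frame = rgb2gray(readFrame(v));      % assumes RGB frames
    flow = estimateFlow(opticFlow, frame);
    % Average flow magnitude: a crude "how much did things move" number.
    meanMotion(end+1) = mean(flow.Magnitude(:)); %#ok<AGROW>
end
plot(meanMotion); xlabel('Frame'); ylabel('Mean flow magnitude');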
3 Comments
Image Analyst
on 20 Jul 2020
If the period of time is short enough that the centroids of the segmented regions don't move much, then you can match up the regions by comparing their centroids with the centroids in the prior frame using pdist2(). Whichever region in the other frame has the minimum distance is the same region. But this depends on them not moving much: the centroid can't move much more than about halfway out to the outer border of the region, or else you'll have problems matching them up.
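For example, something along these lines (a sketch, assuming you already have binary masks for the previous and current frames; pdist2() is in the Statistics and Machine Learning Toolbox):
% Sketch: match regions between two frames by nearest centroid.
% maskPrev and maskCurr are binary masks from whatever segmentation you settle on.
statsPrev = regionprops(maskPrev, 'Centroid');
statsCurr = regionprops(maskCurr, 'Centroid');
cPrev = vertcat(statsPrev.Centroid);     % N-by-2 [x y] centroids, previous frame
cCurr = vertcat(statsCurr.Centroid);     % M-by-2 [x y] centroids, current frame
% pdist2 gives an N-by-M distance matrix; for each previous region,
% take the closest region in the current frame.
d = pdist2(cPrev, cCurr);
[minDist, matchIdx] = min(d, [], 2);     % matchIdx(k) = current region matched to previous region k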
And all that depends on getting the regions segmented in the first place. Edge detection may or may not be a good way to segment regions, though usually it's not. Honestly, I don't know what's what in your images. Even if I tried to manually outline the regions, I'd have difficulty accurately tracking them. Obviously with edge detection the contours are not closed, so you will not have closed regions. It may be that you either have to
- manually trace each region with drawfreehand()
- train a deep learning network, like SegNet perhaps, to identify them
Even if you do #2, you'll still have to trace regions with drawfreehand() or the ImageLabeler app in order to train the network, but at least once you've done that for a few hundred images you won't have to do it anymore. See This Link. Unless automatic identification of regions is going to be a major part of your project, it might be faster to just do #1 and be done with it, rather than delve into the Deep Learning stuff, which could take you weeks or months to learn and perfect for your images.
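For option #1, a minimal sketch (drawfreehand() requires R2018b or later; frame is assumed to be one video frame you've already read in):
% Sketch: manually trace one region on a frame and measure its area.
imshow(frame);                 % display the frame you want to trace on
roi = drawfreehand;            % trace the region boundary with the mouse
mask = createMask(roi);        % logical mask of the traced region
areaInPixels = nnz(mask)       % area = number of pixels inside the trace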
More Answers (0)