Another question: do I have to convert the image to grayscale first? In this demo the image is converted to grayscale first: https://www.mathworks.com/matlabcentral/fileexchange/25157-image-segmentation-tutorial
Steps to identify colors in images and classify them
13 views (last 30 days)
hafiz syazwan
on 8 Nov 2016
Commented: Image Analyst
on 14 Nov 2016
Hello, I'm a final-year student at a university. I'm currently taking my final-year project course, titled "Classifying tomato ripeness using a rule-based method (threshold)" (the title may sound weird, sorry). I have met with some of the lecturers in my faculty, and one of them suggested I follow certain steps, which can be described roughly as:
1. filter the image
2. feature extraction
3. classification
4. recognition
To be honest, I don't know what to do or how to go about it. I found some code online that enabled me to extract the RGB values:
rgbImage = imread('tomatoes\tomato.jpg');
[rows, columns, numberOfColorChannels] = size(rgbImage);
% Mean value of each color channel over the whole image.
findmean = mean(reshape(rgbImage, rows * columns, numberOfColorChannels))
% Open a text file for the per-pixel RGB listing (filename is just an example).
fid = fopen('rgb_values.txt', 'wt');
for col = 1 : columns
    for row = 1 : rows
        fprintf(fid, '%d, %d = (%d, %d, %d)\n', ...
            row, col, ...
            rgbImage(row, col, 1), ...
            rgbImage(row, col, 2), ...
            rgbImage(row, col, 3));
    end
end
fclose(fid);
But as for the steps after that, I just don't have a clue. I have read many threads about this topic, and most of them are answered. Call me a slow learner, but I just cannot grasp the concept and process of my project, which is becoming a big problem.
Based on what Image Analyst taught in his answers (he answered most of the questions I googled), the next step is to convert the RGB data of the image into LAB, but I just could not understand it fully. I should have stuck to my field, but there's no time for regrets now. I really hope someone can help me with this process.
Accepted Answer
Image Analyst
on 8 Nov 2016
You shouldn't use that code; there is no point in finding the means and printing out all the values. And there is no way you should convert to grayscale, because many, many colors will look identical once converted to grayscale. Here, look at this:
[Images: luminance-equalized versions of the ColorChecker Chart and a rainbow pattern (top), and the uniform grayscale result (bottom right)]
The original images, in full color, are the ColorChecker Chart and the rainbow pattern. I converted them to a perceptual space like XYZ, then made all the Y values the same, and got the pair of images on the top. This means that if you converted those images to XYZ, HSV, or LAB and looked at the Y, V, or L channel respectively, you'd get a uniform image, like the one shown in the bottom right. However, if you look at the top right, you can see that there are still lots of colors there, even though they all have the same intensity once converted to grayscale. This is why you need to do color classification and not convert to grayscale. What features are you using to determine ripeness? Just color? If so, convert to the HSV or LAB color space and compute Delta E. See my File Exchange: http://www.mathworks.com/matlabcentral/fileexchange/?term=authorid%3A31862
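For example, here is a minimal sketch of the LAB + Delta E idea. It assumes you have the Image Processing Toolbox for rgb2lab; the reference color and the tolerance below are made-up placeholders, not measured values.
% Convert the image to CIELAB and measure how far each pixel is from a
% reference "ripe" color (CIE76 Delta E = Euclidean distance in LAB).
rgbImage = imread('tomatoes\tomato.jpg');
labImage = rgb2lab(rgbImage);          % needs the Image Processing Toolbox
L = labImage(:, :, 1);
a = labImage(:, :, 2);
b = labImage(:, :, 3);

refLAB = [45, 60, 45];                 % placeholder LAB for a ripe-tomato red
deltaE = sqrt((L - refLAB(1)).^2 + (a - refLAB(2)).^2 + (b - refLAB(3)).^2);

ripeMask = deltaE < 25;                % tolerance of 25 is just an example
imshow(ripeMask);
title('Pixels close to the reference ripe color');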
5 comments
Image Analyst
on 14 Nov 2016
I'd get the mean LAB for a bunch of ripe photos. Then you can find the delta E of any image from that mean. Find out what the distribution of mean delta Es is for the ripe and non-ripe photos to determine how big the delta E can be before an image is not ripe.
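Roughly something like this; the folder name, file pattern, and threshold below are placeholders you would replace with your own training data:
% 1. Mean LAB color over a set of ripe training photos (folder name is an example).
ripeFiles = dir(fullfile('ripe', '*.jpg'));
ripeMeans = zeros(numel(ripeFiles), 3);
for k = 1 : numel(ripeFiles)
    labImage = rgb2lab(imread(fullfile('ripe', ripeFiles(k).name)));
    % Mean L, a, b over the whole image (a segmented tomato region would be better).
    ripeMeans(k, :) = squeeze(mean(mean(labImage, 1), 2))';
end
refLAB = mean(ripeMeans, 1);           % reference "ripe" LAB color

% 2. Delta E of a new image's mean color from that reference.
newLab  = rgb2lab(imread('tomatoes\tomato.jpg'));
newMean = squeeze(mean(mean(newLab, 1), 2))';
deltaE  = norm(newMean - refLAB);

% 3. Threshold chosen from the delta E distributions of ripe vs. non-ripe
%    training images (15 here is only an example value).
if deltaE < 15
    disp('Ripe');
else
    disp('Not ripe');
end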
More Answers (0)