- Using a single datastore for the whole dataset by setting the IncludeSubfolders option and setting LabelSource to foldernames when creating the datastore. This ensures that images from every subfolder are part of the datastore and that each image is assigned a label according to the subfolder it is in. It also lets you use the Labels property to create Y_train.
- Setting the OutputAs option to rows so that the activations function outputs the correct dimension, i.e. a matrix with one row per image.
I am getting an ''Error using classreg.learning.FullClassificationRegressionModel.prepareDataCR'' error in my diabetic retinopathy classification code
% Diabetic Retinopathy Classification Code
% 1. Read the files and load the data
healthy_path = 'Healthy\'; % Directory containing the Healthy images
mild_DR_path = 'Mild DR\'; % Directory containing the Mild DR images
moderate_DR_path = 'Moderate DR\'; % Directory containing the Moderate DR images
proliferate_DR_path = 'Proliferate DR\'; % Directory containing the Proliferate DR images
severe_DR_path = 'Severe DR\'; % Directory containing the Severe DR images
healthy_images = imageDatastore(healthy_path);
mild_DR_images = imageDatastore(mild_DR_path);
moderate_DR_images = imageDatastore(moderate_DR_path);
proliferate_DR_images = imageDatastore(proliferate_DR_path);
severe_DR_images = imageDatastore(severe_DR_path);
% 2. Visualize the data
figure;
subplot(2,3,1);
imshow(readimage(healthy_images,1));
title('Healthy');
subplot(2,3,2);
imshow(readimage(mild_DR_images,1));
title('Mild DR');
subplot(2,3,3);
imshow(readimage(moderate_DR_images,1));
title('Moderate DR');
subplot(2,3,4);
imshow(readimage(proliferate_DR_images,1));
title('Proliferate DR');
subplot(2,3,5);
imshow(readimage(severe_DR_images,1));
title('Severe DR');
% 3. Load the VGG-16 model
net = vgg16;
% 4. Preprocess the training data using the CNN as a feature extractor
healthy_features = activations(net, healthy_images, 'fc7');
mild_DR_features = activations(net, mild_DR_images, 'fc7');
moderate_DR_features = activations(net, moderate_DR_images, 'fc7');
proliferate_DR_features = activations(net, proliferate_DR_images, 'fc7');
severe_DR_features = activations(net, severe_DR_images, 'fc7');
% Check the sizes of the class feature vectors
disp('Size of healthy_features:');
disp(size(healthy_features));
disp('Size of mild_DR_features:');
disp(size(mild_DR_features));
disp('Size of moderate_DR_features:');
disp(size(moderate_DR_features));
disp('Size of proliferate_DR_features:');
disp(size(proliferate_DR_features));
disp('Size of severe_DR_features:');
disp(size(severe_DR_features));
% Bring all class feature vectors to the same size
min_size = min([size(healthy_features, 1), size(mild_DR_features, 1), size(moderate_DR_features, 1), size(proliferate_DR_features, 1), size(severe_DR_features, 1)]);
healthy_features = healthy_features(1:min_size, :);
mild_DR_features = mild_DR_features(1:min_size, :);
moderate_DR_features = moderate_DR_features(1:min_size, :);
proliferate_DR_features = proliferate_DR_features(1:min_size, :);
severe_DR_features = severe_DR_features(1:min_size, :);
% Concatenate the resized feature vectors
X_train = [healthy_features, mild_DR_features, moderate_DR_features, proliferate_DR_features, severe_DR_features];
% Create the labels
Y_train = [ones(min_size, 1),2*ones(min_size, 1),3*ones(min_size, 1), 4*ones(min_size, 1), 5*ones(min_size, 1)];
% 6. Train the model
svm_model = fitcecoc(X_train, Y_train); % this is where I get the error specified below
X and Y do not have the same number of observations.
classreg.learning.FullClassificationRegressionModel.prepareDataCR(...
this.PrepareData(X,Y,this.BaseFitObjectArgs{:});
this = fit(temp,X,Y);
obj = ClassificationECOC.fit(X,Y,varargin{:});
% 7. Create the test data
test_path = 'path/to/test'; % Directory containing the test files
test_images = imageDatastore(test_path);
% 8. Preprocess the test data
test_features = activations(net, test_images, 'fc7');
% 9. Classify using the model
Y_pred = predict(svm_model, test_features);
% 10. Visualize the results
figure;
subplot(1,2,1);
imshow(readimage(test_images,1));
title(['True Label: ' num2str(Y_test(1))]);
subplot(1,2,2);
imshow(readimage(test_images,2));
title(['Predicted Label: ' num2str(Y_pred(1))]);
% 11. Evaluate the model's accuracy
Y_test = test_images.Labels; % Get the true labels
accuracy = sum(Y_pred == Y_test) / numel(Y_test);
disp(['Accuracy: ' num2str(accuracy)]);
Answers (1)
Malay Agarwal
on 12 Feb 2024
Edited: Malay Agarwal
on 14 Feb 2024
I understand that you're trying to use VGG-16 as a feature extractor for your images and then use an SVM classifier to classify those images.
Reading the error message, it says X and Y do not have the same number of observations. To use the fitcecoc function, the features and the labels must have the same number of observations. If there are m images, then X should be m-by-n, where n is the number of features, and Y should be m-by-1.
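As a toy illustration of this shape requirement (random data, purely hypothetical, not retinal features):

```matlab
% 10 observations with 4096 features each, and 10 matching labels.
X = rand(10, 4096);                % m-by-n feature matrix
Y = categorical(randi(5, 10, 1));  % m-by-1 label vector
mdl = fitcecoc(X, Y);              % sizes match, so this trains without error
```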
In the code above, X (X_train) is a single row vector, i.e. one observation, while Y (Y_train) is a 1-by-5 row vector, i.e. five observations. Please consider the explanation below to understand why this is so.
Executing the following code:
healthy_features = activations(net, healthy_images, 'fc7');
The output is 1-by-1-by-4096-by-m, since the fc7 layer in VGG-16 has 4096 output units and there are m healthy images. The first two dimensions are supposed to represent the height and width respectively, but since this is a fully-connected layer, the height and width are 1. The other feature vectors have similar dimensions and differ only in the last dimension, since each class has a different number of images. This behavior of the activations function is documented here: https://www.mathworks.com/help/deeplearning/ref/seriesnetwork.activations.html#mw_b65e2845-af4a-42ab-ae94-cbd612c5324b.
Now, the following code:
min_size = min([size(healthy_features, 1), size(mild_DR_features, 1), size(moderate_DR_features, 1), size(proliferate_DR_features, 1), size(severe_DR_features, 1)]);
Sets min_size to 1 since the first dimension of each feature vector is 1. Then, the code:
healthy_features = healthy_features(1:min_size, :);
Flattens the feature vector into a 1-by-(4096·m) row vector, since indexing with a colon collapses the trailing dimensions. Similarly, the other feature vectors are flattened. The code:
X_train = [healthy_features, mild_DR_features, moderate_DR_features, proliferate_DR_features, severe_DR_features];
Concatenates all the flattened vectors into a single 1-by-(4096·M) row vector, where M is the total number of images, i.e. a single observation. Also, since min_size is 1, the code:
Y_train = [ones(min_size, 1),2*ones(min_size, 1),3*ones(min_size, 1), 4*ones(min_size, 1), 5*ones(min_size, 1)];
Creates a 1-by-5 row vector with the values [1, 2, 3, 4, 5], i.e. five observations. fitcecoc therefore receives X with one observation and Y with five, which triggers the error.
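The accidental flattening can be reproduced with a plain random array standing in for the real fc7 activations (m = 3 images is an assumption for illustration):

```matlab
% A stands in for the 1-by-1-by-4096-by-m activations output (m = 3 here).
A = rand(1, 1, 4096, 3);
B = A(1:1, :);   % indexing with a colon collapses the trailing dimensions
disp(size(B));   % 1-by-12288, i.e. 1-by-(4096*3): a single "observation"
```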
Manually creating a datastore for each label, extracting the features, and concatenating them can be avoided entirely, as described in the two points above.
Assuming that the root data folder is colored_images/, the code below should fix the issue (I am omitting the plotting and evaluation code for brevity):
imds = imageDatastore("colored_images\", "IncludeSubfolders", true, "LabelSource", "foldernames");
% Plotting code
net = vgg16;
X_train = activations(net, imds, "fc7", "OutputAs", "rows");
Y_train = imds.Labels;
svm_model = fitcecoc(X_train, Y_train);
% Evaluation code
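If a held-out test set is needed, the same single-datastore approach extends naturally with splitEachLabel. A sketch, where the 80/20 split ratio and the randomized partitioning are assumptions:

```matlab
% Build one labeled datastore and split it per class before extracting features.
imds = imageDatastore("colored_images\", "IncludeSubfolders", true, "LabelSource", "foldernames");
[imdsTrain, imdsTest] = splitEachLabel(imds, 0.8, "randomized"); % 80/20 split (assumed ratio)
net = vgg16;
X_train = activations(net, imdsTrain, "fc7", "OutputAs", "rows");
X_test  = activations(net, imdsTest,  "fc7", "OutputAs", "rows");
svm_model = fitcecoc(X_train, imdsTrain.Labels);
Y_pred = predict(svm_model, X_test);
accuracy = mean(Y_pred == imdsTest.Labels); % fraction of correct predictions
disp(['Accuracy: ' num2str(accuracy)]);
```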
Hope this helps!